Sampling Playground: Algorithms and Math Notes

This page documents the algorithms implemented in sampling_playground. The implementation is designed for visualization and pedagogy: the “step-by-step” mode exposes the proposal and the accept/reject logic.

Targets

The playground supports two types of targets: smooth targets $\pi(x)\propto e^{-f(x)}$, accessed through a gradient oracle for $f$, and uniform targets $\pi\propto\mathbf{1}_{\{x\in X\}}$, accessed through a membership oracle for $X$.

Shared parameterization

The UI uses a single “scale” parameter $\sigma$ (proposal standard deviation in world coordinates). For Langevin-type steps, we set $$\eta \;:=\; \tfrac{1}{2}\sigma^2,\qquad \sqrt{2\eta}=\sigma.$$ This matches the Euler–Maruyama discretization of overdamped Langevin dynamics.

ULA (Unadjusted Langevin Algorithm)

For $\pi(x) \propto e^{-f(x)}$, ULA is the “noisy gradient descent” update

$$x_{k+1} = x_k - \eta\,\nabla f(x_k) + \sqrt{2\eta}\,z_k,\qquad z_k\sim\mathcal{N}(0,I).$$

ULA is a first-order discretization and is generally biased at finite step size; there is no accept/reject correction.
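The update above can be sketched in a few lines of NumPy. The names here (`ula_step`, `grad_f`) are illustrative, not the playground's actual API; `grad_f` is an assumed gradient oracle for $f$:

```python
import numpy as np

def ula_step(x, grad_f, eta, rng):
    """One ULA update for pi ∝ exp(-f): gradient step plus Gaussian noise."""
    z = rng.standard_normal(x.shape)
    return x - eta * grad_f(x) + np.sqrt(2.0 * eta) * z
```

For a standard Gaussian target ($f(x)=\tfrac12\|x\|^2$, so $\nabla f(x)=x$), the gradient term contracts the chain toward the origin while the noise keeps it spread out; the finite-step bias mentioned above shows up as a slightly inflated stationary variance.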

MRW (Metropolized Random Walk / Random-Walk Metropolis)

MRW uses a Gaussian random-walk proposal and a Metropolis–Hastings correction:

$$y = x + \sigma z,\quad z\sim\mathcal{N}(0,I),\qquad \alpha(x,y)=\min\left\{1,\frac{\pi(y)}{\pi(x)}\right\}.$$

The proposal is symmetric ($q(x,y)=q(y,x)$), so the acceptance ratio only involves the target. For uniform targets, this reduces to “accept iff $y\in X$”.
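As a sketch (hypothetical names, not the playground's API), one MRW step with an unnormalized log-target looks like this:

```python
import numpy as np

def mrw_step(x, log_pi, sigma, rng):
    """One random-walk Metropolis step; log_pi may be unnormalized."""
    y = x + sigma * rng.standard_normal(x.shape)
    # Symmetric proposal, so the acceptance ratio involves only the target.
    if np.log(rng.uniform()) < log_pi(y) - log_pi(x):
        return y, True   # accepted
    return x, False      # rejected: stay at x
```

Working in log space avoids underflow for targets like $e^{-f}$. For a uniform target, `log_pi` returns `0.0` inside $X$ and `-np.inf` outside, which recovers "accept iff $y\in X$".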

MALA (Metropolis-Adjusted Langevin Algorithm)

MALA uses the ULA proposal, then performs a Metropolis–Hastings correction. The proposal is

$$y = x - \eta\,\nabla f(x) + \sqrt{2\eta}\,z,\qquad z\sim\mathcal{N}(0,I),$$

with Gaussian transition density

$$q(x,y) \propto \exp\!\left(-\frac{\|y-(x-\eta\nabla f(x))\|^2}{4\eta}\right).$$

The Metropolis–Hastings acceptance probability is

$$\alpha(x,y) = \min\left\{1,\frac{\pi(y)\,q(y,x)}{\pi(x)\,q(x,y)}\right\}.$$
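Combining the proposal and the acceptance rule, one MALA step can be sketched as follows (illustrative names; `f` and `grad_f` are assumed oracles for the potential and its gradient):

```python
import numpy as np

def mala_step(x, f, grad_f, eta, rng):
    """One MALA step: ULA proposal, then Metropolis-Hastings correction."""
    def log_q(to, frm):
        # log q(frm, to) up to an additive constant: Gaussian with
        # mean frm - eta * grad_f(frm) and covariance 2 * eta * I.
        mean = frm - eta * grad_f(frm)
        return -np.sum((to - mean) ** 2) / (4.0 * eta)

    y = x - eta * grad_f(x) + np.sqrt(2.0 * eta) * rng.standard_normal(x.shape)
    # log of pi(y) q(y, x) / (pi(x) q(x, y)), with pi ∝ exp(-f)
    log_alpha = (-f(y) + log_q(x, y)) - (-f(x) + log_q(y, x))
    if np.log(rng.uniform()) < log_alpha:
        return y, True
    return x, False
```

Note that the forward and reverse proposal densities do not cancel (the proposal is not symmetric), which is why `log_q` must be evaluated in both directions.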

Proximal Sampler and In-and-Out

The proximal sampler can be viewed as the composition of a forward Gaussian channel and a reverse Gaussian (restricted) channel. With parameter $\eta>0$, the conceptual step is:

$$Y\mid X=x \;\sim\; \mathcal{N}(x,\eta I),\qquad X'\mid Y=y \;\sim\; \tilde\pi_y(x)\;\propto\;\pi(x)\,\exp\!\left(-\frac{\|x-y\|^2}{2\eta}\right).$$

In the implementation, the reverse step is sampled via a rejection sampler (a standard implementation approach for strongly log-concave/log-smooth targets).

Reverse-step implementation options (RGO, the restricted Gaussian oracle)

The “reverse” step $X'\mid Y\sim\tilde\pi_Y$ is the only computationally nontrivial component. In general dimensions this is an inner sampling problem, and different implementations trade off correctness, speed, and required oracles.

For the uniform target $\pi\propto\mathbf{1}_{\{x\in X\}}$, the reverse distribution becomes a restricted Gaussian on $X$:

$$X'\mid Y \sim \mathcal{N}(Y,\eta I)\;\text{restricted to }X,$$

and the resulting two-stage walk (out of $X$, then back into $X$) is the In-and-Out algorithm.
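A sketch of one In-and-Out step for a uniform target given only a membership oracle `in_X` (illustrative names; the cap on rejection tries is a practical safeguard, not part of the algorithm):

```python
import numpy as np

def in_and_out_step(x, in_X, eta, rng, max_tries=100000):
    """One In-and-Out step for pi uniform on X.

    Forward ("out"): y ~ N(x, eta I), which may leave X.
    Reverse ("in"):  x' ~ N(y, eta I) restricted to X, sampled by
                     rejection: draw from N(y, eta I) until inside X.
    """
    y = x + np.sqrt(eta) * rng.standard_normal(x.shape)
    for _ in range(max_tries):
        xp = y + np.sqrt(eta) * rng.standard_normal(x.shape)
        if in_X(xp):
            return xp
    raise RuntimeError("rejection sampling failed; try a smaller eta")
```

When $\sqrt{\eta}$ is small relative to the geometry of $X$, the intermediate point $y$ stays close to $X$ and the rejection loop typically succeeds in a handful of tries.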


Note: the UI uses a finite viewing window for visualization. For rigorous sampling, the algorithms are defined on $\mathbb{R}^d$ (or on $X$ for uniform targets).