A Variational Autoencoder (VAE) is a probabilistic extension of the autoencoder that learns a latent variable model by optimizing a variational lower bound on the data likelihood, combining neural networks with variational Bayesian methods. Instead of encoding each input to a fixed latent code, a VAE's encoder outputs the parameters of a probability distribution (a mean and a variance) and samples latent vectors via the reparameterization trick, which keeps the sampling step differentiable and so enables training by stochastic gradient descent. The loss comprises a reconstruction term and a Kullback-Leibler (KL) divergence term that regularizes the latent distribution toward a prior (typically a standard normal), encouraging smooth latent spaces that support generative applications such as novel image synthesis, interpolation, and anomaly detection across modalities.
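The two ideas above — reparameterized sampling and the two-term loss — can be sketched in a few lines. This is a minimal NumPy illustration, not a trainable model: the function names are hypothetical, the reconstruction term is a plain squared error stand-in, and the KL term uses the closed form for a diagonal Gaussian against a standard normal prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I); gradients can flow
    # through mu and log_var because the randomness is isolated in eps.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ) for a diagonal Gaussian:
    # -0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2)
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

def vae_loss(x, x_recon, mu, log_var):
    # Reconstruction term (squared error here as a stand-in for the
    # model's likelihood term) plus the KL regularizer toward the prior.
    recon = np.sum((x - x_recon) ** 2)
    return recon + kl_to_standard_normal(mu, log_var)

# When the encoder already outputs the prior (mu = 0, log_var = 0),
# the KL term vanishes; pushing mu away from 0 increases it.
mu, log_var = np.zeros(4), np.zeros(4)
z = reparameterize(mu, log_var)
```

Note that the KL term is zero exactly when the approximate posterior matches the prior, which is why it acts as a regularizer pulling latent codes toward a standard normal.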