Writing

Summaries

DJ Rich

This briefly summarizes each part of the series on Probabilistic Graphical Models.

Notation Guide

DJ Rich

Notation can be confusing. This post addresses it directly.

Part 7: Structure Learning

DJ Rich

Structure learning precedes parameter learning: a graph or similarly abstract structure must be learned from data. Doing so presents a formidable integration problem, but with techniques and approximations, a fruitful search over structures can be performed. For theoretical reasons, the task is considerably easier in the Bayesian Network case than in the Markov Network case.
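
To make the score-based flavor of that search concrete, here is a minimal sketch, not from the post: the toy data, the names, and the choice of BIC are all assumptions. BIC is one standard approximation to the integration mentioned above, and the sketch compares candidate parent sets for a single variable of a small Bayesian Network.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Hypothetical data: X0 -> X1, with X2 independent of both.
n = 2000
x0 = rng.integers(0, 2, n)
x1 = (x0 ^ (rng.random(n) < 0.1)).astype(int)  # noisy copy of x0
x2 = rng.integers(0, 2, n)
data = np.column_stack([x0, x1, x2])

def bic_score(child, parents, data):
    """Log-likelihood of the child's CPT minus a BIC complexity penalty."""
    n = len(data)
    ll = 0.0
    for cfg in product([0, 1], repeat=len(parents)):
        mask = np.ones(n, dtype=bool)
        for p, v in zip(parents, cfg):
            mask &= data[:, p] == v
        counts = np.bincount(data[mask, child], minlength=2)
        total = counts.sum()
        if total > 0:
            probs = counts / total
            ll += sum(c * np.log(p) for c, p in zip(counts, probs) if c > 0)
    n_params = 2 ** len(parents)  # one free parameter per parent configuration
    return ll - 0.5 * n_params * np.log(n)

# Score candidate parent sets for X1: the true parent {X0} should win,
# and the penalty discourages the redundant set {X0, X2}.
for parents in [(), (0,), (2,), (0, 2)]:
    print(parents, round(bic_score(1, list(parents), data), 1))
```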

Part 6: Learning Parameters of a Markov Network

DJ Rich

The theory of Markov Network parameter learning is intuitive and instructive, but it exposes an intractable normalizer that prevents the task from reducing to easier ones. Ultimately, the task is hard.
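
A tiny sketch may show where the intractable normalizer appears. The chain, the weights, and the sizes below are illustrative assumptions; the point is that computing Z exactly means summing an unnormalized product of factors over every joint configuration, which grows as 2^n for n binary variables.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n_vars = 12
edges = [(i, i + 1) for i in range(n_vars - 1)]   # a simple chain
weights = {e: rng.normal() for e in edges}         # pairwise log-potentials

def unnormalized_log_p(x):
    """Sum of pairwise log-potentials for one configuration x."""
    return sum(w * (x[i] == x[j]) for (i, j), w in weights.items())

# Exact Z by brute force: 2^12 = 4096 terms here, hopeless for large n.
# The log-likelihood gradient needs expectations under the model, which
# drags this same sum back in.
Z = sum(np.exp(unnormalized_log_p(x)) for x in product([0, 1], repeat=n_vars))
print(f"log Z = {np.log(Z):.3f} after summing {2**n_vars} configurations")
```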

Part 4: Monte Carlo Methods

DJ Rich

Monte Carlo methods answer the inference task with a set of samples drawn approximately from the target distribution. In total, they provide a supremely general toolset. However, using them requires skill in managing the complexities of distributional convergence and autocorrelation.
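
As one illustration, here is a minimal random-walk Metropolis sketch with a lag-1 autocorrelation check. The target density, step size, and chain length are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(x):
    """Unnormalized log-density: a bimodal mixture of two Gaussians."""
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

n_steps, step = 50_000, 1.0
x = 0.0
samples = np.empty(n_steps)
for t in range(n_steps):
    prop = x + step * rng.normal()        # symmetric random-walk proposal
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        x = prop                          # accept; otherwise keep current x
    samples[t] = x

# Lag-1 autocorrelation: values near 1 mean few effective samples.
s = samples - samples.mean()
rho1 = (s[:-1] * s[1:]).sum() / (s * s).sum()
print(f"mean = {samples.mean():.3f}, lag-1 autocorrelation = {rho1:.3f}")
```

Shrinking the step size raises the acceptance rate but also raises the autocorrelation, which is exactly the kind of tension the post's warning refers to.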

Part 3: Variational Inference

DJ Rich

Variational Inference, a category of approximate inference algorithms, achieves efficiency by restricting inference to a computationally friendly family of distributions. Using tools from information theory, we may find the member of that family that best approximates the result of exact inference.
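
A minimal sketch of that idea, assuming a one-dimensional target and a Gaussian family: pick the family member maximizing a Monte Carlo estimate of the ELBO, which is equivalent to minimizing the KL divergence from the approximation to the target. Grid search stands in for the usual gradient-based optimization to keep the sketch transparent; the target itself is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_p_unnorm(x):
    """Unnormalized log-target: exp(-x^2/2 + sin(2x)), a skewed density."""
    return -0.5 * x**2 + np.sin(2 * x)

def elbo(mu, sigma, n=20_000):
    """Monte Carlo ELBO: E_q[log p_unnorm(x)] plus the entropy of q."""
    x = mu + sigma * rng.normal(size=n)
    entropy = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
    return log_p_unnorm(x).mean() + entropy

# Search the restricted family q = N(mu, sigma) over a coarse grid.
grid = [(mu, sigma) for mu in np.linspace(-1, 1, 21)
                    for sigma in np.linspace(0.3, 2.0, 18)]
best = max(grid, key=lambda ps: elbo(*ps))
print(f"best q: mu = {best[0]:.2f}, sigma = {best[1]:.2f}")
```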

Part 2: Exact Inference

DJ Rich

Given a Probabilistic Graphical Model, exact inference algorithms exploit factorization and caching to answer questions about the system it represents.
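
For instance, here is a minimal variable-elimination sketch on a three-node chain with hypothetical numbers, showing how factorization lets us sum out variables one at a time rather than build the full joint.

```python
import numpy as np

# Chain A -> B -> C with made-up conditional probability tables.
p_a = np.array([0.6, 0.4])                    # P(A)
p_b_given_a = np.array([[0.9, 0.1],           # P(B | A=0)
                        [0.2, 0.8]])          # P(B | A=1)
p_c_given_b = np.array([[0.7, 0.3],           # P(C | B=0)
                        [0.5, 0.5]])          # P(C | B=1)

# Eliminate A: the message to B is P(B) = sum_a P(a) P(B | a).
msg_b = p_a @ p_b_given_a
# Eliminate B: P(C) = sum_b P(b) P(C | b).
p_c = msg_b @ p_c_given_b
print("P(C) by elimination:", p_c)

# Sanity check against the explicit joint, which elimination avoids building.
joint = p_a[:, None, None] * p_b_given_a[:, :, None] * p_c_given_b[None, :, :]
print("P(C) from full joint:", joint.sum(axis=(0, 1)))
```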

Bias-Variance Trade-Off

DJ Rich

The bias-variance trade-off is a rare insight into the challenge of generalization.
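
A minimal simulation can make the trade-off visible; the true function, noise level, and polynomial degrees below are assumptions. Low-degree fits show high bias at the chosen point, while high-degree fits show high variance.

```python
import numpy as np

rng = np.random.default_rng(4)
f = lambda x: np.sin(2 * np.pi * x)     # hypothetical true function
x_train = np.linspace(0, 1, 20)
x0 = 0.25                               # point at which to decompose error

for degree in (1, 3, 9):
    preds = []
    for _ in range(500):                # many independent training sets
        y = f(x_train) + 0.3 * rng.normal(size=x_train.size)
        coeffs = np.polyfit(x_train, y, degree)
        preds.append(np.polyval(coeffs, x0))
    preds = np.array(preds)
    bias2 = (preds.mean() - f(x0)) ** 2  # squared bias of the average fit
    var = preds.var()                    # spread of fits across datasets
    print(f"degree {degree}: bias^2 = {bias2:.4f}, variance = {var:.4f}")
```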