Writing
Summaries
This page briefly summarizes each post below, most of which belong to the series on Probabilistic Graphical Models.
Notation Guide
Notation can be confusing; this post addresses it directly.
Part 7: Structure Learning
Structure learning precedes parameter learning: a graph, or similarly abstract structure, must be learned from data before its parameters can be estimated. Doing so presents a formidable integration problem, but with the right techniques and approximations, a fruitful search over structures can be performed. For theoretical reasons, the task is considerably easier for Bayesian Networks than for Markov Networks.
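To make the search concrete, here is a minimal sketch (my own illustration, not code from the post) of the decomposable BIC score that greedy structure search typically maximizes; `bic_score`, `structure`, and `cards` are hypothetical names:

```python
import numpy as np
from itertools import product

def bic_score(data, structure, cards):
    """BIC score of a candidate DAG over fully observed discrete data.

    data: (N, d) integer array; structure: dict mapping node -> tuple of
    parent indices; cards: cardinality of each variable. The score sums
    independent per-family terms, which is what makes local search moves
    (add/remove/reverse an edge) cheap to evaluate.
    """
    n = len(data)
    score = 0.0
    for node, parents in structure.items():
        parent_cards = [cards[p] for p in parents]
        # log-likelihood term: counts within each parent configuration
        for pa_vals in product(*(range(c) for c in parent_cards)):
            mask = np.ones(n, dtype=bool)
            for p, v in zip(parents, pa_vals):
                mask &= data[:, p] == v
            counts = np.bincount(data[mask, node], minlength=cards[node])
            if counts.sum() > 0:
                nz = counts[counts > 0]
                score += np.sum(nz * np.log(nz / counts.sum()))
        # complexity penalty: free parameters per family
        n_params = (cards[node] - 1) * int(np.prod(parent_cards))
        score -= 0.5 * np.log(n) * n_params
    return score
```

Comparing, say, `bic_score(data, {0: (), 1: (0,)}, cards)` against the edgeless alternative is one step of such a search.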
Part 6: Learning Parameters of a Markov Network
The theory of Markov Network parameter learning is intuitive and instructive, but it exposes an intractable normalizer (the partition function), which prevents the task from reducing to easier ones. Ultimately, the task is hard.
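As a sketch of where the intractability bites (an illustration under my own assumptions, not the post's code): for a binary pairwise Markov Network, the log-likelihood gradient needs model expectations, which require summing over all 2^d states.

```python
import numpy as np
from itertools import product

def loglik_grad(theta, data):
    """Gradient of the average log-likelihood of a binary pairwise
    Markov Network with couplings theta (d x d, symmetric): empirical
    minus model expectations of the statistics x_i * x_j. The model
    term enumerates all 2**d states -- the intractable normalizer in
    disguise -- so this only runs for small d.
    """
    d = theta.shape[0]
    states = np.array(list(product([0, 1], repeat=d)))
    scores = np.einsum('si,ij,sj->s', states, theta, states)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    model_exp = np.einsum('s,si,sj->ij', probs, states, states)
    emp_exp = data.T @ data / len(data)
    return emp_exp - model_exp
```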
Part 5: Learning Parameters of a Bayesian Network
Learning the parameters of a Bayesian Network enjoys a decomposition that makes it a much friendlier endeavor than the same task for its cousin, the Markov Network.
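Concretely, the decomposition means each conditional distribution can be fit by counting, independently of the rest of the network. A minimal sketch under my own naming (`fit_cpd` and its arguments are illustrative):

```python
import numpy as np
from itertools import product

def fit_cpd(data, node, parents, cards, alpha=1.0):
    """Estimate P(node | parents) from fully observed discrete data.

    Because the joint log-likelihood of a Bayesian Network decomposes
    into one term per family, this per-CPD count-and-normalize step is
    the whole learning algorithm. alpha adds Dirichlet smoothing.
    Returns shape (num parent configurations, cards[node]).
    """
    parent_cards = [cards[p] for p in parents]
    rows = []
    for pa_vals in product(*(range(c) for c in parent_cards)):
        mask = np.ones(len(data), dtype=bool)
        for p, v in zip(parents, pa_vals):
            mask &= data[:, p] == v
        counts = np.bincount(data[mask, node], minlength=cards[node]) + alpha
        rows.append(counts / counts.sum())
    return np.array(rows)
```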
Part 4: Monte Carlo Methods
Monte Carlo methods answer the inference task with a set of samples drawn approximately from the target distribution. In total, they provide a supremely general toolset. However, using them well requires skill in managing the complexities of distributional convergence and autocorrelation.
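For flavor, a random-walk Metropolis sampler, the simplest member of the toolset (a sketch, not the post's code; the step size and burn-in choices are exactly the "skill" referred to above):

```python
import numpy as np

def metropolis(log_p, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis targeting exp(log_p), up to normalization.

    Draws converge to the target only asymptotically, and successive
    draws are autocorrelated; burn-in and thinning are the usual fixes.
    """
    rng = np.random.default_rng(seed)
    x, lp = x0, log_p(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal()
        lp_prop = log_p(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)
```

For example, `metropolis(lambda x: -0.5 * x**2, 0.0, 10_000)[1_000:]` approximates draws from a standard normal after discarding a burn-in.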
Part 3: Variational Inference
Variational Inference, a category of approximate inference algorithms, achieves efficiency by restricting inference to a computationally friendly set of distributions. Using tools from information theory, we may find the member of that set which best approximates the result of exact inference.
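For intuition, here is the quantity such algorithms maximize, sketched for a one-dimensional Gaussian approximating family (my own toy illustration; `log_p` is any vectorized, possibly unnormalized, log-density):

```python
import numpy as np

def elbo(mu, log_sigma, log_p, n_mc=500, seed=0):
    """Monte Carlo estimate of the evidence lower bound for
    q = Normal(mu, exp(log_sigma)**2): E_q[log p(x)] + H[q].
    Maximizing it over (mu, log_sigma) minimizes KL(q || p)
    within the computationally friendly Gaussian family.
    """
    rng = np.random.default_rng(seed)
    x = mu + np.exp(log_sigma) * rng.standard_normal(n_mc)
    entropy = 0.5 * np.log(2 * np.pi * np.e) + log_sigma
    return log_p(x).mean() + entropy
```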
Part 2: Exact Inference
Given a Probabilistic Graphical Model, exact inference algorithms exploit factorization and caching to answer questions about the system it represents.
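A three-variable chain shows the idea (a toy I made up, not the post's example): pushing sums inward and caching the intermediate message avoids enumerating the full joint.

```python
import numpy as np

# P(C) = sum_A sum_B P(A) P(B|A) P(C|B) on a chain A -> B -> C.
p_a = np.array([0.6, 0.4])                     # P(A)
p_b_given_a = np.array([[0.7, 0.3],
                        [0.2, 0.8]])           # P(B|A), rows index A
p_c_given_b = np.array([[0.9, 0.1],
                        [0.5, 0.5]])           # P(C|B), rows index B

msg_b = p_a @ p_b_given_a       # eliminate A: cached message over B
p_c = msg_b @ p_c_given_b       # eliminate B: the marginal P(C)
```

The cost grows linearly with chain length rather than exponentially with the number of variables.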
Part 1: An Introduction to Probabilistic Graphical Models
Probabilistic Graphical Models are born from a remarkable synthesis of probability theory and graph theory.
Bias-Variance Trade-Off
The bias-variance trade-off is a rare insight into the challenge of generalization.
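For reference, the standard squared-error decomposition behind the trade-off (a textbook identity, not quoted from the post), where f is the true function, f-hat the learned predictor, and sigma^2 the irreducible noise:

```latex
\mathbb{E}\big[(y - \hat f(x))^2\big]
  = \underbrace{\big(f(x) - \mathbb{E}[\hat f(x)]\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\big[(\hat f(x) - \mathbb{E}[\hat f(x)])^2\big]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}}
```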