Given a Probabilistic Graphical Model, exact inference algorithms exploit factorization and caching to answer questions about the system it represents.
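To see the exploit in miniature (a toy three-node chain, assumed here purely for illustration): factorization lets sums be pushed inside the product, and caching the inner sum means it is computed once rather than once per outer term.

$$
p(c) = \sum_{a}\sum_{b} p(a)\,p(b \mid a)\,p(c \mid b) = \sum_{b} p(c \mid b) \sum_{a} p(a)\,p(b \mid a)
$$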
Probabilistic Graphical Models are born from a remarkable synthesis of probability theory and graph theory. They are among our most powerful tools for managing nature's baffling mixture of uncertainty and complexity.
The bias-variance trade-off is a rare insight into the challenge of generalization.
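For squared-error loss the trade-off is an exact identity. Assuming the usual setup, $y = f(x) + \varepsilon$ with noise variance $\sigma^2$ and $\hat{f}$ fit on a random training set:

$$
\mathbb{E}\big[(y - \hat{f}(x))^2\big] = \big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2 + \mathbb{E}\Big[\big(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\big)^2\Big] + \sigma^2
$$

That is: bias squared, plus variance, plus irreducible noise.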
Entropy and its related concepts make the otherwise abstract notion of information quantitative. A tour reveals their relationships to binary encodings and uncertainty. Most intuitively, we're left with a simple analogy to 2D areas.
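The central quantity is the entropy, measured in bits:

$$
H(X) = -\sum_{x} p(x)\log_2 p(x)
$$

A fair coin flip has $H = 1$ bit, and an optimal binary encoding of repeated draws from $X$ needs, on average, about $H(X)$ bits per draw.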
A Generalized Linear Model, viewed without knowledge of its motivation, can be a confusing tool. It's easier to understand when seen as a two-knob generalization of linear regression.
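The two knobs, in the standard formulation: the response distribution (any exponential family) and the link function $g$ relating the mean to the linear predictor,

$$
y \mid x \sim \text{ExponentialFamily}, \qquad g\big(\mathbb{E}[y \mid x]\big) = x^{\top}\beta.
$$

A Gaussian response with the identity link recovers linear regression; a Bernoulli response with the logit link gives logistic regression.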
A visual makes Jensen's Inequality obvious and intuitive.
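The inequality the visual unpacks: for a convex function $\varphi$,

$$
\varphi\big(\mathbb{E}[X]\big) \le \mathbb{E}\big[\varphi(X)\big].
$$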
The Fisher Information quantifies the information an observation carries about a parameter. The quantification becomes intuitive once we see it as measuring a certain geometric quality of the log-likelihood.
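The definition, for a density $f(x;\theta)$ (the second equality holds under standard regularity conditions):

$$
\mathcal{I}(\theta) = \mathbb{E}\left[\left(\frac{\partial}{\partial\theta}\log f(X;\theta)\right)^{2}\right] = -\,\mathbb{E}\left[\frac{\partial^{2}}{\partial\theta^{2}}\log f(X;\theta)\right]
$$

The right-hand form is the geometric reading: the expected curvature of the log-likelihood at $\theta$.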
The copula provides a clever means for mixing and matching a set of marginal distributions with the dependence structure of a joint distribution. However, its elegance and utility have been a dangerous lure.
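The underlying result is Sklar's theorem: any joint CDF $F$ with marginals $F_1,\dots,F_d$ can be written

$$
F(x_1,\dots,x_d) = C\big(F_1(x_1),\dots,F_d(x_d)\big)
$$

where the copula $C$ carries the dependence structure and nothing else.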
The Matrix Inversion Lemma looks intimidating, but it's easy to recognize when it applies, and applying it can offer considerable computational speed-ups.
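The lemma, also known as the Woodbury identity:

$$
(A + UCV)^{-1} = A^{-1} - A^{-1}U\big(C^{-1} + VA^{-1}U\big)^{-1}VA^{-1}
$$

When $A$ is large but cheap to invert (say, diagonal) and $U$, $V$ have a small inner dimension $k$, the right-hand side only ever inverts $k \times k$ matrices.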
When optimizing a function that is slow to evaluate and non-differentiable, one may think random sampling is the only option, a naive approach likely to disappoint. However, Bayesian optimization, which cleverly exploits the function's assumed smoothness, disconfirms this intuition.
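A minimal sketch of one such loop, assuming a toy 1-D objective, an RBF-kernel Gaussian-process surrogate, and the expected-improvement acquisition maximized over a dense grid (all choices of this sketch, not a prescription):

```python
# A minimal Bayesian-optimization sketch; objective, kernel, and
# acquisition are all illustrative assumptions, not a fixed recipe.
import numpy as np
from scipy.stats import norm

def rbf(a, b, length=0.3):
    """RBF kernel matrix between 1-D arrays a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

def gp_posterior(x_obs, y_obs, x_query, noise=1e-6):
    """GP posterior mean and std at x_query given observations."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    K_s = rbf(x_obs, x_query)
    K_ss = rbf(x_query, x_query)
    solve = np.linalg.solve(K, np.c_[y_obs, K_s])
    mean = K_s.T @ solve[:, 0]
    cov = K_ss - K_s.T @ solve[:, 1:]
    return mean, np.sqrt(np.clip(np.diag(cov), 1e-12, None))

def expected_improvement(mean, std, best):
    """EI for maximization: expected amount we beat the incumbent by."""
    z = (mean - best) / std
    return (mean - best) * norm.cdf(z) + std * norm.pdf(z)

def objective(x):                 # stand-in for the real slow function
    return -np.sin(3 * x) - x**2 + 0.7 * x

rng = np.random.default_rng(0)
x_obs = rng.uniform(-1, 2, size=3)            # a few initial evaluations
y_obs = objective(x_obs)
grid = np.linspace(-1, 2, 500)
for _ in range(10):                           # the BO loop
    mean, std = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(expected_improvement(mean, std, y_obs.max()))]
    x_obs = np.append(x_obs, x_next)          # spend evaluations only where EI is highest
    y_obs = np.append(y_obs, objective(x_next))
print(f"best x ~ {x_obs[np.argmax(y_obs)]:.3f}, best y ~ {y_obs.max():.3f}")
```

Each iteration spends its one expensive evaluation where the surrogate's mean and uncertainty jointly suggest the most promise, which is exactly where the smoothness assumption pays off.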