A few comments on the Reinforcement Learning work done by my colleagues at Lyft.
I explore when the Delta Method approximation works and when it fails.
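For reference, the first-order form at the heart of the method:

$$\operatorname{Var}\big(g(X)\big) \approx g'(\mu)^2 \, \operatorname{Var}(X), \qquad \mu = \mathbb{E}[X]$$

It inherits its quality from the Taylor expansion behind it: the approximation is only as good as $g$ is locally linear near $\mu$.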
Most of what explains a job well done is the choice of tool.
This briefly summarizes each part of the series on Probabilistic Graphical Models.
Notation can be confusing. This post addresses it directly.
Structure learning precedes parameter learning: a graph or similarly abstract structure must be learned from data. Doing so presents a formidable integration problem, but with the right techniques and approximations, a fruitful search over structures can be performed. For theoretical reasons, the task is considerably easier for Bayesian Networks than for Markov Networks.
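The formidable integration is the marginal likelihood inside the Bayesian score of a candidate graph $\mathcal{G}$ given data $\mathcal{D}$:

$$P(\mathcal{G} \mid \mathcal{D}) \;\propto\; P(\mathcal{G}) \int P(\mathcal{D} \mid \theta, \mathcal{G})\, P(\theta \mid \mathcal{G})\, d\theta$$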
The theory of Markov Network parameter learning is intuitive and instructive, but it exposes an intractable normalizer, preventing the task from reducing to easier ones. Ultimately, the task is hard.
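The normalizer in question is the partition function $Z$, a sum over every joint configuration:

$$P(x) = \frac{1}{Z} \prod_{c} \phi_c(x_c), \qquad Z = \sum_{x'} \prod_{c} \phi_c(x'_c)$$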
Learning the parameters of a Bayesian Network enjoys a decomposition that makes it a much friendlier endeavor than the equivalent task for its cousin, the Markov Network.
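The decomposition: the joint factorizes into one conditional distribution per node, so the likelihood splits into independent per-family learning problems:

$$P(x_1, \dots, x_n) = \prod_{i=1}^{n} P\big(x_i \mid \mathrm{pa}(x_i)\big)$$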
Monte Carlo methods answer the inference task with a set of samples drawn approximately from the target distribution. In total, they provide a supremely general toolset. However, using them requires skill in managing the complexities of distributional convergence and autocorrelation.
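The basic move, in one line: approximate an expectation with a sample average,

$$\mathbb{E}_{p}[f(X)] \approx \frac{1}{N} \sum_{i=1}^{N} f(x_i), \qquad x_i \sim p$$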
Variational Inference, a category of approximate inference algorithms, achieves efficiency by restricting inference to a computationally friendly set of distributions. Using tools from information theory, we may find the distribution that best approximates the results of exact inference.
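Stated as an optimization, with $\mathcal{Q}$ the computationally friendly set and $p$ the exact posterior:

$$q^* = \operatorname*{argmin}_{q \in \mathcal{Q}} \; \mathrm{KL}(q \,\|\, p)$$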
Given a Probabilistic Graphical Model, exact inference algorithms exploit factorization and caching to answer questions about the system it represents.
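A three-variable chain shows the exploit: pushing sums inside the factorization replaces one large joint sum with small, cacheable partial sums,

$$P(x_3) = \sum_{x_2} P(x_3 \mid x_2) \sum_{x_1} P(x_2 \mid x_1)\, P(x_1)$$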
Probabilistic Graphical Models are born from a remarkable synthesis of probability theory and graph theory. They are among our most powerful tools for managing nature's baffling mixture of uncertainty and complexity.
The bias-variance trade-off is a rare insight into the challenge of generalization.
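For squared error under noise variance $\sigma^2$, the trade-off in one line:

$$\mathbb{E}\big[(y - \hat{f}(x))^2\big] = \mathrm{Bias}\big(\hat{f}(x)\big)^2 + \mathrm{Var}\big(\hat{f}(x)\big) + \sigma^2$$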
Entropy and its related concepts quantify the otherwise abstract concept of information. A tour reveals its relationship to information, binary encodings and uncertainty. Most intuitively, we're left with a simple analogy to 2D areas.
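The central quantity, measured in bits:

$$H(X) = -\sum_{x} p(x) \log_2 p(x)$$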
A Generalized Linear Model, if viewed without knowledge of its motivation, can be a confusing tool. It's easier to understand when seen as a two-knob generalization of linear regression.
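The two knobs, stated compactly: one selects the response distribution for $Y$, the other the link function $g$,

$$g\big(\mathbb{E}[Y \mid X]\big) = X\beta$$

Linear regression is the setting with both knobs at their defaults, a Gaussian response and an identity link; Poisson regression with a log link is one turn of each.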
A visual makes Jensen's Inequality obvious and intuitive.
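The inequality itself, for convex $f$:

$$f\big(\mathbb{E}[X]\big) \le \mathbb{E}\big[f(X)\big]$$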
The Fisher Information quantifies the information an observation carries about a parameter. The quantification becomes intuitive once we see it as measuring a certain geometric quality.
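The definition, with the second form (valid under regularity conditions) hinting at the geometry, the expected curvature of the log-likelihood:

$$I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta} \log f(X;\theta)\right)^{\!2}\right] = -\,\mathbb{E}\!\left[\frac{\partial^2}{\partial\theta^2} \log f(X;\theta)\right]$$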
The copula provides a clever means for mixing and matching a set of marginal distributions with the dependence structure of a joint distribution. However, its elegance and utility have been a dangerous lure.
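The license for this mixing and matching is Sklar's theorem: any joint CDF $F$ with marginals $F_1, \dots, F_d$ can be expressed through a copula $C$,

$$F(x_1, \dots, x_d) = C\big(F_1(x_1), \dots, F_d(x_d)\big)$$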
The Matrix Inversion Lemma looks intimidating, but it is easy to know when it applies, and applying it can offer considerable computational speed-ups.
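A minimal numpy sketch of the speed-up (sizes and matrices are illustrative): when $A$ is cheap to invert and the update $UCV$ is low-rank, the lemma avoids any $n \times n$ inversion.

```python
import numpy as np

# Woodbury identity:
# (A + U C V)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1}
rng = np.random.default_rng(0)
n, k = 2000, 10
a = rng.uniform(1.0, 2.0, size=n)   # A = diag(a): trivially invertible
U = rng.normal(size=(n, k))
C = np.eye(k)
V = U.T

A_inv_U = U / a[:, None]                               # A^{-1} U, costs O(nk)
core = np.linalg.inv(np.linalg.inv(C) + V @ A_inv_U)   # only a k x k inversion

def solve_updated(b):
    """Compute (A + U C V)^{-1} b without forming an n x n inverse."""
    A_inv_b = b / a
    return A_inv_b - A_inv_U @ (core @ (V @ A_inv_b))

b = rng.normal(size=n)
# Agrees with the direct O(n^3) solve:
assert np.allclose(solve_updated(b), np.linalg.solve(np.diag(a) + U @ C @ V, b))
```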
When optimizing a slow-to-evaluate and non-differentiable function, one may think random sampling is the only option, a naive approach likely to disappoint. However, Bayesian optimization, a clever exploit of the function's assumed smoothness, disconfirms these intuitions.
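The loop, in one line: fit a smooth surrogate (commonly a Gaussian process) to the evaluations gathered so far, $\mathcal{D}_t$, then spend the next expensive evaluation where an acquisition function $\alpha$ built from the surrogate's posterior is most promising:

$$x_{t+1} = \operatorname*{argmax}_{x} \; \alpha(x \mid \mathcal{D}_t)$$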
The Fundamental Law of Active Management decomposes a well-known summary metric of an investment strategy. The decomposition yields two dimensions along which all strategies may be judged and suggests avenues for improvement.
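The metric is the information ratio ($IR$), and the two dimensions are skill, measured by the information coefficient ($IC$), and breadth ($BR$), the number of independent bets per period:

$$IR \approx IC \times \sqrt{BR}$$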
The exponential family is a generalization of distributions, inclusive of many familiar ones plus a universe of others. The general form brings elegant properties, illuminating all distributions within. In this post, we discuss what it is, how it applies and some of its properties.
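The general form, with natural parameter $\eta(\theta)$, sufficient statistic $T(x)$, and log-partition function $A(\theta)$:

$$p(x \mid \theta) = h(x)\, \exp\!\big(\eta(\theta)^\top T(x) - A(\theta)\big)$$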
We reveal the Gini impurity metric as the destination of a few natural steps.
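The destination, for class proportions $p_k$:

$$G = \sum_{k} p_k (1 - p_k) = 1 - \sum_{k} p_k^2$$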
For a class of models, the trace provides a measure of model complexity that's useful for managing the bias-variance trade-off.
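One such class is linear smoothers, where predictions are a fixed linear map of the targets and the trace plays the role of effective degrees of freedom:

$$\hat{y} = S y, \qquad \mathrm{df} = \operatorname{tr}(S)$$

As a sanity check, ordinary least squares with $p$ predictors gives $\operatorname{tr}(S) = p$.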
Utilizing a geometric perspective, we find an efficient algorithm for solving a special kind of system of equations.
A clever and useful technique for inferring distributions over infinite-dimensional functions from finite observations.
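If the technique is Gaussian process regression, finite observations $(X, y)$ with kernel matrix $K$ and noise $\sigma^2$ yield a closed-form posterior at test inputs $X_*$:

$$\mu_* = K_*^\top (K + \sigma^2 I)^{-1} y, \qquad \Sigma_* = K_{**} - K_*^\top (K + \sigma^2 I)^{-1} K_*$$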
Any linear transformation fixes four subspaces, which delineate the outputs that can be reached and the inputs that matter.
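For $A : \mathbb{R}^n \to \mathbb{R}^m$, the four are the column space (outputs that can be reached), the left null space, the row space (inputs that matter), and the null space (inputs sent to zero), paired as orthogonal complements:

$$\mathbb{R}^m = \mathrm{col}(A) \oplus \mathrm{null}(A^\top), \qquad \mathbb{R}^n = \mathrm{row}(A) \oplus \mathrm{null}(A)$$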