blargon

More models more problems

In a previous post, I discussed the bias-variance tradeoff in machine learning. We saw that when a model is too simple, much of its error takes the form of bias: the model's tendency to be systematically wrong. This is reduced…

Beta-gammatene

One of the reasons that probability theory is so interesting—and challenging—is that simple questions can quickly lead to deep philosophical and mathematical issues. Suppose we have a coin and want to determine the probability of the coin landing heads when flipped. What does "probability" mean in…

Leaky faucets and leaky data

As I write this post, my bathtub faucet is dripping ever so slightly in the distance. The plumbing is very old and squeaky, and the valves don't work very well. When you want to change the water temperature, you have to carefully turn both the hot and cold…

Algèbre linéaire

I love minutiae because they—like Leibniz's monads—so often reflect important features of the larger world around them. This is particularly true in mathematics, where minor details of definitions can reflect years of collective thought and refinement by generations of mathematicians, and where changes to those details…

Machine and human error

While studying machine learning, I've been struck by the many philosophical parallels to human learning and cognition—particularly in the ways that both humans and machines make errors. We're all familiar with the fact that our human mental models of the world, shaped by our experiences,…
