After spending plenty of time jumping back and forth between "machine learning" folks and statisticians, I am finally learning how to translate between these two communities, who do essentially identical things.

**Machine Learning** is a specialty of engineers, and an outgrowth of
artificial intelligence research. People with machine learning expertise
talk about prediction and learning using "features." This learning can
be supervised or unsupervised. Machine learning folks tend to attack
problems with brute force, made tractable by effective algorithmic
optimizations. And even though Monte Carlo techniques are the
enlightened machine learner's best friend, sampling methodology rarely
enters the picture.

**Statisticians** are applied mathematicians, and many are
methodological philosophers. Statisticians see the world in terms of
central tendency and dispersion instead of hits and misses, and we use
variables instead of features. We infer and predict using regression and
estimation (supervised) or clustering (unsupervised). While using the
same basic techniques and technologies as machine learners, we live at
the population level instead of the unit level. That's why sampling
methods are so important to us, why we allocate variance, and why we
lean on mathematical theorems.
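To make the overlap concrete, here is a toy sketch (with made-up numbers) of ordinary least squares, which a statistician would call estimating a regression coefficient and a machine learner would call supervised learning from a single feature. The vocabulary differs; the arithmetic is identical.

```python
# "feature" (machine learning) / "explanatory variable" (statistics)
x = [1.0, 2.0, 3.0, 4.0, 5.0]
# "label" (machine learning) / "response variable" (statistics)
y = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(x)
x_bar = sum(x) / n  # central tendency of x
y_bar = sum(y) / n  # central tendency of y

# Slope estimate: covariance of x and y over variance of x.
# The statistician allocates variance; the machine learner
# minimizes squared prediction error. Same answer either way.
slope = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
         / sum((xi - x_bar) ** 2 for xi in x))
intercept = y_bar - slope * x_bar

def predict(new_x):
    """Predict (ML) / fit a value (statistics) for a new observation."""
    return intercept + slope * new_x
```

Either community would recognize everything in this snippet; only the words used to describe it change at the department door.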

The frequentist/Bayesian distinction is trivial compared to the statistician/engineer divide, at least in terms of tradition. But in the end, the tools and techniques overlap hugely, and learning both perspectives is well worth the effort.