This blog post summarizes my notes on *Adversarial Examples from Computational Constraints* by Sébastien Bubeck, Eric Price, and Ilya Razenshteyn (2018). The paper asks a fundamental question in machine learning: Why are adversarial examples a thing, anyway? What's stopping us from building the robust classifiers of our dreams?