The Cult of Universality in Statistical Learning Theory
The question is frequently raised as to why the theory and practice of machine learning are so divergent. Glance at any article on classification and chances are you will find symbol upon lemma and equation upon inequality, making claims about bounds on error rates that putatively should guide the engineer in solving her problem.
The situation, however, seems to be that the engineer, having been forewarned by her pragmatic colleagues (or having checked a few herself) that these bounds are vacuous for most realistic problems, circumvents them altogether in her search for any useful nuggets in the article.
So why do these oft-ignored analyses persist in a field largely composed of engineers? From my brief survey of the literature, it seems that one (but by no means the only) reason is the needless preponderance of worst-case thinking. (Being a Panglossian believer in the purity of science and in the intentions of its workers, I immediately dismiss the cynical suggestion that these analyses are appended to an article only to intimidate the insecure reviewer.)
The cult of universality
An inventive engineer designs a learning algorithm for her problem of classifying birds from the recordings of their calls. She suspects that her algorithm is more generally applicable and sits down to analyze it formally. She vaguely recalls various neat generalization error bounds she learned about during her days at the university, and wonders if they are applicable.
The bounds made claims of the kind:
“For my classifier whose complexity is c, if trained on m examples, then for any distribution that generated the data, it is guaranteed that
generalization error rate ≤ error rate on the training set + some function of (c, m)
with high probability.”
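To see how such a bound behaves numerically, here is a small sketch using the classic Vapnik form of the VC confidence term; the specific complexity and sample-size numbers are illustrative assumptions of mine, not taken from any particular paper:

```python
import math

def vc_bound_gap(vc_dim, m, delta=0.05):
    """Classic VC-style confidence term: with probability at least 1 - delta,
    generalization error <= training error + this gap."""
    return math.sqrt((vc_dim * (math.log(2 * m / vc_dim) + 1)
                      + math.log(4 / delta)) / m)

# Hypothetical numbers: a classifier of VC dimension 1000
# trained on 10,000 examples.
print(f"{vc_bound_gap(vc_dim=1000, m=10_000):.3f}")  # ~0.63 on top of training error
print(f"{vc_bound_gap(vc_dim=1000, m=2_000):.3f}")   # exceeds 1: says nothing at all
```

Even with ten times as many examples as the VC dimension, the bound concedes a gap of roughly 0.63 over the training error; with only twice as many, it is vacuous.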
Some widely used measures of the complexity of a classifier are its VC dimension and its Rademacher complexity, both of which measure the ability of the classifier to fit arbitrary labelings of a training set. The intuition is that if the classifier can imitate any arbitrary labeling of a set of vectors, it will generalize poorly.
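As a rough sketch of that intuition (my own illustration): the empirical Rademacher complexity is the expectation, over random ±1 labelings, of the best correlation any classifier in the class achieves with the random labels. A class that can memorize every labeling attains the maximal value of 1, while a modest class such as 1-D thresholds cannot:

```python
import random

def empirical_rademacher(points, best_correlation, trials=2000, seed=0):
    """Monte Carlo estimate of E_sigma[ sup_h (1/m) sum_i sigma_i h(x_i) ].
    `best_correlation(points, sigma)` returns the sup over the class."""
    rng = random.Random(seed)
    m = len(points)
    total = 0.0
    for _ in range(trials):
        sigma = [rng.choice((-1, 1)) for _ in range(m)]
        total += best_correlation(points, sigma) / m
    return total / trials

# Class 1: 1-D thresholds, h_t(x) = sign(x - t), either orientation.
def best_threshold_correlation(points, sigma):
    order = sorted(range(len(points)), key=lambda i: points[i])
    best = abs(sum(sigma))  # threshold below or above all points
    for k in range(1, len(points)):
        # threshold between the k-th and (k+1)-th smallest point
        corr = sum(-sigma[i] for i in order[:k]) + sum(sigma[i] for i in order[k:])
        best = max(best, abs(corr))
    return best

# Class 2: a "memorizing" class that can realize any labeling exactly.
def best_memorizer_correlation(points, sigma):
    return float(len(points))

xs = [0.1 * i for i in range(20)]
print(empirical_rademacher(xs, best_threshold_correlation))  # well short of 1
print(empirical_rademacher(xs, best_memorizer_correlation))  # exactly 1.0
```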
Because of the phrase “for any distribution” in the statement of the bound, the bound is said to be universally applicable. It is this pursuit of universality which is a deplorable manifestation of worst-case thinking. It is tolerable in mathematicians who delight in pathologies, but can be debilitating in engineers.
The extent of pessimism induced by the requirement of universality is not well appreciated. The following example is designed to illustrate this by relaxing the requirement from “any distribution” to “any smooth distribution”, which is not much of a relaxation at all.
Assume that I have a small training data set in R^d, drawn from a continuous distribution D. Assume further that D is reasonably smooth.
I now build a linear classifier under some loss (say an SVM). I then take all the training examples that are misclassified by the linear classifier and memorize them along with their labels.
For a test vector x, if x is within ε of a memorized training example, I give it the label of that training example. Otherwise I use the linear classifier to obtain my prediction.
I can make ε very small, and since the training examples will be in general position with probability one, this classification scheme is unambiguous.
This classifier will have zero error on any training set and therefore will have high complexity according to the usual complexity measures such as VC dimension and Rademacher complexity. However, if I ignore the contribution of the memorized points (which play a role only on a set of vanishingly small probability), I have a linear classifier.
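The construction can be sketched directly; the class name, the fixed weight vector, and the tiny dataset below are my own illustrative choices:

```python
class LinearPlusMemorization:
    """A fixed linear rule augmented with memorization of its training
    mistakes, as in the construction above."""

    def __init__(self, weights, bias=0.0, eps=1e-9):
        self.w, self.b, self.eps = weights, bias, eps
        self.memorized = []  # (point, label) pairs the linear rule got wrong

    def _linear(self, x):
        score = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if score >= 0 else -1

    def fit(self, X, y):
        # Memorize exactly the training points the linear rule misclassifies.
        self.memorized = [(x, yi) for x, yi in zip(X, y)
                          if self._linear(x) != yi]

    def predict(self, x):
        # Within eps of a memorized point: return its stored label.
        for p, label in self.memorized:
            if sum((xi - pi) ** 2 for xi, pi in zip(x, p)) ** 0.5 < self.eps:
                return label
        return self._linear(x)

# A tiny 2-D set the linear rule w = (1, 0) cannot fully separate:
X = [(1.0, 0.0), (2.0, 1.0), (-1.0, 0.0), (0.5, -1.0)]
y = [1, 1, -1, -1]  # (0.5, -1.0) is labeled -1 but lies on the +1 side
clf = LinearPlusMemorization(weights=(1.0, 0.0))
clf.fit(X, y)
print([clf.predict(x) for x in X] == y)  # True: zero training error
```

Away from the single memorized point, the classifier behaves exactly like the linear rule, yet any shattering-based complexity measure must account for the memorization.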
Therefore, although it is reasonable to expect any analysis to yield very similar bounds on the generalization error for a linear classifier and my linear+memorization classifier, the requirement of universality leads to vacuous bounds for the latter.
Even if I assume nothing more than smoothness, I do not know how to derive reasonable statements with the existing tools. And we almost always know much more about the data distributions!
To reiterate, checking one’s learning algorithm against the worst possible distribution is akin to designing a bicycle and checking how well it serves for holding up one’s pants.
“The medicine bottle rules”
Our engineer ponders these issues, muses about the “no free lunch” results, which imply that for any two classifiers there are distributions for which either one is better than the other, and wonders about the philosophical distinction between a priori restricting the function space that the learning algorithm searches in, and a priori restricting the distributions that the learning algorithm is applicable to.
After a short nap, she decides on a sensible route for her analysis.
1. State the restrictions on the distribution. She shows that her algorithm will perform very well if her assumptions about the data distribution are satisfied. She further argues that the allowed distributions are still broad enough to cover many other problems.
2. State to what extent the assumptions can be violated. She analyzes how the quality of her algorithm degrades when the assumptions are satisfied only approximately.
3. State which assumptions are necessary. She analyzes the situations where her algorithm will definitely fail.
I believe that these are good rules to follow while analyzing classification algorithms. My professor George Nagy calls these the medicine bottle rules because, like a medicine label, they require information on how to administer the drug, what it is for, what it is bad for, and perhaps on interesting side effects.
I do not claim to follow this advice unfailingly and I admit to some of the above crimes. I, however, do believe that medicine bottle analysis is vastly more useful than much of what passes for learning theory. I look forward to hearing from you, nimble reader, of your thoughts on the kinds of analyses you would care enough about to read.