Probabilistic Programming for Bayesian Computation: Part I

However, the situation seems to be that the engineer, having been forewarned by her pragmatic colleagues (or having checked a few herself) that these bounds are vacuous for most realistic problems, circumvents them altogether in her search for any useful nuggets in the article.

So why do these oft-ignored analyses still persist in a field that is largely composed of engineers? From my brief survey of the literature it seems that one (but, by no means, the only) reason is the needless preponderance of *worst-case thinking*. (Being a Panglossian believer in the purity of science and in the intentions of its workers, I immediately dismiss the cynical suggestion that these analyses are appended to an article only to intimidate the insecure reviewer.)

**The cult of universality**

An inventive engineer designs a learning algorithm for her problem of classifying birds from the recordings of their calls. She suspects that her algorithm is more generally applicable and sits down to analyze it formally. She vaguely recalls various neat *generalization error* bounds she learned about during her days at the university, and wonders if they are applicable.

The bounds made claims of the kind

“for my classifier whose complexity is $c$, if trained on $m$ examples, then for any distribution that generated the data, it is guaranteed that the

generalization error rate $\leq$ error rate on the training set + some function of $(c, m)$

with high probability”.

Some widely used measures of the complexity of a classifier are its VC dimension and its Rademacher complexity, both of which measure the ability of the classifier to separate *any* training set. The intuition is that if the classifier can imitate any arbitrary labeling of a set of vectors, it will generalize poorly.

Because of the phrase “for any distribution” in the statement of the bound, the bound is said to be *universally* applicable. It is this pursuit of *universality* which is a deplorable manifestation of worst-case thinking. It is tolerable in mathematicians who delight in pathologies, but can be debilitating in engineers.

The extent of pessimism induced by the requirement of universality is not well appreciated. The following example is designed to illustrate this by relaxing the requirement from “any distribution” to “any smooth distribution”, which is not much of a relaxation at all.

Assume that I have a small training data set in $\mathbb{R}^d$ drawn from a continuous distribution $P$. Assume further that $P$ is reasonably smooth.

I now build a linear classifier under some loss (say an SVM). I then take all the training examples that are misclassified by the linear classifier and memorize them along with their labels.

For a test vector $x$, if $x$ is within $\epsilon$ of a memorized training example, I give it the label of that training example. Otherwise I use the linear classifier to obtain my prediction.

I can make $\epsilon$ very small, and since the training examples will be in general position with probability one, this classification scheme is unambiguous.
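As a concrete sketch, here is one way the linear + memorization construction might look in code. The class name, the $\epsilon$ value, and the regularized least-squares linear fit are my own illustrative choices (the text only says "some loss (say an SVM)"):

```python
import numpy as np

class LinearPlusMemorization:
    """Toy sketch of the linear + memorization classifier described above."""

    def __init__(self, eps=1e-6):
        self.eps = eps  # radius of the memorization balls

    def fit(self, X, y):  # labels y in {-1, +1}
        # Fit a plain linear classifier (here: regularized least squares).
        Xb = np.hstack([X, np.ones((len(X), 1))])
        self.w = np.linalg.solve(Xb.T @ Xb + 1e-6 * np.eye(Xb.shape[1]), Xb.T @ y)
        preds = np.sign(Xb @ self.w)
        # Memorize the misclassified training examples along with their labels.
        wrong = preds != y
        self.mem_X, self.mem_y = X[wrong], y[wrong]
        return self

    def predict(self, X):
        Xb = np.hstack([X, np.ones((len(X), 1))])
        out = np.sign(Xb @ self.w)
        # Override the linear prediction within eps of a memorized point.
        for i, x in enumerate(X):
            if len(self.mem_X):
                d = np.linalg.norm(self.mem_X - x, axis=1)
                j = np.argmin(d)
                if d[j] < self.eps:
                    out[i] = self.mem_y[j]
        return out

# Toy usage: a noisy linearly-separable-ish problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=40))
y[y == 0] = 1.0
clf = LinearPlusMemorization().fit(X, y)
```

By construction the training error is exactly zero, while for test points the classifier is, with probability one, just the linear classifier.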

This classifier will have zero error on all training sets and therefore will have high complexity according to the usual complexity measures like VC, Rademacher etc. However, if I ignore the contribution of the memorized points (which only play a role for a set of vanishingly small probability), I have a linear classifier.

Therefore, although it is reasonable to expect any analysis to yield very similar bounds on the generalization error for a linear classifier and my linear+memorization classifier, the requirement of universality leads to vacuous bounds for the latter.

Even if I assume nothing more than smoothness, I do not know how to derive reasonable statements with the existing tools. And we almost always know much more about the data distributions!

To reiterate, checking one’s learning algorithm against the worst possible distribution is akin to designing a bicycle and checking how well it serves for holding up one’s pants.

**“The medicine bottle rules”**

Our engineer ponders these issues, muses about the “no free lunch” results that imply that for any two classifiers there are distributions for which either one of them is better than the other, and wonders about the philosophical distinction between *a priori* restricting the function space that the learning algorithm searches in, and *a priori* restricting the distributions that the learning algorithm is applicable for.

After a short nap, she decides on a sensible route for her analysis.

1. *State the restrictions on the distribution*. She shows that her algorithm will perform very well if her assumptions about the data distribution are satisfied. She further argues that the allowed distributions are still broad enough to cover many other problems.

2. *State to what extent the assumptions can be violated*. She analyzes how the quality of her algorithm degrades when the assumptions are satisfied only approximately.

3. *State which assumptions are necessary*. She analyzes the situations where her algorithm will definitely fail.

I believe that these are good rules to follow while analyzing classification algorithms. My professor George Nagy calls these the *medicine bottle rules*, because, as on a medicine label, we require information on how to administer the drug, what it is for, what it is bad for, and perhaps on interesting side effects.

I do not claim to follow this advice unfailingly and I admit to some of the above crimes. I, however, do believe that medicine bottle analysis is vastly more useful than much of what passes for learning theory. I look forward to hearing from you, nimble reader, of your thoughts on the kinds of analyses you would care enough about to read.

We know that for any p.d. kernel there exists a *deterministic* map that has the aforementioned property but it may be infinite dimensional. The paper presents results indicating that with the randomized map we can get away with only a “small” number of features (at least for a classification setting).

Before applying the method to density estimation let us review the relevant section of the paper briefly.

**Bochner’s Theorem and Random Fourier Features**

Assume that we have data in $\mathbb{R}^d$ and a continuous p.d. kernel $k(x, y)$ defined for every pair of points $x, y \in \mathbb{R}^d$. Assume further that the kernel is shift-invariant, i.e., $k(x, y) = k(x - y)$, and that the kernel is scaled so that $k(0) = 1$.

The theorem by Bochner states that under the above conditions $k(x - y)$ must be the Fourier transform of a non-negative measure on $\mathbb{R}^d$. In other words, there exists a probability density function $p(\omega)$ such that

$k(x - y) = \int p(\omega)\, e^{i \omega^T (x - y)}\, d\omega = E_\omega\left[\cos(\omega^T (x - y))\right] \quad (1)$

$= E_\omega\left[\cos(\omega^T x)\cos(\omega^T y) + \sin(\omega^T x)\sin(\omega^T y)\right] \quad (2)$

where (1) is because $k$ is real. Equation (2) says that if we draw a random vector $\omega$ according to $p(\omega)$ and form two vectors $z_\omega(x) = [\cos(\omega^T x), \sin(\omega^T x)]$ and $z_\omega(y) = [\cos(\omega^T y), \sin(\omega^T y)]$, then the expected value of $z_\omega(x)^T z_\omega(y)$ is $k(x - y)$.

Therefore, for $x \in \mathbb{R}^d$, if we choose the transformation

$z(x) = \frac{1}{\sqrt{D}}\left[\cos(\omega_1^T x), \sin(\omega_1^T x), \ldots, \cos(\omega_D^T x), \sin(\omega_D^T x)\right]$

with $\omega_1, \ldots, \omega_D$ drawn i.i.d. according to $p(\omega)$, linear inner products in this transformed space will approximate $k(x, y)$.

**Gaussian RBF Kernel**

The Gaussian radial basis function kernel satisfies all the above conditions and we know that the Fourier transform of the Gaussian is another Gaussian (with the reciprocal variance). Therefore for “linearizing” the Gaussian r.b.f. kernel, we draw samples from a Gaussian distribution for the transformation.
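A minimal sketch of this linearization for the Gaussian kernel (function names and parameter values are mine, not from the paper):

```python
import numpy as np

def rff_map(X, omegas):
    """z(x) = (1/sqrt(D)) [cos(w_1.x), sin(w_1.x), ..., cos(w_D.x), sin(w_D.x)]."""
    P = X @ omegas.T                       # projections, shape (n, D)
    return np.hstack([np.cos(P), np.sin(P)]) / np.sqrt(omegas.shape[0])

rng = np.random.default_rng(0)
d, D, sigma = 3, 5000, 1.5
# For the Gaussian kernel exp(-||x - y||^2 / (2 sigma^2)), Bochner's density
# p(w) is a Gaussian with the reciprocal variance: w ~ N(0, I / sigma^2).
omegas = rng.normal(scale=1.0 / sigma, size=(D, d))

x = rng.normal(size=(1, d))
y = rng.normal(size=(1, d))
exact = np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))
approx = float(rff_map(x, omegas) @ rff_map(y, omegas).T)
# approx tracks exact up to O(1/sqrt(D)) Monte Carlo error
```

The identity $\cos a \cos b + \sin a \sin b = \cos(a - b)$ is what makes the inner product an unbiased estimate of the kernel value.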

**Parzen Window Density Estimation**

Given a data set $x_1, \ldots, x_n \in \mathbb{R}^d$, the so-called Parzen window probability density estimator is defined as follows

$\hat{p}(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{h^d}\, k\!\left(\frac{x - x_i}{h}\right)$

where $k$ is often a positive, symmetric, shift-invariant kernel and $h$ is the bandwidth parameter that controls the scale of influence of the data points.

A common kernel that is used for Parzen window density estimation is the Gaussian density. If we make the same choice, we can apply our feature transformation to linearize the procedure. We have

$\hat{p}(x) \propto \frac{1}{n} \sum_{i=1}^{n} k(x - x_i) \approx \frac{1}{n} \sum_{i=1}^{n} z(x)^T z(x_i) = z(x)^T \left( \frac{1}{n} \sum_{i=1}^{n} z(x_i) \right)$

where $h$ has been absorbed into the kernel variance.

Therefore all we need to do is take the mean of the transformed data points and estimate the pdf at a new point to be (proportional to) the inner product of its transformed feature vector with the mean.
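A minimal sketch of the linearized Parzen estimate (names and parameter values are illustrative choices of mine):

```python
import numpy as np

def rff_map(X, omegas):
    """Random Fourier feature map for a Gaussian kernel."""
    P = X @ omegas.T
    return np.hstack([np.cos(P), np.sin(P)]) / np.sqrt(omegas.shape[0])

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 2))                 # training points
omegas = rng.normal(scale=1.0, size=(3000, 2))   # bandwidth absorbed into scale

# The whole data set is summarized by a single mean feature vector.
mean_z = rff_map(data, omegas).mean(axis=0)

def density_score(x):
    """Proportional to the Parzen estimate at x: a single inner product."""
    return float(rff_map(np.atleast_2d(x), omegas) @ mean_z)
```

A point near the data mass (e.g. the origin here) should score much higher than a point far from it.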

Of course, since the kernel value is only approximated by the inner product of the random Fourier features, we expect that the estimated pdf will differ from a plain unadorned Parzen window estimate. But different how?

**Experiments**

Below are some pictures showing how the method performs on some synthetic data. I generated a few dozen points from a mixture of Gaussians and plotted contours of the estimated pdf for the region around the points. I did this for several choices of $D$ (the number of random features) and $\gamma$ (the scale parameter for the Gaussian kernel).

First let us check that the method performs as expected for large values of $D$, where the kernel value is well approximated by the inner product of the Fourier features. The first 3 pictures are for a large value of $D$ and various values of $\gamma$.


Now let us see what happens when we decrease $D$. We expect the error in approximating the kernel to lead to an obviously erroneous pdf. This is clearly evident for small $D$.


The following picture, for another choice of $D$ and $\gamma$, is even stranger.


**Discussion**

Even for a simple 2D example, we seem to need to compute a very large number of random Fourier features to make the estimated pdf accurate. (For this small example this is very wasteful, since a plain Parzen window estimate would require less memory and computation.)

However, the pictures do indicate that if the approach is to be used for outlier detection (aka novelty detection) *from a given data set*, we might be able to get away with a much smaller $D$. That is, even if the estimated pdf has a big error on the entire space, on the points from the data it seems to be reasonably accurate.

**Experiments on Synthetic Data**

I tried the algorithm on some synthetic data and a linear logistic regression model. The results are shown in the figures below.

In both examples, there are examples from two classes (red and blue). Each class is drawn from a mixture of two normal distributions (i.e., there are two *types* per class).

The types are shown as red squares and red circles, and blue diamonds and blue triangles. Class-conditionally the types have a skewed distribution. There are 9 times as many red squares as red circles, and 9 times as many blue diamonds as triangles.

We would expect a plain logistic regression classifier to minimize the overall “error” on the training data.

However since an adversary may assign a different set of costs to the various types (than those given by the type frequencies) a minimax classifier will hopefully try to avoid incurring a large number of errors on the most confusable types.

**Example 1**


Recall that as $\gamma$ decreases to zero, the adversary has more cost vectors at his disposal, meaning that the algorithm optimizes for a worse assignment of costs.

**Example 2**

**Discussion**

1. Notice that the minimax classifier trades off more errors on more frequent types for lower error on the less frequent ones. As we said before, this may be desirable if the type distribution in the training data is not representative of what is expected in the test data.

2. Unfortunately we didn’t quite get it to help on the named-entity recognition problem that motivated the work.

**Cost Sensitive Loss Functions**

Given a training data set $(x_1, y_1), \ldots, (x_m, y_m)$, most learning algorithms learn a classifier that is parametrized by a vector $w$ by minimizing a loss function

$L(w) = \sum_{i=1}^{m} l(x_i, y_i, w) + \lambda R(w)$

where $l(x_i, y_i, w)$ is the loss on example $i$ and $R(w)$ is some function that penalizes complexity. For example, for logistic regression the loss function looks like

$L(w) = \sum_{i=1}^{m} \log\left(1 + e^{-y_i w^T x_i}\right) + \lambda \|w\|^2$

for some $\lambda > 0$.

If, in addition, the examples came with costs $c_1, \ldots, c_m$ (that somehow specify the importance of minimizing the loss on each particular example), we can perform cost sensitive learning by over/under-sampling the training data, or by minimizing a cost-weighted loss function

$L(w, c) = \sum_{i=1}^{m} c_i\, l(x_i, y_i, w) + \lambda R(w)$

(see this paper by Zadrozny et al.).

We further constrain $c_i \geq 0$ and $\sum_{i=1}^{m} c_i = m$. So the unweighted learning problem corresponds to the case where all $c_i = 1$.

**A Game Against An Adversary**

Assume that the learner is playing a game against an adversary that will assign the costs to the training examples that will lead to the worst possible loss for any weight vector the learner produces.

How do we learn in order to minimize this maximum possible loss? The solution is to look for the *minimax* solution

$\min_w \max_c L(w, c).$

For any realistic learning problem the above optimization problem does not have a unique solution.

Instead, let us assume that the adversary has to pay a price for assigning his costs, which depends upon how much they deviate from uniform. One way is to make the price proportional to the negative of the entropy of the cost distribution, so that the adversary now maximizes

$L(w, c) + \gamma H(c)$

where $H(c) = -\sum_{i=1}^{m} c_i \log c_i$ (the Shannon entropy of the cost vector, save the normalization to sum to one).

The new minimax optimization problem can be posed as

$\min_w \max_c \left[ L(w, c) + \gamma H(c) \right]$

subject to the constraints

$c_i \geq 0, \quad \sum_{i=1}^{m} c_i = m.$

Note that the regularization term on the cost vector essentially restricts the set of possible cost vectors the adversary has at his disposal.

**Optimization**

For convex loss functions (such as the logistic loss), $L(w, c)$ is convex in $w$ for a fixed cost assignment, and therefore so is $\max_c \left[ L(w, c) + \gamma H(c) \right]$. Furthermore, the objective is concave in $c$ and $c$ is restricted to a convex and compact set. We can therefore apply Danskin’s theorem to perform the optimization.

The theorem allows us to say that, for a fixed weight vector $w$, if

$c^*(w) = \arg\max_c \left[ L(w, c) + \gamma H(c) \right]$

and if $c^*(w)$ is unique, then

$\nabla_w \max_c \left[ L(w, c) + \gamma H(c) \right] = \nabla_w L(w, c^*(w))$

even though $c^*$ is a function of $w$.

**Algorithm**

The algorithm is very simple. Perform until convergence the following

1. At the $t^{th}$ iteration, for the weight vector $w_t$, find the cost vector $c^*(w_t)$ that maximizes $L(w_t, c) + \gamma H(c)$.

2. Update $w_{t+1} = w_t - \eta_t \nabla_w L(w_t, c^*(w_t))$, where $\eta_t$ is the learning rate.

The maximization in step 1 is also simple and can be shown to be

$c_i^*(w) = m\, \frac{\exp\left(l(x_i, y_i, w)/\gamma\right)}{\sum_{j=1}^{m} \exp\left(l(x_j, y_j, w)/\gamma\right)}.$

As expected, if $\gamma$ is large the costs remain close to one, and as $\gamma \to 0$ the entire cost budget is allocated to the example with the largest loss.
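Putting the two steps together for the logistic loss, a minimal sketch (function names and step sizes are mine; the softmax form of the adversarial costs is my own derivation of the step-1 maximizer, so treat it as a sketch):

```python
import numpy as np

def losses(w, X, y):
    """Per-example logistic loss; labels y are in {-1, +1}."""
    return np.log1p(np.exp(-y * (X @ w)))

def adversarial_costs(l, gamma):
    """Maximizer of sum_i c_i l_i + gamma H(c) s.t. c_i >= 0, sum_i c_i = m:
    a softmax over the per-example losses at temperature gamma."""
    e = np.exp((l - l.max()) / gamma)    # shift for numerical stability
    return len(l) * e / e.sum()

def minimax_fit(X, y, gamma=1.0, lr=0.1, iters=500, lam=1e-3):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        c = adversarial_costs(losses(w, X, y), gamma)   # step 1: worst costs
        p = 1.0 / (1.0 + np.exp(-y * (X @ w)))          # sigma(y w.x)
        grad = -(X.T @ (c * y * (1.0 - p))) + lam * w   # step 2: fixed c*
        w -= lr * grad / len(y)
    return w
```

With a very large $\gamma$ the costs stay near one (plain learning); with a tiny $\gamma$ the whole budget lands on the hardest example.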

**Of types and tokens**

This line of work was motivated by the following intuition of my colleague Marc Light about the burstiness of types in language data.

For named entity recognition the training data is often drawn from a small time window and is likely to contain entity types whose distribution is not representative of the data that the recognizer is going to see in general.

(The fact that “Joe Plumber” occurs so frequently in our data is because we were unlucky enough to collect annotated data in 2008.)

We can build a recognizer that is robust to such misfortunes by optimizing for the worst possible *type* distribution rather than for the observed *token* distribution. One way to accomplish this is to learn the classifier by minimax over the cost assignments for different types.

For type $t$ let $S_t$ be the set of all tokens of that type and $m_t$ be the number of tokens of that type. We now estimate $w$ by

$\min_w \max_c \left[ \sum_t c_t \sum_{i \in S_t} l(x_i, y_i, w) - \gamma\, KL(c \| q) \right]$

under the same constraints on $c$ as above. Here $q$ is the observed type distribution in the training data and $KL(\cdot \| \cdot)$ is the KL-divergence.

The algorithm is identical to the one above except the maximum over $c$ for a fixed $w$ is slightly different.

**Related Work and Discussion**

1. The only other work I am aware of that optimizes for a similar notion of robustness is the one on an adversarial view of covariate shift by Globerson et al. and the NIPS paper by Bruckner and Scheffer. Both these papers deal with minimax learning for robustness to additive transformation of feature vectors (or addition/deletion of features). Although it is an obvious extension, I have not seen the regularization term that restricts the domain for the cost vectors. I think it allows for learning models that are not overly pessimistic.

2. If each class is considered to be one type, the usual Duda & Hart kind of minimax over class priors can be obtained. Minimax estimation is usually done for optimizing for the worst possible prior over the parameter vectors ($w$ for us) and not for the costs over the examples.

3. For named entity recognition, the choice of how to group examples by types is interesting and requires further theory and experimentation.

4. For information retrieval often the ranker is learned from several example queries. The learning algorithm tries to obtain a ranker that matches human judgments for the document collection for the example queries. Since the queries are usually sampled from the query logs, the learned ranker may perform poorly for a *particular* user. Such a minimax approach may be suitable for optimizing for the worst possible assignment of costs over query types.

In the next post I will present some experimental results on toy examples with synthetic data.

**Acknowledgment**

I am very grateful to Michael Bruckner for clarifying his NIPS paper and some points about the applicability of Danskin’s theorem, and to Marc Light for suggesting the problem.

(As a brief aside, I note that an essentially identical approach was used to sparsify Gaussian Process Regression by Snelson and Ghahramani. For GPR they use gradient ascent on the log-likelihood to learn the prototypes and labels, which is akin to learning the prototypes and betas for logistic regression. The set of prototypes and labels generated by their algorithm can be thought of as a pseudo training set.)

I recently (with the help of my super-competent Java developer colleague Hiroko Bretz) implemented the sparse kernel logistic regression algorithm. The learning is done in an online fashion (i.e., using stochastic gradient descent).

It seems to perform reasonably well on large datasets. Below I’ll show its behavior on some pseudo-randomly generated classification problems.

All the pictures below are for logistic regression with the Gaussian RBF kernel. All data sets have 1000 examples from three classes which are mixtures of Gaussians in 2D (shown in red, blue and green). The left panel is the training data and the right panel are the predictions on the same data set by the learned logistic regression classifier. The prototypes are shown as black squares.

**Example 1 (using 3 prototypes)**

Although the classifier changes considerably from iteration to iteration, the prototypes do not seem to change much.

**Example 2 (five prototypes)**

**Example 3 (five prototypes)**

The right most panel shows the first two “transformed features”, i.e., the kernel values of the examples to the first two prototypes.

**Implementation details and discussion**

The algorithm runs through the whole data set to update the betas (fixing everything else), then runs over the whole data set again to update the prototypes (fixing the betas and the kernel params), and then another time for the kernel parameter. These three update steps are repeated until convergence.
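A runnable sketch of this alternating scheme for a binary problem (the kernel-parameter pass is omitted for brevity; the RBF form $k(x, p) = \exp(-\gamma \|x - p\|^2)$, all names, and the learning rates are my own illustrative choices):

```python
import numpy as np

def sparse_klr_sgd(X, y, n_proto=5, gamma=1.0, lr=0.2, epochs=10, seed=0):
    """Sparse kernel logistic regression by alternating SGD; y in {0, 1}."""
    rng = np.random.default_rng(seed)
    # Initialize prototypes as a random subset of the training data.
    P = X[rng.choice(len(X), n_proto, replace=False)].copy()
    beta, b = np.zeros(n_proto), 0.0
    for _ in range(epochs):
        # Pass 1: update the betas, everything else fixed.
        for i in rng.permutation(len(X)):
            k = np.exp(-gamma * np.sum((P - X[i]) ** 2, axis=1))
            err = 1.0 / (1.0 + np.exp(-(beta @ k + b))) - y[i]  # sigma - y
            beta -= lr * err * k
            b -= lr * err
        # Pass 2: update the prototypes, betas (and gamma) fixed.
        for i in rng.permutation(len(X)):
            diff = X[i] - P
            k = np.exp(-gamma * np.sum(diff ** 2, axis=1))
            err = 1.0 / (1.0 + np.exp(-(beta @ k + b))) - y[i]
            P -= lr * err * 2.0 * gamma * (beta * k)[:, None] * diff
    return P, beta, b

# Toy problem: two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(100, 2)) * 0.5 + [-2, 0],
               rng.normal(size=(100, 2)) * 0.5 + [2, 0]])
y = np.array([0] * 100 + [1] * 100)
P, beta, b = sparse_klr_sgd(X, y)
```

The prototype gradient follows from the chain rule through the RBF kernel: $\partial k / \partial p_j = 2\gamma (x - p_j) k_j$.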

As an indication of the speed, it takes about 10 minutes until convergence with 50 prototypes, on a data set with a quarter million examples and about 7000 binary features (about 20 non-zero features/example).

I had to make some approximations to make the algorithm fast — the prototypes had to be updated lazily (i.e., only the feature indices that have the feature ON are updated), and the RBF kernel is computed using the distance only along the subspace of the ON features.

The kernel parameter updating worked best when the RBF kernel was re-parametrized so that the parameter being updated is unconstrained.

The learning rate for the betas was annealed, but those for the prototypes and the kernel parameter were fixed at a constant value.

Finally, and importantly, I did not play much with the initial choice of the prototypes. I just picked a random subset from the training data. I think more clever ways of initialization will likely lead to much better classifiers. Even a simple approach like K-means will probably be very effective.

The task was to produce an automatic metric to evaluate machine generated summaries (i.e., *system* summaries) against human generated summaries for the TAC ’09 Update Summarization Task. Clearly the automatic metric is just some function that produces a similarity score between the system summary and the human generated (the so-called *model*) summary.

The proposed metrics were evaluated by comparing their rankings of the system summaries from different peers to that of the ranking produced by human judges.

We use an estimate of the conditional “compressibility” of the model summary given the system summary as the similarity metric. The conditional compressibility is defined as the increase in the compressibility of the model summary when the system summary has been observed.

In order to judge the similarity of the system summary $S$ to the model summary $M$, we propose to use the difference in the compressibility of $M$ when $S$ is not seen to when $S$ is given. This metric basically captures the reduction in the uncertainty in $M$ when $S$ is known.

We define the compressibility of any string $S$ by

$C(S) = \frac{H(S)}{|S|}$

and the conditional compressibility of a string $M$ over an alphabet given another string $S$ over the same alphabet as

$C(M|S) = \frac{H(SM) - H(S)}{|M|}$

where $SM$ is the concatenation of the strings $S$ and $M$, $H(S)$ is the entropy of string $S$, and $|S|$ is the length of the string $S$.

The fractional increase in compressibility of $M$ given $S$ can then be measured by

$FraCC(M, S) = \frac{C(M) - C(M|S)}{C(M)}.$

We use $FraCC(M, S)$ as the similarity metric to measure the similarity of a system summary $S$ to the model summary $M$.

Our metric is similar to the one proposed by Li and Vitanyi and is theoretically well-justified from the perspective of algorithmic information theory. One peculiarity is that our similarity is asymmetric.

The only thing that is needed to implement the above similarity metric is an estimate of the entropy $H(S)$ for a string $S$. We use the BWT for this estimate.

**BWT-based String Entropy Estimate**

We use the Move-To-Front (MTF) entropy of the Burrows-Wheeler transform of a given string as an estimate for its entropy $H(S)$.

The MTF encoding of a string is performed by traversing the string and assigning to each symbol the position of that symbol in the alphabet and then moving the symbol to the front of the alphabet. Therefore a sequence with a lot of runs will have a lot of zeros in its MTF encoding.

In this paper the MTF coding is used to define the MTF entropy (which the authors also call *local entropy*) of a string $S$ as

$MTFE(S) = \sum_{i} \log\left(1 + mtf_i(S)\right)$

where $mtf_i(S)$ is the $i^{th}$ symbol of the MTF coding of the string $S$.

Now we define $H(S)$, the entropy of string $S$, as

$H(S) = MTFE(BWT(S))$

where $BWT(S)$ is the BWT of string $S$.

Since the Burrows-Wheeler transform involves just the construction of a suffix array, the computation of our compression based evaluation metric is linear in time and space in the length of the model and system summary strings.
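A toy end-to-end implementation of these pieces (a quadratic rotation-sort BWT stands in for the linear suffix-array construction, and the function names and base-2 logarithm are my own choices):

```python
import math

def bwt(symbols):
    """Burrows-Wheeler transform via sorted rotations (toy quadratic
    version; the real construction uses a suffix array and is linear)."""
    t = list(symbols) + ["\0"]           # unique sentinel, sorts first
    rots = sorted(t[i:] + t[:i] for i in range(len(t)))
    return [r[-1] for r in rots]

def mtf_entropy(symbols):
    """MTF ('local') entropy: sum over positions of log2(1 + MTF code)."""
    alphabet = sorted(set(symbols))      # MTF needs an ordered alphabet
    total = 0.0
    for s in symbols:
        i = alphabet.index(s)
        total += math.log2(1 + i)
        alphabet.insert(0, alphabet.pop(i))  # move the symbol to the front
    return total

def H(symbols):
    """Entropy estimate H(S) = MTF entropy of the BWT of S."""
    return mtf_entropy(bwt(symbols))

def fracc(model, system):
    """Fractional increase in compressibility of M given S; strings are
    lists of word symbols, per the technical details below."""
    C_m = H(model) / len(model)
    C_m_s = (H(system + model) - H(system)) / len(model)
    return (C_m - C_m_s) / C_m
```

Runs in the BWT string produce zeros in the MTF coding, which is exactly what drives the entropy estimate down for redundant strings.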

**Some Technical Details**

For our implementation, we considered each word in a string as a separate symbol. Our alphabet of symbols therefore contained all the words in the two strings being compared. The words were normalized by lower casing and removing punctuation. Because BWT needs an ordered alphabet, we used the lexicographic order on the words in the alphabet.

**Results**

The results on the TAC-AESOP task (above) show that the BWT based metric (FraCC in the table) is reasonable for summarization evaluation, especially because there are not very many knobs to tune. I obtained these results from Frank (who will present them at TAC next week). The “best metric” is the AESOP submission that seemed to have high scores across several measures.

BWT massages the original string into being more amenable to compression. Of course the transform doesn’t alter the compressibility (entropy rate) of the original string. All it does is make the string more compressible by algorithms we know.

The reason the BWT permutation is useful (as opposed to, say, sorting the string, which makes it *really* compressible) is that the reverse transform (undoing the permutation) can be done with very little additional information. Mark Nelson wrote a nice introduction to the transform. Moreover, the BWT essentially involves the construction of the suffix array for the string, and therefore can be done in time and space linear in the length of the string.

Here is an example of the Burrows-Wheeler transformation of the first stanza of Yeats’ *Sailing to Byzantium*. I added some newlines to the transformed string, and the underscores represent spaces in the original string. Notice the long runs of characters in the transformed string.

*Original string*

THAT is no country for old men. The young In one another’s arms, birds in the trees – Those dying generations – at their song, The salmon-falls, the mackerel-crowded seas, Fish, flesh, or fowl, commend all summer long Whatever is begotten, born, and dies. Caught in that sensual music all neglect Monuments of unageing intellect.

*BWTransformed string*

rsgnsnlhhs__lntsnH__T__.A____ss.,gt,.-gcd,es s,,,ode,yrgtsgrTredllssrn,edtrln,ntefemnu__fs___eh_hrC___ia__-eennlew_r_nshhhhslldrnbghrttmmgsmhvmnkielto-___nnnnna_ueesstWtTtTttTgsd__ye_teb__Fcweallolgfaaeaa_l

__mumoulr_reoeIiiueao_eouoii_aoeiueon__cm_sliM_

fbhngycrfeoeeoieiteaoctamleen’idit_o__ieu_n_cchaanta

____oa_nnosans_oomeoord_

**A useful property**

Effros et al. showed that for a string generated by a finite-memory source, the BWT of the string is asymptotically (in the length of the string) indistinguishable from a piecewise independent and identically distributed (i.i.d.) string. This is not surprising given that symbols with similar contexts appear sequentially in the BWT string, and for finite memory sources the current symbol is generated i.i.d. given a finite length context.

This property can be exploited to easily cluster words according to context by using BWT.

**Word clustering**

In this paper, among other things, Brown et al. present a word clustering algorithm based on maximizing the average mutual information between the cluster ids of adjacent words. Some results are presented in Table 2 in the paper.

Such word clusters can be useful for feature engineering for sequence tagging tasks such as part-of-speech tagging or named-entity recognition. One of the most commonly used features for such tasks is one which checks if the current word is in a carefully constructed list of words.

Brown et al. admit that, even after optimizations, their algorithm is slow, and they resort to approximations. (I realize that computers have gotten much faster since, but still their algorithm is cubic in the size of the vocabulary.)

**Word clustering based on BWT**

We will cluster two words together if they appear independently given certain contexts (albeit with different probabilities). We first perform a BW transform on the input string of words (considering each *word* as a symbol, unlike in the example above) and measure whether the two words appear independently in an i.i.d. fragment.

Instead of actually trying to chop the BWT string into i.i.d. fragments before analysis, we adopt a proxy metric. We check if the number of times the two words are next to each other in the BWT string is large compared to what we would expect from their frequencies. We compute this as a probability ratio with appropriate smoothing.

Another neat consequence of doing the clustering by BWT is that we only need to consider pairs of words that do appear next to each other in the BWT string. Therefore the selection of candidates for clustering is linear in the length of the string and not quadratic in the size of the vocabulary.
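A sketch of this pair-scoring procedure (the exact smoothed ratio used in the post is not specified, so the particular ratio and the +1 smoothing below are illustrative stand-ins):

```python
from collections import Counter

def bwt(words):
    """Burrows-Wheeler transform via sorted rotations (toy quadratic version)."""
    t = list(words) + ["\0"]
    rots = sorted(t[i:] + t[:i] for i in range(len(t)))
    return [r[-1] for r in rots]

def pair_scores(words):
    """Score word pairs by how often they sit next to each other in the
    BWT string, relative to what their frequencies alone would predict."""
    b = bwt(words)
    n = len(b)
    freq = Counter(b)
    adjacent = Counter()
    for u, v in zip(b, b[1:]):
        if u != v and u != "\0" and v != "\0":
            adjacent[tuple(sorted((u, v)))] += 1
    scores = {}
    for (u, v), c in adjacent.items():
        # Expected adjacencies if the symbols were placed independently.
        expected = 2 * (n - 1) * (freq[u] / n) * (freq[v] / n)
        scores[(u, v)] = c / (expected + 1.0)  # +1: crude smoothing
    return scores

# Words that share right contexts ("monday"/"tuesday" before "we met")
# end up adjacent in the BWT string and therefore get scored.
text = ("on monday we met . on tuesday we met . " * 10).split()
scores = pair_scores(text)
```

Note that only pairs that actually occur adjacently in the BWT string are ever considered, which is what keeps candidate selection linear in the string length.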

**Some results**

I ran this algorithm on about a month’s worth of New York Times and Wall Street Journal news data and these are the pairs of words with the highest scores.

january february 0.177721578886

january march 0.143172972502

march february 0.142398170589

englandgeoneng jerseyusanj 0.141412321852

news becdnews 0.135642386152

finala final 0.131901568726

finala finalb 0.122728309966

finala finalc 0.113085215849

cafd cea 0.107549686029

february april 0.100734422316

january april 0.0993752546848

has have 0.0967101802923

march april 0.0929933503714

did does 0.0854452561942

has had 0.0833642704346

will would 0.0827179598199

have had 0.0773517518078

…

I constructed a graph by joining all word pairs that have a score above a threshold and ran a greedy maximal clique algorithm. These are some of the resulting word clusters.
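A minimal version of that last step (one plausible reading of "greedy maximal clique"; the function name and the seed ordering are my own choices):

```python
def greedy_cliques(pairs):
    """Greedy maximal cliques over the graph of high-scoring word pairs."""
    adjacency = {}
    for u, v in pairs:
        adjacency.setdefault(u, set()).add(v)
        adjacency.setdefault(v, set()).add(u)
    used, cliques = set(), []
    for seed in sorted(adjacency):
        if seed in used:
            continue
        clique = {seed}
        # Greedily absorb neighbors compatible with everyone chosen so far.
        for w in sorted(adjacency[seed]):
            if w not in used and all(w in adjacency[c] for c in clique):
                clique.add(w)
        used |= clique
        cliques.append(sorted(clique))
    return cliques
```

For instance, the pairs (jan, feb), (feb, mar), (jan, mar), (has, had) yield the clusters {feb, jan, mar} and {had, has}.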

older young younger

announced today yesterday said reported

month today week yesterday

days month months decade year weeks years

decades months decade weeks years

com org www

writing write wrote

directed edited produced

should will probably could would may might can

worries worried concerns

work worked working works

wearing wear wore

win lost losing

man people men

against like to about that by for on in with from of at

under by on with into over from of

baton moulin khmer

daughter husband sister father wife mother son

red green blue black

ice sour whipped

time days months year years day

eastern coast southeastern

bergen orange nassau westchester

east ivory west

goes gone go going went

known seen well

travel review leisure weekly editorial

cultural financial foreign editorial national metropolitan

thursdays wednesdays fridays sundays tuesdays

thursday today monday sunday yesterday wednesday saturday friday tuesday

…

**Discussion**

1. For the above results, I only did the clustering based on right contexts. We can easily extend the word-pair score to take into account left contexts as well by concatenating the BWT of the *reversed* string to the BWT of the original string, and calculating the scores on this double length transformed string.

2. The word clustering algorithm of Brown et al. proceeds by iteratively merging the best pair of words and replacing the two words in the alphabet (and the string) by a merged word. We can imagine doing something similar with our approach, except, because BWT uses the order on the alphabet, we need to decide where to insert the merged word.

3. One thing that I should have done but didn’t for the above results is to order the alphabet (of words) lexicographically. Instead I assigned positive integers to the words based on their first appearance in the string, which is the order BWT used to sort. Fixing this should improve the results.

(I generated them using a java SVM applet from here, whose reliability I cannot swear to, but have no reason to doubt.) Both SVM boundaries are with Gaussian RBF kernels, with two different values of the kernel scale parameter, on two different data sets.

Note the segments of the boundary to the east of the blue examples in the bottom figure, and those to the south and to the north-east of the blue examples in the top figure. They seem to violate intuition.

The reason for these anomalous boundaries is of course the large complexity of the function class induced by the RBF kernel with a large scale parameter, which gives the classifier a propensity to make subtle distinctions even in regions of somewhat low example density.

**A possible solution: using complex kernels only where they are needed**

We propose to build a cascaded classifier, which we will call Incremental Complexity SVM (ICSVM), as follows.

We are given a sequence of kernels $k_1, k_2, \ldots, k_n$ of increasing complexity. An example is the sequence of polynomial kernels, where $k_d$ is the polynomial kernel with degree $d$.

The learning algorithm first learns an SVM classifier $f_1$ with kernel $k_1$ that classifies a *reasonable* portion of the examples with a large margin $m_1$. This can be accomplished by setting the SVM cost parameter to some low value.

Now all the examples outside the margin are thrown out, and another SVM classifier $f_2$ with kernel $k_2$ is learned, so that a reasonable portion of the remaining examples are classified with some large margin $m_2$.

This procedure is continued until all the examples are classified outside the margin or the set of kernels is exhausted. The final classifier is a combination of all the classifiers $f_1, f_2, \ldots$.

A test example can be classified as follows. We first apply classifier $f_1$ to the test example, and if it is classified with margin at least $\gamma_1$, we output the assigned label and stop. If not, we classify it with classifier $f_2$ in a similar fashion, and so on…
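The cascade can be sketched as follows. This is only an illustration of the scheme, not a tested implementation: kernel ridge regression stands in for the SVM at each stage (so the "margins" here are plain decision-value thresholds rather than true SVM margins), and all the names are mine.

```python
import numpy as np

def poly_kernel(A, B, degree):
    """Polynomial kernel of the given degree."""
    return (A @ B.T + 1.0) ** degree

def train_cascade(X, y, degrees, margins, lam=1e-2):
    """Train one kernelized stage per degree, discarding at each stage
    the examples already classified outside that stage's margin."""
    stages = []
    for degree, margin in zip(degrees, margins):
        if len(X) == 0:
            break
        K = poly_kernel(X, X, degree)
        alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
        stages.append((X.copy(), alpha, degree, margin))
        scores = K @ alpha
        inside = np.abs(scores) < margin   # the still-ambiguous examples
        X, y = X[inside], y[inside]
    return stages

def predict_cascade(stages, x):
    """Output the label of the first stage confident beyond its margin;
    fall back to the last stage's sign otherwise."""
    score = 0.0
    for Xs, alpha, degree, margin in stages:
        score = float(poly_kernel(x[None, :], Xs, degree) @ alpha)
        if abs(score) >= margin:
            break
    return 1 if score >= 0 else -1
```

Note the run-time property from point 3 of the discussion: easy examples exit at the first (cheapest) stage, and only hard ones pay for the more complex kernels.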

Such a scheme will avoid anomalous boundaries such as those in the pictures above.

**Discussion**

1. With all the work that has been done on SVMs it is very likely that this idea or something very similar has been thought of, but I haven’t come across it.

2. There is some work on kernel learning where a convex combination of kernels is learned but I think that is a different idea.

3. One nice thing about such a classification scheme is that at run-time it will expend less computational resources on easier examples and more on more difficult ones. As my thesis supervisor used to say, it is silly for most classifiers to insist on acting exactly the same way on both easy and hard cases.

4. The choice of the cost parameters for the SVMs is critical for the accuracy of the final classifier. Is there a way of formulating the choice of the parameters in terms of minimizing some overall upper bound on the generalization error from statistical learning theory?

5. Is there a one-shot SVM formulation with the set of kernels $K_1, \ldots, K_n$ that exactly or approximately acts like our classifier?

6. The weird island-effect and what Ken calls the lava-lamp problem in the boundaries above are not just artifacts of SVMs. We would expect a sparse kernel logistic regression to behave similarly. It would be interesting to do a similar incremental kernel thing with other kernel-based classifiers.

Because it is often impossible to construct the equivalent real-world object from its feature values, almost universally, active learning is *pool-based*. That is, we start with a large pool of unlabeled data, and the learner (usually sequentially) picks the objects from the pool for which the labels are requested.

One unavoidable effect of active learning is that we end up with a biased training data set. If the true data distribution is $p(x, y)$, we have data drawn from some biased distribution $\tilde{p}(x, y)$ (as always, $x$ is the feature vector and $y$ is the class label).

We would like to correct for this bias so it does not lead to learning an incorrect classifier. And furthermore we want to use this biased data set to accurately evaluate the classifier.

In general, since $p(x, y)$ is unknown, if $\tilde{p}(x, y)$ is arbitrarily different from it there is nothing that can be done. However, thankfully, the bias caused by active learning is more tame.

**The type of bias**

Assume that the marginal feature distribution of the labeled points after active learning is given by $\tilde{p}(x)$. Therefore $\tilde{p}(x)$ is the putative distribution from which we can assume the feature vectors with labels have been sampled.

For every feature vector $x$ thus sampled from $\tilde{p}(x)$, we request its label from the oracle, which returns a label according to the conditional distribution $p(y|x)$. That is, there is *no bias* in the conditional distribution: $\tilde{p}(y|x) = p(y|x)$. Therefore $\tilde{p}(x, y) = \tilde{p}(x)\, p(y|x)$. This type of bias has been called *covariate shift*.

**The data**

After actively sampling labels $m$ times, let us say we have the following data — a biased labeled training data set $\{(x_i, y_i)\}_{i=1}^{m}$, where the feature vectors come from the original pool of unlabeled feature vectors.

Let us define $\beta(x) = p(x)/\tilde{p}(x)$. If $\beta(x)$ is large we expect the feature vector $x$ to be under-represented in the labeled data set, and if it is small it is over-represented.

Now define $\beta_i = \beta(x_i)$ for each labeled example $(x_i, y_i)$, for $i = 1, \ldots, m$. If we knew the values of $\beta_i$ we could correct for the bias during training and evaluation.

This paper by Huang *et al.*, and some of its references, deal with the estimation of $\beta(x)$. *Remark*: this estimation needs to take into account that $E_{\tilde{p}}[\beta(x)] = 1$. This implies that the sample mean of the beta values on the labeled data set should be somewhere close to unity. This constraint is explicitly imposed in the estimation method of Huang *et al.*
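The constraint $E_{\tilde{p}}[\beta(x)] = 1$ is easy to check on a toy example where both densities are known in closed form, so no estimation is needed (the specific densities below are my own invented example; in practice $\beta$ must be estimated, e.g. as in Huang *et al.*):

```python
import math
import random

random.seed(0)

# Toy example (invented for illustration):
# true feature distribution p(x): uniform on [0, 1];
# biased sampling distribution q(x) = 0.5 + x on [0, 1],
# i.e. active learning over-samples large x.
def beta(x):
    return 1.0 / (0.5 + x)   # beta(x) = p(x) / q(x)

# Inverse-CDF sampling from q: solve 0.5*x^2 + 0.5*x = u for x.
xs = [(-1.0 + math.sqrt(1.0 + 8.0 * random.random())) / 2.0
      for _ in range(100000)]

betas = [beta(x) for x in xs]
mean_beta = sum(betas) / len(betas)
# Since E_q[p(x)/q(x)] = 1, the sample mean should be close to unity.
```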

**Evaluation of the classifier**

We shall first look at bias-correction for evaluation. Imagine that we are handed a classifier $f$, and we are asked to use the biased labeled data set to evaluate its accuracy. Also assume that we used the above method to estimate the $\beta_i$. Now fixing the bias for evaluation boils down to just using a weighted average of the errors, where the weights are given by $\beta_i$.

If the empirical loss on the biased sample is written as $\hat{L} = \frac{1}{m}\sum_{i=1}^{m} \ell(f(x_i), y_i)$, we write the estimate of the loss on the true distribution as the weighted loss $\hat{L}_\beta = \frac{1}{m}\sum_{i=1}^{m} \beta_i\, \ell(f(x_i), y_i)$.

Therefore we increase the contribution of the under-represented examples, and decrease that of the over-represented examples, to the overall loss.
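As a sketch (a hypothetical helper, not from any library), the weighted estimate is just:

```python
def weighted_loss(losses, betas):
    """Importance-weighted estimate of the loss under the true
    distribution, computed from per-example losses on the biased
    sample and their importance weights beta_i."""
    m = len(losses)
    return sum(b * l for b, l in zip(betas, losses)) / m
```

With 0/1 losses this up-weights mistakes on under-represented examples (large beta) and down-weights mistakes on over-represented ones.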

**Learning the classifier**

How can the bias be accounted for during learning? The straightforward way is to learn the classifier parameters to minimize the weighted loss (plus some regularization term) as opposed to the un-weighted empirical loss on the labeled data set.
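A minimal sketch of such weighted training, using logistic regression with a beta-weighted log loss (plus a small L2 term) as a stand-in for the weighted-loss classifier; all names here are mine:

```python
import numpy as np

def fit_weighted_logreg(X, y, betas, lr=0.1, steps=500, reg=1e-3):
    """Gradient descent on the beta-weighted logistic log loss.
    y is in {0, 1}; betas are the importance weights beta_i."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = betas * (p - y)          # each example's gradient scaled by beta_i
        w -= lr * (X.T @ grad / len(y) + reg * w)
        b -= lr * grad.mean()
    return w, b
```

Setting all `betas` to one recovers ordinary (un-corrected) training, which makes the comparison in the next paragraph easy to run.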

However, a natural question that can be raised is whether *any* bias correction is necessary. Note that the posterior class distribution $p(y|x)$ is unbiased in the labeled sample. This means that any Bayes-consistent diagnostic classifier will still converge to the Bayes error rate with examples drawn from $\tilde{p}(x, y)$.

For example, imagine constructing a $k$-Nearest Neighbor classifier on the biased labeled dataset. If we let $k \to \infty$ and $k/m \to 0$, the classifier will converge to the Bayes-optimal classifier as $m \to \infty$, *even if* $\tilde{p}(x)$ *is biased*. This is somewhat paradoxical, and can be explained by looking at the case of finite $m$.

For finite $m$, the classifier trades off proportionally more errors in low density regions for fewer overall errors. This means that by correcting for the bias by optimizing the weighted loss, we can obtain a lower error rate. Therefore although both the bias-corrected and un-corrected classifiers converge to the Bayes error, the former converges faster.
