## The redundancy of view-redundancy for co-training

Blum and Mitchell’s *co-training* is a (very deservedly) popular semi-supervised learning algorithm that relies on two assumptions: class-conditional independence of the feature views, and view-redundancy (or view-agreement).

I will argue that the view-redundancy assumption is unnecessary, and along the way show how surrogate learning can be plugged into co-training (which is not all that surprising considering that both are multi-view semi-sup algorithms that rely on class-conditional view-independence).

I’ll first explain co-training with an example.

**Co-training – The setup**

Consider a classification problem on the feature space $\mathcal{X} = \mathcal{X}_1 \times \mathcal{X}_2$. I.e., a feature vector $x$ can be split into two as $x = (x_1, x_2)$.

We make the rather restrictive assumption that $x_1$ and $x_2$ are class-conditionally independent for both classes. I.e., $P(x_1, x_2 | y) = P(x_1 | y)\,P(x_2 | y)$ for $y \in \{0, 1\}$.

(Note that unlike surrogate learning with mean-independence, both $x_1$ and $x_2$ are allowed to be multi-dimensional.)

Co-training makes an additional assumption that either view is sufficient for classification. This *view-redundancy* assumption basically states that the probability mass in the region of the feature space, where the Bayes optimal classifiers on the two views disagree with each other, is zero.

(The original co-training paper actually relaxes this assumption in the epilogue, but it is unnecessary to begin with, and the assumption has proliferated in later manifestations of co-training.)

We are given some labeled data (or a weak classifier on one of the views) and a large supply of unlabeled data. We are now ready to proceed with co-training to construct a Bayes optimal classifier.

**Co-training – The algorithm**

The algorithm is very simple. We use our weak classifier, say $h_1$ (which we were given, or which we constructed using the measly labeled data), on the one view ($x_1$) to classify all the unlabeled data. We select the examples classified with high confidence, and use these as labeled examples (with the labels assigned by the weak classifier) to train a classifier, say $h_2$, on the other view ($x_2$).

We now classify the unlabeled data with $h_2$ to similarly generate labeled data to retrain $h_1$. This back-and-forth procedure is repeated until exhaustion.

Under the above assumptions (and with “sufficient” unlabeled data) $h_1$ and $h_2$ converge to the Bayes optimal classifiers on the respective feature views. Since either view is enough for classification, we just pick one of the classifiers and release it into the wild.
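
To make the back-and-forth concrete, here is a minimal sketch of the loop in Python (scikit-learn). The logistic-regression classifiers, the confidence threshold, and the stopping rule are all placeholder choices of mine; the algorithm itself only prescribes the alternation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(X1, X2, X1_seed, y_seed, n_rounds=20, conf=0.9):
    """Minimal co-training loop. X1, X2: the two views of the unlabeled
    pool; (X1_seed, y_seed): a small labeled seed on view 1."""

    def confident(p):
        # indices of examples the current classifier labels with high confidence
        return np.where((p > conf) | (p < 1 - conf))[0]

    h1 = LogisticRegression().fit(X1_seed, y_seed)
    h2 = None
    for _ in range(n_rounds):
        # h1 labels the pool; its confident labels train h2 on the other view
        p = h1.predict_proba(X1)[:, 1]
        idx = confident(p)
        if len(np.unique(p[idx] > 0.5)) < 2:
            break  # need confident examples of both classes
        h2 = LogisticRegression().fit(X2[idx], p[idx] > 0.5)
        # h2 returns the favor and retrains h1
        p = h2.predict_proba(X2)[:, 1]
        idx = confident(p)
        if len(np.unique(p[idx] > 0.5)) < 2:
            break
        h1 = LogisticRegression().fit(X1[idx], p[idx] > 0.5)
    return h1, h2
```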

**Co-training – Why does it work?**

I’ll try to present an intuitive explanation of co-training using the example depicted in the following figure. Please focus on it intently.

The feature vector in the example is 2-dimensional and both views $x_1$ and $x_2$ are 1-dimensional. The class-conditional distributions are uncorrelated and jointly Gaussian (which means independent), and are depicted by their equiprobability contours in the figure. The marginal class-conditional distributions are shown along the two axes. One class is shown in red and the other in blue. The picture also shows some unlabeled examples.

Assume we have a weak classifier $h_1$ on the first view. If we extend the classification boundary for this classifier to the entire space, the boundary necessarily consists of lines parallel to the $x_2$ axis (lines of constant $x_1$). Let’s say there is only one such line, and all the examples below that line are assigned to the red class and all the examples above to the blue class.

We now ignore all the examples close to the classification boundary of $h_1$ (i.e., all the examples in the grey band) and project the rest of the points onto the $x_2$ axis.

How will these projected points be distributed along $x_2$?

Since the examples that were ignored (in the grey band) were selected based only on their $x_1$ values, owing to class-conditional independence, the marginal distribution along $x_2$ for either class will be *exactly* the same as if none of the samples were ignored. This is the key reason for the conditional-independence assumption.
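
A quick simulation of the Gaussian picture above illustrates the invariance: selecting on $x_1$ does not move the class-conditional distribution of $x_2$. (Only one class is simulated; the other behaves identically. The means and the band are made-up numbers.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000
# one class, with class-conditionally independent 1-D views
x1 = rng.normal(0.0, 1.0, n)
x2 = rng.normal(2.0, 1.0, n)

keep = np.abs(x1 - 0.5) > 0.4  # ignore a "grey band" of x1 values
print(f"mean of x2: all {x2.mean():.3f}, kept {x2[keep].mean():.3f}")  # both ~2.0
print(f"std  of x2: all {x2.std():.3f}, kept {x2[keep].std():.3f}")    # both ~1.0
```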

The procedure has two subtle, but largely innocuous, consequences.

First, since we don’t know how many examples of each class are in the grey band, the relative ratio of the examples of the two classes in the not-ignored set may not be the same as in the original full unlabeled sample set. If the class priors are known, this can easily be corrected for when we learn $h_2$. If the class priors are unknown, further assumptions are necessary.

Second, when we project the unlabeled examples onto $x_2$ we assign them the labels given to them by $h_1$, which can be erroneous. In the figure above, there will be examples in the region indicated by A that are actually from the blue class but have been assigned the red label, and examples in region B that were from the red class but were called blue.

Again, because of the class-conditional independence assumption, these erroneously labeled examples will be distributed according to the marginal class-conditional distributions. I.e., in the figure above we imagine, along the $x_2$ axis, a very low amplitude blue distribution with the same shape and location as the red distribution, and a very low amplitude red distribution with the same shape and location as the blue distribution. (This is the classification noise in the original co-training paper.)

This amounts to having a labeled training set with label errors, but with the errors being generated *independently* of the location in the $x_2$ space. That is, the number of errors in a region of the $x_2$ space is proportional to the number of examples in that region. These proportionally distributed errors are then washed out by the correctly labeled examples when we learn $h_2$.

To recap, co-training works because of the following fact. Starting from a weak classifier on $x_1$, we can generate very accurate and *unbiased* training data to train a classifier on $x_2$.

**No need for view-redundancy**

Notice that, in the above example, we made no appeal to any kind of view-redundancy (other than whatever we may get gratis from the independence assumption).

The vigilant reader may however level the following two objections against the above argument-by-example.

1. We build $h_1$ and $h_2$ separately. So when the training is done, without view-redundancy, we have not shown a way to pick which of the two to apply to new test data.

2. At every iteration we need to select unlabeled samples that were classified with *high* confidence by $h_1$ to feed to the trainer for $h_2$. Without view-redundancy it may be that *none* of the samples are classified with high confidence.

The first objection is easy to respond to. We pick neither $h_1$ nor $h_2$ for new test data. Instead we combine them to obtain a classifier on the full feature space $(x_1, x_2)$. This is well justified because, under class-conditional independence,

$$\frac{P(y=1|x_1,x_2)}{P(y=0|x_1,x_2)} = \frac{P(y=1|x_1)}{P(y=0|x_1)} \cdot \frac{P(y=1|x_2)}{P(y=0|x_2)} \cdot \frac{P(y=0)}{P(y=1)}.$$
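
In code the combination is a one-liner. A sketch, assuming each classifier outputs a calibrated posterior for its own view:

```python
def combine_views(p1, p2, prior):
    """Combine per-view posteriors p1 = P(y=1|x1) and p2 = P(y=1|x2)
    into P(y=1|x1,x2); valid under class-conditional view independence.
    prior = P(y=1)."""
    odds = (p1 / (1 - p1)) * (p2 / (1 - p2)) * ((1 - prior) / prior)
    return odds / (1 + odds)

# e.g. combine_views(0.8, 0.7, prior=0.5) -> ~0.903
```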

We react to the second objection by dropping the requirement of classifying with high-confidence altogether.

**Dropping the high-confidence requirement by surrogate learning**

Instead of training $h_2$ with examples that are classified with high confidence by $h_1$, we train with all the examples (using the scores assigned to them by $h_1$).

At some iteration of co-training, define the random variable $s = h_1(x_1)$, the score that $h_1$ assigns to the first view. Since $x_1$ and $x_2$ are class-conditionally independent, $s$ and $x_2$ are also class-conditionally independent. In particular, $s$ is class-conditionally *mean-independent* of $x_2$. Furthermore, if $h_1$ is even a weakly useful classifier, barring pathologies, it will satisfy $E[s|y=1] \neq E[s|y=0]$.

We can therefore apply surrogate learning under mean-independence, with $s$ as the surrogate, to learn the classifier $P(y=1|x_2)$ on $x_2$. (This is essentially the same idea as Co-EM, which was introduced without much theoretical justification.)
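
Concretely, the half-step that trains $h_2$ becomes a regression followed by a rescaling, using the identity derived in the surrogate learning post. A sketch, with a gradient-boosted regressor as an arbitrary stand-in, and with the class-conditional means of $s$ assumed to be estimated from the small labeled seed:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def surrogate_half_step(s, X2, E_s_pos, E_s_neg):
    """s: scores h1(x1) on the whole unlabeled pool (no confidence
    filtering); E_s_pos, E_s_neg: estimates of E[s|y=1] and E[s|y=0].
    Returns estimated P(y=1|x2) for every unlabeled example."""
    reg = GradientBoostingRegressor().fit(X2, s)  # E[s|x2], learned on unlabeled data
    p = (reg.predict(X2) - E_s_neg) / (E_s_pos - E_s_neg)
    return np.clip(p, 0.0, 1.0)  # finite-sample estimates can stray outside [0, 1]
```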

**Discussion**

Hopefully the above argument has convinced the reader that the class-conditional view independence assumption obviates the view-redundancy requirement.

A natural question to ask is whether the reverse is true. That is, if we are given view-redundancy, can we completely eliminate the requirement of class-conditional independence? We can immediately see that the answer is no.

For example, we can duplicate all the features for any classification problem so that view-redundancy holds trivially between the two replicates. Moreover, the second replicate will be statistically fully dependent on the first.

Now if we are given a weak classifier on the first view (or replicate) and try to use its predictions on an unlabeled data set to obtain training data for the second, it would be equivalent to feeding back the predictions of a classifier to retrain itself (because the two views are duplicates of one another).

This type of procedure (which is an idea decades old) has been called, among other things, self-learning, self-correction, self-training and decision-directed adaptation. The problem with these approaches is that the training set so generated is *biased* and other assumptions are necessary for the feedback procedure to improve over the original classifier.

Of course this does not mean that the complete statistical independence assumption cannot be relaxed. The above argument only shows that at least *some amount* of independence is necessary.

## A surrogate learning mystery

I’ll present an application of the surrogate learning idea from the previous post. It is mildly surprising at first blush, and I’ll contrive to make it seem more mysterious still.

**Murders she induced**

For readers who appreciate this sort of a thing, here’s the grandmother of all murder mysteries. In this one Miss Marple solves a whole bunch of unrelated murders all at once.

Miss Marple, having finally grown wise to the unreliability of feminine intuition in solving murder mysteries, spent some time learning statistics. She then convinced one of her flat-footed friends at Scotland Yard to give her the files on all the unsolved murders that were just sitting around gathering dust, waiting for someone with her imagination and statistical wiles.

She came home with the massive pile of papers and sat down to study them.

The file on each murder carefully listed all the suspects, with their possible motives, their accessibility to the murder weapon, psychological characteristics, previous convictions and many other details. There was a large variety in the number of suspects for different murders.

Furthermore, for every case the investigating officer made a note that the murderer was very likely to be part of the suspect pool. Only, it was not possible to narrow down the pool to one, and none of the suspects confessed.

Thinking statistically, Miss Marple decided to encode the characteristics of each suspect by a feature vector. Drawing on her vast experience in these matters she assigned numeric values to features like motives, psychology, relationship to the victim and access to the weapon.

All that was left was to build a statistical model that predicted the probability of a suspect being the murderer from his feature vector. She would then rank all the suspects of a case by this probability, and have Scotland Yard try to get the most likely suspect to confess.

At this point Miss Marple paused for a moment to rue her oversight in not asking for the files on the solved murders as well. She could have used the suspects (and the known murderer) from the solved murders as labeled examples to train her statistical model. However, she soon realized …

I know you are tense with anticipation of the inevitable twist in the tale. Here it is.

*… she soon realized that she could build her model by training a regressor to predict, from the feature vector of a suspect, the size of the suspect pool he belongs to.*

**The clues**

Let’s re-examine the pertinent facts — 1. every suspect pool contains the murderer, 2. the suspect pools are of varying sizes.

The one, perhaps controversial, assumption we’ll need to make is that the feature vectors for both the murderers and the non-murderers are mean-independent of the size of the suspect pool they come from. (However, if it is good enough for Miss Marple, it is good enough for me.)

In any case, a violation of this assumption would imply that, when we already know whether or not a suspect is a murderer, his feature vector tells us something more about the size of the suspect pool he belongs to. Because this is far-fetched, we can believe that the assumption is true.

In fact it may seem that knowing whether or not a suspect is a murderer itself adds nothing to the predictability of the size of the pool from which the suspect is drawn. We will however show that the means for the murderers and the non-murderers are different.

Let us now see why Miss Marple’s method works.

**Marple math**

Let the suspect pools be denoted $S_1, S_2, \ldots, S_N$, with sizes $n_1, n_2, \ldots, n_N$. Let each suspect be described by the feature vector denoted $x_2$. We now define the class label $y = 1$ if a particular suspect is a murderer and $y = 0$ otherwise.

We append the feature $x_1 = -n_i$ to the feature vector of a suspect from pool $S_i$. That is, the surrogate feature we use is the negative of the pool size. (The negative is for a technical reason, to match the previous post. In practice I would recommend the reciprocal $1/n_i$, but this makes the argument below a bit more involved.)

The first thing to note is that the assumption we made above translates to $E[x_1 | y, x_2] = E[x_1 | y]$ for both $y = 0$ and $y = 1$.

All we need to show is that $E[x_1|y=1] > E[x_1|y=0]$, and we are home free, because the argument in the previous post implies that $E[x_1|x_2]$ can be used to rank suspects in a pool just as well as $P(y=1|x_2)$.

So all we need to do is build a regressor to predict $x_1$ from $x_2$, apply this regressor to the $x_2$ of all the suspects in each pool, and rank them by its output.

So why is $E[x_1|y=1] > E[x_1|y=0]$?

First, it is clear that $E[x_1|y=1] = -E[n]$ (where $n$ is the random variable describing the size of the suspect pool), because for each pool there is exactly one murderer and we assign the negative of that pool size as $x_1$ for that murderer.

Now there are a total of $\sum_{i=1}^{N}(n_i - 1)$ non-murderers. For the non-murderers from the pool $S_i$ the assigned $x_1$ is $-n_i$. Therefore the estimated expected value is

$$\hat{E}[x_1|y=0] = \frac{-\sum_{i=1}^{N} n_i (n_i - 1)}{\sum_{i=1}^{N} (n_i - 1)}.$$

If we divide the numerator and the denominator by $N$ and let the number of pools go to infinity, the estimate converges to the true mean, which is

$$E[x_1|y=0] = \frac{-(E[n^2] - E[n])}{E[n] - 1}.$$

It is easy to show that this quantity is always lower than $E[x_1|y=1] = -E[n]$ if the variance of $n$ is non-zero (and $E[n] > 1$): clearing the denominator, the inequality reduces to $E[n^2] > E[n]^2$. The non-zero variance is exactly what we were given by the fact that the pool sizes are variable across murders.

And there you have it.
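
If you would rather trust a simulation than my algebra, here is a quick numeric check of the two means, with made-up pool sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
n = rng.integers(2, 12, size=100_000)   # pool sizes, variable across murders

x1_murderers = -n                       # one murderer per pool, with x1 = -n_i
x1_innocents = np.repeat(-n, n - 1)     # n_i - 1 non-murderers per pool, same x1

print(x1_murderers.mean())  # ~ -E[n]
print(x1_innocents.mean())  # ~ -(E[n^2] - E[n]) / (E[n] - 1), strictly lower
```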

**A less criminal example**

Let me try to demystify the approach even more.

Assume that we have a database of images of faces of people along with their last names. The problem is the following: given a face-name input pair, find the matching face and name in the database. Of course if all the names are unique this is trivial, but there are some common names like Smith and Lee.

Let us say that our procedure for this matching is to first obtain from the database the face images of the people with the same last name as the input, and then use our state-of-the-art face-image-comparison features to decide which of those is the right one. Do we need a training set of human-annotated input-database record pairs to train such a ranker? The above discussion suggests not.

Let us say we create a “fake” training set from a lot of unlabeled input image-name pairs by retrieving, for each input, the set of records from the database with the matching name, and assigning each feature vector a label that is the reciprocal of the size of the set. We then learn to predict this label from the image features.

The reason we would expect this predictor to behave similarly to the predictor of image match is that we can view our fake labeling as a probabilistic labeling of the feature vectors. For a given set our label is the a priori probability of a face image being the right match to the input image.

The independence assumption just makes sure that our probabilistic labeling is not biased.
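
A sketch of the fake-label construction; the grouping by name and the comparison features are hypothetical stand-ins for whatever the real pipeline produces:

```python
import numpy as np
from sklearn.linear_model import Ridge

def fake_training_set(groups):
    """groups: one array of comparison-feature vectors per unlabeled input
    pair, holding the database candidates that share the input's name.
    Every vector in a group gets the label 1/|group|."""
    X = np.vstack(groups)
    y = np.concatenate([np.full(len(g), 1.0 / len(g)) for g in groups])
    return X, y

# X, y = fake_training_set(groups)
# ranker = Ridge().fit(X, y)
# best_match = candidates[np.argmax(ranker.predict(candidates))]
```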

I am curious about other possible applications.

## Surrogate learning with mean independence

In this paper we showed that if we had a feature $x_1$ that was class-conditionally statistically independent of the rest of the features, denoted $x_2$, learning a classifier between the two classes $y=0$ and $y=1$ can be transformed into learning a predictor of $x_1$ from $x_2$ and another of $y$ from $x_1$. Since the first predictor can be learned on unlabeled examples and the second is a classifier on a 1-D space, the learning problem becomes easy. In a sense $x_1$ acts as a *surrogate* for $y$.

Similar ideas can be found in Ando and Zhang ’07, Quadrianto et al. ’08, Blitzer et al. ’06, and others.

**Derivation from mean-independence**

I’ll now derive a similar surrogate learning algorithm from *mean-independence* rather than full statistical independence. Recall that the random variable $x_1$ is mean-independent of the r.v. $x_2$ if $E[x_1|x_2] = E[x_1]$. Albeit weaker than full independence, mean-independence is still a pretty strong assumption. In particular it is stronger than the lack of correlation.

We assume that the feature space contains the single feature $x_1$ and the rest of the features $x_2$. We are still in a two-class situation, i.e., $y \in \{0, 1\}$. We further assume

1. $x_1$ is at least somewhat useful for classification, or in other words, $E[x_1|y=1] \neq E[x_1|y=0]$.

2. $x_1$ is class-conditionally mean-independent of $x_2$, i.e., $E[x_1|y, x_2] = E[x_1|y]$ for $y \in \{0, 1\}$.

Now let us consider the quantity $E[x_1|x_2]$. We have

$$E[x_1|x_2] = E[x_1|y=0, x_2]\,P(y=0|x_2) + E[x_1|y=1, x_2]\,P(y=1|x_2) = E[x_1|y=0]\,P(y=0|x_2) + E[x_1|y=1]\,P(y=1|x_2),$$

where the second equality uses assumption 2.

Notice that $E[x_1|x_2]$ is a convex sum of $E[x_1|y=0]$ and $E[x_1|y=1]$.

Now using the fact that $P(y=0|x_2) + P(y=1|x_2) = 1$, we can show after some algebra that

$$P(y=1|x_2) = \frac{E[x_1|x_2] - E[x_1|y=0]}{E[x_1|y=1] - E[x_1|y=0]}. \qquad (1)$$

We have succeeded in decoupling $x_1$ and $x_2$ on the right-hand side, which results in a simple semi-supervised classification method. We just need the class-conditional means of $x_1$ and a regressor $E[x_1|x_2]$ (which can be learned on unlabeled data) to compute $P(y=1|x_2)$. Again $x_1$, acting as a surrogate for $y$, is predicted from $x_2$.

As opposed to the formulation in the paper, this formulation easily accommodates continuous-valued $x_1$.
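
A minimal sketch of the resulting method, with a scikit-learn regressor as an arbitrary placeholder; the two class-conditional means are the only quantities that require labeled examples:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def surrogate_classifier(x1_unlab, X2_unlab, E1_pos, E1_neg):
    """Learn E[x1|x2] on unlabeled (x1, x2) pairs, then turn it into
    P(y=1|x2) via Equation (1). E1_pos, E1_neg: estimates of E[x1|y=1]
    and E[x1|y=0]."""
    reg = GradientBoostingRegressor().fit(X2_unlab, x1_unlab)

    def predict(X2):
        p = (reg.predict(X2) - E1_neg) / (E1_pos - E1_neg)
        return np.clip(p, 0.0, 1.0)  # finite-sample estimates can stray outside [0, 1]

    return predict
```

Note that, as point 2 below observes, the raw regressor output already orders examples correctly on its own; the two means merely calibrate it into a probability.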

**Discussion**

1. The first thing to note is that we are only able to write an expression for $P(y=1|x_2)$ but not $P(y=1|x_1, x_2)$. That is, we are able to weaken the independence to mean-independence at the expense of “wasting” feature $x_1$.

Of course if we have full statistical independence we can use $x_1$, by using Equation (1) and the fact that, under independence, we have

$$\frac{P(y=1|x_1,x_2)}{P(y=0|x_1,x_2)} = \frac{P(y=1|x_1)}{P(y=0|x_1)} \cdot \frac{P(y=1|x_2)}{P(y=0|x_2)} \cdot \frac{P(y=0)}{P(y=1)}.$$

2. If (without loss of generality) we assume that $E[x_1|y=1] > E[x_1|y=0]$, then, because $E[x_1|x_2]$ lies somewhere between $E[x_1|y=0]$ and $E[x_1|y=1]$, Equation (1) says that $P(y=1|x_2)$ is a monotonically increasing function of $E[x_1|x_2]$.

This means that $E[x_1|x_2]$ itself can be used as the classifier, and labeled examples are needed only to determine a threshold for trading off precision vs. recall. The classifier (or perhaps we should call it a ranker) is therefore built *entirely* from unlabeled samples.

I’ll post a neat little application of this method soon.