# PAC Classification based on PAC Estimates of Label Class Distributions

This work was supported by EPSRC Grant GR/R86188/01, and in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778. This publication only reflects the authors' views.

###### Abstract

A standard approach in pattern classification is to estimate the distributions of the label classes, and then to apply the Bayes classifier to the estimates of the distributions in order to classify unlabeled examples. As one might expect, the better our estimates of the label class distributions, the better the resulting classifier will be. In this paper we make this observation precise by identifying risk bounds of a classifier in terms of the quality of the estimates of the label class distributions. We show how PAC learnability relates to estimates of the distributions that have a PAC guarantee on their distance from the true distribution, and we bound the increase in negative log-likelihood risk in terms of PAC bounds on the KL-divergence. We give an inefficient but general-purpose smoothing method for converting an estimated distribution that is good under the variation ($L_1$) distance into a distribution that is good under the KL-divergence.

Dept. of Computer Science, University of Warwick, Coventry CV4 7AL, U.K.
email: (npalmer|pwg)@dcs.warwick.ac.uk
Research group home page: http://www.dcs.warwick.ac.uk/research/acrg

Keywords: Bayes error, Bayes classifier, plug-in decision function

## 1 Introduction

We consider a general approach to pattern classification in which elements of each class are first used to train a probabilistic model via some unsupervised learning method. The resulting models for each class are then used to assign discriminant scores to an unlabeled instance, and a label is chosen to be the one associated with the model giving the highest score. For example [3] uses this approach to classify protein sequences, via training a well-known probabilistic suffix tree model of Ron et al. [18] on each sequence class. Indeed, even where an unsupervised technique is mainly being used to gain insight into the process that generated two or more data sets, it is still sometimes instructive to try out the associated classifier, since the misclassification rate provides a quantitative measure of the accuracy of the estimated distributions.

The work of [18] has led to further related algorithms for learning classes of probabilistic deterministic finite-state automata (PDFAs), in which the objective of learning has been formalized as the estimation of a true underlying distribution (over strings output by the target PDFA) with a distribution represented by a hypothesis PDFA. The natural discriminant score to assign to a string is the probability that the hypothesis would generate that string at random.

As one might expect, the better one's estimates of the label class
distributions (the class-conditional densities), the better the
associated classifier should be.
The contribution of this paper is to make that observation precise.
We give bounds on the risk of the associated Bayes classifier
in terms of the quality of the estimated distributions.
(The Bayes classifier associated with two or more probability
distributions is the function that maps an element $x$ of the domain
to the label associated with the probability distribution
whose value at $x$ is largest. This is of course a well-known approach
to classification; see [7].)
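As a concrete illustration of this plug-in rule, the following sketch labels a point with the class whose prior-weighted estimated distribution scores it highest. The priors and distributions below are hypothetical stand-ins for estimates produced by any unsupervised learner, not quantities from the paper.

```python
# Plug-in Bayes classifier: choose the label whose prior-weighted
# estimated distribution assigns the point the highest probability.
def bayes_classifier(x, priors, class_dists):
    """priors[l]: class prior; class_dists[l]: estimated distribution
    for label l over a discrete domain (probability 0 if x unseen)."""
    return max(priors, key=lambda l: priors[l] * class_dists[l].get(x, 0.0))

# Hypothetical estimated label class distributions over {"a", "b"}.
priors = {0: 0.5, 1: 0.5}
dists = {0: {"a": 0.7, "b": 0.3}, 1: {"a": 0.2, "b": 0.8}}
print(bayes_classifier("a", priors, dists))  # prints 0 (score 0.35 vs 0.10)
```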

These results are partly motivated by our interest in the relative merits of estimating a class-conditional distribution using the variation distance, as opposed to the KL-divergence (defined in the next section). In [4] it has been shown how to learn a class of PDFAs using KL-divergence, in time polynomial in a set of parameters that includes the expected length of strings output by the automaton. In [15] we show how to learn this class with respect to variation distance, with a polynomial sample-size bound that is independent of the length of output strings. Furthermore, it can be shown that it is necessary to switch to the weaker criterion of variation distance, in order to achieve this. We show here that this leads to a different—but still useful—performance guarantee for the Bayes classifier.

Abe and Warmuth [2] study the problem of learning probability distributions using the KL-divergence, via classes of probabilistic automata. Their criterion for learnability is that, for an unrestricted input distribution $D$, the hypothesis PDFA should be almost (i.e. within $\epsilon$) as close as possible to $D$. Abe, Takeuchi and Warmuth [1] study the negative log-likelihood loss function in the context of learning stochastic rules, i.e. rules that associate an element $x$ of the domain with a probability distribution over the range of labels. We show here that if two or more label class distributions are learnable in the sense of [2], then the resulting stochastic rule (the conditional distribution over labels given $x$) is learnable in the sense of [1].

We show that if instead the label class distributions are well estimated using the variation distance, then the associated classifier may not have a good negative log-likelihood risk, but will have a misclassification rate that is close to optimal. This result is for general $k$-class classification, where distributions may overlap (i.e. the optimum misclassification rate may be positive). We also incorporate variable misclassification penalties (sometimes one might wish a false positive to cost more than a false negative), and show that this more general loss function is still approximately minimized provided that discriminant likelihood scores are rescaled appropriately.

As a result we show that PAC-learnability, and more generally
p-concept learnability [12] (p-concepts are functions that
probabilistically map elements of the domain to 2 classes),
follows from the ability to learn class distributions in the setting
of Kearns et al. [11]. Papers such
as [5, 14, 8] study the problem of learning
various classes of probability distributions with respect to
KL-divergence and variation distance in this setting.

It is well-known (noted in [12]) that learnability with respect to KL-divergence is stronger than learnability with respect to variation distance. Furthermore, the KL-divergence is usually used (for example in [4, 10]) due to the property that when it is minimized with respect to a sample, the empirical likelihood of that sample is maximized. An algorithm that learns with respect to variation distance can sometimes be converted to one that learns with respect to KL-divergence by a smoothing technique [5], when the domain is $\{0,1\}^n$ and $n$ is a parameter of the learning problem. In this paper we give a related smoothing rule that applies to the version of the PDFA learning problem where we seem to “need” to use the variation distance. However, the smoothed distribution does not have an efficient representation, and it requires the probabilities used in the target PDFA to have limited precision.

### 1.1 Notation and Terminology

In $k$-class classification, labeled examples $(x, \ell)$ are generated by a distribution $D$ over $X \times \{1, \dots, k\}$. We consider the problem of predicting the label $\ell$ associated with $x \in X$, where $x$ is generated by the marginal distribution of $D$ on $X$. A non-negative cost is incurred for each classification, based either on a cost matrix (where the cost depends upon both the hypothesized label and the true label) or on the negative log-likelihood of the true label being assigned. The aim is to minimize the expected cost incurred on a randomly generated example. We refer to the expected cost associated with a classifier $f$ as its risk (as described by Vapnik [17]), denoted $r(f)$.

Let $D_\ell$ denote $D$ restricted to points with label $\ell$, i.e. the class-conditional distribution over $X$ for class $\ell$. The marginal distribution of $D$ on $X$ is then the mixture $\sum_\ell \pi_\ell D_\ell$, where $\pi_\ell$ is the class prior of class $\ell$: the probability that a randomly generated data point has label $\ell$.

In Section 2 it is shown that if we have upper bounds on the inaccuracy of the estimated distribution of each class label, then we can derive bounds on the risk associated with the classifiers. Suppose $D$ and $D'$ are probability distributions over the same domain $X$. We usually assume that $X$ is a discrete domain, in which case the $L_1$ (variation) distance is

$$L_1(D, D') = \sum_{x \in X} |D(x) - D'(x)|.$$

The KL-divergence from $D$ to $D'$ is defined as

$$KL(D \,\|\, D') = \sum_{x \in X} D(x) \log \frac{D(x)}{D'(x)}.$$
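For discrete distributions stored as dictionaries, both quantities can be computed directly. This is a minimal sketch; it makes explicit the convention that the KL-divergence is infinite when the second distribution assigns zero probability to a point in the first distribution's support.

```python
import math

def l1_distance(d1, d2):
    # Sum of absolute differences over the union of the two supports.
    support = set(d1) | set(d2)
    return sum(abs(d1.get(x, 0.0) - d2.get(x, 0.0)) for x in support)

def kl_divergence(d1, d2):
    # KL(d1 || d2) = sum_x d1(x) * log(d1(x) / d2(x)).
    total = 0.0
    for x, p in d1.items():
        if p > 0.0:
            q = d2.get(x, 0.0)
            if q == 0.0:
                return math.inf  # estimate misses part of the support
            total += p * math.log(p / q)
    return total

d = {"a": 0.5, "b": 0.5}
d_est = {"a": 0.6, "b": 0.4}
print(l1_distance(d, d_est))  # prints 0.2 (up to float rounding)
```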

### 1.2 Learning Framework

In the PAC-learning framework an algorithm receives labeled samples $(x, t(x))$ generated independently according to a distribution $D$ over $X$, where $D$ is unknown, and where labels are generated by an unknown target function $t$ from a known class of functions $T$. The algorithm must output a hypothesis $h$ from a class of hypotheses $H$, such that with probability at least $1 - \delta$ the error of $h$ is at most $\epsilon$, where $\epsilon$ and $\delta$ are parameters. Notice that in this setting, if $t \in H$, then the optimal hypothesis in $H$ has error zero.

We use a variation on the framework used in [12] for learning p-concepts, which adopts performance measures from the PAC model, extending it to learn stochastic rules with $k$ classes. Here labels are generated probabilistically, so the error of the optimal hypothesis may be positive. The aim of the learning algorithm in this framework is to output a hypothesis $h$ such that, with probability at least $1 - \delta$, the error of $h$ exceeds the optimal error by at most $\epsilon$.

Our notion of learning distributions is similar to that of Kearns et al. [11].

###### Definition 1

Let $C$ be a class of distributions. $C$ is said to be efficiently learnable if an algorithm $A$ exists such that, given $\epsilon > 0$ and $\delta > 0$ and access to randomly drawn examples (see below) from any unknown target distribution $D \in C$, $A$ runs in time polynomial in $1/\epsilon$ and $1/\delta$ and returns a probability distribution $D'$ that, with probability at least $1 - \delta$, is within $L_1$ distance $\epsilon$ (alternatively KL-divergence $\epsilon$) of $D$.

We define p-concepts as introduced by Kearns and Schapire [12]. This definition is for 2-class classification, but generalizes in a natural way to more than 2 classes.

###### Definition 2

A probabilistic concept (or p-concept) on domain $X$ is given by a real-valued function $c : X \to [0, 1]$. An observation of $c$ consists of some $x \in X$ together with a 0/1 label $\ell$, where $\Pr(\ell = 1) = c(x)$.
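An observation of a p-concept can be simulated by a Bernoulli draw with parameter $c(x)$. The p-concept used below is a hypothetical example chosen for illustration, not one from the paper.

```python
import random

def observe(c, x, rng):
    """Return an observation (x, label) with Pr(label = 1) = c(x)."""
    return (x, 1 if rng.random() < c(x) else 0)

c = lambda x: x  # hypothetical p-concept on the domain [0, 1]
rng = random.Random(0)
labels = [observe(c, 0.8, rng)[1] for _ in range(10000)]
print(sum(labels) / len(labels))  # empirical frequency close to c(0.8) = 0.8
```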

## 2 Results

In Section 2.1 we give bounds on the risk associated with a hypothesis, with respect to the accuracy of the approximation of the underlying distribution generating the instances. In Section 2.2 we show that these bounds are close to optimal, and in Section 2.3 we give corollaries showing what these bounds mean for PAC learnability.

We define the accuracy of an approximate distribution in terms of the $L_1$ distance and the KL-divergence, both of which are commonly used measures. It is assumed that the class priors of the class labels are known.

### 2.1 Bounds on Increase in Risk

First we examine the case where the accuracy of the hypothesis distributions is such that the estimated distribution for each class label is within $L_1$ distance $\epsilon$ of the true distribution for that label, for some $\epsilon > 0$. A cost matrix $C$ specifies the cost associated with any classification: the cost of classifying a data point which has label $\ell$ as some label $\ell'$ is denoted $C(\ell', \ell)$ (where $C(\ell', \ell) \ge 0$). It is usually the case that $C(\ell, \ell) = 0$ for each $\ell$. We introduce the following notation:

Given a classifier $f$ over discrete domain $X$, the risk of $f$ is given by

$$r(f) = \sum_{x \in X} \sum_{\ell} \pi_\ell D_\ell(x)\, C(f(x), \ell).$$

Let $f^*$ be the Bayes optimal classifier, i.e. the function with the minimal risk (optimal expected cost), and let $f'$ be the function with optimal expected cost with respect to the alternative distributions $D'_1, \dots, D'_k$.

###### Theorem 2.1

(This result is essentially a generalization of Exercise 2.10 of Devroye et al.'s textbook [6] from 2 classes to multiple classes; in addition we show here that variable misclassification costs can be incorporated. This is the closest result to this theorem that we have found to have already appeared, but we suspect that other related results may exist; we would welcome any further information or references on this topic. Theorem 2.2 is another result which we suspect may be known, but likewise we have found no statement of it.)

Let $f^*$ be the Bayes optimal classifier and let $f'$ be the Bayes classifier associated with estimated distributions $D'_1, \dots, D'_k$. Suppose that for each label $\ell$, $L_1(D_\ell, D'_\ell) \le \epsilon$. Then the risk of $f'$ exceeds $r(f^*)$ by at most an amount proportional to $\epsilon$ and to the largest entry of the cost matrix.

###### Proof

Let $\mathrm{cost}_\ell(f)$ denote the contribution from examples with label $\ell$ towards the total expected cost associated with classifier $f$. Consider $x$ such that $f'(x) \ne f^*(x)$.

Let $E(x)$ be the increase in risk incurred by labeling $x$ as $f'(x)$ instead of $f^*(x)$, so that

(1) |

Note that, due to the optimality of $f^*$ on the true distributions, this quantity is non-negative. In a similar way, the expected contribution to the total cost of $f'$ must be less than or equal to that of $f^*$ with respect to the estimated distributions $D'_\ell$, given that $f'$ is chosen to be optimal on those distributions. We have:

Rearranging, we have

(2) |

Let $\delta_\ell(x) = D_\ell(x) - D'_\ell(x)$ be the difference between the probability densities of $D_\ell$ and $D'_\ell$ at $x$. Therefore,

In order to bound the expected cost, we sum over the domain $X$:

(3) |

Since $L_1(D_\ell, D'_\ell) \le \epsilon$ for all $\ell$, i.e. $\sum_{x \in X} |\delta_\ell(x)| \le \epsilon$, it follows from (3) that

This expression gives an upper bound on the expected cost of labeling according to $f'$ instead of $f^*$. By definition,

Therefore it has been shown that

∎
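The flavour of Theorem 2.1 can be checked numerically: build the plug-in classifier from estimates that are close to the true class-conditional distributions in $L_1$, and compare its risk (here under 0/1 cost) with the Bayes risk. The priors and distributions below are hypothetical.

```python
def risk(f, priors, dists):
    # Expected 0/1 cost of classifier f: mass of points it mislabels,
    # weighted by class priors and class-conditional probabilities.
    return sum(priors[l] * p
               for l, d in dists.items()
               for x, p in d.items()
               if f(x) != l)

def plug_in(priors, est):
    # Bayes classifier with respect to the (estimated) distributions.
    return lambda x: max(priors, key=lambda l: priors[l] * est[l].get(x, 0.0))

priors = {0: 0.5, 1: 0.5}
true_d = {0: {"a": 0.7, "b": 0.3}, 1: {"a": 0.2, "b": 0.8}}
# Estimates within L1 distance 0.1 of the true distributions.
est_d = {0: {"a": 0.65, "b": 0.35}, 1: {"a": 0.25, "b": 0.75}}

bayes = plug_in(priors, true_d)    # Bayes optimal classifier
approx = plug_in(priors, est_d)    # plug-in classifier from estimates
excess = risk(approx, priors, true_d) - risk(bayes, priors, true_d)
print(excess)  # 0.0 here; in general at most of the order of the L1 error
```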

We next prove a corresponding result in terms of KL-divergence, which uses the negative log-likelihood of the correct label as the cost function. We define $p_\ell(x)$ to be the probability that a data point at $x$ has label $\ell$, so that $\sum_\ell p_\ell(x) = 1$. Given a function $h$, where $h(x)$ is a prediction of the probabilities of $x$ having each label (so $h(x)$ is a distribution over the labels), the risk associated with $h$ can be expressed as

(4) |

Let $h^*$ output the true class label distribution $(p_1(x), \dots, p_k(x))$ for an element $x$ of $X$. From Equation (4) it can be seen that

(5) |

###### Theorem 2.2

Suppose that the risk $r$ is given by (4). If for each label $\ell$, $KL(D_\ell \,\|\, D'_\ell) \le \epsilon$, then the risk of the rule derived from the estimated distributions exceeds the optimal risk by at most an amount linear in $\epsilon$.

###### Proof

Let $r_x(h)$ denote the contribution at $x$ to the risk associated with classifier $h$, so that $r(h) = \sum_{x \in X} r_x(h)$.

We define $p'_\ell(x)$ to be the estimated probability that a data point at $x$ has label $\ell$, derived from the distributions $D'_\ell$, such that $\sum_\ell p'_\ell(x) = 1$.

Let $\Delta(x)$ denote the contribution to the additional risk incurred from using $h'$ as opposed to $h^*$ at $x$. From (5) it can be seen that

We define such that . Since it is the case that , can be rewritten as

We define $KL_x(D_\ell \,\|\, D'_\ell)$ to be the contribution at $x$ to the KL-divergence, such that $KL(D_\ell \,\|\, D'_\ell) = \sum_{x \in X} KL_x(D_\ell \,\|\, D'_\ell)$. It follows that

(6) |

We know that the KL-divergence between $D_\ell$ and $D'_\ell$ is bounded by $\epsilon$ for each label $\ell$, so (6) can be rewritten as

Since the KL-divergence between two distributions is non-negative, dropping the corresponding term yields an upper bound on the cost, and the claimed bound follows.∎

### 2.2 Lower Bounds

In this section we give lower bounds corresponding to the two upper bounds given in Section 2.1.

###### Example 1

Consider a distribution over domain $X$ from which data is generated with labels 0 and 1, with an equal probability of each label being generated ($\pi_0 = \pi_1 = 1/2$). $D_\ell(x)$ denotes the probability that a point is generated at $x$ given that it has label $\ell$; $D_0$ and $D_1$ are distributions over $X$.

Suppose that $D'_0$ and $D'_1$ are approximations of $D_0$ and $D_1$, satisfying $L_1(D_0, D'_0) \le \epsilon$ and $L_1(D_1, D'_1) \le \epsilon$, where $\epsilon$ is an arbitrarily small positive constant.

Given the following distributions, and assuming that a misclassification costs 1 while a correct classification costs nothing, the optimal risk can be computed directly:

Now if we have approximations $D'_0$ and $D'_1$ as shown below, it can be seen that the classifier which is optimal with respect to these approximations will misclassify at every value of $x$:

This results in . Therefore .

In this example the increase in risk comes close to the upper bound of Theorem 2.1. A similar example can be used to give a lower bound corresponding to the upper bound of Theorem 2.2.

###### Example 2

Consider distributions $D_0$, $D_1$, $D'_0$ and $D'_1$ over domain $X$ as defined in Example 1. It can be seen that the KL-divergence between each label's distribution and its approximated distribution is

The optimal risk, measured in terms of negative log-likelihood, can be expressed as

### 2.3 Learning near-optimal classifiers in the PAC sense

We show that the results of Section 2.1 imply learnability within the framework defined in Section 1.2.

The following corollaries refer to two classification algorithms, one for each measure of accuracy. Each generates a classifier function for a $k$-label classification problem, using the $L_1$ distance and the KL-divergence respectively to measure the accuracy of the estimated distributions.

Corollary 1 shows (using Theorem 2.1) that a near-optimal classifier can be constructed, given that an algorithm exists which approximates a distribution over positive data in polynomial time. We are given a cost matrix $C$, and assume knowledge of the class priors $\pi_\ell$.

###### Corollary 1

If an algorithm approximates distributions within $L_1$ distance $\epsilon$ with probability at least $1 - \delta$, in time polynomial in $1/\epsilon$ and $1/\delta$, then an algorithm exists which (with probability $1 - \delta$) generates a discriminant function whose risk exceeds the optimum by an amount proportional to $\epsilon$, and whose running time is polynomial in $1/\epsilon$ and $1/\delta$.

###### Proof

The classification algorithm uses unsupervised learners to fit a distribution $D'_\ell$ to each label $\ell$, and then uses the Bayes classifier with respect to these estimated distributions to label data.

The assumed algorithm is a PAC algorithm which learns from a sample of positive data to estimate a distribution over that data. The classification algorithm generates a sample $S$ of labeled data, and divides $S$ into sets $S_\ell$, such that $S_\ell$ contains all members of $S$ with label $\ell$. Note that for each label $\ell$, the expected size of $S_\ell$ is $\pi_\ell |S|$.

With probability at least $1 - \delta'$ (for a suitable $\delta'$), each run of the assumed algorithm generates an estimate $D'_\ell$ of the distribution over label $\ell$ such that $L_1(D_\ell, D'_\ell) \le \epsilon$. The size of each sample $S_\ell$ must therefore be polynomial in $1/\epsilon$ and $1/\delta'$, and so $|S|$ is polynomial in $1/\epsilon$, $1/\delta$ and $k$.

When the distributions returned by the runs of the assumed algorithm are combined, there is a probability of at least $1 - \delta$ that all of the distributions are within $L_1$ distance $\epsilon$ of the true distributions (given that each run received a sufficiently large sample). We allot part of the failure probability $\delta$ to the event that the initial sample did not contain a good representation of all labels, in which case one or more runs may not have received a sufficiently large sample to learn its distribution accurately.

Therefore with probability at least $1 - \delta$, all approximated distributions are within $L_1$ distance $\epsilon$ of the true distributions. If we use the classifier $f'$ which is optimal on these approximated distributions, then by Theorem 2.1 the increase in risk associated with using $f'$ instead of the Bayes optimal classifier $f^*$ is bounded in terms of $\epsilon$. It has been shown that the algorithm requires a sample of size polynomial in $1/\epsilon$, $1/\delta$ and $k$. It follows that

∎
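The construction used in this proof can be sketched as follows: split the labeled sample by class, hand each part to the distribution learner, and classify with the plug-in Bayes rule. Here `learn_distribution` is a hypothetical stand-in (the empirical distribution) for the assumed $L_1$ distribution learner.

```python
from collections import Counter, defaultdict

def learn_distribution(points):
    # Stand-in for the assumed PAC distribution learner: it simply
    # returns the empirical distribution of the points it receives.
    counts = Counter(points)
    return {x: c / len(points) for x, c in counts.items()}

def train_classifier(sample):
    """sample: list of (x, label) pairs. Returns a plug-in classifier."""
    by_label = defaultdict(list)
    for x, l in sample:
        by_label[l].append(x)
    priors = {l: len(pts) / len(sample) for l, pts in by_label.items()}
    dists = {l: learn_distribution(pts) for l, pts in by_label.items()}
    return lambda x: max(priors, key=lambda l: priors[l] * dists[l].get(x, 0.0))

f = train_classifier([("a", 0), ("a", 0), ("b", 1), ("a", 0), ("b", 1)])
print(f("a"), f("b"))  # prints: 0 1
```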

Corollary 2 shows (using Theorem 2.2) how a near-optimal classifier can be constructed, given that an algorithm exists which approximates a distribution over positive data in polynomial time.

###### Corollary 2

If an algorithm has a probability of at least $1 - \delta$ of approximating distributions within KL-divergence $\epsilon$, in time polynomial in $1/\epsilon$ and $1/\delta$, then an algorithm exists which (with probability $1 - \delta$) generates a function mapping each $x$ to a conditional distribution over the class labels of $x$, whose log-likelihood risk exceeds the optimum by an amount linear in $\epsilon$, and whose running time is polynomial in $1/\epsilon$ and $1/\delta$.

###### Proof

The classification algorithm uses the same method as in Corollary 1: a sample $S$ is divided into sets $S_\ell$, and each set is passed to the assumed algorithm, which estimates a distribution over the data in the set.

With probability at least $1 - \delta'$ (for a suitable $\delta'$), each run generates an estimate $D'_\ell$ of the distribution over label $\ell$ such that $KL(D_\ell \,\|\, D'_\ell) \le \epsilon$. The size of each sample must therefore be polynomial in $1/\epsilon$ and $1/\delta'$, and so the total sample size is polynomial in $1/\epsilon$, $1/\delta$ and $k$.

When the distributions returned by the runs of the assumed algorithm are combined, there is a probability of at least $1 - \delta$ that all of the distributions are within KL-divergence $\epsilon$ of the true distributions. We allot part of the failure probability $\delta$ to the event that the initial sample did not contain a good representation of all labels.

Therefore with probability at least $1 - \delta$, all approximated distributions are within KL-divergence $\epsilon$ of the true distributions. If we use the predictor which is optimal on these approximated distributions, then by Theorem 2.2 the increase in risk associated with using it instead of the Bayes optimal predictor is bounded in terms of $\epsilon$. It has been shown that the algorithm requires a sample of size polynomial in $1/\epsilon$, $1/\delta$ and $k$. Let $T$ be an upper bound on the time and sample size used by the assumed algorithm. It follows that

∎

### 2.4 Smoothing: from distance to KL-divergence

Given a distribution that has accuracy $\epsilon$ under the $L_1$ distance, is there a generic way to “smooth” it so that it has similar accuracy under the KL-divergence? From [5] this can be done for domain $\{0,1\}^n$, if we are interested in algorithms that are polynomial in $n$ in addition to other parameters. Suppose however that the domain is bit strings of unlimited length. Here we give a related but weaker result in terms of the bit strings that are used to represent distributions, as opposed to members of the domain. We define a class $C$ of distributions specified by bit strings, such that each member of $C$ is a distribution on a discrete domain, represented to a discrete probability scale. Let $N$ be the length of the bit string describing a distribution $D \in C$. Note that there are at most $2^N$ distributions in $C$ represented by strings of length $N$.

###### Lemma 1

Suppose $C$ is learnable under the $L_1$ distance in time polynomial in $1/\epsilon$, $1/\delta$ and $N$. Then $C$ is learnable under the KL-divergence, with polynomial sample size.

###### Proof

Let $D$ be a member of class $C$, represented by a bit string of length $N$, and let $A$ be an algorithm which takes an input set $S$ (where $|S|$ is polynomial in $1/\epsilon$, $1/\delta$ and $N$) of samples generated i.i.d. from distribution $D$, and with probability at least $1 - \delta$ returns a distribution $D'$ such that $L_1(D, D') \le \epsilon$.

We define an algorithm $A'$ such that, with probability at least $1 - \delta$, $A'$ returns a distribution $D''$ whose KL-divergence from $D$ is suitably small. Algorithm $A'$ runs $A$ on a sample whose size is polynomial in $1/\epsilon$, $1/\delta$ and $N$.

We define $M_N$ to be the unweighted mixture of all distributions in $C$ represented by length-$N$ bit strings. We now define the smoothed distribution $D''$ as a mixture of $D'$ and $M_N$.

By the definition of , . With probability at least , , and therefore with probability at least , .

We define . Members of contribute positively to . Therefore

(7) |

We have shown that , so . Analysing the first term in (7),

Note that for all , . It follows that

Examining the second term in (7),

where , which is a positive quantity for all . Due to the concavity of the logarithm function, it follows that

Therefore, . For values of , it can be seen that . ∎
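The smoothing step of the lemma, mixing the $L_1$-accurate hypothesis with a fixed fallback mixture so that no point in the target's support keeps probability near zero, can be sketched as follows. The fallback here is a small hypothetical distribution standing in for the (exponentially large) mixture over all length-$N$ representations.

```python
import math

def smooth(est, fallback, alpha):
    """Mixture (1 - alpha) * est + alpha * fallback over a discrete domain.
    Every point keeps probability at least alpha * fallback[x], which is
    what makes the KL-divergence from the target finite."""
    support = set(est) | set(fallback)
    return {x: (1 - alpha) * est.get(x, 0.0) + alpha * fallback.get(x, 0.0)
            for x in support}

true_d = {"a": 0.9, "b": 0.1}
est = {"a": 1.0}                 # L1-close to true_d, but KL(true_d || est) is infinite
fallback = {"a": 0.5, "b": 0.5}  # hypothetical fallback mixture
smoothed = smooth(est, fallback, 0.1)
kl = sum(p * math.log(p / smoothed[x]) for x, p in true_d.items())
print(kl)  # finite and small, unlike KL(true_d || est)
```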

###### Corollary 3

Consider the problem of learning PDFAs having $n$ states, over alphabet $\Sigma$, with transition probabilities represented by bit strings of length $m$. Using sample size (but not time) polynomial in $n$, $|\Sigma|$ and $m$ (and the PAC parameters $1/\epsilon$ and $1/\delta$), a distribution in this class can be estimated to within KL-divergence $\epsilon$.

The proof follows from the observation that such a PDFA can be represented using a bit string whose length is polynomial in the parameters.

Consequently we can learn, under the KL-divergence, the same class of PDFAs that can be learned under the $L_1$ distance in [15], i.e. PDFAs with distinguishable states but no restriction on the expected length of their outputs. However, note that the hypothesis is “inefficient” (a mixture of exponentially many PDFAs).

## 3 Conclusion

We have shown a close relationship between the error of an estimated input distribution (as measured by distance or KL-divergence) and the error rate of the resulting classifier. In situations where we believe that input distributions may be accurately estimated, the resulting information about the data may be more useful than just a near-optimal classifier.

A general issue of interest is the question of when one can obtain a good classifier from estimated distributions that satisfy weaker goodness-of-approximation criteria than those considered here. Suppose for example that elements of a 2-element domain are being labeled by the stochastic rule that assigns labels 0 and 1 to either element of the domain with equal probability. Then any classifier does no better than random labeling, and so we can use arbitrary distributions as estimates of the distributions over examples with label 0 and label 1 respectively. In [9] we show that in the basic PAC framework we can sometimes design discriminant functions, based on unlabeled data sets, that result in PAC classifiers without any guarantee on how well-estimated the input distribution is. Further work could explore compromises between the distribution-free setting and the objective, considered here, of approximating the input distributions in a strong sense.

## References

- [1] N. Abe, J. Takeuchi and M. Warmuth. Polynomial learnability of stochastic rules with respect to the KL-divergence and quadratic distance. IEICE Trans. Inf. and Syst., Vol E84-D[3] pp. 299-315 (2001).
- [2] N. Abe, and M. Warmuth. On the Computational Complexity of Approximating Distributions by Probabilistic Automata. Machine Learning, 9, pp. 205-260 (1992).
- [3] G. Bejerano and G. Yona. Variations on probabilistic suffix trees: statistical modeling and prediction of protein families. Bioinformatics, Vol. 17, No. 1, pp. 23-43 (2001).
- [4] A. Clark and F. Thollard. PAC-learnability of probabilistic deterministic finite state automata. Journal of Machine Learning Research, 5, pp. 473-497 (2004).
- [5] M. Cryan and L. A. Goldberg and P. W. Goldberg. Evolutionary trees can be learnt in polynomial time in the two-state general Markov model. SIAM Journal on Computing, 31[2] pp. 375-397 (2001).
- [6] L. Devroye, L. Györfi and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer (1996).
- [7] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. John Wiley and Sons (1973).
- [8] J. Feldman and R. O’Donnell and R. Servedio. Learning Mixtures of Product Distributions over Discrete Domains. 46th Symposium on Foundations of Computer Science (FOCS), pp. 501-510, (2005).
- [9] P.W. Goldberg. Some Discriminant-based PAC Algorithms. Journal of Machine Learning Research, Vol. 7 (2006), pp. 283-306.
- [10] K. Hoffgen. Learning and robust learning of product distributions. In ACM COLT, pp. 77-83 (1993).
- [11] M. Kearns, Y. Mansour, D. Ron, R. Rubinfeld, R. E. Schapire and L. Sellie. On the learnability of discrete distributions. In Proceedings of STOC, pp. 273-282 (1994).
- [12] M. Kearns and R. E. Schapire. Efficient distribution-free learning of probabilistic concepts. Journal of Computer and System Sciences, 48[3] pp. 464-497 (1993).
- [13] M. Kearns, R. E. Schapire and L. M. Sellie. Toward efficient agnostic learning. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, ACM Press, pp. 341-352 (1992).
- [14] E. Mossel and S. Roch.