Kernel Estimation in High-Energy Physics
Abstract
Kernel estimation provides an unbinned and nonparametric estimate of the probability density function from which a set of data is drawn. In the first section, after a brief discussion of parametric and nonparametric methods, the theory of kernel estimation is developed for univariate and multivariate settings. The second section discusses some of the applications of kernel estimation to high-energy physics. The third section provides an overview of the available univariate and multivariate packages. This paper concludes with a discussion of the inherent advantages of kernel estimation techniques and the systematic errors associated with the estimation of parent distributions.
keywords:
Kernel Estimation, Multivariate Probability Density Estimation, KEYS, RootPDE, WinPDE, PDE, HEPUKeys, Unbinned, Non-Parametric
1 Introduction
1.1 Overview
Perhaps the most common practical duty of a particle physicist is to analyze various distributions from a set of data. The typical tool used in this analysis is the histogram. The role of the histogram is to serve as an approximation of the parent distribution, or probability density function (pdf), from which the data were drawn. While histograms are straightforward and computationally efficient, many more sophisticated techniques have been developed in the last century. One such method, kernel estimation, grew out of a simple generalization of the histogram and has proved to be particularly well-suited for particle physics.
In order to produce continuous estimates of the parent distribution from the empirical probability density function, several techniques have been developed. These techniques can be roughly classified as either parametric or nonparametric. Essentially, a parametric method assumes a model dependent on a set of parameters. The specification of this model is "entirely a matter for the practical [physicist]" (from a debate between R.A. Fisher and Karl Pearson). The goal of a parametric estimate is to optimize the parameters with respect to some goodness-of-fit criterion (e.g. χ², log-likelihood, etc.). Parametric models are powerful because they allow us to infuse our model with our knowledge of physics; at the same time, they are highly dependent on the specification of the model. Parametric methods are clearly not practical for estimating the distributions of a wide variety of physical phenomena.
The goal of nonparametric methods is to remove the model-dependence of the estimator. Nonparametric estimates are concerned directly with optimizing the estimate itself. The prototypical nonparametric density estimate is the histogram (the name 'histogram' was coined by Karl Pearson). Somewhat counterintuitively, nonparametric methods typically involve a large (possibly infinite) number of "parameters" (better thought of as degrees of freedom). Scott and Terrell supplied a more concrete definition of a nonparametric estimator: "Roughly speaking, nonparametric estimators are asymptotically local, while parametric estimators are not." [Scott] That is to say, the influence of a data point t_i on the density at x should vanish asymptotically (in the limit of an infinite amount of data) for any x ≠ t_i in a nonparametric estimate. The purpose of this paper is to introduce the notion of a kernel estimator and the inherent advantages it offers over other parametric and nonparametric estimators.
1.2 Kernel Estimation
The notion of a kernel estimator grew out of the asymptotic limit of Averaged Shifted Histograms (ASH). The ASH is a simple device that reduces the binning effects of traditional histograms. The ASH algorithm is as follows: first, create a family of m histograms f_k, each with bin width h, such that the first bin of f_k is shifted to begin at kh/m. Because the bin origin is an artificial parameter, each of the f_k is an equally good approximation of the parent distribution. Thus, an obvious estimate of the parent distribution is simply the average of the f_k, hence the name 'Averaged Shifted Histogram'. Note that the resulting estimate (with m times more bins than the original) is not a true histogram, because the height of a 'bin' is not necessarily equal to the number of events falling in that bin. However, it is a superior estimate of the parent distribution, because the dependence on the initial bin position is essentially removed. In the limit m → ∞ the ASH is equivalent to placing a triangle-shaped kernel of probability about each data point [Scott].
1.2.1 Fixed Kernel Estimation
In the univariate case, the general kernel estimate of the parent distribution is given by
\hat{f}_0(x) = \frac{1}{nh} \sum_{i=1}^{n} K\left( \frac{x - t_i}{h} \right)    (1)
where {t_1, ..., t_n} represents the data and h is the smoothing parameter (also called the bandwidth). Immediately we can see that our estimate is bin-independent regardless of our choice of the kernel K. The role of K is to spread out the contribution of each data point in our estimate of the parent distribution. An obvious and natural choice of K is a Gaussian with μ = 0 and σ = 1:
K(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}    (2)
Though there are many choices of K, Gaussian kernels enjoy the attributes of being positive definite, infinitely differentiable, and defined on an infinite support. For physicists this means that our estimate is smooth and well-behaved in the tails.
Now we concern ourselves with the choice of the bandwidth h. In Equation 1, the bandwidth is constant for all t_i; thus, f̂_0 is referred to as the fixed kernel estimate. The role of h is to set the scale for our kernels. Because the kernel method is a nonparametric method, h is completely specified by our data set. In the limit of a large amount (n → ∞) of normally distributed data [Scott], the mean integrated squared error of f̂_0 is minimized when
h^* = \left( \frac{4}{3} \right)^{1/5} \sigma \, n^{-1/5}    (3)
Of course, we rarely deal with normally distributed data, and, unfortunately, the optimal bandwidth is not known in general. In the case of highly bimodal data (e.g. the output of a neural network discriminant), the standard deviation σ of the data is not a good measure of the scale of the true structure of the distribution.
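To make Equations 1-3 concrete, the fixed kernel estimate with the normal-reference bandwidth can be sketched in a few lines of pure Python. The function name `fixed_kde` is invented for this illustration; it is not part of any of the packages described later.

```python
import math

def fixed_kde(data, h=None):
    """Fixed-bandwidth Gaussian kernel estimate (Equations 1-3, sketch).

    If h is not supplied, use the normal-reference rule
    h* = (4/3)^(1/5) * sigma * n^(-1/5).
    """
    n = len(data)
    if h is None:
        mean = sum(data) / n
        sigma = math.sqrt(sum((t - mean) ** 2 for t in data) / n)
        h = (4.0 / 3.0) ** 0.2 * sigma * n ** -0.2
    def f_hat(x):
        # Sum a unit-area Gaussian of width h centred on each data point.
        return sum(math.exp(-0.5 * ((x - t) / h) ** 2)
                   for t in data) / (n * h * math.sqrt(2.0 * math.pi))
    return f_hat
```

By construction the estimate integrates to unity and is smooth for any choice of h; only the scale of the structure it resolves depends on the bandwidth.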
1.2.2 Adaptive Kernel Estimation
An astute reader may object to the choice of h^* given in Equation 3 on the grounds of self-consistency: nonparametric estimates should only depend on the data locally, and σ is a global quantity. In order for the estimate to handle a wide variety of distributions as well as depend on the data only locally, we must introduce adaptive kernel estimation. The only difference in the adaptive kernel technique is that the bandwidth parameter is no longer a global quantity; each data point t_i receives its own bandwidth h_i, which sets the local scale as σ does in Equation 3. Abramson [Abramson] proposed an adaptive bandwidth parameter given by the expression
h_i \propto \frac{1}{\sqrt{f(t_i)}}    (4)
Equation 4 reflects the fact that in regions of high density we can accurately estimate the parent distribution with narrow kernels, while in regions of low density we require wide kernels to smooth out statistical fluctuations in our empirical probability density function. Technically we are left with two outstanding issues: i) the expression for h_i given in Equation 4 references the a priori density f, which we do not know, and ii) the constant of proportionality has still not been specified. Clearly, h_i ∝ √(σ/f(t_i)), by dimensional analysis. Additionally, f̂_0 is our best estimate of the true parent distribution. Thus we obtain
\hat{f}_1(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{h_i} K\left( \frac{x - t_i}{h_i} \right)    (5)
with
h_i = \left( \frac{4}{3} \right)^{1/5} \sqrt{\frac{\sigma}{\hat{f}_0(t_i)}} \, n^{-1/5}    (6)
The adaptive kernel estimate f̂_1 can be thought of as a "second iteration" of the general kernel estimation technique. In practice, the adaptive kernel technique almost completely removes any dependence on the original choice of the bandwidth in the fixed kernel estimate f̂_0. Furthermore, the adaptive kernel deals very well with multimodal distributions. In extreme situations (i.e. when the scale of the local structure of the data is more than about two orders of magnitude smaller than the standard deviation of the data) the overall factor ρ multiplying Equation 6 should be adjusted from its typical value of unity. In that case
h_i = \rho \left( \frac{4}{3} \right)^{1/5} \sqrt{\frac{\sigma}{\hat{f}_0(t_i)}} \, n^{-1/5}    (7)
We have concluded the construction of a nonparametric estimate of a univariate parent distribution based on the empirical probability density function. Our estimate is bin-independent, scale-invariant, continuously differentiable, positive definite, and everywhere defined.
1.2.3 Boundary Kernels
Both the fixed and adaptive kernel estimates assume that the domain of the parent distribution is all of ℝ. However, the output of a neural network discriminant, for example, is usually confined to a finite interval such as [0, 1]. In order to keep probability from "spilling out" of the boundaries we must introduce the notion of a boundary kernel. Without boundary kernels, our estimate will not be properly normalized and will underestimate the true parent distribution close to the boundaries.
Boundary kernels modify our traditional Gaussian kernels so that the total probability in the allowed region is unity. Clearly, our kernels should smoothly revert to the original Gaussian kernels as we move far from the boundaries. This constraint quickly reduces the kinds of boundary kernels we need to consider. Though a large amount of work has been put forward to introduce kernels which preserve the criteria
\int_{\Omega} K_B(t) \, dt = 1 \qquad \mathrm{and} \qquad \int_{\Omega} t \, K_B(t) \, dt = 0,    (8)

where Ω denotes the allowed region,
these methods do not lend themselves well to physics applications. The primary problem is that the parametrized family of boundary kernels may contain kernels that are not positive definite, which negates their applicability to physics. Also, boundary kernels satisfying Equation 8 systematically underestimate the parent distribution at a moderate distance from the boundary and overestimate it very near the boundary.
An alternate solution to the boundary problem is to simply reflect the data set about the boundary [Scott]. In that case, the probability that spills out of the boundary is exactly compensated by its mirror image.
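The reflection trick can be sketched directly: mirror each data point about the boundary, then evaluate the fixed kernel estimate of the augmented sample only on the allowed side. The function name `reflected_kde` is invented for this sketch, which assumes a single lower boundary.

```python
import math

def reflected_kde(data, h, lo):
    """Fixed-bandwidth Gaussian KDE on [lo, inf), with the data mirrored
    about the boundary so that probability spilling below lo is folded back."""
    n = len(data)
    mirrored = data + [2.0 * lo - t for t in data]  # reflect each point about lo
    norm = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    def f_hat(x):
        if x < lo:
            return 0.0  # the domain is bounded below by lo
        return norm * sum(math.exp(-0.5 * ((x - t) / h) ** 2) for t in mirrored)
    return f_hat
```

Each point and its mirror together deposit exactly one unit of probability above the boundary, so the estimate remains normalized on the physical domain.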
1.3 Multivariate Kernel Estimation
The general kernel estimation technique generalizes to d dimensions [Scott]. One choice for the d-dimensional kernel is simply a product of univariate kernels with independent smoothing parameters. The following discussion will be restricted to the context of such product kernels.
1.3.1 Covariance Issues
When dealing with multivariate density estimation, the covariance structure of the data becomes an issue. Because the covariance structure of the data may not match the diagonal covariance structure of our kernels, we must apply a linear transformation which diagonalizes the covariance matrix of the data. Ideally, the transformation would remain a local object; however, in practice such nonlinear transformations may be very difficult to obtain. In the remainder of this paper, the transformation matrix will be referred to as A, and the data points t_i will be assumed to be transformed.
1.3.2 Fixed Kernel Estimation
For product kernels, the fixed kernel estimate is given by
\hat{f}_0(\mathbf{x}) = \frac{1}{n} \sum_{i=1}^{n} \prod_{j=1}^{d} \frac{1}{h_j} K\left( \frac{x_j - t_{ij}}{h_j} \right)    (9)
In the asymptotic limit of normally distributed data, the mean integrated squared error of f̂_0 is minimized when
h_j^* = \left( \frac{4}{d+2} \right)^{1/(d+4)} \sigma_j \, n^{-1/(d+4)}    (10)
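Equations 9-10 translate into a compact product-kernel sketch. The function name `product_kde` is invented here, and the data are assumed to be already transformed to have diagonal covariance as described in Section 1.3.1.

```python
import math

def product_kde(data, d):
    """Multivariate fixed product-kernel estimate (Equations 9-10, sketch).

    data: list of d-dimensional points (tuples), already transformed so the
    covariance matrix is diagonal. Per-coordinate bandwidths follow
    h_j = (4/(d+2))^(1/(d+4)) * sigma_j * n^(-1/(d+4)).
    """
    n = len(data)
    means = [sum(p[j] for p in data) / n for j in range(d)]
    sigmas = [math.sqrt(sum((p[j] - means[j]) ** 2 for p in data) / n)
              for j in range(d)]
    hs = [(4.0 / (d + 2)) ** (1.0 / (d + 4)) * sigmas[j] * n ** (-1.0 / (d + 4))
          for j in range(d)]
    norm = 1.0 / (n * math.prod(hs) * (2.0 * math.pi) ** (d / 2.0))
    def f_hat(x):
        total = 0.0
        for p in data:
            # Product of univariate Gaussians = one exponent summed over axes.
            expo = sum(((x[j] - p[j]) / hs[j]) ** 2 for j in range(d))
            total += math.exp(-0.5 * expo)
        return norm * total
    return f_hat
```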
1.3.3 Adaptive Kernel Estimation
The adaptive kernel estimate is constructed in a manner similar to the univariate case; however, the scaling law is usually left in a general form. Because most multivariate data actually lie on a lower-dimensional manifold embedded in the input space, the effective dimensionality d^* must be found by maximizing some measure of performance or by making some assumption. Thus the multivariate adaptive bandwidth is usually written
h_i = \left( \frac{4}{d^*+2} \right)^{1/(d^*+4)} n^{-1/(d^*+4)} \, \frac{1}{\sqrt{\hat{f}_0(\mathbf{t}_i)}}    (11)
Though d^* ≤ d, the precise value depends on the problem. Note that the form of h_i given in Equation 11 is independent of the coordinate index j; thus it produces spherically symmetric kernels. This is clearly not optimal. Furthermore, when d ≠ 2 the optimal value of the bandwidth may vary wildly. This is because the units are no longer correct and powers of scale factors are introduced by f̂_0^{-1/2}. Both of these problems may be remedied with the introduction of a natural length scale associated with the data: the geometric mean of the standard deviations of the transformed t_i, σ̃ = (∏_j σ_j)^{1/d}. In the absence of local covariance information, the best we can do is assume that the h_{ij} are proportional to σ_j and inversely proportional to √(f̂_0(t_i)). Thus we arrive at
h_{ij} = \left( \frac{4}{d^*+2} \right)^{1/(d^*+4)} n^{-1/(d^*+4)} \, \sigma_j \sqrt{\frac{\tilde{\sigma}}{\hat{f}_0(\mathbf{t}_i)}}    (12)
which produces estimates that are invariant under linear transformations of the input space when the covariance matrix is diagonalized.
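A sketch of the multivariate adaptive estimate follows, assuming the bandwidth form discussed above (proportional to σ_j and to √(σ̃/f̂_0)), with the effective dimension d^* simply taken equal to d. The function name `adaptive_product_kde` is invented for this illustration.

```python
import math

def adaptive_product_kde(data, d, rho=1.0):
    """Adaptive multivariate product-kernel estimate (sketch of Equation 12).

    h_ij = rho * (4/(d+2))^(1/(d+4)) * n^(-1/(d+4))
         * sigma_j * sqrt(sigma_tilde / f0(t_i)),
    where sigma_tilde is the geometric mean of the sigma_j and f0 is a
    fixed-bandwidth pilot estimate.
    """
    n = len(data)
    means = [sum(p[j] for p in data) / n for j in range(d)]
    sigmas = [math.sqrt(sum((p[j] - means[j]) ** 2 for p in data) / n)
              for j in range(d)]
    sigma_tilde = math.prod(sigmas) ** (1.0 / d)   # geometric mean
    c = (4.0 / (d + 2)) ** (1.0 / (d + 4)) * n ** (-1.0 / (d + 4))
    h0 = [c * s for s in sigmas]                   # pilot (fixed) bandwidths

    def gauss_prod(x, p, hs):
        # Normalized product of univariate Gaussians centred at point p.
        expo = sum(((x[j] - p[j]) / hs[j]) ** 2 for j in range(d))
        return math.exp(-0.5 * expo) / (math.prod(hs) * (2.0 * math.pi) ** (d / 2.0))

    def f0(x):
        return sum(gauss_prod(x, p, h0) for p in data) / n

    # Per-point, per-coordinate adaptive bandwidths.
    hs_i = [[rho * c * sigmas[j] * math.sqrt(sigma_tilde / f0(p)) for j in range(d)]
            for p in data]

    def f1(x):
        return sum(gauss_prod(x, p, h) for p, h in zip(data, hs_i)) / n
    return f1
```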
1.3.4 Multivariate Boundary Kernels
Just as in the univariate case, it is possible that the physically realizable domain of our parent distribution is not all of ℝ^d, but instead a bounded subspace of ℝ^d. Typically, this situation arises when one of the components of the sample vector is bounded in the univariate sense (e.g. x_j ≥ 0). However, once we diagonalize the covariance matrix of our data the boundary condition will take on a new form in the transformed coordinates. In general, any linear boundary in our original coordinates can be expressed as n̂ · x = c, where n̂ is the unit normal to the (d−1)-dimensional hyperplane in our d-dimensional domain and c is the distance between the origin and the point of closest approach. After transforming to a set of coordinates x' = Ax, in which the t'_i have diagonal covariance, our boundary condition is given by n̂' · x' = c', with n̂' = (A^{-1})^T n̂ / |(A^{-1})^T n̂| and c' = c / |(A^{-1})^T n̂|.
Thus, for each boundary one must introduce a reflected sample with

\mathbf{t}_i^{\mathrm{refl}} = \mathbf{t}_i + 2 \left( c' - \hat{n}' \cdot \mathbf{t}_i \right) \hat{n}'    (13)
in order to rectify the probability that spilled into unphysical regions.
1.3.5 EventbyEvent Weighting
In high-energy physics it is often necessary to combine data from heterogeneous sources (e.g. independently produced Monte Carlo data sets which together comprise the Standard Model expectation). In general one would like to estimate the parent distribution from a more general empirical probability density function in which each event carries a weight w_i, the a posteriori probability of the event. In the case of combining various Monte Carlo samples, one must reweight all events of a sample to some common integrated luminosity L (say, 1 pb⁻¹) before combining them. Thus for a Monte Carlo sample with N events and cross-section σ_MC, each event must be weighted with w = L σ_MC / N.
The covariance matrix of the weighted sample must be generalized as follows:
V_{jk} = \frac{\sum_{i} w_i \, (t_{ij} - \bar{t}_j)(t_{ik} - \bar{t}_k)}{\sum_{i} w_i}    (14)
where \bar{t}_j = \sum_i w_i t_{ij} / \sum_i w_i and W = \sum_i w_i. Then our estimate is simply given by
\hat{f}(\mathbf{x}) = \frac{1}{W} \sum_{i=1}^{n} w_i \prod_{j=1}^{d} \frac{1}{h_{ij}} K\left( \frac{x_j - t_{ij}}{h_{ij}} \right)    (15)
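The weighted estimate reduces, in the univariate fixed-bandwidth case, to a short sketch: each event contributes in proportion to its weight, and the normalization uses the total weight instead of the event count. The function name `weighted_kde` is invented here.

```python
import math

def weighted_kde(data, weights, h):
    """Univariate fixed-bandwidth Gaussian KDE with per-event weights
    (a one-dimensional sketch of Equation 15)."""
    wsum = sum(weights)  # W, the total weight
    norm = 1.0 / (wsum * h * math.sqrt(2.0 * math.pi))
    def f_hat(x):
        return norm * sum(w * math.exp(-0.5 * ((x - t) / h) ** 2)
                          for t, w in zip(data, weights))
    return f_hat
```

An event with weight 2 contributes exactly as much as two unweighted events at the same position, which is the behaviour required when merging Monte Carlo samples of different luminosities.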
2 Applications
Kernel estimation techniques are applicable to all situations in which it is necessary or useful to have an estimate of the parent distribution of a set of data. In high-energy physics the use of kernel estimation runs the gamut from event selection to confidence level calculation.
2.1 Confidence Level Calculations
The most widespread use of kernel estimation techniques has been in the context of confidence level calculations, primarily for the Higgs searches at LEP [KEYS, L3keys, OPALkeys, DELPHIkeys, LEPHIGGSkeys]. Each selected candidate event has associated with it one or several discriminant variables (e.g. reconstructed Higgs mass, neural network output, etc.). Estimates of the distributions of the discriminant variables are constructed for the signal and background processes, f̂_s and f̂_b respectively. These estimates are used to further discriminate between candidate events via the log-likelihood ratio or some other 'signal estimator'. With the log-likelihood ratio in hand, confidence level calculations transcend pure number counting and allow for a more powerful statistical tool with which to interpret the data [CLFFT].
2.2 Measurements
2.2.1 Maximum Likelihood Fits
Another context in which kernel estimation has been applied is the measurement of physical constants via maximum likelihood fitting. Traditionally, the log-likelihood is maximized with respect to a set of parameters θ, where the likelihood is built from a parametrized model of the physical situation. In practice not all of the θ_i are 'floated', or varied, in the maximization routine; instead many parameters are 'fixed' from some independent measurement. While this model incorporates empirical or theoretical information, it may make unwanted assumptions about our data.
For an example, let us consider the measurement of sin 2β at a B factory. The probability density of a CP eigenstate decay recoiling from a tagged B⁰ (B̄⁰) meson is given by
f_{\pm}(\Delta t) = \frac{e^{-|\Delta t| / \tau}}{4\tau} \left[ 1 \pm \sin 2\beta \, \sin(\Delta m_d \, \Delta t) \right]    (16)
where Δt is the time difference between the decay of the CP eigenstate and the recoiling B⁰ (B̄⁰) tagged meson, with the + (−) sign for B⁰ (B̄⁰) tags. However, in an experiment we must take into account the mistag rate and the resolution of Δt. The standard prescription is to measure the mistag rate and parametrize the resolution distribution with a single (or double) Gaussian with bias μ and variance σ². The final probability distribution is obtained via a convolution with the resolution function and depends on these parameters as well. Now with the mistag rate, μ, and σ 'fixed' we must 'float' sin 2β to make our measurement [BaBarPhysicsBook]. Here the Gaussian form of the resolution function, while justified, will have a systematic influence on the measured value of sin 2β. If, on the other hand, the resolution function were estimated via nonparametric means (i.e. kernel estimation techniques), then there would be no artificial influence on the measurement and nontrivial resolution effects would be taken into account automatically.
2.2.2 NonParametric Parameter Estimation
Knuteson proposed a nonparametric parameter estimation technique based on kernel estimation of the joint density of feature vectors and the parameters to be measured. Via Bayesian statistics the posterior density of the parameters given an observation is obtained. One then estimates the parameters by simply maximizing the posterior density [alphaPDE].
2.3 Discriminant Analysis
The first uses of kernel estimation in high-energy physics were in the context of new particle searches. In order to create a discriminant analysis, one constructs the discriminant according to:
D(\mathbf{x}) = \frac{\hat{f}_s(\mathbf{x})}{\hat{f}_s(\mathbf{x}) + \hat{f}_b(\mathbf{x})}    (17)
It is easily seen that signal-like events are mapped toward 1 and background-like events are mapped toward 0. By placing a cut on D, a (possibly disconnected) region (bounded by the corresponding (d−1)-dimensional contour) in the input space is selected. Because of the geometrical complexity of the selected region, analyses of this type are more efficient than linear discriminant analyses. Furthermore, because of the intuitive nature of Equation 17, many prefer analyses of this type over Artificial Neural Networks. While these analyses may avoid the black-box properties of Artificial Neural Networks, their usefulness is limited to modest dimensionality due to the curse of dimensionality. However, with intelligently chosen variables, this practical restriction on d is not much of a limitation. Discriminant analyses such as these were met with much success in the search for the top quark at DØ [miettinen1, miettinen2, miettinen3].
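Equation 17 can be sketched directly on top of univariate kernel estimates; the names `kde1` and `discriminant` are invented for this illustration.

```python
import math

def kde1(data, h):
    """Helper: univariate fixed-bandwidth Gaussian kernel estimate."""
    n = len(data)
    norm = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    return lambda x: norm * sum(math.exp(-0.5 * ((x - t) / h) ** 2) for t in data)

def discriminant(signal, background, h):
    """D(x) = f_s(x) / (f_s(x) + f_b(x)), built from kernel estimates (Eq. 17)."""
    fs, fb = kde1(signal, h), kde1(background, h)
    def D(x):
        s, b = fs(x), fb(x)
        return s / (s + b) if s + b > 0.0 else 0.5  # undecided far from all data
    return D
```

Events falling where the signal density dominates are mapped near 1, and events in background-dominated regions near 0, exactly as described above.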
2.4 Event Selection and Cut Optimization
An obvious application of kernel estimation techniques is the improvement and automation of cut optimization. Given signal and background samples, one typically varies cuts to find a region which maximizes some measure of performance (e.g. S/√B, S/√(S+B), etc.). Traditionally, the cuts are applied directly to the sample events, so only discrete values of the performance measure are obtained. This leaves the experimentalist in a bit of a quandary, because in the neighborhood of the optimal cut value the performance measure tends to have many local extrema and is constant between sample points. Not surprisingly, the optimal cut value is often just chosen by eye.
The above method of cut optimization may be generalized by estimating the performance not from the raw samples, but instead from the estimates of the signal and background parent distributions, f̂_s and f̂_b respectively. Then instead of having discrete values of the signal and background yields we obtain continuous functions s(x_cut) and b(x_cut). Finally, we are left with the more well-defined problem of finding the global maximum of a continuously varying performance measure; in the case of S/√(S+B) we obtain
x_{\mathrm{cut}}^{*} = \arg\max_{x_{\mathrm{cut}}} \frac{s(x_{\mathrm{cut}})}{\sqrt{s(x_{\mathrm{cut}}) + b(x_{\mathrm{cut}})}}    (18)
In fact, it should be pointed out that with a finite sample size there is an inherent uncertainty in f̂_s and f̂_b; thus, there is an inherent uncertainty in the optimal cut. In that case, the optimal selection strategy becomes probabilistic: events on the boundary should be selected with some well-defined probability. While this strategy can be carried out in a deterministic fashion (e.g. generating pseudorandom numbers with event and run number as a random seed), the gain in performance does not merit the added complexity of the selection.
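The continuous cut optimization described above can be sketched as follows: smooth both samples with Gaussian kernel estimates, accumulate the expected yields above each candidate cut, and maximize S/√(S+B). The function name `optimize_cut` and the inputs n_s and n_b (the expected total signal and background yields) are assumptions of this sketch.

```python
import math

def optimize_cut(signal, background, n_s, n_b, h, lo, hi, nbins=300):
    """Continuous one-sided cut optimisation (Section 2.4 sketch):
    maximise s(x_cut) / sqrt(s(x_cut) + b(x_cut)) for the cut x > x_cut."""
    def kde(data):
        n = len(data)
        norm = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
        return lambda x: norm * sum(math.exp(-0.5 * ((x - t) / h) ** 2) for t in data)
    fs, fb = kde(signal), kde(background)
    dx = (hi - lo) / nbins
    xs = [lo + (i + 0.5) * dx for i in range(nbins)]
    # Accumulate from the top so s_above and b_above approximate the
    # integrals of n_s*fs and n_b*fb above the current candidate cut.
    s_above = b_above = 0.0
    best_sig, best_cut = -1.0, lo
    for x in reversed(xs):
        s_above += n_s * fs(x) * dx
        b_above += n_b * fb(x) * dx
        sig = s_above / math.sqrt(s_above + b_above) if s_above + b_above > 0 else 0.0
        if sig > best_sig:
            best_sig, best_cut = sig, x
    return best_cut, best_sig
```

Because the yields are now smooth functions of the cut position, the scan has a well-defined global maximum instead of the piecewise-constant landscape obtained from the raw samples.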
3 Available Packages
3.1 Univariate Packages
3.1.1 Keys
In order to implement univariate kernel estimation techniques and produce FORTRAN functions of the estimate based on the data in HBOOK ntuples, KEYS (an acronym for Kernel Estimating Your Shapes) was developed. KEYS has been implemented as a Perl script which: 1) parses an input file specifying the input ntuples and output filenames of a set of shapes (the word 'shape' is lingo for the continuous estimate of the parent distribution), 2) interfaces with PAW to obtain the data set, 3) produces a FORTRAN function to calculate the estimate, 4) compiles and evaluates the FORTRAN function, 5) produces plots (see Figure 2) of the estimate and the Kolmogorov-Smirnov probability that the data are compatible with the estimate [KS], and 6) produces a second FORTRAN function which linearly interpolates between 300 sample points in order to expedite the evaluation of the function. KEYS has been used extensively for the parametrization of discriminant variable distributions in Higgs searches at LEP [KEYS, ALEPHkeys, OPALkeys, DELPHIkeys, L3keys, LEPkeys].
3.1.2 HEPUKeys
Another implementation of univariate kernel estimation has been developed by Jeremiah Mans of the L3 collaboration (electronic mail: Jeremy.M). This implementation is written in C++. The HEPUKeys class executes a C++ compiler and uses the dynamic loading methods (dlopen, dlclose, dlsym) for interactive or embedded implementations of the kernel estimation technique. This embedded structure allows one to include confidence level calculation, cut optimization, or any other application of kernel estimation directly in the analysis code.
3.2 Multivariate Packages
3.2.1 Pde
The original implementation of multivariate kernel estimation in high-energy physics was developed at Rice University by Holmström, Miettinen, and Sain, with guidance from Scott, around 1994. This VMS-based FORTRAN implementation, called PDE, was applied to searches for the top quark at DØ [miettinen1, miettinen2, miettinen3].
3.2.2 RootPDE
In order to produce a more portable and flexible package for kernel estimation, the author developed an object-oriented implementation written in C++ (this work was partially funded by an Envision grant from Rice University). The core numerical procedures were left in a very general form, with polymorphic kernels and a minimal user interface. The core procedures have been adopted by two complete packages: RootPDE and WinPDE.
Padley and Askew interfaced the core numerical procedures described above into a shared Root library called RootPDE. This package was primarily intended for discriminant analyses as described in Section 2.3. However, this package was submitted to DØ's CVS repository prior to the development of Equation 12 (see Section 1.3.3) and does not support event-by-event weighting (see Section 1.3.5) or boundary kernels (see Sections 1.2.3 and 1.3.4).
Since the original release of RootPDE, the author has been including more advanced features and has developed methods which allow a kernel estimate object to write out a C function for use in other applications. Furthermore, the interface has been generalized from pure discriminant analysis to a form more amenable to any of the applications described in Section 2. Currently, this development is taking place at BaBar; however, it will be made publicly available in the near future.
3.2.3 WinPDE
WinPDE is very similar to the original release of RootPDE described in Section 3.2.2; however, the interface is written in Visual Basic and intended for Microsoft Windows platforms. Furthermore, WinPDE can interface with Microsoft Excel spreadsheets and offers a very intuitive user interface.
4 Conclusions
4.1 Comparison with SMOOTH
It seems appropriate to put kernel estimation techniques in a proper setting before concluding with a discussion of their inherent benefits. While kernel estimation techniques may be applied to situations in which parametric estimates are popular, that comparison has essentially been made in Section 1.1. Instead, let us consider perhaps the most widely used nonparametric density estimation technique in high-energy physics: PAW's SMOOTH utility.
4.1.1 Smooth
A full development of the HQUADF function that is used by PAW's SMOOTH utility is beyond the scope of this paper. However, a brief outline of the algorithm is presented. First and foremost, it is important to realize that SMOOTH operates on histograms and not on the original data set. Thus, SMOOTH is dependent on the original binning of the data. SMOOTH was introduced by this journal in John Allison's 1993 paper [Allison]. We will restrict ourselves to the univariate case. Essentially, SMOOTH works by finding the bins of significant variation in the histogram and then using those points to construct a smoothed linear interpolation. Bins of significant variation are those which satisfy |Δ_i| > T, where T is a user-defined significance threshold and
\Delta_i = \frac{n_{i+1} - n_i}{\sqrt{n_{i+1} + n_i}},    (19)

with n_i the content of bin i.
With the points of significant variation in hand, the smoothed shape is given by
\hat{f}(x) = \sum_{j} a_j \, \phi_j(x), \qquad \phi_j(x) = \sqrt{(x - x_j)^2 + r_j^2}    (20)
where the φ_j are multiquadric radial basis functions centered on the points x_j of significant variation. The r_j are user-defined smoothness parameters (radii of curvature). The coefficients a_j are found by minimizing the χ² between the smoothed shape and the original histogram. As Allison pointed out, "lower [χ²] can be obtained by reducing the cut on [significance] at the expense of following more of what might only be statistical fluctuations." By a different choice of T and r_j, the user has the power to magnify or remove statistical fluctuations in the data.
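The multiquadric expansion of Equation 20 amounts to a linear least-squares fit. The sketch below (function name `multiquadric_fit` invented here) fits the coefficients a_j by solving the normal equations with Gaussian elimination; it illustrates the expansion itself, not the HQUADF point-selection logic.

```python
import math

def multiquadric_fit(centers, radii, xs, ys):
    """Least-squares fit of f(x) = sum_j a_j * sqrt((x - x_j)^2 + r_j^2)
    to the points (xs, ys), a sketch of the expansion in Equation 20."""
    m, k = len(xs), len(centers)
    phi = [[math.sqrt((x - c) ** 2 + r ** 2) for c, r in zip(centers, radii)]
           for x in xs]                       # m x k design matrix
    # Normal equations: (phi^T phi) a = phi^T y
    A = [[sum(phi[i][p] * phi[i][q] for i in range(m)) for q in range(k)]
         for p in range(k)]
    b = [sum(phi[i][p] * ys[i] for i in range(m)) for p in range(k)]
    # In-place Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r_: abs(A[r_][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, k):
            fac = A[row][col] / A[col][col]
            for c2 in range(col, k):
                A[row][c2] -= fac * A[col][c2]
            b[row] -= fac * b[col]
    a = [0.0] * k
    for row in range(k - 1, -1, -1):
        a[row] = (b[row] - sum(A[row][c2] * a[c2]
                               for c2 in range(row + 1, k))) / A[row][row]
    return lambda x: sum(aj * math.sqrt((x - c) ** 2 + r ** 2)
                         for aj, c, r in zip(a, centers, radii))
```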
4.1.2 Comparison
Despite the user-specified parameters T and r_j, SMOOTH is a nonparametric estimate of a probability density function based on a set of data. The primary differences between SMOOTH and kernel estimates are their approach and their rigor. While kernel estimates are bin-independent constructions of the estimate, SMOOTH is a parameter-dependent fit of the estimate to a user-provided histogram. Practically speaking, kernel estimates are based on well-defined statistical techniques, while SMOOTH's estimates are adjusted by eye, allowing for user bias and large systematic uncertainty.
4.2 Systematic Errors
When kernel estimation techniques are applied to confidence level calculations or parameter estimation, systematic effects become of particular importance. One may loosely classify the systematic errors associated with probability density estimation as either inherent or user-related errors. In its pure form the kernel estimation technique is entirely deterministic and has no user-specified parameters. If one decides to free the value of ρ from its nominal value of unity (see Equation 7) or to choose an effective dimension d^* ≠ d, then user-related systematic errors are introduced. For SMOOTH, the user-related parameters T and r_j cannot be avoided. In addition to the possible user-related systematic errors, there are inherent systematic errors introduced by any probability density estimation technique. For parametric estimates, this inherent systematic is related to the quality of the model; for nonparametric estimates, it is related to the flexibility of the technique. The development of kernel estimation techniques has been directly focused on flexibility and the minimization of a particular choice of inherent systematic error: the asymptotic mean integrated squared error [Scott].
In practice, an experimentalist will want to choose their own estimate of the inherent systematic error (e.g. the effect on the measured value of a parameter or 95% confidence level limit). This can be done in a variety of ways that effectively reduce to producing a family of estimates from independent samples of the same parent distribution. This family may be obtained by simply splitting up the data or via toy Monte Carlo simulation. Because the systematic error introduced by the estimation technique is a function(al) of the sampled parent distribution (which is unknown), the estimate itself is the best available choice of the parent distribution to be sampled in a Monte Carlo study.
4.3 Remarks
Obviously, kernel estimation techniques are very powerful and very relevant to high-energy physics. While these techniques have been applied to a wide range of analyses, they seem to be largely unknown to the community. It is the aim of this paper to describe kernel estimation techniques, their applications, and their current implementations in order to promote their use. It is a source of personal gratification to the author to be part of a cross-pollination of ideas between fields, in this case statistics and high-energy physics.
5 Acknowledgements
The author would like to thank Hannu Miettinen; without his original guidance this paper would not have been written. The development of KEYS was largely influenced by members of the LEP Higgs Analysis Working Group, most notably Chris Tully, Steve Armstrong, Peter McNamara, Jason Nielsen, Arnulf Quadt, Tom Junk, Peter Igo-Kemenes, Tom Greening, and Yuanning Gao. Thanks to Paul Padley and Andrew Askew for the development of RootPDE and Stephen McCaul for aid in the design of its core numerical procedures. Karl Berkelman provided very useful discussion on the sensitive issues of systematic errors. Finally, the author is grateful for the support of Sau Lan Wu during a stay at CERN in which the majority of this work was conducted.
References
 (1) D. Scott, Multivariate Density Estimation: Theory, Practice, and Visualization, John Wiley and Sons Inc. (New York, 1992).
 (2) I. Abramson, On Bandwidth Variation in Kernel Estimates: A Square Root Law, Ann. Statist. 10 (1982) 1217-1223.
 (3) K. Cranmer, Kernel Estimation for Parametrization of Discriminant Variable Distributions, ALEPH 99-144 PHYSICS 99-056 (1999). See http://wwwwisconsin.cern.ch/cranmer/keys.html
 (4) The OPAL Collaboration, Search for Neutral Higgs Bosons in e+e- Collisions at 192-202 GeV, OPAL Physics Note PN426 (2000).
 (5) H. Hu and J. Nielsen, Analytic Confidence Level Calculations Using the Likelihood Ratio and Fourier Transform, to be published in High Energy Physics and Nuclear Physics (1999).
 (6) The BaBar Collaboration, The BaBar Physics Book. See Section 5.5.4.
 (7) B. Knuteson, H. Miettinen, and L. Holmström, Mass Analysis and Parameter Estimation with PDE, DØ Note 003396 (1998).
 (8) L. Holmström, H. Miettinen, and S. R. Sain, A new multivariate technique for top quark search, Comp. Phys. Comm. 88 (1995) 195-210.
 (9) H. Miettinen and G. Epply, Possible Hint of top + jets, DØ Note 002145 (1994).
 (10) H. Miettinen, Top Quark Results from DØ, DØ Note 002527 (1995).
 (11) A. Frodesen and O. Skjeggestad, Probability and Statistics in Particle Physics (1979) 424-427.
 (12) J. Allison, Multiquadric Radial Basis Functions for Representing Multidimensional High Energy Physics Data, Comp. Phys. Comm. 77 (1993) 377-395.