Module SVM (svm-1.0.0.1)

LSSVM
  A least squares support vector machine. The cost represents the relative
  expense of missing a training point versus a more complicated generating
  function. The higher this number, the better the fit of the training set,
  but at the cost of poorer generalization. The LSSVM uses every training
  point in the solution and performs least squares regression on the dual of
  the problem.

  kf     - The kernel function; it defines the feature space.
  cost   - The cost coefficient in the Lagrangian.
  params - Any parameters needed by the kernel function.

SVM (class)
  A support vector machine (SVM) can estimate a function based upon some
  training data. Instances of this class need only implement the dual cost
  and the kernel function. Default implementations are given for finding the
  SVM solution, for simulating a function, and for creating a kernel matrix
  from a set of training points. All SVMs should return a solution which
  contains a list of the support vectors and their dual weights. dcost
  represents the coefficient of the dual cost function. This term gets added
  to the diagonal elements of the kernel matrix and may be different for each
  type of SVM.

createKernelMatrix
  Creates a KernelMatrix from the training points in the DataSet. If kf is
  the kernel function, then the elements of the kernel matrix are given by
  K[i,j] = kf x[i] x[j], where the x[i] are taken from the training points.
  The kernel matrix is symmetric and positive semi-definite. Only the bottom
  half of the kernel matrix is stored.

dcost
  The derivative of the cost function is added to the diagonal elements of
  the kernel matrix. This places a cost on the norm of the solution, which
  helps prevent overfitting of the training data.

evalKernel
  Provides access to the KernelFunction used by the SVM.

simulate
  Takes an SVMSolution produced by the SVM passed in, together with a list of
  points in the space, and returns a list of values y = f(x), where f is the
  generating function represented by the support vector solution.

solve
  Takes a DataSet and feeds it to the SVM, then returns the SVMSolution,
  which is the support vector solution for the function that generated the
  points in the training set. The function also takes values for epsilon and
  the max iterations, which are used as stopping criteria in the conjugate
  gradient algorithm.

KernelFunction
  Every kernel function represents an inner product in feature space; a
  minimal example of this shape appears below. The parameters are:
    1. A list of kernel parameters that can be interpreted differently by
       each kernel function.
    2. The first point in the inner product.
    3. The second point in the inner product.

KernelMatrix
  The kernel matrix is implemented as an unboxed array for performance
  reasons.

SVMSolution
  The solution contains the dual weights, the support vectors and the bias.

DataSet
  Each data set is a list of vectors and values, which are training points of
  the form f(x) = y for all {x, y}.

DoubleArray
  This type synonym is simply to save some typing.

reciprocalKernelFunction
  The reciprocal kernel is the result of exponential basis functions,
  exp(-k*(x+a)). The inner product is an integral over all k >= 0.

radialKernelFunction
  The kernel used when radial basis functions are chosen.

linearKernelFunction
  A simple dot product between the two data points, corresponding to a
  featureless space.

mlpKernelFunction
  Provides a solution similar to a neural net.

cga
  The conjugate gradient algorithm is used to find the optimal solution. It
  runs until a cutoff delta is reached or for a max number of iterations.

matmult
  Matrix multiplication between a kernel matrix and a vector is handled by
  this function. Only the bottom half of the matrix is stored, and the
  function requires 1-based indices for both of its arguments; the storage
  layout is sketched below.

scalarmult
  Scalar multiplication of an unboxed array.
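As an illustration of the KernelFunction shape described above (a parameter
list followed by the two points of the inner product), a linear kernel might
look like the following minimal sketch. The wrapped-function signature and
the DoubleArray synonym are assumptions inferred from the descriptions above,
not a transcription of the package source.

  import Data.Array.Unboxed

  -- Assumed to match the package's synonym: an unboxed array of Doubles.
  type DoubleArray = UArray Int Double

  -- Assumed shape: kernel parameters, then the two points of the inner
  -- product.  The linear kernel ignores its parameter list and returns
  -- the plain dot product of the two points.
  linearKernel :: [Double] -> DoubleArray -> DoubleArray -> Double
  linearKernel _params x y = sum (zipWith (*) (elems x) (elems y))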
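The packed bottom-half layout that matmult relies on can also be made
concrete with a short sketch. The names packedIndex and matVec are
hypothetical and only demonstrate the indexing convention; the package's own
matmult may be implemented differently.

  import Data.Array.Unboxed

  -- Assumed to match the package's synonym for an unboxed Double array.
  type DoubleArray = UArray Int Double

  -- Flat, 1-based index into the packed bottom half for row i, column j,
  -- where j <= i.
  packedIndex :: Int -> Int -> Int
  packedIndex i j = i * (i - 1) `div` 2 + j

  -- Multiply an n-by-n symmetric matrix, stored as its packed bottom half,
  -- by a vector.  Both arrays use 1-based indices, as matmult requires.
  matVec :: Int -> DoubleArray -> DoubleArray -> DoubleArray
  matVec n m v = listArray (1, n) [ row i | i <- [1 .. n] ]
    where
      row i = sum [ entry i j * (v ! j) | j <- [1 .. n] ]
      entry i j
        | j <= i    = m ! packedIndex i j
        | otherwise = m ! packedIndex j i  -- symmetry: K[i,j] = K[j,i]

For a 3-by-3 kernel matrix only the six entries K[1,1], K[2,1], K[2,2],
K[3,1], K[3,2] and K[3,3] are stored, at flat indices 1 through 6.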
"2A version of zipWith for use with unboxed arrays. #&Sum the elements of an unboxed array. $,Standard dot product of two unboxed arrays. %+Add two unboxed arrays element by element.        &         !" svm-1.0.0.1SVMLSSVMkfcostparamscreateKernelMatrixdcost evalKernelsimulatesolveKernelFunction KernelMatrix SVMSolutionalphasvbiasDataSetpointsvaluesreciprocalKernelFunctionradialKernelFunctionlinearKernelFunctionsplineKernelFunctionpolyKernelFunctionmlpKernelFunction DoubleArraycgamatmult scalarmultmZipWithmSummDotmAdd