Scoring functions commonly used for evaluation of NLP
systems. Most functions in this module work on sequences which are
Foldable, but some take a precomputed table of
Counts. This will give a speedup if you want to compute multiple
scores on the same data. For example, to compute the Mutual
Information, Variation of Information and the Adjusted Rand Index
on the same pair of clusterings:
let cs = counts "abcabc" "abaaba"
mapM_ (print . ($ cs)) [mi, ari, vi]
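As a self-contained program, this becomes (a sketch assuming the module is imported as NLP.Scores, as in the nlp-scores package):

import NLP.Scores

main :: IO ()
main = do
  let cs = counts "abcabc" "abaaba"
  mapM_ (print . ($ cs)) [mi, ari, vi]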
- accuracy :: (Eq a, Fractional c, Traversable t, Foldable s) => t a -> s a -> c
- recipRank :: (Eq a, Fractional b, Foldable t) => a -> t a -> b
- avgPrecision :: (Fractional n, Ord a, Foldable t) => Set a -> t a -> n
- ari :: (Ord a, Ord b) => Counts a b -> Double
- mi :: (Ord a, Ord b) => Counts a b -> Double
- vi :: (Ord a, Ord b) => Counts a b -> Double
- kullbackLeibler :: (Eq a, Floating a, Foldable f, Traversable t) => t a -> f a -> a
- jensenShannon :: (Eq a, Floating a, Traversable t, Traversable u) => t a -> u a -> a
- type Count = Double
- data Counts a b
- counts :: (Ord a, Ord b, Traversable t, Foldable s) => t a -> s b -> Counts a b
- sum :: (Foldable t, Num a) => t a -> a
- mean :: (Foldable t, Fractional n, Real a) => t a -> n
- jaccard :: (Fractional n, Ord a) => Set a -> Set a -> n
- entropy :: (Floating c, Foldable t) => t c -> c
- histogram :: (Num a, Ord k, Foldable t) => t k -> Map k a
- countJoint :: (Ord a, Ord b) => a -> b -> Counts a b -> Count
- countFst :: Ord k => k -> Counts k b -> Count
- countSnd :: Ord k => k -> Counts a k -> Count
- fstElems :: Counts k b -> [k]
- sndElems :: Counts a k -> [k]
Scores for classification and ranking
Accuracy: the proportion of elements in the first sequence equal to the elements at corresponding positions in the second sequence. The sequences should be of equal length.
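A quick GHCi check (annotating the result as Double, one of the permitted Fractional instances):

accuracy "abab" "abba" :: Double  -- 0.5: the first two positions match, the last two do not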
Reciprocal rank: the reciprocal of the rank at which the first argument occurs in the sequence given as the second argument.
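For example:

recipRank 'b' "cab" :: Double  -- 'b' occurs at rank 3, so the score is 1/3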
Average precision. http://en.wikipedia.org/wiki/Information_retrieval#Average_precision
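A small example, assuming the Set of relevant items in the signature is Data.Set.Set:

import qualified Data.Set as Set
avgPrecision (Set.fromList "ab") "acb" :: Double
-- 'a' is relevant at rank 1 (precision 1/1) and 'b' at rank 3 (precision 2/3),
-- giving an average precision of (1 + 2/3) / 2, approximately 0.833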
Scores for clustering
Adjusted Rand Index: http://en.wikipedia.org/wiki/Rand_index
Mutual information: MI(X,Y) = H(X) - H(X|Y) = H(Y) - H(Y|X). Also known as information gain.
Variation of information: VI(X,Y) = H(X) + H(Y) - 2 MI(X,Y)
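This identity can be checked on the introduction's example data, using entropy . elems . histogram (described below) to get the marginal entropies; a GHCi sketch:

import Data.Map (elems)
let xs = "abcabc"; ys = "abaaba"
let cs = counts xs ys
let h = entropy . elems . histogram
vi cs  -- equals h xs + h ys - 2 * mi cs, up to floating-point rounding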
Comparing probability distributions
Kullback-Leibler divergence: KL(X,Y) = SUM_i P(X=i) log_2(P(X=i)/P(Y=i)). The distributions can be unnormalized.
Jensen-Shannon divergence: JS(X,Y) = 1/2 KL(X,(X+Y)/2) + 1/2 KL(Y,(X+Y)/2). The distributions can be unnormalized.
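For instance, with already-normalized inputs (any container of a Floating type works):

kullbackLeibler [0.5,0.5] [0.25,0.75] :: Double  -- 0.5*log_2(2) + 0.5*log_2(2/3), approximately 0.2075
jensenShannon [0.5,0.5] [0.25,0.75] :: Double    -- symmetric in its arguments, unlike KL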
Auxiliary types and functions
Creates a count table.
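A GHCi sketch of building a table and querying it with the accessors listed above (assuming fstElems lists the distinct first-coordinate values):

let cs = counts "abcabc" "abaaba"
countJoint 'c' 'a' cs  -- 2.0: the pair ('c','a') occurs at two positions
countFst 'c' cs        -- 2.0: 'c' occurs twice in the first sequence
fstElems cs            -- "abc"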
Jaccard coefficient: J(A,B) = |A ∩ B| / |A ∪ B|
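For example:

import qualified Data.Set as Set
jaccard (Set.fromList "abc") (Set.fromList "bcd") :: Double  -- |{'b','c'}| / |{'a','b','c','d'}| = 0.5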
Entropy: H(X) = -SUM_i P(X=i) log_2(P(X=i)).
entropy xs is the entropy of the random variable represented by the sequence xs, where each element of xs is the count of one particular value the random variable can take. If you need to compute the entropy from a sequence of outcomes, the following will work:
entropy . elems . histogram
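For example:

import Data.Map (elems)
entropy [1,1,1,1] :: Double                     -- 2.0: four equally likely values
(entropy . elems . histogram) "aabb" :: Double  -- 1.0: two outcomes, observed twice each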
histogram xs returns the map of frequency counts of the elements in the sequence xs.
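For example (the output below is the standard Show rendering of a Data.Map value):

import Data.Map (Map)
histogram "abracadabra" :: Map Char Int
-- fromList [('a',5),('b',2),('c',1),('d',1),('r',2)]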