Unique-0.4.2: Provides functionality similar to the Unix "uniq" utility

Data.List.UniqueUnsorted

Description

This library provides functions for finding the unique and duplicate elements of a list. Unlike the Unique and UniqueStrict modules, this one uses Data.HashMap.Strict for its computations.

The list does not need to be sortable: its elements do not require an instance of the Ord class, but a Hashable instance is needed. This implementation works well for ByteStrings.
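As a rough illustration of this approach, counting occurrences with Data.HashMap.Strict (from the unordered-containers package) can be sketched as follows. `count'` is a hypothetical name for illustration, not the library's actual export:

```haskell
import Data.Hashable (Hashable)
import qualified Data.HashMap.Strict as M

-- Build a strict hash map of occurrence counts in a single pass,
-- then flatten it back to an association list. The order of the
-- result is unspecified, since HashMap makes no ordering guarantees.
count' :: (Hashable a, Eq a) => [a] -> [(a, Int)]
count' = M.toList . M.fromListWith (+) . map (\x -> (x, 1))
```

Because only Hashable and Eq are required, this works for types with no Ord instance.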

Synopsis

# Documentation

repeated :: (Hashable a, Eq a) => [a] -> [a] Source

repeated finds only the elements that are present more than once in the list. Example:

repeated  "foo bar" == "o"

repeatedBy :: (Hashable a, Eq a) => (Int -> Bool) -> [a] -> [a] Source

The repeatedBy function behaves just like repeated, except it takes a user-supplied predicate on the number of occurrences: an element is kept if its count satisfies the predicate.

repeatedBy (>2) "This is the test line" == " stei"
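Under the same HashMap-based scheme, repeatedBy and repeated can be sketched like this (the primed names are hypothetical, assuming the unordered-containers package):

```haskell
import Data.Hashable (Hashable)
import qualified Data.HashMap.Strict as M

-- Keep the elements whose occurrence count satisfies the predicate.
repeatedBy' :: (Hashable a, Eq a) => (Int -> Bool) -> [a] -> [a]
repeatedBy' p = M.keys . M.filter p . M.fromListWith (+) . map (\x -> (x, 1))

-- repeated is the special case: "occurs more than once".
repeated' :: (Hashable a, Eq a) => [a] -> [a]
repeated' = repeatedBy' (> 1)
```

Defining repeated in terms of repeatedBy keeps a single counting pass as the core of both functions.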

unique :: (Hashable a, Eq a) => [a] -> [a] Source

unique returns only the elements that occur exactly once in the list, i.e. those without duplicates.

unique  "foo bar" == " abrf"
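In the same HashMap-based sketch, unique is simply a filter for a count of exactly one (`unique'` is a hypothetical name, assuming unordered-containers):

```haskell
import Data.Hashable (Hashable)
import qualified Data.HashMap.Strict as M

-- Keep only the elements whose occurrence count is exactly one.
unique' :: (Hashable a, Eq a) => [a] -> [a]
unique' = M.keys . M.filter (== 1) . M.fromListWith (+) . map (\x -> (x, 1))
```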

count :: (Hashable a, Eq a) => [a] -> [(a, Int)] Source

count returns the number of occurrences of each element in the list. Example:

count "This is the test line" == [(' ',4),('s',3),('T',1),('t',3),('e',3),('h',2),('i',3),('l',1),('n',1)]

count_ :: (Hashable a, Eq a) => [a] -> [(a, Int)] Source

count_ behaves like count, but sorts the result by the number of occurrences, ascending. Example:

count_ "This is the test line" == [('n',1),('l',1),('T',1),('h',2),('i',3),('e',3),('t',3),('s',3),(' ',4)]
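A sorted variant can be sketched by post-processing the counting pass above with a sort on the counts (`count_'` is a hypothetical name, assuming unordered-containers):

```haskell
import Data.Hashable (Hashable)
import Data.List (sortBy)
import Data.Ord (comparing)
import qualified Data.HashMap.Strict as M

-- Count occurrences as before, then sort the association list
-- by the count, ascending.
count_' :: (Hashable a, Eq a) => [a] -> [(a, Int)]
count_' = sortBy (comparing snd) . M.toList . M.fromListWith (+) . map (\x -> (x, 1))
```

Note that only the counts are ordered; elements with equal counts may appear in any relative order.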