# text-metrics: Calculate various string metrics efficiently

[ algorithms, bsd3, library, text ]

Calculate various string metrics efficiently.


## Flags

### Manual Flags

| Name | Description | Default |
| --- | --- | --- |
| `dev` | Turn on development settings. | Disabled |

Use `-f <flag>` to enable a flag, or `-f -<flag>` to disable that flag.

Note: This package has metadata revisions in the cabal description newer than included in the tarball. To unpack the package including the revisions, use 'cabal get'.

• Versions: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.3.2
• Change log: CHANGELOG.md
• Dependencies: base (>=4.7 && <5.0), nats (>=1 && <2), text (>=0.2 && <1.3)
• License: BSD-3-Clause
• Author / Maintainer: Mark Karpov
• Revision: Revision 1 made by mrkkrp at 2017-05-26T15:33:36Z
• Category: Text, Algorithms
• Home page: https://github.com/mrkkrp/text-metrics
• Bug tracker: https://github.com/mrkkrp/text-metrics/issues
• Source repo: head: git clone https://github.com/mrkkrp/text-metrics.git
• Uploaded: by mrkkrp at 2016-10-09T15:02:46Z
• Distributions: Arch:0.3.2, Debian:0.3.0, Fedora:0.3.2, LTSHaskell:0.3.2, NixOS:0.3.2, Stackage:0.3.2, openSUSE:0.3.2
• Downloads: 15477 total (130 in the last 30 days)
• Rating: (no votes yet) [estimated by Bayesian average]
• Status: Docs available; last success reported on 2016-10-09


# Text Metrics

The library provides efficient implementations of various string metrics. It works with strict Text values and returns either Natural numbers (because the metrics cannot be negative) or Ratio Natural values (because the normalized results are non-negative rational numbers by definition).

The functions provided here are the fastest implementations available for use in Haskell programs. In fact, the functions are implemented in C for maximal efficiency, but this leads to a minor flaw. When we work with Text values in C, they are represented as UTF-16 encoded strings of two-byte values. The algorithms treat the strings as if one character corresponds to one element in such strings, which is true for almost all modern text data. However, some characters are represented by two adjoined elements in UTF-16: emoji, historic scripts, rarely used Chinese ideographs, and a few others. If the input Text values contain such characters, the functions may return slightly incorrect results. Decide for yourself whether this is acceptable for your use case, but chances are you will never run into a situation where the functions produce incorrect results.

The current version of the package implements:

• Levenshtein distance
• normalized Levenshtein distance
• Damerau-Levenshtein distance
• normalized Damerau-Levenshtein distance
• Hamming distance
• Jaro distance
• Jaro-Winkler distance
• overlap coefficient
• Jaccard similarity coefficient

## Comparison with the edit-distance package

There is the edit-distance package, whose scope overlaps with that of this package. The differences are:

• edit-distance allows specifying a cost for every operation (insertion, deletion, substitution, and transposition) when calculating the Levenshtein distance. In my opinion, though, this is rarely needed in real-world applications.

• edit-distance provides only the Levenshtein distance, while text-metrics aims to provide implementations of most popular string metrics.

• edit-distance works on Strings, while text-metrics works on strict Text values.

• As the README.md of edit-distance states, “[the algorithms] have been fairly heavily optimized”, which is apparently true, yet text-metrics is faster for short strings (under 64 characters) and scales better for longer ones. How much faster? More than ×3 for short strings, and about ×4 for longer strings.

## Implementation

All the “meat” of the algorithms is written in C in a rather straightforward way. The Levenshtein variants are based on the “iterative algorithm with two matrix rows” from Wikipedia, with one additional improvement: we do not copy the current row of distances into the previous row, but simply swap the pointers (which is OK, since the arrays have equal length and the current row is overwritten in the next iteration anyway).

Normalized versions are defined as thin (inlined) Haskell wrappers.