# replace-attoparsec: Find, replace, and split string patterns with Attoparsec parsers (instead of regex)


License: BSD-2-Clause. Author and maintainer: James Brock. Source repository: https://github.com/jamesdbrock/replace-attoparsec (issues: https://github.com/jamesdbrock/replace-attoparsec/issues). Latest version: 1.4.5.0. Dependencies: attoparsec, base (>=4.0 && <5.0), bytestring, text.


# replace-attoparsec

replace-attoparsec is for finding text patterns, and also replacing or splitting on the found patterns. This activity is traditionally done with regular expressions, but replace-attoparsec uses attoparsec parsers instead for the pattern matching.

replace-attoparsec can be used in the same sort of “pattern capture” or “find all” situations in which one would use Python re.findall or Perl m//, or Unix grep.

replace-attoparsec can be used in the same sort of “stream editing” or “search-and-replace” situations in which one would use Python re.sub, or Perl s///, or Unix sed, or awk.

replace-attoparsec can be used in the same sort of “string splitting” situations in which one would use Python re.split or Perl split.

See replace-megaparsec for the megaparsec version.

## Why would we want to do pattern matching and substitution with parsers instead of regular expressions?

• Haskell parsers have a nicer syntax than regular expressions, which are notoriously difficult to read.

• Regular expressions can do “group capture” on sections of the matched pattern, but they can only return stringy lists of the capture groups. Parsers can construct typed data structures based on the capture groups, guaranteeing no disagreement between the pattern rules and the rules that we're using to build data structures based on the pattern matches.

For example, consider scanning a string for numbers. A lot of different things can look like a number, and can have leading plus or minus signs, or be in scientific notation, or have commas, or whatever. If we try to parse all of the numbers out of a string using regular expressions, then we have to make sure that the regular expression and the string-to-number conversion function agree about exactly what is and what isn't a numeric string. We can get into an awkward situation in which the regular expression says it has found a numeric string but the string-to-number conversion function fails. A typed parser will perform both the pattern match and the conversion, so it will never be in that situation. Parse, don't validate.
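As a sketch of that point (this example is not from the package docs, but uses attoparsec's real signed and decimal combinators): the parser performs the match and the numeric conversion in a single step, so the matched text and the typed value can never disagree.

```haskell
-- Capture every signed decimal integer as a typed Integer.
-- The same parser defines both what matches and how it converts,
-- so there is no separate string-to-number step that could fail.
let intparser = signed decimal :: Parser Integer
rights $ splitCap intparser "seats: -4, +7, 12"
```

[-4,7,12]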

• Regular expressions are only able to pattern-match regular grammars. Attoparsec parsers are able to pattern-match context-free grammars.

• The replacement expression for a traditional regular expression-based substitution command is usually just a string template in which the Nth “capture group” can be inserted with the syntax \N. With this library, instead of a template, we get an editor function which can perform any computation, including IO.
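For instance, where a regex substitution might swap two capture groups with a template like \2=\1, here the replacement is an ordinary Haskell function over the parsed result. A sketch (the key=value parser below is made up for illustration):

```haskell
-- Parse "key=value" pairs and swap the two sides.
-- The editor is a plain function, not a string template.
let pair = (,) <$> (many1 letter <* char '=') <*> many1 digit
streamEdit pair (\(k,v) -> T.pack (v <> "=" <> k)) "a=1 b=2"
```

"1=a 2=b"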

# Usage Examples

Try the examples in ghci by running cabal v2-repl in the replace-attoparsec/ root directory.

The examples depend on these imports and LANGUAGE OverloadedStrings.

:set -XOverloadedStrings
import Replace.Attoparsec.Text
import Data.Attoparsec.Text as AT
import qualified Data.Text as T
import Data.Either
import Data.Char


## Split strings with splitCap

### Find all pattern matches, capture the matched text and the parsed result

Separate the input string into sections which can be parsed as a hexadecimal number with a prefix "0x", and sections which can't. Parse the numbers.

let hexparser = string "0x" *> hexadecimal :: Parser Integer
splitCap (match hexparser) "0xA 000 0xFFFF"

[Right ("0xA",10), Left " 000 ", Right ("0xFFFF",65535)]


### Pattern match balanced parentheses

Find groups of balanced nested parentheses. This is an example of a “context-free” grammar, a pattern that can't be expressed by a regular expression. We can express the pattern with a recursive parser.

import Control.Applicative ((<|>))
import Data.Functor (void)
import Data.Bifunctor (second)
let parens :: Parser ()
    parens = do
        char '('
        manyTill
            (void (satisfy $ notInClass "()") <|> void parens)
            (char ')')
        pure ()

second fst <$> splitCap (match parens) "(()) (()())"

[Right "(())",Left " ",Right "(()())"]


## Edit text strings with streamEdit

The following examples show how to search for a pattern in a string of text and then edit the string of text to substitute in some replacement text for the matched patterns.

### Pattern match and replace with a constant

Replace all carriage-return-newline occurrences with newline.

streamEdit (string "\r\n") (const "\n") "1\r\n2\r\n"

"1\n2\n"


### Pattern match and edit the matches

Replace alphabetic characters with the next character in the alphabet.

streamEdit (AT.takeWhile isLetter) (T.map succ) "HAL 9000"

"IBM 9000"


### Pattern match and maybe edit the matches, or maybe leave them alone

Find all of the string sections s which can be parsed as a hexadecimal number r, and if r≤16, then replace s with a decimal number. Uses the match combinator.

let hexparser = string "0x" *> hexadecimal :: Parser Integer
streamEdit (match hexparser) (\(s,r) -> if r <= 16 then T.pack (show r) else s) "0xA 000 0xFFFF"

"10 000 0xFFFF"


### Pattern match and edit the matches with IO with streamEditT

Find an environment variable in curly braces and replace it with its value from the environment.

import System.Environment (getEnv)
streamEditT (char '{' *> manyTill anyChar (char '}')) (fmap T.pack . getEnv) "- {HOME} -"

"- /home/jbrock -"


### Pattern match, edit the matches, and count the edits with streamEditT

Find and capitalize no more than three letters in a string, and return the edited string along with the number of letters capitalized. To enable the editor function to remember how many letters it has capitalized, we'll run streamEditT in the State monad from the mtl package. Use this technique to get the same functionality as Python re.subn.

import qualified Control.Monad.State.Strict as MTL
import Data.Char (toUpper)

let editThree :: Char -> MTL.State Int T.Text
    editThree x = do
        i <- MTL.get
        if i < 3
            then do
                MTL.put $ i + 1
                pure $ T.singleton $ toUpper x
            else pure $ T.singleton x

flip MTL.runState 0 $ streamEditT (satisfy isLetter) editThree "a a a a a"

("A A A a a",3)


# In the Shell

If we're going to have a viable sed replacement then we want to be able to use it easily from the command line. This Stack script interpreter script will find decimal numbers in a stream and replace them with their double.

#!/usr/bin/env stack
{- stack
  script
  --resolver lts-16
  --package attoparsec
  --package text
  --package text-show
  --package replace-attoparsec
-}
-- https://docs.haskellstack.org/en/stable/GUIDE/#script-interpreter

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Text as T
import qualified Data.Text.IO as T
import TextShow
import Data.Attoparsec.Text
import Replace.Attoparsec.Text

main = T.interact $ streamEdit decimal (showt . (* (2::Integer)))


If you have The Haskell Tool Stack installed then you can just copy-paste this into a file named doubler.hs and run it. (On the first run Stack may need to download the dependencies.)

$ chmod u+x doubler.hs
$ echo "1 6 21 107" | ./doubler.hs
2 12 42 214


# Alternatives

Some libraries that one might consider instead of this one.

https://github.com/RaminHAL9001/parser-sed-thing

# Benchmarks

These benchmarks are intended to measure the wall-clock speed of everything except the actual pattern-matching. Speed of the pattern-matching is the responsibility of the megaparsec and attoparsec libraries.

The benchmark task is to find all of the one-character patterns x in a text stream and replace them by a function which returns the constant string oo. So, like the regex s/x/oo/g.
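Expressed with this library, the task looks roughly like the following (a sketch of the idea, not the exact benchmark harness):

```haskell
-- Equivalent of the regex s/x/oo/g:
-- replace each occurrence of "x" with the constant string "oo".
streamEdit (string "x") (const "oo") "x x x x"
```

"oo oo oo oo"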

We have two benchmark input cases, which we call dense and sparse.

The dense case is one megabyte of alternating spaces and xs like

x x x x x x x x x x x x x x x x x x x x x x x x x x x x


The sparse case is one megabyte of spaces with a single x in the middle like

                         x


Each benchmark program reads the input from stdin, replaces x with oo, and writes the result to stdout. The time elapsed is measured by perf stat, and the best observed time is recorded.

See replace-benchmark for details.

| Program | dense | sparse |
| :--- | ---: | ---: |
| Python 3.7.4 re.sub repl function | 89.23ms | 23.98ms |
| Perl 5 s///ge | 180.65ms | 5.02ms |
| Replace.Megaparsec.streamEdit String | 441.94ms | 375.04ms |
| Replace.Megaparsec.streamEdit ByteString | 529.99ms | 73.76ms |
| Replace.Megaparsec.streamEdit Text | 547.47ms | 139.21ms |
| Replace.Attoparsec.ByteString.streamEdit | 394.12ms | 41.13ms |
| Replace.Attoparsec.Text.streamEdit | 515.26ms | 46.10ms |
| Text.Regex.Applicative.replace String | 1083.98ms | 646.40ms |
| Text.Regex.PCRE.Heavy.gsub Text | > 10min | 14.29ms |
| Control.Lens.Regex.ByteString.match | > 10min | 4.27ms |
| Control.Lens.Regex.Text.match | > 10min | 14.74ms |

# Questions

1. Could we write this library for parsec?

No, because the match combinator doesn't exist for parsec. (I can't find it anywhere. Can it be written?)

2. Is this a good idea?

You may have heard it suggested that monadic parsers are better for pattern-matching when the input stream is mostly signal, and regular expressions are better when the input stream is mostly noise.

The premise of this library is that monadic parsers are great for finding small signal patterns in a stream of otherwise noisy text.

Our reluctance to forgo the speedup opportunities afforded by restricting ourselves to regular grammars is an old superstition about opportunities which remain mostly unexploited anyway. The performance compromise of allowing stack memory allocation (a.k.a. pushdown automata, a.k.a. context-free grammars) was once considered controversial for general-purpose programming languages. I think we can now resolve that controversy the same way for pattern-matching languages.