scalpel-core-0.6.2.2: A high-level web scraping library for Haskell.
Safe Haskell: Safe-Inferred
Language: Haskell2010

Text.HTML.Scalpel.Core

Description

Scalpel core provides a subset of the scalpel web scraping library that is intended to have lightweight dependencies and to be free of all non-Haskell dependencies.

Notably, this package does not contain any networking support. Users who want a batteries-included solution should depend on scalpel, which does include networking support, rather than on scalpel-core.

More thorough documentation including example code can be found in the documentation of the scalpel package.

Synopsis

Selectors

data Selector Source #

Selector defines a selection of an HTML DOM tree to be operated on by a web scraper. The selection includes the opening tag that matches the selection, all of the inner tags, and the corresponding closing tag.
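
For example, thanks to the IsString instance below, a plain string literal such as "a" is itself a Selector for anchor tags. As with the other examples on this page, string-literal selectors assume the OverloadedStrings language extension; the following sketch should evaluate as shown:

s = "<p><a>link</a></p>"
scrapeStringLike s (text "a") == Just "link"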

Instances

Instances details
IsString Selector Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Select.Types

data AttributePredicate Source #

An AttributePredicate is a function that takes an Attribute and returns a Bool indicating whether the given attribute matches the predicate.

data AttributeName Source #

The AttributeName type can be used when creating Selectors to specify the name of an attribute of a tag.

Instances

Instances details
IsString AttributeName Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Select.Types

data TagName Source #

The TagName type is used when creating a Selector to specify the name of a tag.

Constructors

AnyTag 
TagString String 

Instances

Instances details
IsString TagName Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Select.Types

Methods

fromString :: String -> TagName #

textSelector :: Selector Source #

A selector which will match all text nodes.
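
A sketch of the expected behavior (text nodes are matched regardless of nesting):

s = "foo<b>bar</b>baz"
scrapeStringLike s (texts textSelector) == Just ["foo", "bar", "baz"]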

Wildcards

anySelector :: Selector Source #

A selector which will match any node (including tags and bare text).

Tag combinators

(//) :: Selector -> Selector -> Selector infixl 5 Source #

The // operator creates a Selector by nesting one Selector in another. For example, "div" // "a" will create a Selector that matches anchor tags that are nested arbitrarily deep within a div tag.
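
A sketch of the expected behavior; note that the anchor tag outside of the div is not matched:

s = "<div><span><a>In</a></span></div><a>Out</a>"
scrapeStringLike s (texts ("div" // "a")) == Just ["In"]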

atDepth :: Selector -> Int -> Selector infixl 6 Source #

The atDepth operator constrains a Selector to only match when it is at the given depth below the previous selector.

For example, "div" // "a" `atDepth` 1 creates a Selector that matches anchor tags that are direct children of a div tag.
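
A sketch of the expected behavior; the paragraph nested inside the blockquote is at depth 2 relative to the div and is therefore not matched:

s = "<div><p>A</p><blockquote><p>B</p></blockquote></div>"
scrapeStringLike s (texts $ "div" // "p" `atDepth` 1) == Just ["A"]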

Attribute predicates

(@:) :: TagName -> [AttributePredicate] -> Selector infixl 9 Source #

The @: operator creates a Selector by combining a TagName with a list of AttributePredicates.
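
A sketch of the expected behavior; every predicate in the list must hold for a tag to match:

s = "<a href=\"/x\" rel=\"nofollow\">A</a><a href=\"/x\">B</a>"
scrapeStringLike s (text $ "a" @: ["href" @= "/x", "rel" @= "nofollow"]) == Just "A"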

(@=) :: AttributeName -> String -> AttributePredicate infixl 6 Source #

The @= operator creates an AttributePredicate that will match attributes with the given name and value.

If you are attempting to match a specific class of a tag with potentially multiple classes, you should use the hasClass utility function.
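
A sketch of the expected behavior:

s = "<a href=\"/home\">Home</a><a href=\"/about\">About</a>"
scrapeStringLike s (text $ "a" @: ["href" @= "/about"]) == Just "About"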

(@=~) :: RegexLike re String => AttributeName -> re -> AttributePredicate infixl 6 Source #

The @=~ operator creates an AttributePredicate that will match attributes with the given name and whose value matches the given regular expression.
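
A sketch assuming the regex-tdfa package provides the RegexLike instance; re is a small helper defined only for this sketch:

import Text.Regex.TDFA (Regex, makeRegex)

-- Illustrative helper: compile a String pattern into a TDFA Regex.
re :: String -> Regex
re = makeRegex

s = "<div id=\"item-1\">A</div><div id=\"other\">B</div>"
scrapeStringLike s (texts $ "div" @: ["id" @=~ re "item-[0-9]+"]) == Just ["A"]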

hasClass :: String -> AttributePredicate Source #

The classes of a tag are defined in HTML as a space-separated list given by the class attribute. The hasClass function will match a class attribute if the given class appears anywhere in that space-separated list of classes.
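
A sketch of the expected behavior; note that hasClass matches whole class names, not substrings:

s = "<div class=\"a menu b\">A</div><div class=\"menubar\">B</div>"
scrapeStringLike s (texts $ "div" @: [hasClass "menu"]) == Just ["A"]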

match :: (String -> String -> Bool) -> AttributePredicate Source #

The match function allows for the creation of arbitrary AttributePredicates. The argument is a function that takes the attribute key followed by the attribute value and returns a boolean indicating if the attribute satisfies the predicate.
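
For example, the following sketch uses the AnyTag constructor from TagName together with match to select any tag carrying a particular attribute value:

s = "<div data-foo=\"bar\">A</div><div>B</div>"
scrapeStringLike s (texts $ AnyTag @: [match (\k v -> k == "data-foo" && v == "bar")]) == Just ["A"]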

Scrapers

type Scraper str = ScraperT str Identity Source #

A value of Scraper a defines a web scraper that is capable of consuming a list of Tags and optionally producing a value of type a.

data ScraperT str m a Source #

A ScraperT operates like Scraper but also acts as a monad transformer.
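
A sketch of interleaving effects from an underlying monad (here IO, via the MonadIO instance below), using a hypothetical scraper named logged; scrapeStringLikeT is described under "Executing scrapers":

import Control.Monad.IO.Class (liftIO)

-- Illustrative scraper: extract the title text and log it as a side effect.
logged :: ScraperT String IO String
logged = do
    t <- text "title"
    liftIO (putStrLn ("found: " ++ t))
    return t

-- Run with, e.g.: scrapeStringLikeT "<title>hi</title>" logged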

Instances

Instances details
MonadError e m => MonadError e (ScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Scrape

Methods

throwError :: e -> ScraperT str m a #

catchError :: ScraperT str m a -> (e -> ScraperT str m a) -> ScraperT str m a #

MonadReader s m => MonadReader s (ScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Scrape

Methods

ask :: ScraperT str m s #

local :: (s -> s) -> ScraperT str m a -> ScraperT str m a #

reader :: (s -> a) -> ScraperT str m a #

MonadState s m => MonadState s (ScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Scrape

Methods

get :: ScraperT str m s #

put :: s -> ScraperT str m () #

state :: (s -> (a, s)) -> ScraperT str m a #

MonadWriter w m => MonadWriter w (ScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Scrape

Methods

writer :: (a, w) -> ScraperT str m a #

tell :: w -> ScraperT str m () #

listen :: ScraperT str m a -> ScraperT str m (a, w) #

pass :: ScraperT str m (a, w -> w) -> ScraperT str m a #

MonadTrans (ScraperT str) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Scrape

Methods

lift :: Monad m => m a -> ScraperT str m a #

Monad m => MonadFail (ScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Scrape

Methods

fail :: String -> ScraperT str m a #

MonadFix m => MonadFix (ScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Scrape

Methods

mfix :: (a -> ScraperT str m a) -> ScraperT str m a #

MonadIO m => MonadIO (ScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Scrape

Methods

liftIO :: IO a -> ScraperT str m a #

Monad m => Alternative (ScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Scrape

Methods

empty :: ScraperT str m a #

(<|>) :: ScraperT str m a -> ScraperT str m a -> ScraperT str m a #

some :: ScraperT str m a -> ScraperT str m [a] #

many :: ScraperT str m a -> ScraperT str m [a] #

Monad m => Applicative (ScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Scrape

Methods

pure :: a -> ScraperT str m a #

(<*>) :: ScraperT str m (a -> b) -> ScraperT str m a -> ScraperT str m b #

liftA2 :: (a -> b -> c) -> ScraperT str m a -> ScraperT str m b -> ScraperT str m c #

(*>) :: ScraperT str m a -> ScraperT str m b -> ScraperT str m b #

(<*) :: ScraperT str m a -> ScraperT str m b -> ScraperT str m a #

Functor m => Functor (ScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Scrape

Methods

fmap :: (a -> b) -> ScraperT str m a -> ScraperT str m b #

(<$) :: a -> ScraperT str m b -> ScraperT str m a #

Monad m => Monad (ScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Scrape

Methods

(>>=) :: ScraperT str m a -> (a -> ScraperT str m b) -> ScraperT str m b #

(>>) :: ScraperT str m a -> ScraperT str m b -> ScraperT str m b #

return :: a -> ScraperT str m a #

Monad m => MonadPlus (ScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Scrape

Methods

mzero :: ScraperT str m a #

mplus :: ScraperT str m a -> ScraperT str m a -> ScraperT str m a #

MonadCont m => MonadCont (ScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Scrape

Methods

callCC :: ((a -> ScraperT str m b) -> ScraperT str m a) -> ScraperT str m a #

Primitives

attr :: (Show str, StringLike str, Monad m) => String -> Selector -> ScraperT str m str Source #

The attr function takes an attribute name and a selector and returns the value of the attribute of the given name for the first opening tag that matches the given selector.

This function will match only the first opening tag that matches the selector; to match every tag, use attrs.
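
A sketch of the expected behavior:

s = "<a href=\"https://example.com\">Example</a>"
scrapeStringLike s (attr "href" "a") == Just "https://example.com"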

attrs :: (Show str, StringLike str, Monad m) => String -> Selector -> ScraperT str m [str] Source #

The attrs function takes an attribute name and a selector and returns the value of the attribute of the given name for every opening tag (possibly nested) that matches the given selector.

s = "<div id=\"out\"><div id=\"in\"></div></div>"
scrapeStringLike s (attrs "id" "div") == Just ["out", "in"]

html :: (StringLike str, Monad m) => Selector -> ScraperT str m str Source #

The html function takes a selector and returns the html string from the set of tags described by the given selector.

This function will match only the first set of tags matching the selector; to match every set of tags, use htmls.
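
A sketch of the expected behavior (compare with the htmls example below):

s = "<div><div>A</div></div>"
scrapeStringLike s (html "div") == Just "<div><div>A</div></div>"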

htmls :: (StringLike str, Monad m) => Selector -> ScraperT str m [str] Source #

The htmls function takes a selector and returns the html string from every set of tags (possibly nested) matching the given selector.

s = "<div><div>A</div></div>"
scrapeStringLike s (htmls "div") == Just ["<div><div>A</div></div>", "<div>A</div>"]

innerHTML :: (StringLike str, Monad m) => Selector -> ScraperT str m str Source #

The innerHTML function takes a selector and returns the inner html string from the set of tags described by the given selector. Inner html here means the html within, but not including, the selected tags.

This function will match only the first set of tags matching the selector; to match every set of tags, use innerHTMLs.
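
A sketch of the expected behavior:

s = "<div><div>A</div></div>"
scrapeStringLike s (innerHTML "div") == Just "<div>A</div>"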

innerHTMLs :: (StringLike str, Monad m) => Selector -> ScraperT str m [str] Source #

The innerHTMLs function takes a selector and returns the inner html string from every set of tags (possibly nested) matching the given selector.

s = "<div><div>A</div></div>"
scrapeStringLike s (innerHTMLs "div") == Just ["<div>A</div>", "A"]

text :: (StringLike str, Monad m) => Selector -> ScraperT str m str Source #

The text function takes a selector and returns the inner text from the set of tags described by the given selector.

This function will match only the first set of tags matching the selector; to match every set of tags, use texts.
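
A sketch of the expected behavior:

s = "<div>Hello <div>World</div></div>"
scrapeStringLike s (text "div") == Just "Hello World"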

texts :: (StringLike str, Monad m) => Selector -> ScraperT str m [str] Source #

The texts function takes a selector and returns the inner text from every set of tags (possibly nested) matching the given selector.

s = "<div>Hello <div>World</div></div>"
scrapeStringLike s (texts "div") == Just ["Hello World", "World"]

chroot :: (StringLike str, Monad m) => Selector -> ScraperT str m a -> ScraperT str m a Source #

The chroot function takes a selector and an inner scraper and executes the inner scraper as if it were scraping a document that consists solely of the tags corresponding to the selector.

This function will match only the first set of tags matching the selector; to match every set of tags, use chroots.
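
A sketch of the expected behavior; the span outside of the div is invisible to the inner scraper:

s = "<div><span>A</span></div><span>B</span>"
scrapeStringLike s (chroot "div" (text "span")) == Just "A"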

chroots :: (StringLike str, Monad m) => Selector -> ScraperT str m a -> ScraperT str m [a] Source #

The chroots function takes a selector and an inner scraper and executes the inner scraper as if it were scraping a document that consists solely of the tags corresponding to the selector. The inner scraper is executed for each set of tags (possibly nested) matching the given selector.

s = "<div><div>A</div></div>"
scrapeStringLike s (chroots "div" (pure 0)) == Just [0, 0]

position :: (StringLike str, Monad m) => ScraperT str m Int Source #

The position function is intended to be used within the do-block of a chroots call. Within the do-block position will return the index of the current sub-tree within the list of all sub-trees matched by the selector passed to chroots.

For example, consider the following HTML:

<article>
 <p> First paragraph. </p>
 <p> Second paragraph. </p>
 <p> Third paragraph. </p>
</article>

The position function can be used to determine the index of each <p> tag within the article tag by doing the following.

chroots ("article" // "p") $ do
  index   <- position
  content <- text "p"
  return (index, content)

Which will evaluate to the list:

[
  (0, "First paragraph.")
, (1, "Second paragraph.")
, (2, "Third paragraph.")
]

matches :: (StringLike str, Monad m) => Selector -> ScraperT str m () Source #

The matches function takes a selector and returns () if the selector matches any node in the DOM.
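
A sketch of the expected behavior; a selector with no match causes the scraper to fail:

s = "<div class=\"a\">x</div>"
scrapeStringLike s (matches ("div" @: [hasClass "a"])) == Just ()
scrapeStringLike s (matches ("div" @: [hasClass "b"])) == Nothing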

Executing scrapers

scrape :: StringLike str => Scraper str a -> [Tag str] -> Maybe a Source #

The scrape function executes a Scraper on a list of Tags and produces an optional value.
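
A sketch using parseTags from the tagsoup package (a dependency of scalpel-core) to produce the tag list:

import Text.HTML.TagSoup (parseTags)

scrape (text "b") (parseTags "<b>bold</b>") == Just "bold"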

scrapeT :: StringLike str => ScraperT str m a -> [Tag str] -> m (Maybe a) Source #

The scrapeT function executes a ScraperT on a list of Tags and produces an optional value. Since ScraperT is a monad transformer, the result is monadic.

scrapeStringLike :: StringLike str => str -> Scraper str a -> Maybe a Source #

The scrapeStringLike function parses a StringLike value into a list of tags and executes a Scraper on it.

scrapeStringLikeT :: (StringLike str, Monad m) => str -> ScraperT str m a -> m (Maybe a) Source #

The scrapeStringLikeT function parses a StringLike value into a list of tags and executes a ScraperT on it. Since ScraperT is a monad transformer, the result is monadic.

Serial Scraping

type SerialScraper str a = SerialScraperT str Identity a Source #

A SerialScraper allows for the application of Scrapers on a sequence of sibling nodes. This allows for use cases like targeting the sibling of a node, or extracting a sequence of sibling nodes (e.g. paragraphs (<p>) under a header (<h2>)).

Conceptually serial scrapers operate on a sequence of tags that correspond to the immediate children of the currently focused node. For example, given the following HTML:

 <article>
   <h1>title</h1>
   <h2>Section 1</h2>
   <p>Paragraph 1.1</p>
   <p>Paragraph 1.2</p>
   <h2>Section 2</h2>
   <p>Paragraph 2.1</p>
   <p>Paragraph 2.2</p>
 </article>

A serial scraper that visits the header and paragraph nodes can be executed with the following:

chroot "article" $ inSerial $ do ...

Each SerialScraper primitive follows the pattern of first moving the focus backward or forward and then extracting content from the new focus. Attempting to extract content from beyond the end of the sequence causes the scraper to fail.

To complete the above example, the article's structure and content can be extracted with the following code:

chroot "article" $ inSerial $ do
    title <- seekNext $ text "h1"
    sections <- many $ do
       section <- seekNext $ text "h2"
       ps <- untilNext (matches "h2") (many $ seekNext $ text "p")
       return (section, ps)
    return (title, sections)

Which will evaluate to:

 ("title", [
   ("Section 1", ["Paragraph 1.1", "Paragraph 1.2"]),
   ("Section 2", ["Paragraph 2.1", "Paragraph 2.2"]),
 ])

data SerialScraperT str m a Source #

Run a serial scraper transforming over a monad m.

Instances

Instances details
MonadError e m => MonadError e (SerialScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Serial

Methods

throwError :: e -> SerialScraperT str m a #

catchError :: SerialScraperT str m a -> (e -> SerialScraperT str m a) -> SerialScraperT str m a #

MonadReader r m => MonadReader r (SerialScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Serial

Methods

ask :: SerialScraperT str m r #

local :: (r -> r) -> SerialScraperT str m a -> SerialScraperT str m a #

reader :: (r -> a) -> SerialScraperT str m a #

MonadState s m => MonadState s (SerialScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Serial

Methods

get :: SerialScraperT str m s #

put :: s -> SerialScraperT str m () #

state :: (s -> (a, s)) -> SerialScraperT str m a #

MonadWriter w m => MonadWriter w (SerialScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Serial

Methods

writer :: (a, w) -> SerialScraperT str m a #

tell :: w -> SerialScraperT str m () #

listen :: SerialScraperT str m a -> SerialScraperT str m (a, w) #

pass :: SerialScraperT str m (a, w -> w) -> SerialScraperT str m a #

MonadTrans (SerialScraperT str) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Serial

Methods

lift :: Monad m => m a -> SerialScraperT str m a #

Monad m => MonadFail (SerialScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Serial

Methods

fail :: String -> SerialScraperT str m a #

MonadFix m => MonadFix (SerialScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Serial

Methods

mfix :: (a -> SerialScraperT str m a) -> SerialScraperT str m a #

MonadIO m => MonadIO (SerialScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Serial

Methods

liftIO :: IO a -> SerialScraperT str m a #

Monad m => Alternative (SerialScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Serial

Methods

empty :: SerialScraperT str m a #

(<|>) :: SerialScraperT str m a -> SerialScraperT str m a -> SerialScraperT str m a #

some :: SerialScraperT str m a -> SerialScraperT str m [a] #

many :: SerialScraperT str m a -> SerialScraperT str m [a] #

Monad m => Applicative (SerialScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Serial

Methods

pure :: a -> SerialScraperT str m a #

(<*>) :: SerialScraperT str m (a -> b) -> SerialScraperT str m a -> SerialScraperT str m b #

liftA2 :: (a -> b -> c) -> SerialScraperT str m a -> SerialScraperT str m b -> SerialScraperT str m c #

(*>) :: SerialScraperT str m a -> SerialScraperT str m b -> SerialScraperT str m b #

(<*) :: SerialScraperT str m a -> SerialScraperT str m b -> SerialScraperT str m a #

Functor m => Functor (SerialScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Serial

Methods

fmap :: (a -> b) -> SerialScraperT str m a -> SerialScraperT str m b #

(<$) :: a -> SerialScraperT str m b -> SerialScraperT str m a #

Monad m => Monad (SerialScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Serial

Methods

(>>=) :: SerialScraperT str m a -> (a -> SerialScraperT str m b) -> SerialScraperT str m b #

(>>) :: SerialScraperT str m a -> SerialScraperT str m b -> SerialScraperT str m b #

return :: a -> SerialScraperT str m a #

Monad m => MonadPlus (SerialScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Serial

Methods

mzero :: SerialScraperT str m a #

mplus :: SerialScraperT str m a -> SerialScraperT str m a -> SerialScraperT str m a #

MonadCont m => MonadCont (SerialScraperT str m) Source # 
Instance details

Defined in Text.HTML.Scalpel.Internal.Serial

Methods

callCC :: ((a -> SerialScraperT str m b) -> SerialScraperT str m a) -> SerialScraperT str m a #

inSerial :: (StringLike str, Monad m) => SerialScraperT str m a -> ScraperT str m a Source #

Executes a SerialScraper in the context of a Scraper. The immediate children of the currently focused node are visited serially.

Primitives

stepNext :: (StringLike str, Monad m) => ScraperT str m a -> SerialScraperT str m a Source #

Move the cursor forward one node and execute the given scraper on the new focused node.
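
A sketch of the expected behavior; the cursor starts before the first child of the focused node, so a single stepNext focuses the h1 tag:

s = "<article><h1>title</h1><p>first</p></article>"
scrapeStringLike s (chroot "article" $ inSerial $ stepNext $ text "h1") == Just "title"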

stepBack :: (StringLike str, Monad m) => ScraperT str m a -> SerialScraperT str m a Source #

Move the cursor back one node and execute the given scraper on the new focused node.
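
A sketch combining seekNext and stepBack in a hypothetical scraper (backScraper): after seeking forward to the h2 tag, stepping back focuses the paragraph immediately before it:

s = "<article><p>intro</p><h2>Section</h2></article>"
backScraper = chroot "article" $ inSerial $ do
    section <- seekNext $ text "h2"
    intro   <- stepBack $ text "p"
    return (section, intro)
scrapeStringLike s backScraper == Just ("Section", "intro")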

seekNext :: (StringLike str, Monad m) => ScraperT str m a -> SerialScraperT str m a Source #

Move the cursor forward until the given scraper is successfully able to execute on the focused node. If the scraper is never successful then the serial scraper will fail.

seekBack :: (StringLike str, Monad m) => ScraperT str m a -> SerialScraperT str m a Source #

Move the cursor backward until the given scraper is successfully able to execute on the focused node. If the scraper is never successful then the serial scraper will fail.

untilNext :: (StringLike str, Monad m) => ScraperT str m a -> SerialScraperT str m b -> SerialScraperT str m b Source #

Create a new serial context by moving the focus forward and collecting nodes until the scraper matches the focused node. The serial scraper is then executed on the collected nodes.

The provided serial scraper is unable to see nodes outside the new restricted context.

untilBack :: (StringLike str, Monad m) => ScraperT str m a -> SerialScraperT str m b -> SerialScraperT str m b Source #

Create a new serial context by moving the focus backward and collecting nodes until the scraper matches the focused node. The serial scraper is then executed on the collected nodes.