higher-leveldb: A rich monadic API for working with leveldb databases.

Safe Haskell: None




Higher LevelDB provides a rich monadic API for working with leveldb (http://code.google.com/p/leveldb) databases. It uses the leveldb-haskell bindings to the C++ library. The LevelDBT transformer is a Reader that maintains a database context: the open database along with default read and write options. It also manages a concept called a KeySpace, a bucket scheme that provides named identifiers for segregating data at a low storage overhead. Finally, it wraps a ResourceT, which is required for using the leveldb-haskell functions.

The other major feature is the scan function and its ScanQuery structure, which together provide a map/fold abstraction over the Iterator exposed by leveldb-haskell.



Operations take place within a MonadLevelDB which is built with the LevelDBT transformer; the most basic type would be LevelDBT IO which is type aliased as LevelDB. The basic operations are the same as the underlying leveldb-haskell versions except that the DB and Options arguments are passed along by the LevelDB Reader, and the keys are automatically qualified with the KeySpaceId.

 {-# LANGUAGE OverloadedStrings #-}
 import Database.LevelDB.Higher

 runCreateLevelDB "/tmp/mydb" "MyKeySpace" $ do
     put "key:1" "this is a value"
     get "key:1"

Just "this is a value"

Basic types

type Item = (Key, Value)

The basic unit of storage is a Key/Value pair.

type KeySpace = ByteString

A KeySpace is a concept similar to a "bucket" in other libraries and database systems. The ByteString used for a KeySpace name can be arbitrarily long without a performance impact, because the system maps each KeySpace name to a 4-byte KeySpaceId internally, which is prepended to each Key. KeySpaces are cheap and plentiful, and indeed with this library you cannot escape them (you can supply an empty ByteString to use a default KeySpace, but one is still used). One intended use case is to use the full Key of a parent as the KeySpace of its children (instance data in a time series, for example). This lets you scan over a range-based key without passing over any unneeded items.
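The key-qualification scheme can be sketched in plain Haskell. This is a model only: the actual KeySpaceId assignment is internal to the library, and toKeySpaceId below is a hypothetical stand-in that always returns one fixed 4-byte id.

```haskell
{-# LANGUAGE OverloadedStrings #-}
-- A pure sketch of KeySpace key qualification. higher-leveldb maps each
-- KeySpace name to a fixed 4-byte KeySpaceId and prepends that id to every
-- Key; the id assignment itself is internal to the library, so toKeySpaceId
-- here is a hypothetical stand-in that always returns one fixed id.

import qualified Data.ByteString.Char8 as BS
import           Data.ByteString.Char8 (ByteString)

type Key        = ByteString
type KeySpaceId = ByteString  -- always 4 bytes

-- Hypothetical: pretend this KeySpace was assigned the id "\0\0\0\1".
toKeySpaceId :: ByteString -> KeySpaceId
toKeySpaceId _ = "\0\0\0\1"

-- Qualify a user-supplied Key with its KeySpaceId, as the library does
-- internally before every get/put/delete.
qualifiedKey :: ByteString -> Key -> Key
qualifiedKey ks k = toKeySpaceId ks `BS.append` k

main :: IO ()
main = print (qualifiedKey "MyKeySpace" "key:1")
```

Because the stored key carries only the 4-byte id, a long descriptive KeySpace name costs nothing per item.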

Basic operations

get :: MonadLevelDB m => Key -> m (Maybe Value)

Get a value from the current DB and KeySpace.

put :: MonadLevelDB m => Key -> Value -> m ()

Put a value in the current DB and KeySpace.

delete :: MonadLevelDB m => Key -> m ()

Delete an entry from the current DB and KeySpace.

Batch operations

runBatch :: MonadLevelDB m => WriterT WriteBatch m () -> m ()

Write a batch of operations - use the putB and deleteB functions to add operations to the batch list.

putB :: MonadLevelDB m => Key -> Value -> WriterT WriteBatch m ()

Add a Put operation to a WriteBatch -- for use with runBatch.

deleteB :: MonadLevelDB m => Key -> WriterT WriteBatch m ()

Add a Del operation to a WriteBatch -- for use with runBatch.
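The WriterT-based batch accumulation can be sketched purely. BatchOp below is a simplified stand-in for the real batch-operation type: putB and deleteB each tell a single operation into the WriteBatch, which runBatch then writes atomically.

```haskell
{-# LANGUAGE OverloadedStrings #-}
-- A pure sketch of how runBatch collects operations. In higher-leveldb the
-- batch body runs in WriterT WriteBatch m (): putB and deleteB each tell a
-- single operation, and runBatch writes the accumulated list atomically.
-- BatchOp here is a simplified stand-in for the real batch-operation type.

import           Control.Monad.Trans.Writer (Writer, execWriter, tell)
import qualified Data.ByteString.Char8 as BS
import           Data.ByteString.Char8 (ByteString)

data BatchOp = BPut ByteString ByteString | BDel ByteString
  deriving (Eq, Show)

type WriteBatch = [BatchOp]

-- Analogue of putB: append a Put operation to the batch.
putB' :: ByteString -> ByteString -> Writer WriteBatch ()
putB' k v = tell [BPut k v]

-- Analogue of deleteB: append a Del operation to the batch.
deleteB' :: ByteString -> Writer WriteBatch ()
deleteB' k = tell [BDel k]

-- The operations a caller would build up inside runBatch.
sampleBatch :: WriteBatch
sampleBatch = execWriter $ do
  putB' "key:1" "value one"
  putB' "key:2" "value two"
  deleteB' "stale-key"

main :: IO ()
main = mapM_ print sampleBatch
```

The monoid instance for lists does the accumulation; running the Writer yields the operations in the order they were added.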




scan :: MonadLevelDB m
=> Key

Key at which to start the scan.

-> ScanQuery a b

query functions to execute -- see ScanQuery docs.

-> m b 

Scan the keyspace, applying functions and returning results. Look at the documentation for ScanQuery for more information.

This is essentially a left fold that runs until the scanWhile condition returns False or the iterator is exhausted. All the results will be copied into memory before the function returns.
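The fold semantics can be modeled over a plain list. The ScanQuery fields below mirror the real record, but scanModel is a hypothetical list-based stand-in for the iterator-backed scan, assuming an already-sorted key space.

```haskell
{-# LANGUAGE OverloadedStrings #-}
-- A pure model of scan's fold semantics: seek to the start key, then fold
-- left while scanWhile holds, dropping items that fail scanFilter and
-- mapping each survivor with scanMap before accumulating it with scanFold.
-- The real function walks a leveldb Iterator; this only models the logic.

import qualified Data.ByteString.Char8 as BS
import           Data.ByteString.Char8 (ByteString)

type Key   = ByteString
type Value = ByteString
type Item  = (Key, Value)

data ScanQuery a b = ScanQuery
  { scanInit   :: b                         -- starting accumulator
  , scanWhile  :: Key -> Item -> b -> Bool  -- continue while True
  , scanMap    :: Item -> a                 -- transform before folding
  , scanFilter :: Item -> Bool              -- keep item if True
  , scanFold   :: a -> b -> b               -- accumulate
  }

-- Model of queryItems: collect every item whose key begins with the start key.
queryItemsModel :: ScanQuery Item [Item]
queryItemsModel = ScanQuery
  { scanInit   = []
  , scanWhile  = \start (k, _) _ -> start `BS.isPrefixOf` k
  , scanMap    = id
  , scanFilter = const True
  , scanFold   = (:)
  }

-- List-based stand-in for scan over a sorted key space.
scanModel :: Key -> ScanQuery a b -> [Item] -> b
scanModel start q items = go (scanInit q) (dropWhile ((< start) . fst) items)
  where
    go acc []          = acc
    go acc (item : rest)
      | not (scanWhile q start item acc) = acc
      | scanFilter q item = go (scanFold q (scanMap q item) acc) rest
      | otherwise         = go acc rest

sampleItems :: [Item]
sampleItems = [("a:1", "x"), ("key:1", "one"), ("key:2", "two"), ("zzz", "y")]

main :: IO ()
main = print (scanModel "key:" queryItemsModel sampleItems)
```

Note that with (:) as the fold function the accumulated list comes out in reverse key order, as with any left fold that prepends.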

data ScanQuery a b

Structure containing functions used within the scan function. You may want to start with one of the builder/helper functions such as queryItems, which is defined as:

queryItems = queryBegins { scanInit = []
                         , scanMap = id
                         , scanFold = (:)
                         }




scanInit :: b

starting value for fold/reduce

scanWhile :: Key -> Item -> b -> Bool

scan will continue until this function returns False

scanMap :: Item -> a

map or transform an item before it is reduced/accumulated

scanFilter :: Item -> Bool

filter function - return False to leave this Item out of the result

scanFold :: a -> b -> b

accumulator/fold function e.g. (:)

queryItems :: ScanQuery Item [Item]

A basic ScanQuery helper; this query will find all keys that begin with the Key argument supplied to scan, and returns them as a list of Item.

Does not require any function overrides.

queryList :: ScanQuery a [a]

a ScanQuery helper with the defaults from queryBegins and a list result; requires a scanMap function, e.g.:

 scan "encoded-values:" queryList { scanMap = \(_, v) -> decode v }

queryBegins :: ScanQuery a b

A partial ScanQuery helper; this query will find all keys that begin with the Key argument supplied to scan.

Requires a scanInit, a scanMap and a scanFold function.

queryCount :: Num a => ScanQuery a a

a ScanQuery helper to count the items whose keys begin with the Key argument.

Context modifiers

withKeySpace :: MonadLevelDB m => KeySpace -> m a -> m a

Use a local keyspace for the operation. e.g.:

 runCreateLevelDB "/tmp/mydb" "MyKeySpace" $ do
    put "somekey" "somevalue"
    withKeySpace "Other KeySpace" $ do
        put "somekey" "someother value"
    get "somekey"

 Just "somevalue"

withOptions :: MonadLevelDB m => RWOptions -> m a -> m a

Local Read/Write Options for the action.

withSnapshot :: MonadLevelDB m => m a -> m a

Run a block of get operations based on a single snapshot taken at the beginning of the action. The snapshot will be automatically released when complete.

This means that you can do put operations in the same block, but you will not see those changes inside this computation.

forkLevelDB :: MonadLevelDB m => LevelDB () -> m ThreadId

Fork a LevelDBT IO action and return its ThreadId to the current monad. This uses resourceForkIO to handle the reference counting and to clean up resources when the last thread exits.

Monadic Types and Operations

class (Monad m, MonadThrow m, MonadUnsafeIO m, MonadIO m, Applicative m, MonadResource m, MonadBase IO m) => MonadLevelDB m where

MonadLevelDB class used by all the public functions in this module.


withDBContext :: (DBContext -> DBContext) -> m a -> m a

Override the context for an action - only used internally, by functions such as withKeySpace and withOptions.

liftLevelDB :: LevelDBT IO a -> m a

Lift a LevelDBT IO action into the current monad.

data LevelDBT m a

LevelDBT Transformer provides a context for database operations provided in this module.

This transformer has the same constraints as ResourceT as it wraps ResourceT along with a DBContext Reader.

If you aren't building a custom monad stack you can just use the LevelDB alias.

type LevelDB a = LevelDBT IO a

alias for LevelDBT IO - useful if you aren't building a custom stack.

mapLevelDBT :: (m a -> n b) -> LevelDBT m a -> LevelDBT n b

Map/transform the monad below the LevelDBT



runLevelDB :: MonadResourceBase m
=> FilePath

path to DB to open/create

-> Options

database options to use

-> RWOptions

default read/write ops; use withOptions to override

-> KeySpace

Bucket in which Keys will be unique

-> LevelDBT m a

The actions to execute

-> m a 

Build a context and execute the actions; uses a ResourceT internally.

Tip: you can use def from Data.Default to specify default options, e.g.:

 runLevelDB "/tmp/mydb" def (def, def { sync = True }) "My Keyspace" $ do
     put "key" "value"



runLevelDB' :: MonadResourceBase m
=> FilePath

path to DB to open/create

-> Options

database options to use

-> RWOptions

default read/write ops; use withOptions to override

-> KeySpace

Bucket in which Keys will be unique

-> LevelDBT m a

The actions to execute

-> ResourceT m a 

Same as runLevelDB but does not call runResourceT. This gives you the option to manage that yourself.



runCreateLevelDB :: MonadResourceBase m
=> FilePath

path to DB to open/create

-> KeySpace

Bucket in which Keys will be unique

-> LevelDBT m a

The actions to execute

-> m a 

A helper for runLevelDB using default Options except createIfMissing=True


runResourceT :: MonadBaseControl IO m => ResourceT m a -> m a

Unwrap a ResourceT transformer, and call all registered release actions.

Note that there is some reference counting involved due to resourceForkIO. If multiple threads are sharing the same collection of resources, only the last call to runResourceT will deallocate the resources.

Since 0.3.0

data Options

Options when opening a database




blockRestartInterval :: !Int

Number of keys between restart points for delta encoding of keys.

This parameter can be changed dynamically. Most clients should leave this parameter alone.

Default: 16

blockSize :: !Int

Approximate size of user data packed per block.

Note that the block size specified here corresponds to uncompressed data. The actual size of the unit read from disk may be smaller if compression is enabled.

This parameter can be changed dynamically.

Default: 4k

cacheSize :: !Int

Control over blocks (user data is stored in a set of blocks, and a block is the unit of reading from disk).

If > 0, use the specified cache (in bytes) for blocks. If 0, leveldb will automatically create and use an 8MB internal cache.

Default: 0

comparator :: !(Maybe Comparator)

Comparator used to define the order of keys in the table.

If Nothing, the default comparator is used, which uses lexicographic byte-wise ordering.

NOTE: the client must ensure that the comparator supplied here has the same name and orders keys exactly the same as the comparator provided to previous open calls on the same DB.

Default: Nothing

compression :: !Compression

Compress blocks using the specified compression algorithm.

This parameter can be changed dynamically.

Default: Snappy

createIfMissing :: !Bool

If true, the database will be created if it is missing.

Default: False

errorIfExists :: !Bool

If true, an error is raised if the database already exists.

Default: False

maxOpenFiles :: !Int

Number of open files that can be used by the DB.

You may need to increase this if your database has a large working set (budget one open file per 2MB of working set).

Default: 1000

paranoidChecks :: !Bool

If true, the implementation will do aggressive checking of the data it is processing and will stop early if it detects any errors.

This may have unforeseen ramifications: for example, a corruption of one DB entry may cause a large number of entries to become unreadable or for the entire DB to become unopenable.

Default: False

writeBufferSize :: !Int

Amount of data to build up in memory (backed by an unsorted log on disk) before converting to a sorted on-disk file.

Larger values increase performance, especially during bulk loads. Up to two write buffers may be held in memory at the same time, so you may wish to adjust this parameter to control memory usage. Also, a larger write buffer will result in a longer recovery time the next time the database is opened.

Default: 4MB

filterPolicy :: !(Maybe (Either BloomFilter FilterPolicy))


data ReadOptions

Options for read operations




verifyCheckSums :: !Bool

If true, all data read from underlying storage will be verified against corresponding checksums.

Default: False

fillCache :: !Bool

Should the data read for this iteration be cached in memory? Callers may wish to set this field to false for bulk scans.

Default: True

useSnapshot :: !(Maybe Snapshot)

If Just, read as of the supplied snapshot (which must belong to the DB that is being read and which must not have been released). If Nothing, use an implicit snapshot of the state at the beginning of this read operation.

Default: Nothing

data WriteOptions

Options for write operations




sync :: !Bool

If true, the write will be flushed from the operating system buffer cache (by calling WritableFile::Sync()) before the write is considered complete. If this flag is true, writes will be slower.

If this flag is false, and the machine crashes, some recent writes may be lost. Note that if it is just the process that crashes (i.e., the machine does not reboot), no writes will be lost even if sync==false.

In other words, a DB write with sync==false has similar crash semantics as the write() system call. A DB write with sync==true has similar crash semantics to a write() system call followed by fsync().

Default: False

def :: Default a => a

The default value for this type.

class Monad m => MonadUnsafeIO m

A Monad based on some monad which allows running of some IO actions, via unsafe calls. This applies to IO and ST, for instance.

Since 0.3.0

class Monad m => MonadThrow m

A Monad which can throw exceptions. Note that this does not work in a vanilla ST or Identity monad. Instead, you should use the ExceptionT transformer in your stack if you are dealing with a non-IO base monad.

Since 0.3.0

type MonadResourceBase m = (MonadBaseControl IO m, MonadThrow m, MonadUnsafeIO m, MonadIO m, Applicative m)

A Monad which can be used as a base for a ResourceT.

A ResourceT has some restrictions on its base monad:

  • runResourceT requires an instance of MonadBaseControl IO.

  • MonadResource requires an instance of MonadThrow, MonadUnsafeIO, MonadIO, and Applicative.

While any instance of MonadBaseControl IO should be an instance of the other classes, this is not guaranteed by the type system (e.g., you may have a transformer in your stack which does not implement MonadThrow). Ideally, we would like to simply create an alias for the five type classes listed, but this is not possible with GHC currently.

Instead, this typeclass acts as a proxy for the other five. Its only purpose is to make your type signatures shorter.

Note that earlier versions of conduit had a typeclass ResourceIO. This fulfills much the same role.

Since 0.3.2