leveldb-haskell-0.3.0: Haskell bindings to LevelDB

Copyright    (c) 2012-2013 The leveldb-haskell Authors
License      BSD3
Maintainer   kim.altintop@gmail.com
Stability    experimental
Portability  non-portable
Safe Haskell None
Language     Haskell2010

Database.LevelDB.Base

Contents

Description

LevelDB Haskell binding.

The API closely follows the C-API of LevelDB. For more information, see: http://leveldb.googlecode.com

Synopsis

Exported Types

data DB Source

Database handle

Instances

Eq DB 

data BatchOp Source

Batch operation

Instances

newtype Comparator Source

User-defined comparator

data Compression Source

Compression setting

Constructors

NoCompression 
Snappy 

data Options Source

Options when opening a database

Constructors

Options 

Fields

blockRestartInterval :: !Int

Number of keys between restart points for delta encoding of keys.

This parameter can be changed dynamically. Most clients should leave this parameter alone.

Default: 16

blockSize :: !Int

Approximate size of user data packed per block.

Note that the block size specified here corresponds to uncompressed data. The actual size of the unit read from disk may be smaller if compression is enabled.

This parameter can be changed dynamically.

Default: 4k

cacheSize :: !Int

Control over blocks (user data is stored in a set of blocks, and a block is the unit of reading from disk).

If > 0, use the specified cache (in bytes) for blocks. If 0, leveldb will automatically create and use an 8MB internal cache.

Default: 0

comparator :: !(Maybe Comparator)

Comparator used to define the order of keys in the table.

If Nothing, the default comparator is used, which uses lexicographic byte-wise ordering.

NOTE: the client must ensure that the comparator supplied here has the same name and orders keys exactly the same as the comparator provided to previous open calls on the same DB.

Default: Nothing

compression :: !Compression

Compress blocks using the specified compression algorithm.

This parameter can be changed dynamically.

Default: Snappy

createIfMissing :: !Bool

If true, the database will be created if it is missing.

Default: False

errorIfExists :: !Bool

If true, an error is raised if the database already exists.

Default: False

maxOpenFiles :: !Int

Number of open files that can be used by the DB.

You may need to increase this if your database has a large working set (budget one open file per 2MB of working set).

Default: 1000

paranoidChecks :: !Bool

If true, the implementation will do aggressive checking of the data it is processing and will stop early if it detects any errors.

This may have unforeseen ramifications: for example, a corruption of one DB entry may cause a large number of entries to become unreadable, or the entire DB to become unopenable.

Default: False

writeBufferSize :: !Int

Amount of data to build up in memory (backed by an unsorted log on disk) before converting to a sorted on-disk file.

Larger values increase performance, especially during bulk loads. Up to two write buffers may be held in memory at the same time, so you may wish to adjust this parameter to control memory usage. Also, a larger write buffer will result in a longer recovery time the next time the database is opened.

Default: 4MB

filterPolicy :: !(Maybe (Either BloomFilter FilterPolicy))

If Just, use the given filter policy (either the built-in BloomFilter or a user-defined FilterPolicy) to reduce disk reads.

Default: Nothing

Instances
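
As a rough sketch, these options are usually customised via record update on a baseline value; the defaultOptions name used below is assumed to be the value exported under Defaults further down this page:

    import Database.LevelDB.Base

    -- Illustrative only: tweak a few fields of an assumed baseline 'defaultOptions'.
    bulkLoadOptions :: Options
    bulkLoadOptions = defaultOptions
        { createIfMissing = True              -- create the DB on first use
        , compression     = Snappy            -- compress blocks with Snappy
        , writeBufferSize = 8 * 1024 * 1024   -- 8MB memtable before flushing to disk
        , cacheSize       = 64 * 1024 * 1024  -- 64MB block cache
        }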

data ReadOptions Source

Options for read operations

Constructors

ReadOptions 

Fields

verifyCheckSums :: !Bool

If true, all data read from underlying storage will be verified against corresponding checksums.

Default: False

fillCache :: !Bool

Should the data read for this iteration be cached in memory? Callers may wish to set this field to false for bulk scans.

Default: True

useSnapshot :: !(Maybe Snapshot)

If Just, read as of the supplied snapshot (which must belong to the DB that is being read and which must not have been released). If Nothing, use an implicit snapshot of the state at the beginning of this read operation.

Default: Nothing
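
For illustration, read options for a verifying bulk scan might be built as follows, assuming defaultReadOptions is the value exported under Defaults below:

    import Database.LevelDB.Base

    -- Illustrative only: verify checksums, but keep bulk reads out of the block cache.
    bulkScanOptions :: ReadOptions
    bulkScanOptions = defaultReadOptions
        { verifyCheckSums = True
        , fillCache       = False
        }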

data Snapshot Source

Snapshot handle

Instances

data WriteOptions Source

Options for write operations

Constructors

WriteOptions 

Fields

sync :: !Bool

If true, the write will be flushed from the operating system buffer cache (by calling WritableFile::Sync()) before the write is considered complete. If this flag is true, writes will be slower.

If this flag is false, and the machine crashes, some recent writes may be lost. Note that if it is just the process that crashes (i.e., the machine does not reboot), no writes will be lost even if sync==false.

In other words, a DB write with sync==false has crash semantics similar to the "write()" system call. A DB write with sync==true has crash semantics similar to a "write()" system call followed by "fsync()".

Default: False
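
A minimal sketch of a durable write, assuming defaultWriteOptions from the Defaults section below and an already opened handle:

    {-# LANGUAGE OverloadedStrings #-}
    import Database.LevelDB.Base

    -- Illustrative only: sync = True trades write latency for durability
    -- across machine crashes.
    putDurable :: DB -> IO ()
    putDurable db = put db defaultWriteOptions { sync = True } "key" "value"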

Defaults

Basic Database Manipulations

open :: MonadIO m => FilePath -> Options -> m DB Source

Open a database.

The returned handle should be released with close.

close :: MonadIO m => DB -> m () Source

Close a database.

The handle will be invalid after calling this action and should no longer be used.
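
A common pattern is to bracket open and close so the handle is released even when the action throws; a minimal sketch, assuming defaultOptions from the Defaults section above and a purely illustrative path:

    import Control.Exception (bracket)
    import Database.LevelDB.Base

    -- Illustrative only: open (creating if missing), run the action, always close.
    withMyDB :: (DB -> IO a) -> IO a
    withMyDB = bracket (open "/tmp/mydb" defaultOptions { createIfMissing = True }) close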

put :: MonadIO m => DB -> WriteOptions -> ByteString -> ByteString -> m () Source

Write a key/value pair.

delete :: MonadIO m => DB -> WriteOptions -> ByteString -> m () Source

Delete a key/value pair.

write :: MonadIO m => DB -> WriteOptions -> WriteBatch -> m () Source

Perform a batch mutation.
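
A sketch of an atomic batch mutation. The BatchOp constructors are not shown in this excerpt, so the Put and Del names below are assumptions, as is WriteBatch being a plain list of BatchOp:

    {-# LANGUAGE OverloadedStrings #-}
    import Database.LevelDB.Base

    -- Illustrative only: move a value to a new key; either both ops apply or neither.
    renameKey :: DB -> IO ()
    renameKey db = write db defaultWriteOptions
        [ Put "new-key" "some value"
        , Del "old-key"
        ]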

get :: MonadIO m => DB -> ReadOptions -> ByteString -> m (Maybe ByteString) Source

Read a value by key.
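
Putting the basic operations together, a minimal read/write round trip against an open handle might look like this, again assuming the default option values from the Defaults section above:

    {-# LANGUAGE OverloadedStrings #-}
    import Data.ByteString (ByteString)
    import Database.LevelDB.Base

    -- Illustrative only: write a key, read it back, then delete it.
    roundTrip :: DB -> IO (Maybe ByteString)
    roundTrip db = do
        put db defaultWriteOptions "foo" "bar"
        value <- get db defaultReadOptions "foo"
        delete db defaultWriteOptions "foo"
        return value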

withSnapshot :: MonadIO m => DB -> (Snapshot -> IO a) -> m a Source

Run an action with a Snapshot of the database.

createSnapshot :: MonadIO m => DB -> m Snapshot Source

Create a snapshot of the database.

The returned Snapshot should be released with releaseSnapshot.

releaseSnapshot :: MonadIO m => DB -> Snapshot -> m () Source

Release a snapshot.

The handle will be invalid after calling this action and should no longer be used.
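
A sketch of reading against a pinned view of the database, assuming defaultReadOptions from the Defaults section above; withSnapshot releases the snapshot when the action returns:

    import Data.ByteString (ByteString)
    import Database.LevelDB.Base

    -- Illustrative only: reads through 'snap' are unaffected by concurrent writes.
    readAtSnapshot :: DB -> ByteString -> IO (Maybe ByteString)
    readAtSnapshot db key =
        withSnapshot db $ \snap ->
            get db defaultReadOptions { useSnapshot = Just snap } key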

Filter Policy / Bloom Filter

data FilterPolicy Source

User-defined filter policy

data BloomFilter Source

Represents the built-in Bloom Filter

Administrative Functions

data Property Source

Properties exposed by LevelDB

Instances

getProperty :: MonadIO m => DB -> Property -> m (Maybe ByteString) Source

Get a DB property.
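
For illustration only: the Property constructors are not listed in this excerpt, so the Stats constructor below is an assumption; check the Property documentation before relying on it.

    import qualified Data.ByteString.Char8 as BS8
    import Database.LevelDB.Base

    -- Illustrative only: print the statistics property, if the DB exposes it.
    printStats :: DB -> IO ()
    printStats db = getProperty db Stats >>= mapM_ BS8.putStrLn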

destroy :: MonadIO m => FilePath -> Options -> m () Source

Destroy the given LevelDB database.

repair :: MonadIO m => FilePath -> Options -> m () Source

Repair the given LevelDB database.

approximateSize :: MonadIO m => DB -> Range -> m Int64 Source

Inspect the approximate sizes of the different levels.
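
A sketch of estimating on-disk usage for a key range, assuming Range is a (start, limit) pair of keys, which this excerpt does not show:

    {-# LANGUAGE OverloadedStrings #-}
    import Data.Int (Int64)
    import Database.LevelDB.Base

    -- Illustrative only: approximate bytes used by keys with the "user/" prefix.
    userPrefixSize :: DB -> IO Int64
    userPrefixSize db = approximateSize db ("user/", "user0")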

version :: MonadIO m => m (Int, Int) Source

Return the runtime version of the underlying LevelDB library as a (major, minor) pair.
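
For example, to log the library version at startup:

    import Database.LevelDB.Base

    -- Print the (major, minor) version of the underlying LevelDB library.
    printLevelDBVersion :: IO ()
    printLevelDBVersion = do
        (major, minor) <- version
        putStrLn $ "LevelDB " ++ show major ++ "." ++ show minor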

Iteration