Package mongoDB-2.3.0.3. Modules: Database.MongoDB (umbrella), Database.MongoDB.Connection, Database.MongoDB.Query, Database.MongoDB.Admin, Database.MongoDB.GridFS, Database.MongoDB.Transport, Database.MongoDB.Transport.Tls, Database.MongoDB.Internal.Protocol, Database.MongoDB.Internal.Util.

Database.MongoDB.Internal.Util

  mergesortM: a monadic sort implementation derived from the non-monadic one in GHC's Prelude.
  shuffle: randomly shuffle the items in a list.
  loop: repeatedly execute an action, collecting the results, until it returns Nothing.
  untilSuccess: apply an action to the elements one at a time until one succeeds. Throw the last error if all fail; throw a generic (strMsg) error if the list is empty.
  untilSuccess': apply an action to the elements one at a time until one succeeds. Throw the last error if all fail; throw the given error if the list is empty.
  liftIOE: lift an IOE monad into an ErrorT monad over some MonadIO m.
  updateAssocs: change or insert the value of a key in an association list.
  bitOr: bit-or all the numbers together.
  (<.>): concatenate the first and second arguments with a period in between, e.g. "hello" <.> "world" = "hello.world".
  true1: is the field's value a 1 or True? (MongoDB uses both Ints and Bools for truth values.) Error if the field is not in the document, or is not a Num or Bool.
  byteStringHex: hexadecimal string representation of a byte string; each byte yields two hexadecimal characters.
  byteHex: two-character hexadecimal representation of a byte.

Database.MongoDB.Transport
  (c) Victor Denisov, 2016, Apache 2.0. Maintainer: Victor Denisov <denisovenator@gmail.com>. Stability: alpha. Portability: POSIX.

  Transport: abstract transport interface with read, write, flush and close; read should return an empty string on EOF.
  fromHandle: make a connection from a handle.
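The two looping helpers above are easiest to picture with a small re-implementation. The sketch below is illustrative only, under the behaviour stated in the descriptions; the names loopM and untilSuccessM are hypothetical, not the library's exports (its own versions live in Database.MongoDB.Internal.Util).

> -- Illustrative re-implementations; hypothetical names, not library exports.
> import Control.Monad.Except (MonadError, catchError, throwError)
>
> -- Repeatedly run an action, collecting results, until it returns Nothing.
> loopM :: Monad m => m (Maybe a) -> m [a]
> loopM act = do
>   r <- act
>   case r of
>     Nothing -> return []
>     Just x  -> (x :) <$> loopM act
>
> -- Apply an action to elements one at a time until one succeeds; rethrow the
> -- last error if all fail, and throw the supplied error on an empty list.
> untilSuccessM :: MonadError e m => e -> (a -> m b) -> [a] -> m b
> untilSuccessM emptyErr _ []       = throwError emptyErr
> untilSuccessM _        f [x]      = f x
> untilSuccessM emptyErr f (x : xs) = f x `catchError` \_ -> untilSuccessM emptyErr f xs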
Database.MongoDB.Internal.Protocol

  Low-level messaging between this client and a MongoDB server.

  ResponseFlag:
    CursorNotFound: set when getMore is called but the cursor id is not valid at the server. Returned with zero results.
    QueryError: query error. Returned with one document containing an "$err" field holding the error message.
    AwaitCapable: for backward compatibility, set when the server supports the AwaitData query option. If it doesn't, a replica-slave client should sleep a little between getMore calls.

  Reply: a reply is a message received in response to a Request. A cursor id of 0 means the cursor is finished.

  QueryOption:
    TailableCursor: tailable means the cursor is not closed when the last data is retrieved. Rather, the cursor marks the final object's position. You can resume using the cursor later, from where it was located, if more data were received. Like any "latent cursor", the cursor may become invalid at some point, for example if the final object it references were deleted. Thus, you should be prepared to requery on a CursorNotFound exception.
    SlaveOK: allow querying a replica slave. Normally these return an error except for namespace "local".
    NoCursorTimeout: the server normally times out idle cursors after 10 minutes to prevent a memory leak in case a client forgets to close a cursor. Set this option to allow a cursor to live forever until it is closed.
    AwaitData: use with TailableCursor. If we are at the end of the data, block for a while rather than returning no data. After a timeout period, we do return as normal. (An Exhaust option, which would stream the data down full blast in multiple "more" packages on the assumption that the client will fully read all data queried, is commented out: the client is not allowed to not read all the data unless it closes the connection, and this is not compatible with the current Pipeline implementation.)
    Partial: get partial results from a mongos if some shards are down, instead of throwing an error.

  Request: a request is a message that is sent with a Reply expected in return.
    qSkip: number of initial matching documents to skip.
    qBatchSize: the number of documents to return in each batch response from the server. 0 means use the Mongo default. Negative means close the cursor after the first batch and use the absolute value as the batch size.
    qSelector: [] = return all documents in the collection.
    qProjector: [] = return the whole document.

  DeleteOption:
    SingleRemove: if set, the database will remove only the first matching document in the collection; otherwise all matching documents will be removed.
  UpdateOption:
    Upsert: if set, the database will insert the supplied object into the collection if no matching document is found.
    MultiUpdate: if set, the database will update all matching objects in the collection; otherwise only the first matching document is updated.
  InsertOption:
    KeepGoing: if set, the database will not stop processing a bulk insert if one fails (e.g. due to duplicate IDs). This makes bulk insert behave similarly to a series of single inserts, except lastError will be set if any insert fails, not just the last one. (New in 1.9.1.)

  Notice: a notice is a message that is sent with no reply.
  RequestId: a fresh request id is generated for every message.
  FullCollection: database name and collection name with a period (.) in between, e.g. "myDb.myCollection".
  Response: a message received from a Mongo server in response to a Request.

  Pipe: thread-safe TCP connection with pipelined requests.
  newPipe: create a pipe over a handle.
  newPipeWith: create a pipe over a connection.
  send: send notices as a contiguous batch to the server with no reply. Throw IOError if the connection fails.
  call: send notices and a request as a contiguous batch to the server (a write notice or notices with a getLastError request, or just a query request) and return a reply promise, which will block when invoked until the reply arrives. Note that requestId may be out of order, because request ids for the notices are generated after the supplied request id was generated; this is OK because the Mongo server does not care about order, just uniqueness. This call and the resulting promise throw IOError if the connection fails.

  Pipeline: thread-safe and pipelined connection.
    vStream: mutex on the handle, so only one thread at a time can write to it.
    responseQueue: queue of threads waiting for responses. Every time a response arrives we pop the next thread and give it the response.
  newPipeline: create a new Pipeline over the given handle. You should close the pipeline when finished, which will also close the handle. If the pipeline is not closed but eventually garbage collected, it will be closed along with the handle.
  close: close the pipe and the underlying connection.
  listen: listen for responses and supply them to waiting threads in order.
  psend: send a message to the destination; the destination must not respond (otherwise future pcalls will get these responses instead of their own). Throw IOError and close the pipeline if the send fails.
  pcall: send a message to the destination and return a promise of a response from one message only. The destination must reply to the message (otherwise promises will have the wrong responses in them). Throw IOError and close the pipeline if the send fails, likewise for the promised response.
  writeMessage: write a message to the connection.
  readMessage: read a response from the connection.
  genRequestId: generate a fresh request id.
  putHeader: does not write the message length (first int32); assumes the caller will write it.
  getHeader: does not read the message length (first int32); assumes it was already read.
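The pipelining scheme described above (one writer lock, a FIFO of waiting threads, a listener that hands responses back in send order) can be sketched with plain concurrency primitives. This is a simplified illustration, not the library's implementation; the names Pipeline', miniCall and miniListen are hypothetical.

> import Control.Concurrent.MVar
> import Control.Concurrent.Chan
>
> data Pipeline' req resp = Pipeline'
>   { writeLock :: MVar (req -> IO ())   -- mutex on the write side of the transport
>   , waiting   :: Chan (MVar resp)      -- threads waiting for responses, in send order
>   }
>
> -- Send a request and return a promise of its response. Holding the write lock
> -- while enqueueing the slot keeps the send order and the queue order in sync.
> miniCall :: Pipeline' req resp -> req -> IO (IO resp)
> miniCall p req = do
>   slot <- newEmptyMVar
>   withMVar (writeLock p) $ \send' -> do
>     writeChan (waiting p) slot
>     send' req
>   return (takeMVar slot)               -- the promise: blocks until the listener fills it
>
> -- Listener loop: for every response read from the transport, pop the next
> -- waiting thread and hand it the response.
> miniListen :: IO resp -> Pipeline' req resp -> IO ()
> miniListen recv p = do
>   resp <- recv
>   slot <- readChan (waiting p)
>   putMVar slot resp
>   miniListen recv p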
Database.MongoDB.Query

  Command: a command is a special query or action against the database. See http://www.mongodb.org/display/DOCS/Commands for details.

  MapReduce:
    MRResult: the result of running a MapReduce has some stats besides the output. See http://www.mongodb.org/display/DOCS/MapReduce#MapReduce-Resultobject.
    MRMerge (merge policy for the output collection):
      Replace: clear all old data and replace it with the new data.
      Merge: leave old data but overwrite entries with the same key with the new data.
      Reduce: leave old data but combine entries with the same key via the MapReduce's reduce function.
    MROut:
      Inline: return results directly instead of writing them to an output collection. Results must fit within the 16MB limit of a single document.
      Output: write results to the given collection, in another database if specified. Follow the merge policy when an entry already exists.
    FinalizeFun: (key, value) -> final_value. A finalize function may be run after reduction. Such a function is optional and is not necessary for many map/reduce cases. The finalize function takes a key and a value, and returns a finalized value.
    ReduceFun: (key, [value]) -> value. The reduce function receives a key and an array of values and returns an aggregate result value. The MapReduce engine may invoke reduce functions iteratively; thus, these functions must be idempotent. That is, the following must hold for your reduce function: reduce(k, [reduce(k, vs)]) == reduce(k, vs). If you need to perform an operation only once, use a finalize function. The output of emit (the 2nd param) and of reduce should be the same format to make iterative reduce possible.
    MapFun: () -> void. The map function references the variable this to inspect the current object under consideration. The function must call emit(key, value) at least once, but may be invoked any number of times, as may be appropriate.
    MapReduce: maps every document in a collection to a list of (key, value) pairs, then for each unique key reduces all its associated values to a single result. There are additional parameters that may be set to tweak this basic operation. This implements the latest version of map-reduce, which requires MongoDB 1.7.4 or greater. To map-reduce against an older server use runCommand directly, as described in http://www.mongodb.org/display/DOCS/MapReduce.
      rColl, rMap, rReduce: the collection, map function and reduce function.
      rSelect: operate on only those documents selected. Default is [], meaning all documents.
      rSort: default is [], meaning no sort.
      rLimit: default is 0, meaning no limit.
      rOut: output to a collection with a certain merge policy. Default is no collection (Inline). Note, you don't want this default if your result set is large.
      rFinalize: function to apply to all the results when finished. Default is Nothing.
      rScope: variables (environment) that can be accessed from map/reduce/finalize. Default is [].
      rVerbose: provide statistics on job execution time. Default is False.
    mapReduce: build a MapReduce on a collection with the given map and reduce functions. The remaining attributes are set to their defaults, which are stated in their comments.
    runMR: run the MapReduce and return a cursor of results. Error if the map/reduce fails (because of bad Javascript).
    runMR': run the MapReduce and return an MR result document containing stats, and the results if inlined. Error if the map/reduce fails (because of bad Javascript).
    mrDocument, mrOutDoc (internal): translate MapReduce data and MROut into the expected document form.
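A concrete sketch of the MapReduce interface above: counting documents per value of a field. The collection name "posts" and field "x" are illustrative placeholders.

> {-# LANGUAGE OverloadedStrings #-}
> import Database.MongoDB
>
> -- Count documents per value of field "x" in collection "posts"
> -- (collection and field names are placeholders).
> countByX :: MapReduce
> countByX = mapReduce "posts" mapFn reduceFn
>   where
>     mapFn    = Javascript [] "function () { emit(this.x, 1); }"
>     reduceFn = Javascript [] "function (key, values) { return Array.sum(values); }"
>
> -- Run it inside an Action and read the resulting cursor, e.g.:
> --   docs <- access pipe master "mydb" (runMR countByX >>= rest)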
  Group:
    GroupKey: fields to group by, or a function (doc -> key) returning a "key object" to be used as the grouping key. Use KeyF instead of Key to specify a key that is not an existing member of the object (or to access embedded members).
    Group: groups documents in a collection by key, then reduces (aggregates) each group.
      gColl, gKey: the collection and the fields to group by.
      gReduce: (doc, agg) -> (). The reduce function reduces (aggregates) the objects iterated. Typical operations of a reduce function include summing and counting. It takes two arguments, the current document being iterated over and the aggregation value, and updates the aggregate value.
      gInitial: agg. Initial aggregation value supplied to reduce.
      gCond: condition that must be true for a row to be considered. [] means always true.
      gFinalize: agg -> () | result. An optional function to be run on each item in the result set just before the item is returned. It can either modify the item (e.g., add an average field given a count and a total) or return a replacement object (returning a new object with just the _id and average fields).
    group: execute the group query and return the resulting aggregate value for each distinct key.
    groupDocument (internal): translate Group data into the expected document form.

  Pipeline: the aggregate pipeline, a list of stage documents.
  aggregate: run an aggregate and unpack the result. See http://docs.mongodb.org/manual/core/aggregation/ for details.
  aggregateCursor: run an aggregate and return its results through a cursor. See http://docs.mongodb.org/manual/core/aggregation/ for details.

  Cursor: iterator over the results of a query. Use next to iterate, or rest to get all results. A cursor is closed when it is explicitly closed, all results have been read from it, it is garbage collected, or it has not been used for over 10 minutes (unless the NoCursorTimeout option was specified in the Query). Reading from a closed cursor raises a CursorNotFound failure. Note, a cursor is not closed when the pipe is closed, so you can open another pipe to the same server and continue using the cursor. Internally, a cursor id of 0 means the cursor is finished, Documents is the remaining documents to serve in the current batch, and Limit is the number of documents left to return (Nothing means no limit).
  DelayedBatch: a promised batch which may fail.
  nextBatch: return the next batch of documents in the query result, which will be empty if finished.
  next: return the next document in the query result, or Nothing if finished.
  nextN: return the next N documents, or fewer if the end is reached.
  rest: return the remaining documents in the query result.
  closeCursor: close the cursor.
  newCursor (internal): create a new cursor. If you don't read all results then close it. A cursor is closed automatically when all results have been read from it or when it is eventually garbage collected.
  fulfill (internal): demand and wait for a result; raise a failure on exception.

  BatchSize: the number of documents to return in each batch response from the server. 0 means use the Mongo default.
  Order: fields to sort by. Each one is associated with 1 or -1, e.g. ["x" =: 1, "y" =: -1] means sort by x ascending then y descending.
  Limit: maximum number of documents to return, i.e. the cursor will close after iterating over this number of documents. 0 means no limit.
  Projector: fields to return, analogous to the select clause in SQL. [] means return the whole document (analogous to * in SQL). ["x" =: 1, "y" =: 1] means return only the x and y fields of each document. ["x" =: 0] means return all fields except x.
  Query: use select to create a basic query with defaults, then modify it if desired. For example, (select sel col) {limit = 10}.
    options: default = [].
    skip: number of initial matching documents to skip. Default = 0.
    limit: maximum number of documents to return, 0 = no limit. Default = 0.
    sort: sort results by this order, [] = no sort. Default = [].
    project: [] = all fields. Default = [].
    snapshot: if true, assures that no duplicates are returned, and no objects missed, which were present at both the start and end of the query's execution (even if the object was updated). If an object is new during the query, or deleted during the query, it may or may not be returned, even with snapshot mode. Note that short query responses (less than 1MB) are always effectively snapshotted. Default = False.
    batchSize: the number of documents to return in each batch response from the server. 0 means use the Mongo default. Default = 0.
    hint: force MongoDB to use this index, [] = no hint. Default = [].
  ReadMode: Master = read from the master only; SlaveOk = reading from a slave is OK.
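A small illustration of building a Query with select and record update, as described above; the collection and field names are placeholders.

> {-# LANGUAGE OverloadedStrings #-}
> import Database.MongoDB
>
> -- A basic query from 'select', then tweaked via record update.
> recentPosts :: Query
> recentPosts = (select ["author" =: ("alice" :: String)] "posts")
>   { sort    = ["posted" =: (-1 :: Int)]
>   , limit   = 10
>   , project = ["title" =: (1 :: Int), "posted" =: (1 :: Int)]
>   }
>
> -- Run with, for example:  access pipe slaveOk "blog" (find recentPosts >>= rest)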
  Modifier: update operations on fields in a document. See http://www.mongodb.org/display/DOCS/Updating#Updating-ModifierOperations.
  WriteMode:
    NoConfirm: submit writes without receiving acknowledgments. Fast. Assumes writes succeed even though they may not.
    Confirm: receive an acknowledgment after every write, and raise an exception if one says the write failed. This is accomplished by sending the getLastError command, with the given parameters, after every write.
  GetLastError: parameters for the getLastError command. For example, ["w" =: 2] tells the server to wait for the write to reach at least two servers in the replica set before acknowledging. See http://www.mongodb.org/display/DOCS/Last+Error+Commands for more options.

  Select (class): a Query or Selection that selects documents in a collection that match a selector. The choice of type depends on use; for example, in find (select sel col) it is a Query, and in delete (select sel col) it is a Selection.
  Selector: filter for a query, analogous to the where clause in SQL. [] matches all documents in the collection. ["x" =: a, "y" =: b] is analogous to "where x = a and y = b" in SQL. See http://www.mongodb.org/display/DOCS/Querying for the full selector syntax.
  Selection: selects documents in a collection that match a selector.
  Collection: collection name (not prefixed with the database).
  select: selects documents in a collection that match a selector. It uses no query options, projects all fields, does not skip any documents, does not limit the result size, uses the default batch size, does not sort, does not hint, and does not snapshot.

  MongoContext: values needed when executing a db operation.
    mongoPipe: operations read/write over this pipelined TCP connection to a MongoDB server.
    mongoDatabase: operations query/update this database.
    mongoAccessMode: read/write operations will use this access mode.
  AccessMode: the type of reads and writes to perform.
    ReadStaleOk: read-only action; reading stale data from a slave is OK.
    UnconfirmedWrites: read-write action, slave not OK, every write is fire & forget.
    ConfirmWrites: read-write action, slave not OK, every write is confirmed with getLastError.
  master: same as ConfirmWrites [].
  slaveOk: same as ReadStaleOk.
  accessMode: run an action with the given AccessMode.
  WriteResult: result of a bulk write. nModified: MongoDB servers before 2.6 cannot calculate this value; the field is meaningless if the number of modified documents cannot be calculated.

  ErrorCode: error code from getLastError or a query failure.
  Failure: a connection failure, or a read or write exception such as an expired cursor or inserting a duplicate key. Note, unexpected data from the server is not a Failure; rather, it is a programming error (you should call error in this case) because the client and server are incompatible, which requires a programming change.
    ConnectionFailure: the TCP connection (Pipe) failed. It may work if you try again on the same Mongo connection, which will create a new Pipe.
    CursorNotFoundFailure: the cursor expired because it wasn't accessed for over 10 minutes, or this cursor came from a different server than the one you are currently connected to (perhaps a failover happened between servers in a replica set).
    QueryFailure: the query failed for some reason, as described in the string.
    WriteFailure: error observed by getLastError after a write; the error description is in the string, and the index of the failed document is the first argument.
    WriteConcernFailure: write concern error. It is reported only by the insert, update and delete commands, not by the wire protocol.
    DocNotFound: fetch found no document matching the selection.
    AggregateFailure: aggregate returned an error.
    CompoundFailure: used when we need to aggregate several failures and report them.
    ProtocolFailure: the structure of the returned documents doesn't match what we expected.

  Action: a monad on top of m (which must be a MonadIO) that may access the database and may fail with a Failure.
  access: run an action against the database on the server at the other end of the pipe. Use the access mode for any reads and writes. Throw Failure in case of any error.
  allDatabases: list all databases residing on the server.
  thisDatabase: the current database in use.
  useDb: run an action against the given database.
  auth: authenticate with the current database (if the server is running in secure mode). Return whether authentication was successful or not. Reauthentication is required for every new pipe. SCRAM-SHA-1 will be used for server versions 3.0+, MONGO-CR for lower versions.
  authMongoCR: authenticate with the current database using the MongoDB-CR authentication mechanism (the default in MongoDB servers < 3.0).
  authSCRAMSHA1: authenticate with the current database using the SCRAM-SHA-1 authentication mechanism (the default in MongoDB servers >= 3.0).
  allCollections: list all collections in this database.
  whereJS: add a Javascript predicate to a selector, in which case a document must match both the selector and the predicate.
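A minimal end-to-end sketch of the access functions above: connect, optionally authenticate, run an action. The host, database and credentials are placeholders.

> {-# LANGUAGE OverloadedStrings #-}
> import Database.MongoDB
>
> main :: IO ()
> main = do
>   pipe <- connect (host "127.0.0.1")                        -- placeholder host
>   ok   <- access pipe master "mydb" (auth "user" "pass")    -- only needed if auth is enabled
>   print ok
>   dbs  <- access pipe slaveOk "admin" allDatabases
>   print dbs
>   close pipe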
  write (internal): send a write to the server, and if the write mode is Confirm then include a getLastError request and raise WriteFailure if it reports an error.
  insert: insert a document into a collection and return its "_id" value, which is created automatically if not supplied.
  insert_: same as insert, except don't return the _id.
  insertMany: insert documents into a collection and return their "_id" values, which are created automatically if not supplied. If a document fails to be inserted (e.g. due to a duplicate key) then the remaining docs are aborted, and lastError is set. An exception will be thrown if any error occurs.
  insertMany_: same as insertMany, except don't return the _ids.
  insertAll: insert documents into a collection and return their "_id" values, which are created automatically if not supplied. If a document fails to be inserted (e.g. due to a duplicate key) then the remaining docs are still inserted.
  insertAll_: same as insertAll, except don't return the _ids.
  insert' (internal): insert documents into a collection and return their "_id" values, which are created automatically if not supplied.
  insertBlock (internal): fails if the list of documents exceeds the server's size restrictions.
  assignId (internal): assign a unique value to the _id field if missing.
  save: save a document to a collection, meaning insert it if it's new (has no "_id" field) or upsert it if it's not new (has an "_id" field).
  replace: replace the first document in a selection with the given document.
  repsert: replace the first document in a selection with the given document, or insert the document if the selection is empty.
  upsert: update the first document in a selection with the given document, or insert the document if the selection is empty.
  modify: update all documents in a selection using the given modifier.
  update (internal): update the first document in the selection using the updater document, unless the MultiUpdate option is supplied, in which case update all documents in the selection. If the Upsert option is supplied, treat the updater as a document and insert it if the selection is empty.
  updateMany: bulk update operation. If one update fails, it will not update the remaining documents. The currently returned value is only a placeholder. With MongoDB servers before 2.6 it will send update requests one by one; in order to receive error messages on versions under 2.6 you need to use confirmed writes, otherwise even when errors occurred the list of errors will be empty and the result will be a success. On 2.6 and later it will use MongoDB's bulk update feature.
  updateAll: bulk update operation. If one update fails, it will proceed with the remaining documents. With MongoDB servers before 2.6 it will send update requests one by one; in order to receive error messages on versions under 2.6 you need to use confirmed writes, otherwise even when errors occurred the list of errors will be empty and the result will be a success. On 2.6 and later it will use MongoDB's bulk update feature.
  delete: delete all documents in the selection.
  deleteOne: delete the first document in the selection.
  deleteMany: bulk delete operation. If one delete fails, it will not delete the remaining documents. The currently returned value is only a placeholder. With MongoDB servers before 2.6 it will send delete requests one by one; on 2.6 and later it will use MongoDB's bulk delete feature.
  deleteAll: bulk delete operation. If one delete fails, it will proceed with the remaining documents. The currently returned value is only a placeholder. With MongoDB servers before 2.6 it will send delete requests one by one; on 2.6 and later it will use MongoDB's bulk delete feature.
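A sketch of the bulk operations above, run inside access as usual. The collection, selectors and options are illustrative.

> {-# LANGUAGE OverloadedStrings #-}
> import Database.MongoDB
> import Control.Monad.IO.Class (liftIO)
>
> -- Bulk update, then bulk delete, on a placeholder collection "team".
> bulkMaintenance :: Action IO ()
> bulkMaintenance = do
>   r1 <- updateMany "team"
>           [ (["league" =: ("National" :: String)], ["$set" =: ["active" =: True]], [])
>           , (["name"   =: ("Phillies" :: String)], ["$set" =: ["city" =: ("Philadelphia" :: String)]], [Upsert])
>           ]
>   liftIO (print r1)
>   r2 <- deleteMany "team" [ (["active" =: False], [SingleRemove]) ]
>   liftIO (print r2)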
  find: fetch the documents satisfying a query, returning a Cursor.
  findOne: fetch the first document satisfying a query, or Nothing if none satisfy it.
  fetch: same as findOne, except throw DocNotFound if none match.
  findAndModify: run the findAndModify command as an update without an upsert and with new set to true. Returns the single updated document (the new option is set to true). See findAndModifyOpts if you want to use findAndModify in a different way.
  findAndModifyOpts: run the findAndModify command; allows more options than findAndModify.
  explain: return performance stats of the query execution.
  count: fetch the number of documents satisfying a query (including the effect of skip and/or limit if present).
  distinct: fetch the distinct values of a field in the selected documents.
  queryRequest (internal): translate a Query to a Protocol.Query. If the first argument is true, add the special $explain attribute.
  batchSizeRemainingLimit (internal): given batchSize and limit, return the wire-protocol qBatchSize and the remaining limit.
  request (internal): send notices and a request and return a promised batch.
  fromReply (internal): convert a Reply to a Batch or a Failure.
  runCommand: run a command against the database and return its result.
  runCommand1: runCommand1 foo = runCommand [foo =: 1].
  eval: run code on the server.
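A read-side sketch tying the query operations above together; the database, collection, fields and the buildinfo command are illustrative.

> {-# LANGUAGE OverloadedStrings #-}
> import Database.MongoDB
> import Control.Monad.IO.Class (liftIO)
>
> readSide :: Action IO ()
> readSide = do
>   -- All teams in the National league, sorted by name.
>   docs <- rest =<< find (select ["league" =: ("National" :: String)] "team") {sort = ["name" =: (1 :: Int)]}
>   liftIO (mapM_ print docs)
>   -- How many teams are there in total?
>   n <- count (select [] "team")
>   liftIO (print n)
>   -- Raw command example.
>   info <- runCommand ["buildinfo" =: (1 :: Int)]
>   liftIO (print info)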
Database.MongoDB.Connection

  defaultPort: the default MongoDB port = 27017.
  host: Host on defaultPort.
  showHostPort: display a host as "host:port". TODO: distinguish Service and UnixSocket ports.
  readHostPortM: read the string "hostname:port" as Host hostname (PortNumber port), or "hostname" as host hostname (default port). Fail if the string does not match either syntax. TODO: handle Service and UnixSocket ports.
  readHostPort: read the string "hostname:port" as Host hostname (PortNumber port), or "hostname" as host hostname (default port). Error if the string does not match either syntax.
  globalConnectTimeout: connect (and openReplicaSet) fail if they can't connect within this many seconds (the default is 6 seconds). Use connect' (and openReplicaSet') if you want to ignore this global and specify your own timeout. Note, this timeout only applies to initial connection establishment, not to reading/writing on the connection.
  connect: connect to a Host, returning a pipelined TCP connection. Throw IOError if the connection is refused or there is no response within globalConnectTimeout.
  connect': connect to a Host, returning a pipelined TCP connection. Throw IOError if the connection is refused or there is no response within the given number of seconds.

  ReplicaSet: maintains a connection (created on demand) to each server in the named replica set.
  ReplicaInfo (internal): result of the isMaster command on a host in a replica set. Returned fields are: setName, ismaster, secondary, hosts, [primary]; primary is only present when ismaster = false.
  adminCommand (internal): run a command against the admin database on the server connected to the pipe. Fail if the connection fails.
  replSetName: name of the connected replica set.
  openReplicaSet: open connections (on demand) to the servers in a replica set. The supplied hosts are a seed list; at least one of them must be a live member of the named replica set, otherwise fail. The value of globalConnectTimeout at the time of this call is the timeout used for future member connect attempts. To use your own value, call openReplicaSet' instead.
  openReplicaSet': like openReplicaSet, but the supplied seconds timeout is used for connect attempts to members.
  closeReplicaSet: close all connections to the replica set.
  primary: return a connection to the current primary of the replica set. Fail if no primary is available.
  secondaryOk: return a connection to a random secondary, or to the primary if no secondaries are available.
  routedHost: return a connection to a host chosen by a user-supplied sorting function, which sorts based on a tuple containing the host and a boolean indicating whether the host is primary.
  statedPrimary (internal): the primary of the replica set, or Nothing if there isn't one.
  possibleHosts (internal): the non-arbiter, non-hidden members of the replica set.
  updateMembers (internal): fetch replica info from any server and update the members accordingly.
  connection (internal): return a new or existing connection to a member of the replica set. If a pipe is already known for the host it is returned, but we still test whether it is open.
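A sketch of the replica-set functions above; the set name and seed hosts are placeholders.

> {-# LANGUAGE OverloadedStrings #-}
> import Database.MongoDB
> import Control.Exception (bracket)
>
> -- Open the named replica set, hand the caller a connection to the primary,
> -- and close all member connections afterwards.
> withReplicaPrimary :: (Pipe -> IO a) -> IO a
> withReplicaPrimary act =
>   bracket (openReplicaSet ("rs0", seeds)) closeReplicaSet $ \rs -> do
>     pipe <- primary rs            -- or: secondaryOk rs, for stale-tolerant reads
>     act pipe
>   where
>     seeds = [readHostPort "db1.example.net:27017", readHostPort "db2.example.net"]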
Database.MongoDB.Admin

  createCollection: create a collection with the given options. You only need to call this to set options; otherwise a collection is created automatically on first use, with no options.
  renameCollection: rename the first collection to the second collection.
  dropCollection: delete the given collection! Return True if the collection existed (and was deleted); return False if the collection did not exist (and no action was taken).
  validateCollection: validate the given collection. This operation takes a while.
  index: spec of an index of ordered keys on a collection (fields iColl, iKey, iName, iUnique, iDropDups, iExpireAfterSeconds). The name is generated from the keys; unique and dropDups are False.
  ensureIndex: create an index if we did not already create one. May be called repeatedly with practically no performance hit, because we remember if we already called this for the same index (although this memory gets wiped out every 15 minutes, in case another client drops the index and we want to create it again).
  createIndex: create an index on the server. This call goes to the server every time.
  dropIndex: remove the index.
  getIndexes: get all indexes on this collection.
  dropIndexes: drop all indexes on this collection.
  Index cache (internal): cache the indexes we create so that repeatedly calling ensureIndex only hits the database the first time. The cache is cleared every once in a while, so if someone else deletes the index we will recreate it on ensureIndex. Initialization forks a thread that clears the cache every 15 minutes; fetchIndexCache gets the index cache for the current database, and resetIndexCache resets it.
  allUsers: fetch all users of this database.
  addUser: add a user with a password, with read-only access if the bool is True or read-write access if it is False.
  admin: the "admin" database.
  cloneDatabase: copy a database from the given host to the server I am connected to. Fails and returns "ok" = 0 if we don't have permission to read from the given server (use copyDatabase in this case).
  copyDatabase: copy a database from the given host to the server I am connected to. If a username and password are supplied, use them to read from the given host.
  dropDatabase: delete the given database!
  repairDatabase: attempt to fix any corrupt records. This operation takes a while.
  currentOp: see the currently running operation on the database, if any.
  Also exported: removeUser, serverBuildInfo, serverVersion, collectionStats, dataSize, storageSize, totalIndexSize, totalSize, getProfilingLevel, setProfilingLevel, dbStats, killOp, serverStatus, and ProfilingLevel (Off | Slow | All).
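A sketch of index management with the functions above; the collection and keys are placeholders.

> {-# LANGUAGE OverloadedStrings #-}
> import Database.MongoDB
>
> -- Ensure a compound index and a unique index on a placeholder collection,
> -- then list the collection's indexes.
> setupIndexes :: Action IO [Document]
> setupIndexes = do
>   ensureIndex (index "posts" ["author" =: (1 :: Int), "posted" =: (-1 :: Int)])
>   ensureIndex ((index "posts" ["slug" =: (1 :: Int)]) { iUnique = True, iName = "slug_unique" })
>   getIndexes "posts"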
Database.MongoDB.GridFS

  Files are stored in "buckets": you open a bucket with openDefaultBucket or openBucket. The default chunk size is 256 kB.

  openDefaultBucket: open the default bucket (named "fs").
  openBucket: open a bucket by name.
  findFile: find files in the bucket.
  findOneFile: find one file in the bucket.
  fetchFile: fetch one file in the bucket.
  deleteFile: delete files in the bucket.
  sourceFile: a conduit producer for the contents of a file.
  sinkFile: a conduit consumer that creates a file in the bucket and puts all consumed data in it.
  getChunk (internal): get a chunk of a file.
  putChunk (internal): put a chunk in the bucket.

Database.MongoDB.Transport.Tls
  (c) Yuras Shumovich, 2016, Apache 2.0. Maintainer: Victor Denisov <denisovenator@gmail.com>. Stability: experimental. Portability: POSIX.

  connect: connect to MongoDB using TLS.
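A sketch combining the two modules above: connect over TLS, then stream some bytes into the default GridFS bucket with conduit. The exact signatures assumed here (Tls.connect taking a host name and PortID, sinkFile taking a bucket and a file name) are assumptions inferred from the documentation above, not confirmed; the host, database and file name are placeholders.

> {-# LANGUAGE OverloadedStrings #-}
> import Database.MongoDB
> import qualified Database.MongoDB.Transport.Tls as Tls
> import Data.Conduit (($$))
> import qualified Data.Conduit.List as CL
> import qualified Data.ByteString.Char8 as B
>
> -- Assumes Tls.connect :: HostName -> PortID -> IO Pipe and
> -- sinkFile :: Bucket -> Text -> Consumer ByteString (Action m) File.
> uploadOverTls :: IO ()
> uploadOverTls = do
>   pipe  <- Tls.connect "db.example.net" (PortNumber 27017)   -- placeholder host
>   _file <- access pipe master "mydb" $ do
>     bucket <- openDefaultBucket
>     CL.sourceList [B.pack "hello gridfs"] $$ sinkFile bucket "hello.txt"
>   close pipe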