VCache uses LMDB with the MDB_NOLOCK option, mostly because I don't want to deal with the whole issue of OS-bound threads or a limit on the number of concurrent readers. Without locks, we essentially have one valid snapshot. The writer can begin dismantling earlier snapshots as needed to allocate pages. RWLock essentially enforces this sort of frame-buffer concurrency.

withRWLock: Grab the current read-write lock for the duration of an underlying action. This may wait on older readers.

withRdOnlyLock: Grab a read-only lock for the duration of some IO action. Readers never need to wait on the writer.

peekAligned: An alignment-sensitive peek; will copy data into aligned memory prior to performing the peek.

pokeAligned: An alignment-sensitive poke. Will poke data into aligned memory, then copy it into the destination memory.

To be utilized with VCache, a value must be serializable as a simple sequence of binary data and child VRefs. Also, a put followed by a get must result in an equivalent value. Further, values are Typeable to support memory caching of loaded values.

Under the hood, structured data is serialized as the pair:

    (ByteString, [Either VRef PVar])

Developers must ensure that get on the serialization from put returns the same value, and that get is backwards compatible. Developers should consider version wrappers, cf. the SafeCopy package.

put: Serialize a value as a stream of bytes and value references.

get: Parse a value from its serialized representation into memory.

VGet is a parser combinator monad for VCache. Unlike pure binary parsers, VGet supports reads from a stack of VRefs and PVars to directly model structured data.

State tracked during serialization: the addresses written (vput_children); the current buffer, kept for easy free (vput_buffer); the location within the buffer (vput_target); and the current buffer limit (vput_limit).

VPut is a serialization monad akin to Data.Binary or Data.Cereal.
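As a minimal sketch of the VCacheable class described above, a hypothetical user-defined type (Point is not part of vcache) pairs a serializer with a parser. The version-tag byte follows the backwards-compatibility advice above; `putWord8`, `putVarInt`, `getWord8`, and `getVarInt` are the combinators documented in this package, and the exact import layout is an assumption.

```haskell
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Typeable (Typeable)
import Database.VCache

-- Hypothetical domain type; illustration only.
data Point = Point !Integer !Integer deriving (Typeable, Show)

instance VCacheable Point where
    -- Serialize as a version-tag byte followed by two zigzag varints.
    put (Point x y) = putWord8 0 >> putVarInt x >> putVarInt y
    -- get must return a value equivalent to the one serialized,
    -- and remain backwards compatible as versions are added.
    get = do
        tag <- getWord8
        case tag of
            0 -> Point <$> getVarInt <*> getVarInt
            _ -> fail "Point: unrecognized version tag"
```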
However, VPut is not restricted to pure binaries: developers may include VRefs and PVars in the output. Content emitted by VPut will generally be read only by VCache, so it may be worth optimizing some cases; for example, lists are written in reverse so that readers won't need to reverse the list.

The estimate for cache size is based on random samples with an exponential moving average. It isn't very precise, but it is good enough for guiding the aggressiveness of the exponential decay model.

In addition to the STM transaction, I need to track whether the transaction is durable (such that developers may choose based on internal domain-model concerns) and which variables have been read or written. All PVars involved must be part of the same VSpace.

VTx transactions allow developers to atomically manipulate PVars and STM resources (TVars, TArrays, etc.). VTx is a thin layer above STM, additionally tracking which PVars are written so it can push the batch to a background writer thread upon commit.

VTx supports full ACID semantics (atomic, consistent, isolated, durable), but durability is optional (see markDurable).

The Memory datatype tracks allocations, GC, and ephemeron tables for tracking both PVars and VRefs in Haskell memory. These are combined into one type mostly because typical operations on them are atomic... and STM isn't permitted because vref constructors are used with unsafePerformIO. Its fields cover in-memory VRefs (mem_vrefs), in-memory PVars (mem_pvars), recently GC'd addresses in two frames (mem_gc), and recent or pending allocations in three frames (mem_alloc).

In addition to recent allocations, we track garbage collection. The goal here is to prevent revival of VRefs after we decide to delete them. So, when we try to allocate a VRef, we'll check whether its address has been targeted for deletion.

To keep this simple, GC is performed by the writer thread. Other threads must worry about reading outdated reference counts.
This also means we only need the two frames: a reader of frame N-2 only needs to prevent revival of VRefs GC'd at N-1 or N.

The Allocator tracks both the 'bump-pointer' address for the next allocation and in-memory logs for recent and near-future allocations.

The log has three frames, based on the following observations:

- frames are rotated when the writer lock is held
- when the writer lock is held, readers exist for two prior frames
- readers from two frames earlier use the log to find allocations from: the previous write frame, the current write frame, and the next write frame (allocated during the write)

Each write frame includes content for both the primary (db_memory) and secondary (db_caddrs or db_vroots) indices. Normal Data.Map is favored because I want the keys in sorted order when writing into the LMDB layer anyway.

VSpace represents the virtual address space used by VCache. Except for loadRootPVar, most operations use VSpace rather than VCache. VSpace is accessed by vcache_space, vref_space, or pvar_space.

Addresses from this space are allocated incrementally, odds to PVars and evens to VRefs, independent of object size. The space is elastic: it isn't a problem to change the size of PVars (even drastically) from one update to another.

In theory, VSpace could run out of 64-bit addresses. In practice, this isn't a real concern: a quarter million years at a sustained million allocations per second.

VCache supports a filesystem-backed address space plus a set of persistent, named root variables that can be loaded from one run of the application to another. VCache supports a simple filesystem-like model to resist namespace collisions between named roots.
    openVCache   :: Int -> FilePath -> IO VCache
    vcacheSubdir :: ByteString -> VCache -> VCache
    loadRootPVar :: (VCacheable a) => VCache -> ByteString -> a -> PVar a

The normal use of VCache is to open VCache in the main function, use vcacheSubdir for each major framework, plugin, or independent component that might need persistent storage, then load at most a few root PVars per component. Most domain modeling should be at the data layer, i.e. the type held by the PVar.

See VSpace, VRef, and PVar for more information.

vcache_space: virtual address space for VCache.

A PVar is a mutable variable backed by VCache. PVars can be read or updated transactionally (see VTx), and may be stored by reference as part of domain data (see VCacheable).

A PVar is not cached. If you want memory-cached contents, you'll need a PVar that contains one or more VRefs. However, the first read from a PVar is lazy, so merely referencing a PVar does not require loading its contents into memory.

Due to how updates are batched, high-frequency or bursty updates on a PVar should perform acceptably. Not every intermediate value is written to disk.

Anonymous PVars will be garbage collected if not in use. Persistence requires ultimately tying contents to named roots (cf. loadRootPVar). Garbage collection is based on reference counting, so developers must be cautious when working with cyclic data, i.e. break cycles before disconnecting them from the root.

Note: PVars must never contain undefined or error values, nor any value that cannot be serialized by a VCacheable instance.

pvar_space: virtual address space for PVar.

Cache modes are used when deciding, heuristically, whether to clear a value from cache. These modes don't have precise meaning, but there is a general intention: higher numbered modes indicate that VCache should hold onto a value for longer or with greater priority.
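Returning to the top-level API above, a hedged sketch of the recommended usage pattern (the component prefix, root name, and the use of Integer are all illustrative; a VCacheable instance for Integer is assumed):

```haskell
import Database.VCache
import qualified Data.ByteString.Char8 as BS

main :: IO ()
main = do
    -- 100 MB maximum file size, stored in "db" (plus "db-lock").
    vc <- openVCache 100 "db"
    -- one subdirectory (prefix) per independent component
    let vcUsers = vcacheSubdir (BS.pack "users/") vc
    -- at most a few root PVars per component; the third argument
    -- is the initial value used if the root does not yet exist
    let root = loadRootPVar vcUsers (BS.pack "count") (0 :: Integer)
    n <- readPVarIO root
    putStrLn ("user count: " ++ show n)
```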
In the current implementation, CacheMode is used as a pool of 'hitpoints' in a gaming metaphor: if an entry would be removed, but its mode is greater than zero, the mode is reduced instead.

The default for vref and deref is CacheMode1. Use of vrefc or derefc may specify other modes. Cache mode is monotonic: if the same VRef is deref'd with two different modes, the higher mode will be favored.

Note: Regardless of mode, a VRef that is fully GC'd from the Haskell layer will ensure any cached content is also GC'd.

A VRef is an opaque reference to an immutable value backed by a file, specifically via LMDB. The primary motivation for VRefs is to support memory-cached values, i.e. very large data structures that should not be stored all at once in RAM.

The API involving VRefs is conceptually pure.

    vref  :: (VCacheable a) => VSpace -> a -> VRef a
    deref :: VRef a -> a

Under the hood, each VRef has a 64-bit address and a local cache. When dereferenced, the cache is checked, or the value is read from the database and then cached. Variants of vref and deref control cache behavior.

VCacheable values may themselves contain VRefs and PVars, storing just the address. Very large structured data is readily modeled by using VRefs to load just the pieces you need. However, there is one major constraint:

VRefs may only represent acyclic structures.

If developers want cyclic structure, they need a PVar in the chain. Alternatively, cycles may be modeled indirectly using explicit IDs.

Besides memory caching, VRefs also utilize structure sharing: all VRefs sharing the same serialized representation will share the same address. Structure sharing enables VRefs to be compared for equality without violating conceptual purity.
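A sketch of the conceptually pure API above, including the structure-sharing equality just described (the list value is illustrative; a VCacheable instance for [Int] is assumed, consistent with the list serialization mentioned earlier):

```haskell
import Database.VCache

-- Given some VSpace vs (e.g. vcache_space of an opened VCache),
-- vref moves a large value out of RAM and deref loads it back,
-- caching the parsed result.
example :: VSpace -> Bool
example vs =
    let big = [1 .. 1000000] :: [Int]
        r1  = vref vs big   -- reference; content backed by LMDB
        r2  = vref vs big   -- structure sharing: same address
    in (r1 == r2) && (deref r1 == big)
```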
It also simplifies reasoning about idempotence, storage costs, memoization, etc.

VRef fields: the address within the cache (vref_addr), the cached value and weak refs (vref_cache), the virtual address space for the VRef (vref_space), the type of value held by the VRef (vref_type), and the parser for this VRef (vref_parse).

Address: an address in the VCache address space. VRefs and PVars are divided among odds and evens (isVRefAddr, isPVarAddr).

touchCache: clear bit 7; adjust cache mode monotonically.

liftSTM: run an arbitrary STM operation as part of a VTx transaction.

markForWrite: add a PVar write to the VTxState. (Does not modify the underlying TVar.)

Ensure that at least N bytes are available for storage, so that following operations need not grow the underlying buffer. Use this before unsafePutWord8 and similar operations. If the buffer must grow, it will grow exponentially to ensure amortized constant allocation costs.

unsafePutWord8: Store an 8-bit word *assuming* enough space has been reserved. This can be used safely together with the reservation above.

putWord8: Store an 8-bit word.

putVarNat: Put an arbitrary non-negative integer in the varint format associated with Google protocol buffers. This takes one byte for values 0..127, two bytes for 128..16k, etc. Will fail if given a negative argument.

putVarInt: Put an arbitrary integer in a varint format associated with Google protocol buffers, with zigzag encoding of negative numbers. This takes one byte for values -64..63, two bytes for -8k..8k, three bytes for -1M..1M, etc. Very useful if most numbers are near 0.

putVarNatR: write a varNat, but reversed (i.e. little-endian). This is only used by VPutFini: the last entry is the size (in bytes) of the children list, but we write backwards so we can later read it from the end of the buffer.

peekBufferSize: Obtain the number of bytes output by this VPut effort so far. This might be useful if you're breaking data structures up by their serialization sizes. This does not include VRefs or PVars, only raw binary data. See also peekChildCount.

putVRef: Store a reference to a value.
The value reference must already use the same VCache and address space as where you're putting it.

putPVar: Store an identifier for a persistent variable in the same VCache and address space.

putWord16le, putWord32le, putWord64le, putWord16be, putWord32be, putWord64be: Put a Word in little-endian or big-endian form. Note: these are mostly included because they're part of the Data.Binary and Data.Cereal APIs. They may be useful in some cases, but putVarInt will frequently be preferable.

putStorable: Put a Data.Storable value, using intermediate storage to ensure alignment when serializing the argument. Note that this shouldn't have any pointers, since serialized pointers won't usually be valid when loaded later. Also, the storable type shouldn't have any gaps (unassigned bytes); uninitialized bytes may interfere with structure sharing in VCache.

putByteString: Put the contents of a bytestring directly. Unlike the put method for bytestrings, this does not include size information; just raw bytes.

putByteStringLazy: Put contents of a lazy bytestring directly. Unlike the put method for bytestrings, this does not include size information; just raw bytes.

putc: Put a character in UTF-8 format.

peekChildCount: Obtain the total count of VRefs and PVars in the VPut buffer.

getWord8: Read one byte of data, or fail if not enough data.

isEmpty: isEmpty will return True iff there is no available input (neither references nor values).

getVarInt: Get an integer represented in the Google protocol buffers zigzag varint encoding, e.g. as produced by putVarInt.

getVarNat: Get a non-negative number represented in the Google protocol buffers varint encoding, e.g. as produced by putVarNat.

setVRefsCacheLimit: VCache uses simple heuristics to decide which VRef contents to hold in memory. One heuristic is a target cache size. Developers may tune this to influence how many VRefs are kept in memory.
The value is specified in bytes, and the default is ten megabytes. VCache size estimates are imprecise, converging on approximate size, albeit not accounting for memory amplification (e.g. from a compact UTF-8 string to Haskell's representation for [Char]). The limit given here is soft, influencing how aggressively content is removed from cache; there is no hard limit on content held by the cache. Estimated cache size is observable via vcacheStats.

If developers need precise control over caching, they should use normal means to reason about GC of values in Haskell (i.e. a VRef is cleared from cache upon GC), or use vref' and deref' to avoid caching and use VCache as a simple serialization layer.

clearVRefsCache will iterate over cached VRefs in Haskell memory at the time of the call, clearing the cache for each of them. This operation isn't recommended for common use; it is rather hostile to independent libraries working with VCache. But this function may find some use for benchmarks or staged applications.

clearVRefCache: Immediately clear the cache associated with a VRef, allowing any contained data to be GC'd. Normally, VRef cached values are cleared either by a background thread or when the VRef itself is garbage collected from Haskell memory. But sometimes the programmer knows best.

Cache cleanup, and signal writer for old content.

Exponential-decay-based cleanup. In this case, we attack a random fraction of the cached addresses. Each attack reduces the CacheMode of cached elements. If the CacheMode is zero, the element is removed from the cache. Active contents have their CacheMode reset on each use, and cleanup stops when the estimated size is good.

For VGet from the database, we start with just a pointer and a size. To process the VGet data, we also need to read addresses from a dedicated region. This is encoded from the end, as follows:

    (normal data) addressN offset offset offset offset ... bytes

Here 'bytes' is basically a varNat encoded backwards for the number of bytes (not counting the 'bytes' field itself) back to the start of the first address. This address is then encoded as a varNat, and any offset is encoded as a varInt, with the idea of reducing overhead for encoding addresses near to each other in memory.

Addresses are encoded such that the first address to parse is last in the sequence (thereby avoiding a list reverse operation).

To read addresses, we simply read the number of bytes from the end, step back that far, then read the initial address and offsets until we get back to the end. This must be performed before we apply the normal read operation for the VGet state. It must be applied exactly once for a given input.

Parse contents at a given address. Returns both the value and the cache weight, or fails. This first tries reading the database, then falls back to reading from recent allocation frames.

Read a reference count for a given address.

Zero-copy access to raw bytes for an address.

When we're just about done with VPut, we really have one more task to perform: to output the address list for any contained PVars and VRefs. These addresses are simply concatenated onto the normal byte output, with a final size value (not including itself) to indicate how far to jump back.

Actually, we output the first address followed by relative offsets for every following address. This behavior allows us to reduce the serialization costs when addresses are near each other in memory.

The address list is output in the reverse order of serialization.
(This simplifies reading in the same order as serialization without a list reversal operation.)

It's important that we finalize exactly once for every serialization, and that this be applied before any hash functions.

When processing write batches, we'll need to track differences in reference counts for each address. For deletions, I'll use minBound as a simple sentinel.

Update reference counts in the database. This requires, for each older address, reading the old reference count, updating it, then writing the new value. Newer addresses may simply be appended. VCache uses two tables for reference counts: one table just contains zeroes, while the other includes positive counts. This separation makes it easy for the garbage collector to find its targets. Zeroes are also recorded to guarantee that GC can continue after a process crashes.

Currently, I assume that all entries older than allocInit should be recorded in the database, i.e. it's an error for both db_refct and db_refct0 to be undefined unless I'm allocating a new address. (Thus newPVar does need a placeholder.)

Ephemerons in the Haskell layer are not reference counted.

This operation should never fail. Failure indicates there is a bug in VCache or some external source of database corruption.

Garbage collection in VCache involves selecting addresses with zero references, filtering objects that are held by VRefs and PVars in Haskell memory, then deleting the remainders. GC is incremental. We limit the amount of work performed in each write step to avoid creating too much latency for writers. To keep up with heavy sustained workloads, the GC rate will adapt based on write rates via the gcLimit argument.

openVCache: Open a VCache with a given database file.
In most cases, a Haskell process should open VCache in the Main module, then pass it as an argument to the different libraries, frameworks, plugins, and other software components that require persistent storage. Use vcacheSubdir to protect against namespace collisions.

When opening VCache, developers decide the maximum size and the file name. For example:

    vc <- openVCache 100 "db"

This would open a VCache whose file-size limit is 100 megabytes, with the name "db", plus an additional "db-lock" lockfile. An exception will be raised if these files cannot be created, locked, or opened. The size limit is passed to LMDB and is separate from setVRefsCacheLimit.

Once opened, VCache typically remains open until process halt. If errors are detected, e.g. due to writing an undefined value to a PVar or running out of space, VCache will attempt to halt the process.

initVCacheThreads: Create background threads needed by VCache.

VCacheStats: Miscellaneous statistics for a VCache instance. These are not necessarily consistent, current, or useful, but they can say a bit about the liveliness and health of a VCache system. Its fields:

vcstat_file_size — estimated database file size (in bytes)
vcstat_vref_count — number of immutable values in the database
vcstat_pvar_count — number of mutable PVars in the database
vcstat_root_count — number of named roots (a subset of PVars)
vcstat_mem_vrefs — number of VRefs in Haskell process memory
vcstat_mem_pvars — number of PVars in Haskell process memory
vcstat_eph_count — number of addresses with zero references
vcstat_alloc_pos — address to next be used by the allocator
vcstat_alloc_count — number of allocations by this process
vcstat_cache_limit — target cache size in bytes
vcstat_cache_size — estimated cache size in bytes
vcstat_gc_count — number of addresses GC'd by this process
vcstat_write_pvars — number of PVar updates to disk (after batching)
vcstat_write_sync — number of sync requests (~ durable transactions)
vcstat_write_frames — number of LMDB-layer transactions by this process

vcacheStats: Compute some miscellaneous statistics for a VCache instance at runtime. These aren't really useful for anything, except to gain some confidence about activity or comprehension of performance.
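A small sketch of polling these statistics (field names as listed above; the assumed signature is vcacheStats :: VSpace -> IO VCacheStats, and the choice of which fields to print is arbitrary):

```haskell
import Database.VCache

-- Print a few liveliness indicators for a VCache instance.
reportStats :: VSpace -> IO ()
reportStats vs = do
    st <- vcacheStats vs
    putStrLn $ "file size (bytes):  " ++ show (vcstat_file_size st)
    putStrLn $ "est. cache (bytes): " ++ show (vcstat_cache_size st)
    putStrLn $ "VRefs in memory:    " ++ show (vcstat_mem_vrefs st)
```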
VCache implements a simplified filesystem metaphor. By assigning a different prefix for root PVars loaded by different subprograms, developers can guard against namespace collisions. Each component may have its own persistent roots.

While I call it a subdirectory, it really is just a prefix. Using "foo" followed by "bar" is equivalent to using "foobar". Developers should include their own separators if they expect them, i.e. "foo/" and "bar/".

Paths are limited to ~500 bytes. For normal use, this limit will not be a problem. If you're creating PVars based on runtime inputs, those should always be dynamic PVars. Root PVar names should never be much larger than fully qualified function names.

vcacheSubdirM: as vcacheSubdir, but returns Nothing if the path is too large.

Obtain a VRef given an address and value. Not initially cached. This operation doesn't touch the persistence layer; it assumes the given address is valid.

Obtain a PVar given an address. The PVar will lazily load when read. This operation does not try to read the database. It may fail if the requested address has already been loaded with another type.

Construct a new VRef and initialize the cache with a given value. If the cache exists, this will touch the existing cache as if dereferenced.

Construct a new VRef without initializing the cache.

Allocate a VRef given data and dependencies. We'll try to find an existing match in the database, then from the recent allocations list, skipping addresses that have recently been GC'd (to account for readers running a little behind the writer). If a match is discovered, we'll use the existing address. Otherwise, we'll allocate a new address and leave it to the background writer thread.

newPVar: Create a new, anonymous PVar as part of an active transaction. Contents of the new PVar are not serialized unless the transaction commits (though a placeholder is still allocated).

newPVarIO: Create a new, anonymous PVar via the IO monad.
This is similar to newPVar, but not as well motivated: global PVars should almost certainly be constructed as named, persistent roots.

newPVars: Create an array of PVars with a given set of initial values. This is equivalent to `mapM newPVar`, but guarantees adjacent addresses in the persistence layer. This is mostly useful when working with large arrays, to simplify reasoning about paging performance.

newPVarsIO: Create an array of adjacent PVars via the IO monad.

loadRootPVar: Global, persistent variables may be loaded by name. The name here is prefixed by vcacheSubdir to control namespace collisions between software components. These named variables are roots for GC purposes, and will not be deleted.

Conceptually, the root PVar has always been there. Loading a root is thus a pure computation. At the very least, it's an idempotent operation. If the PVar exists, its value is lazily read from the persistence layer. Otherwise, the given initial value is stored. To reset a root PVar, simply write before reading.

The recommended practice for roots is to use only a few of them for each persistent software component (i.e. each plugin, WAI app, etc.), similarly to how a module might use just a few global variables. If you need a dynamic set of variables, such as one per client, model that explicitly using anonymous PVars.

loadRootPVarIO: Load a root PVar in the IO monad. This is convenient to control where errors are detected or when initialization is performed. See loadRootPVar.

isolate: Isolate a parser to a subset of bytes and value references. The child parser must process its entire input (all bytes and values) or it will fail. If there is not enough available input to isolate, this operation will fail.

    isolate nBytes nVRefs operation

getVRef: Load a VRef, just the reference rather than the content. The user must know the type of the value, since getVRef is essentially a typecast. VRef content is not read until deref.
All instances of a VRef with the same type and address will share the same cache.

getPVar: Load a PVar, just the variable. Content is loaded lazily on first read, then kept in memory until the PVar is GC'd. Unlike other Haskell variables, PVars can be serialized to the VCache address space. All PVars for a specific address are collapsed, using the same TVar.

Developers must know the type of the PVar, since getPVar will cast to any cacheable type. A runtime error is raised only if you attempt to load the same PVar address with two different types.

getVSpace: Obtain the VSpace associated with content being read. Does not consume any data.

getWord16le, getWord32le, getWord64le, getWord16be, getWord32be, getWord64be: Read words of size 16, 32, or 64 in little-endian or big-endian form.

getStorable: Read a Storable value. In this case, the content should be bytes only, since pointers aren't really meaningful when persisted. Data is copied to an intermediate structure via alloca to avoid alignment issues.

getByteString: Load a number of bytes from the underlying object. A copy is performed in this case (typically no copy is performed by VGet, but the underlying pointer is ephemeral, becoming invalid after the current read transaction). Fails if not enough data. O(N).

getByteStringLazy: Get a lazy bytestring. (Simple wrapper on strict bytestring.)

withBytes: Access a given number of bytes without copying them. These bytes are read-only, and are considered to be consumed upon returning. The pointer should be considered invalid after returning from the withBytes computation.

getc: Get a character from UTF-8 format. Assumes a valid encoding.
(In case of invalid encoding, arbitrary characters may be returned.)

label: label will modify the error message returned from the argument operation; it can help contextualize parse errors.

lookAhead: lookAhead will parse a value, but not consume any input.

lookAheadM: lookAheadM will consume input only if it returns `Just a`.

lookAheadE: lookAheadE will consume input only if it returns `Right b`.

runVTx: runVTx executes a transaction that may involve both STM TVars (via liftSTM) and VCache PVars (via readPVar, writePVar).

markDurable: Durability for a VTx transaction is optional: it requires an additional wait for the background thread to signal that it has committed content to the persistence layer. Due to how writes are batched, a durable transaction may share its wait with many other transactions that occur at more or less the same time.

Developers should mark a transaction durable only if necessary based on domain-layer policies. E.g. for a shopping service, normal updates and views of the virtual shopping cart might not be durable, while committing to a purchase is durable.

markDurableIf: This variation of markDurable makes it easier to short-circuit complex computations that decide durability when the transaction is already durable. If durability is already marked, the boolean is not evaluated.

vcacheSync: If you use a lot of non-durable transactions, you may wish to ensure they are synchronized to disk at various times. vcacheSync will simply wait for all transactions committed up to this point. This is equivalent to running a durable, read-only transaction.

It is recommended you perform a vcacheSync as part of the graceful shutdown of any application that uses VCache.

readPVar: Read a PVar as part of a transaction.

readPVarIO: Read a PVar in the IO monad. This is more efficient than a full transaction. It simply peeks at the underlying TVar with readTVarIO. Durability of the value read is not guaranteed.
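Tying runVTx, readPVar, writePVar, and markDurable together, a hedged sketch of a purchase-style durable update (the transfer scenario is invented; the assumed signature is runVTx :: VSpace -> VTx a -> IO a, with all PVars belonging to that VSpace):

```haskell
import Database.VCache

-- Move an amount between two balances atomically. Marked durable
-- per the guidance above, since it commits a purchase-like action.
transfer :: VSpace -> PVar Integer -> PVar Integer -> Integer -> IO ()
transfer vs from to amt = runVTx vs $ do
    a <- readPVar from
    b <- readPVar to
    writePVar from (a - amt)
    writePVar to   (b + amt)
    markDurable   -- wait for the background writer on commit
```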
writePVar: Write a PVar as part of a transaction.

modifyPVar: Modify a PVar.

modifyPVar': Modify a PVar, strictly.

swapPVar: Swap contents of a PVar for a new value.

unsafePVarAddr: Each PVar has a stable address in the VCache. This address will be very stable, but is not deterministic and isn't really something you should treat as meaningful information about the PVar. Mostly, this function exists to support hashtables or memoization with PVar keys.

The Show instance for PVars will also show the address.

unsafePVarRefct: This function allows developers to access the reference count for the PVar that is currently recorded in the database. This may be useful for heuristic purposes. However, caveats are needed:

First, because the VCache writer operates in a background thread, the reference count returned here may be slightly out of date.

Second, it is possible that VCache will eventually use some other form of garbage collection than reference counting. This function should be considered an unstable element of the API.

Root PVars start with one root reference.

vref: Construct a reference with the cache initially active, i.e. such that immediate deref can access the value without reading from the database. The given value will be placed in the cache unless the same vref has already been constructed.

vrefc: Construct a VRef with an alternative cache control mode.

vref': In some cases, developers can reasonably assume they won't need a value in the near future. In these cases, use the vref' constructor to allocate a VRef without caching the content.

deref: Dereference a VRef, obtaining its value. If the value is not in cache, it will be read from the database and then cached. Otherwise, the value is read from cache and the cache is touched to restart any expiration.

Assuming a valid VCacheable instance, this operation should return a value equivalent to the one used to construct the VRef.

derefc: Dereference a VRef with an alternative cache control mode.

deref': Dereference a VRef.
This will read from the cache if the value is available, but will not update the cache. If the value is not cached, it will be read instead from the persistence layer.

This can be useful if you know you'll only dereference a value once for a given task, or if the datatype involved is cheap to parse (e.g. simple bytestrings) such that there isn't a strong need to cache the parse result.

withVRefBytes: Specialized, zero-copy access to a `VRef ByteString`. Access to the given ByteString becomes invalid after returning. This operation may also block the writer if it runs much longer than a single writer batch (though writer batches are frequently large enough that this shouldn't be a problem if you're careful).

unsafeVRefEncoding: Zero-copy access to the raw encoding for any VRef. The given data becomes invalid after returning. This is provided mostly for debugging purposes, i.e. so you can peek under the hood and see how things are encoded, or eyeball the encoding.

unsafeVRefAddr: Each VRef has a numeric address in the VSpace. This address is non-deterministic, and essentially independent of the arguments to the vref constructor. This function is unsafe in the sense that it violates the illusion of purity. However, the VRef address will be stable so long as the developer can guarantee it is reachable.

This function may be useful for memoization tables and similar.

The Show instance for VRef will also show the address.

unsafeVRefRefct: This function allows developers to access the reference count for the VRef that is currently recorded in the database. This may be useful for heuristic purposes. However, caveats are needed:

First, due to structure sharing, a VRef may share an address with VRefs of other types having the same serialized form. Reference counts are at the address level.

Second, because the VCache writer operates in a background thread, the reference count returned here may be slightly out of date.

Third, it is possible that VCache will eventually use some other form of garbage collection than reference counting.
This function should be considered an unstable element of the API.

vcache-0.2.4
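As a closing sketch, the cache-bypassing pair vref' and deref' described above can be used to treat VCache as a plain serialization layer (the blob helpers are invented names; a VCacheable instance for strict ByteString is assumed):

```haskell
import Database.VCache
import qualified Data.ByteString as BS

-- vref' allocates the reference without caching the content;
-- useful when the value won't be needed again soon.
storeBlob :: VSpace -> BS.ByteString -> VRef BS.ByteString
storeBlob vs = vref' vs

-- deref' reads without touching or filling the cache; useful for
-- one-shot reads of cheap-to-parse data such as bytestrings.
loadBlobOnce :: VRef BS.ByteString -> BS.ByteString
loadBlobOnce = deref'
```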