darcs-2.16.1: a distributed, interactive, smart revision control system

Darcs.Repository.Cache


Documentation

cacheHash computes the cache hash (i.e., the filename) of a packed string.

data Cache Source #

Cache is an abstract type for hiding the underlying cache locations

Instances
 Show Cache (defined in Darcs.Repository.Cache)

data CacheType Source #

Constructors

 Repo
 Directory
Instances
 Eq CacheType (defined in Darcs.Repository.Cache)
 Show CacheType (defined in Darcs.Repository.Cache)

data CacheLoc Source #

Constructors

 Cache (record constructor; fields not shown)
Instances
 Eq CacheLoc (defined in Darcs.Repository.Cache)
 Show CacheLoc (defined in Darcs.Repository.Cache)

data WritableOrNot Source #

Constructors

 Writable
 NotWritable
Instances
 Eq WritableOrNot (defined in Darcs.Repository.Cache)
 Show WritableOrNot (defined in Darcs.Repository.Cache)

data HashedDir Source #

Constructors

 HashedPristineDir
 HashedPatchesDir
 HashedInventoriesDir

unionRemoteCaches merges caches. It tries to do better than just blindly copying remote cache entries:

• If the remote repository is accessed over the network, do not copy any cache entries from it: its local entries make no sense on this machine, and using its network entries can make darcs hang when it tries to reach an inaccessible host.
• If the remote repository is local, copy all of its network cache entries. For its local cache entries: if the cache directory exists and is writable, it is added as a writable cache; if it exists but is not writable, it is added as a read-only cache.

This approach should save us from bogus cache entries. One case where it does not work well is fetching from a partial repository over the network. Hopefully this is not a common case.
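The merge policy above can be sketched as a pure function over a simplified model of the types documented in this module. This is illustrative only, not the darcs API: the real unionRemoteCaches also probes the filesystem for existence and writability of local cache directories, which is elided here, and all function names below are invented for the sketch.

```haskell
import Data.List (isPrefixOf)

-- Simplified stand-ins for the types documented above.
data CacheType = Repo | Directory deriving (Eq, Show)
data WritableOrNot = Writable | NotWritable deriving (Eq, Show)
data CacheLoc = CacheLoc CacheType WritableOrNot String deriving (Eq, Show)

-- A cache entry counts as a "network" entry if its source is a URL.
isNetworkSource :: String -> Bool
isNetworkSource src = any (`isPrefixOf` src) ["http://", "https://"]

-- Which of the remote's cache entries do we adopt locally?
mergeRemoteEntries :: Bool -> [CacheLoc] -> [CacheLoc]
mergeRemoteEntries remoteIsNetwork remote
  | remoteIsNetwork = []  -- network remote: adopt nothing from it
  | otherwise =           -- local remote: adopt its network entries;
                          -- its local entries would be probed on disk (elided)
      [ e | e@(CacheLoc _ _ src) <- remote, isNetworkSource src ]
```

The interesting design point is the asymmetry: a URL in a remote's cache list is usable from anywhere, while a local path in a remote's cache list is only meaningful on the machine that wrote it, so it must be re-validated (or dropped) rather than copied blindly.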

fetchFileUsingCache cache dir hash receives a list of caches, cache, the directory to which the file belongs, dir, and the hash of the file to fetch. It tries to fetch the file from each of the sources in order, one by one. If the file cannot be fetched from any of the sources, the operation fails.
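The try-each-source-in-order behavior can be modeled as a pure fold over a list of sources. This is a sketch only: the real function does IO, and fetchFirst and its source representation are invented names for illustration.

```haskell
import Data.Maybe (listToMaybe, mapMaybe)

-- Model a source as a function from hash to Maybe contents.
-- Try the sources in order and return the first hit;
-- Nothing models the failure when every source misses.
fetchFirst :: [String -> Maybe String] -> String -> Maybe String
fetchFirst sources hash = listToMaybe (mapMaybe ($ hash) sources)
```

Because mapMaybe is lazy in the list of sources, later sources are only consulted when earlier ones miss, matching the one-by-one semantics described above.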

speculateFileUsingCache cache subdirectory name takes note that the file name is likely to be needed soon: pipelined downloads will add it to the (low-priority) queue; otherwise it is a no-op.

speculateFilesUsingCache :: Cache -> HashedDir -> [String] -> IO () Source #

Notes that the files are likely to be needed soon: pipelined downloads will add them to the (low-priority) queue; otherwise it is a no-op.

writeFileUsingCache cache compression subdir contents writes the string contents to the directory subdir, unless it is already in the cache, in which case it is a no-op. Warning: this means that in case of a hash collision, writing with writeFileUsingCache is a no-op. The returned value is the filename that was given to the string.
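This write-unless-present behavior is content-addressed storage: the filename is derived from the contents, so an existing entry with the same name is assumed to hold the same data. A minimal in-memory model, using a toy stand-in for the hash (not darcs's real cacheHash) and invented names throughout:

```haskell
import qualified Data.Map.Strict as Map
import Data.Char (ord)

-- Toy stand-in for cacheHash; NOT the real darcs hash function.
toyHash :: String -> String
toyHash s = show (foldl (\acc c -> acc * 31 + ord c) 0 s)

-- Store contents under its hash; a no-op if that name already exists,
-- which is exactly why a hash collision silently keeps the old contents.
writeUsingCache :: Map.Map String String -> String -> (String, Map.Map String String)
writeUsingCache store contents =
  let name = toyHash contents
  in (name, Map.insertWith (\_new old -> old) name contents store)
```

Writing the same contents twice returns the same name and leaves the store unchanged, which is the documented no-op.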

peekInCache cache subdir hash tells whether cache contains an object with hash hash in a writable position. Florent: why do we want it to be in a writable position?

hashedFilePath cachelocation subdir hash returns the physical filename of hash hash in the subdir section of cachelocation.
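A plausible model of that path computation, assuming the common layout where the hash becomes the basename inside the subdirectory. The exact on-disk layout is darcs-internal, and hashedPath here is an illustrative name, not the real function:

```haskell
import System.FilePath ((</>))

-- Hypothetical model: physical path = cache source </> subdir name </> hash.
hashedPath :: FilePath -> String -> String -> FilePath
hashedPath source subdirName hash = source </> subdirName </> hash
```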

Prints an error message with a list of bad caches.

This keeps only Repo NotWritable entries.