archiver: Archive supplied URLs in WebCite & Internet Archive
archiver is a daemon which watches a specified text file, each line of which is a URL, and one by one requests that each URL be archived or spidered by http://www.webcitation.org and http://www.archive.org for future reference.
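The loop described above (read the watch file, request archiving of each URL) can be sketched as below. This is a minimal, hypothetical illustration, not the package's actual code: the real daemon uses the curl/HTTP libraries to perform the requests and hinotify to watch the file, and the request-URL formats shown here are assumptions (the Wayback Machine's `web.archive.org/save/` endpoint and an illustrative WebCite form).

```haskell
-- Hypothetical sketch of archiver's core: for each pending URL, build the
-- archive-request URLs that would be fetched. Endpoint formats are
-- illustrative assumptions, not the package's actual requests.
archiveRequests :: String -> [String]
archiveRequests url =
  [ "http://www.webcitation.org/archive?url=" ++ url -- illustrative WebCite form
  , "http://web.archive.org/save/" ++ url ]          -- Wayback "save" endpoint

-- Each non-empty line of the watch file is a URL awaiting archiving.
pendingUrls :: String -> [String]
pendingUrls = filter (not . null) . lines

main :: IO ()
main = mapM_ (mapM_ putStrLn . archiveRequests)
             (pendingUrls "http://www.example.com\n\nhttp://www.example.org\n")
```

In the daemon itself, each request would be issued over HTTP and the processed line removed from the watch file; here the sketch only prints the request URLs it would fetch.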
Because the interface is a simple text file, archiver can be combined with other scripts: for example, a script using SQLite to extract visited URLs from Firefox's history, or a program extracting URLs from Pandoc documents.
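As an example of such a companion script, the sketch below pulls http/https links out of free text (say, a Pandoc document's source), producing lines suitable for appending to archiver's watch file. The `extractUrls` function is hypothetical and illustrative; it is not part of the package's API.

```haskell
import Data.List (isPrefixOf)

-- Hypothetical helper: collect whitespace-separated tokens that look like
-- http/https URLs, one per output line, ready for archiver's watch file.
extractUrls :: String -> [String]
extractUrls text =
  [ w | w <- concatMap words (lines text)
      , "http://" `isPrefixOf` w || "https://" `isPrefixOf` w ]

main :: IO ()
main = mapM_ putStrLn
         (extractUrls "see http://example.com for details, also https://example.org/page")
```

A real extractor would also strip trailing punctuation and Markdown delimiters, but the watch-file convention stays the same: one URL per line.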
|Versions [faq]|0.1, 0.2, 0.3, 0.3.1, 0.4, 0.5, 0.5.1, 0.6.0, 0.6.1, 0.6.2, 0.6.2.1|
|Dependencies|base (==4.*), bytestring, curl, hinotify, HTTP, network [details]|
|Uploaded|by GwernBranwen at Thu Sep 23 15:26:41 UTC 2010|
|Downloads|5014 total (187 in the last 30 days)|
|Rating|(no votes yet) [estimated by rule of succession]|
|Docs|uploaded by user|
|Build status|unknown [no reports yet]|