(Retitling as this has drifted..)


I thought this might be useful: since curl is being used for the URL backend, it might be worth checking for its existence.

diff --git a/configure.hs b/configure.hs
index 772ba54..1a563e0 100644
--- a/configure.hs
+++ b/configure.hs
@@ -13,6 +13,7 @@ tests = [
        , TestCase "uuid generator" $ selectCmd "uuid" ["uuid", "uuidgen"]
        , TestCase "xargs -0" $ requireCmd "xargs_0" "xargs -0 

Well, curl is an optional extra, so requireCmd is too strong. Changed to testCmd and applied, thank you!

I thought about actually using the resulting SysConfig.curl to disable the URL backend if False.. but probably it's better to just let it fail if curl is not available. Although, if we wanted to add a check for wget or something and use it when curl was not available, that might be worth doing. --Joey
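
(Not part of the patch, just a hedged illustration: if the SysConfig.curl result were used, a fallback along the lines Joey mentions might look roughly like this, reusing the boolSystem/Params/File helpers from the Backend/URL.hs diff below. The wget flags are an assumption.)

-- Sketch only: pick curl or wget based on what configure found.
-- Assumes a SysConfig.curl :: Bool generated by configure.hs;
-- the wget flags are a guess.
downloadWith :: FilePath -> String -> IO Bool
downloadWith file url
        | SysConfig.curl = boolSystem "curl" [Params "-# -o", File file, File url]
        | otherwise = boolSystem "wget" [Params "-c -O", File file, File url]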

I was wondering whether it is worth having generic "stat", "delete", "get" and "put" operations. I do like the idea of being able to use completely arbitrary storage systems or arbitrary transfer systems. If that capability existed, it would be interesting to explore using aria2 for something like bittorrent as a backend, or using irods or some grid storage system as the storage archive. It's just an idea, but I have seen it implemented quite well in irods.
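
(A hedged sketch of that idea, not anything that exists in git-annex: a parameterized backend could reduce to four user-supplied commands, with the key and file substituted in before running them.)

-- Hypothetical only: a storage system described by four shell commands,
-- where %k and %f would stand for the key and the local file.
data GenericStore = GenericStore
        { statCmd :: String   -- an irods setup might use something like "ils %k"
        , getCmd :: String    -- "iget %k %f"
        , putCmd :: String    -- "iput %f %k"
        , deleteCmd :: String -- "irm %k"
        }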

I'm unsure about the idea of having a backend that is parameterized like that. It would mean that one annex's GENERIC-foo key might be entirely different from another's key with the same backend and details. And a misconfiguration could send data the wrong way, or retrieve the wrong data, etc.

I mostly look at the URL backend as an example that can be modified to make this kind of custom backend. You already probably know enough to make a TORRENT backend where keys are the urls to torrents to download with aria2c --follow-torrent=mem.
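
(Purely illustrative and untested: following the shape of downloadUrl in the Backend/URL.hs diff at the bottom of this page, such a backend's download step might look something like the following. The aria2c --dir handling, and mapping the torrent's payload onto the annexed file, are assumptions.)

-- Hypothetical sketch, modeled on downloadUrl below: the key holds a torrent
-- url, and aria2c fetches it into the file's directory. Actually mapping the
-- torrent's payload onto the annexed file would need more care than this.
-- Assumes takeDirectory from System.FilePath is in scope.
downloadTorrent :: Key -> FilePath -> Annex Bool
downloadTorrent key file = do
        showNote "downloading"
        showProgress -- make way for aria2c output
        liftIO $ boolSystem "aria2c"
                [ Params "--follow-torrent=mem"
                , Param "-d", File (takeDirectory file)
                , File url
                ]
        where
                url = join ":" $ drop 1 $ split ":" $ show key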

I am also interested in doing backends that use, e.g., cloud storage. An S3 backend that could upload files to S3 in addition to downloading them, for example, would be handy. --Joey

So, rather than using backends to do this, it made more sense to make them special remotes. The URL backend remains a bit of a special case, and a bittorrent backend that downloaded a file from a bittorrent url would still be a good use of a backend, but for storing files in external data stores like S3, making it a remote makes better sense. I think I can close this bug now; done --Joey

Also, in Backend/URL.hs, is it worth making a minor change to the way curl is called? (I'm not sure if the following is correct or not.)

It's correct, typewise, but I don't see any real reason to bother with the change. But I do appreciate patches, which have been rare so far, probably because of Haskell.. :) --Joey

heh agreed

diff --git a/Backend/URL.hs b/Backend/URL.hs
index 29dc8fe..4afcf86 100644
--- a/Backend/URL.hs
+++ b/Backend/URL.hs
@@ -50,10 +50,13 @@ dummyFsck _ _ _ = return True
 dummyOk :: Key -> Annex Bool
 dummyOk _ = return True

+curl :: [CommandParam] -> IO Bool
+curl = boolSystem "curl"
+
 downloadUrl :: Key -> FilePath -> Annex Bool
 downloadUrl key file = do
        showNote "downloading"
        showProgress -- make way for curl progress bar
-       liftIO $ boolSystem "curl" [Params "-# -o", File file, File url]
+       liftIO $ curl [Params "-# -o", File file, File url]
        where
                url = join ":" $ drop 1 $ split ":" $ show key 