Please describe the problem.
I am running an initial export to my NAS over WebDAV. The repo contains many small files, and the export has already been running for a day. Even the smallest file takes around a second to upload.
Looking at strace and the code, it seems that for each file, git-annex not only establishes a complete new TCP connection, it also re-reads the credentials from .git/annex/creds.
Are there ways to make WebDAV faster? Could HTTP connections be reused? Could multiple files be uploaded in parallel?
Apparently files are also uploaded to a temporary location and renamed after a successful upload. This adds additional latency, so parallel uploads could provide a speedup?
Indeed, the regular webdav special remote uses prepareDav, which sets up a single DAV context that is used for all stores, but export does not, and so it creates a new context each time.
S3 gets around this using an MVar that contains the S3 handle. Webdav should be able to do the same.
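A minimal sketch of that caching pattern, assuming hypothetical names (DavHandle, withDavHandle; the real S3 remote's code is structured differently): an MVar starts out empty, the handle is created on first use, and every later store reuses it.

```haskell
import Control.Concurrent.MVar

-- Hypothetical stand-in for the DAV context / S3 handle.
data DavHandle = DavHandle { davUrl :: String }

-- Expensive setup that should run only once per remote:
-- imagine credential reading and TCP/TLS connection setup here.
newDavHandle :: String -> IO DavHandle
newDavHandle url = return (DavHandle url)

-- The cache: Nothing until the first use, then Just the shared handle.
type HandleCache = MVar (Maybe DavHandle)

newCache :: IO HandleCache
newCache = newMVar Nothing

-- Run an action with the cached handle, creating it on first call only.
withDavHandle :: HandleCache -> String -> (DavHandle -> IO a) -> IO a
withDavHandle cache url a = do
    h <- modifyMVar cache $ \cached -> case cached of
        Just h  -> return (Just h, h)       -- reuse existing handle
        Nothing -> do
            h <- newDavHandle url           -- first use: set up once
            return (Just h, h)
    a h
```

Because modifyMVar serializes access to the cache, concurrent stores racing on first use would still create the handle only once.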
(The upload to a temp location is necessary; otherwise, after an interrupted upload, a resume would not be able to tell which files had been fully uploaded in some situations. Or something like that; I forget the exact circumstances, but it's documented in a comment where storeExport is defined in Types.Remote.)
(Opened a todo: support concurrency for export.)