Recent changes to this wiki:

comment
diff --git a/doc/forum/git-annex_doesn__39__t_build_with_feed-1.0.0.0/comment_1_0be4545b1d93232a721c5ae65030334c._comment b/doc/forum/git-annex_doesn__39__t_build_with_feed-1.0.0.0/comment_1_0be4545b1d93232a721c5ae65030334c._comment
new file mode 100644
index 000000000..2c630f267
--- /dev/null
+++ b/doc/forum/git-annex_doesn__39__t_build_with_feed-1.0.0.0/comment_1_0be4545b1d93232a721c5ae65030334c._comment
@@ -0,0 +1,7 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 1"""
+ date="2017-09-24T19:58:47Z"
+ content="""
+Already fixed in git head.
+"""]]

Added a comment
diff --git a/doc/forum/Add_files_to_direct_mode_repos/comment_1_41d632853d3160899b04da7e4b95475e._comment b/doc/forum/Add_files_to_direct_mode_repos/comment_1_41d632853d3160899b04da7e4b95475e._comment
new file mode 100644
index 000000000..5ca3a4edf
--- /dev/null
+++ b/doc/forum/Add_files_to_direct_mode_repos/comment_1_41d632853d3160899b04da7e4b95475e._comment
@@ -0,0 +1,11 @@
+[[!comment format=mdwn
+ username="Gus"
+ avatar="http://cdn.libravatar.org/avatar/665626c67ab3ee7e842183f6f659e120"
+ subject="comment 1"
+ date="2017-09-24T13:14:53Z"
+ content="""
+You should read the [direct mode page](https://git-annex.branchable.com/direct_mode/#index5h2).
+
+In short, use `git-annex sync` as a replacement for `git commit`.
+
+"""]]

How to add files to direct mode git-annex repos and have them propagate to other repos?
diff --git a/doc/forum/Add_files_to_direct_mode_repos.mdwn b/doc/forum/Add_files_to_direct_mode_repos.mdwn
new file mode 100644
index 000000000..a5cb2d691
--- /dev/null
+++ b/doc/forum/Add_files_to_direct_mode_repos.mdwn
@@ -0,0 +1 @@
+I started a git annex repo on a crippled filesystem (FAT32). I git-annex-add'ed some files, but then learned that one cannot git-commit them in a direct mode repo (“fatal: This operation must be run in a work tree”). How do I add files on a crippled filesystem and then have them propagate to other repos if numcopies > 1?

Added a comment
diff --git a/doc/forum/Shared_directory_with_non_git-annex_users/comment_1_a56cb4993982e030eb9fd4cdb3b0c368._comment b/doc/forum/Shared_directory_with_non_git-annex_users/comment_1_a56cb4993982e030eb9fd4cdb3b0c368._comment
new file mode 100644
index 000000000..698967e7a
--- /dev/null
+++ b/doc/forum/Shared_directory_with_non_git-annex_users/comment_1_a56cb4993982e030eb9fd4cdb3b0c368._comment
@@ -0,0 +1,50 @@
+[[!comment format=mdwn
+ username="http://xgm.de/oid/"
+ nickname="Florian"
+ avatar="http://cdn.libravatar.org/avatar/4c5c0e290374d76c713f482e41f60a3cbee0fa64bb94c6da94e5a61a50824811"
+ subject="comment 1"
+ date="2017-09-24T04:21:11Z"
+ content="""
+Ok, I tried it using `GIT_DIR` and `GIT_WORK_TREE`:
+
+    Current directory is ~/git-annex, ./work exists and is populated with some files.
+
+    % GIT_DIR=~/git-annex/git GIT_WORK_TREE=~/git-annex/work git init
+    % GIT_DIR=~/git-annex/git GIT_WORK_TREE=~/git-annex/work git annex init \"server\"
+    % GIT_DIR=~/git-annex/git GIT_WORK_TREE=~/git-annex/work git annex direct
+    % GIT_DIR=~/git-annex/git GIT_WORK_TREE=~/git-annex/work git annex add .
+    [... files are added ...]
+    % GIT_DIR=~/git-annex/git GIT_WORK_TREE=~/git-annex/work git annex sync
+    [... files are synced ...]
+    
+    % git clone git remote
+    % cd remote
+    % git annex init \"remote\"
+    % git annex sync
+    
+    
+    % git annex get a.out
+    get a.out
+      Unable to access these remotes: origin
+    
+      Try making some of these repositories available:
+            ed208c9f-a963-4000-a505-c3fe9dab0042 -- server [origin]
+    failed
+    git-annex: get: 1 failed
+    
+    
+    % git annex whereis a.out
+    whereis a.out (1 copy)
+            ed208c9f-a963-4000-a505-c3fe9dab0042 -- server [origin]
+    ok
+
+
+The remote is of course available; it's all local.
+
+What is still wrong?
+
+Thanks,
+Florian
+
+
+"""]]

Added a comment: How to fix corrupt SQLite database?
diff --git a/doc/forum/Problem_with_corrupt_SQLite_DB/comment_5_12331afc6c4bd8087f000c6af3414e6a._comment b/doc/forum/Problem_with_corrupt_SQLite_DB/comment_5_12331afc6c4bd8087f000c6af3414e6a._comment
new file mode 100644
index 000000000..b962530a3
--- /dev/null
+++ b/doc/forum/Problem_with_corrupt_SQLite_DB/comment_5_12331afc6c4bd8087f000c6af3414e6a._comment
@@ -0,0 +1,13 @@
+[[!comment format=mdwn
+ username="https://me.yahoo.com/a/kj4_2rEl2YnZENPxus3ZJlo4L31K#1b084"
+ nickname="Ammie"
+ avatar="http://cdn.libravatar.org/avatar/9584e25fd8ed1dc5798173f367136ff5780952c97a4147d78f19ca2615398857"
+ subject="How to fix corrupt SQLite database?"
+ date="2017-09-22T10:57:20Z"
+ content="""
+Some useful information has been shared here in this article - **http://wordpress.semnaitik.com/2017/02/01/repair-sqlite-database/**
+
+Refer to the above article and learn how to repair SQLite database. 
+
+Thanks.
+"""]]

diff --git a/doc/forum/git-annex_doesn__39__t_build_with_feed-1.0.0.0.mdwn b/doc/forum/git-annex_doesn__39__t_build_with_feed-1.0.0.0.mdwn
new file mode 100644
index 000000000..bdcb7e0f2
--- /dev/null
+++ b/doc/forum/git-annex_doesn__39__t_build_with_feed-1.0.0.0.mdwn
@@ -0,0 +1,209 @@
+    [470 of 574] Compiling Command.ImportFeed ( Command/ImportFeed.hs, dist/build/git-annex/git-annex-tmp/Command/ImportFeed.dyn_o )
+
+    Command/ImportFeed.hs:139:61: error:
+        • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+        Expected type: URLString
+            Actual type: Text.Atom.Feed.URI
+        • In the first argument of ‘Enclosure’, namely ‘enclosureurl’
+        In the second argument of ‘($)’, namely ‘Enclosure enclosureurl’
+        In the second argument of ‘($)’, namely
+            ‘ToDownload f u i $ Enclosure enclosureurl’
+        |
+    139 |                         Just $ ToDownload f u i $ Enclosure enclosureurl
+        |                                                             ^^^^^^^^^^^^
+
+    Command/ImportFeed.hs:142:49: error:
+        • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+        Expected type: URLString
+            Actual type: Data.Text.Internal.Text
+        • In the first argument of ‘quviSupported’, namely ‘link’
+        In the first argument of ‘ifM’, namely ‘(quviSupported link)’
+        In the expression:
+            ifM
+            (quviSupported link)
+            (return $ Just $ ToDownload f u i $ QuviLink link, return Nothing)
+        |
+    142 |                 Just link -> ifM (quviSupported link)
+        |                                                 ^^^^
+
+    Command/ImportFeed.hs:143:71: error:
+        • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+        Expected type: URLString
+            Actual type: Data.Text.Internal.Text
+        • In the first argument of ‘QuviLink’, namely ‘link’
+        In the second argument of ‘($)’, namely ‘QuviLink link’
+        In the second argument of ‘($)’, namely
+            ‘ToDownload f u i $ QuviLink link’
+        |
+    143 |                         ( return $ Just $ ToDownload f u i $ QuviLink link
+        |                                                                       ^^^^
+
+    Command/ImportFeed.hs:214:54: error:
+        • Couldn't match type ‘[Char]’ with ‘Data.Text.Internal.Text’
+        Expected type: S.Set Data.Text.Internal.Text
+            Actual type: S.Set ItemId
+        • In the second argument of ‘S.member’, namely ‘(knownitems cache)’
+        In the expression: S.member itemid (knownitems cache)
+        In a case alternative:
+            Just (_, itemid) -> S.member itemid (knownitems cache)
+        |
+    214 |                 Just (_, itemid) -> S.member itemid (knownitems cache)
+        |                                                      ^^^^^^^^^^^^^^^^
+
+    Command/ImportFeed.hs:279:42: error:
+        • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+        Expected type: Maybe [Char]
+            Actual type: Maybe Text.RSS.Syntax.DateString
+        • In the second argument of ‘(<$>)’, namely
+            ‘getItemPublishDateString itm’
+        In the expression: replace "/" "-" <$> getItemPublishDateString itm
+        In a case alternative:
+            _ -> replace "/" "-" <$> getItemPublishDateString itm
+        |
+    279 |                 _ -> replace "/" "-" <$> getItemPublishDateString itm
+        |                                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+    Command/ImportFeed.hs:293:44: error:
+        • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+        Expected type: String
+            Actual type: Data.Text.Internal.Text
+        • In the first argument of ‘toMetaValue’, namely ‘itemid’
+        In the second argument of ‘($)’, namely ‘toMetaValue itemid’
+        In the second argument of ‘M.singleton’, namely
+            ‘(S.singleton $ toMetaValue itemid)’
+        |
+    293 |                 (S.singleton $ toMetaValue itemid)
+        |                                            ^^^^^^
+
+    Command/ImportFeed.hs:299:26: error:
+        • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+        Expected type: Maybe String
+            Actual type: Maybe Data.Text.Internal.Text
+        • In the expression: feedtitle
+        In the expression: [feedtitle]
+        In the expression: ("feedtitle", [feedtitle])
+        |
+    299 |         [ ("feedtitle", [feedtitle])
+        |                          ^^^^^^^^^
+
+    Command/ImportFeed.hs:300:26: error:
+        • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+        Expected type: Maybe String
+            Actual type: Maybe Data.Text.Internal.Text
+        • In the expression: itemtitle
+        In the expression: [itemtitle]
+        In the expression: ("itemtitle", [itemtitle])
+        |
+    300 |         , ("itemtitle", [itemtitle])
+        |                          ^^^^^^^^^
+
+    Command/ImportFeed.hs:301:27: error:
+        • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+        Expected type: Maybe String
+            Actual type: Maybe Data.Text.Internal.Text
+        • In the expression: feedauthor
+        In the expression: [feedauthor]
+        In the expression: ("feedauthor", [feedauthor])
+        |
+    301 |         , ("feedauthor", [feedauthor])
+        |                           ^^^^^^^^^^
+
+    Command/ImportFeed.hs:302:27: error:
+        • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+        Expected type: Maybe String
+            Actual type: Maybe Data.Text.Internal.Text
+        • In the expression: itemauthor
+        In the expression: [itemauthor]
+        In the expression: ("itemauthor", [itemauthor])
+        |
+    302 |         , ("itemauthor", [itemauthor])
+        |                           ^^^^^^^^^^
+
+    Command/ImportFeed.hs:303:28: error:
+        • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+        Expected type: Maybe String
+            Actual type: Maybe Data.Text.Internal.Text
+        • In the expression: getItemSummary $ item i
+        In the expression: [getItemSummary $ item i]
+        In the expression: ("itemsummary", [getItemSummary $ item i])
+        |
+    303 |         , ("itemsummary", [getItemSummary $ item i])
+        |                            ^^^^^^^^^^^^^^^^^^^^^^^
+
+    Command/ImportFeed.hs:304:32: error:
+        • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+        Expected type: Maybe String
+            Actual type: Maybe Data.Text.Internal.Text
+        • In the expression: getItemDescription $ item i
+        In the expression: [getItemDescription $ item i]
+        In the expression:
+            ("itemdescription", [getItemDescription $ item i])
+        |
+    304 |         , ("itemdescription", [getItemDescription $ item i])
+        |                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+    Command/ImportFeed.hs:305:27: error:
+        • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+        Expected type: Maybe String
+            Actual type: Maybe Data.Text.Internal.Text
+        • In the expression: getItemRights $ item i
+        In the expression: [getItemRights $ item i]
+        In the expression: ("itemrights", [getItemRights $ item i])
+        |
+    305 |         , ("itemrights", [getItemRights $ item i])
+        |                           ^^^^^^^^^^^^^^^^^^^^^^
+
+    Command/ImportFeed.hs:306:23: error:
+        • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+        Expected type: Maybe String
+            Actual type: Maybe Data.Text.Internal.Text
+        • In the expression: snd <$> getItemId (item i)
+        In the expression: [snd <$> getItemId (item i)]
+        In the expression: ("itemid", [snd <$> getItemId (item i)])
+        |
+    306 |         , ("itemid", [snd <$> getItemId (item i)])
+        |                       ^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+    Command/ImportFeed.hs:307:22: error:
+        • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+        Expected type: Maybe String
+            Actual type: Maybe Data.Text.Internal.Text
+        • In the expression: itemtitle
+        In the expression: [itemtitle, feedtitle]
+        In the expression: ("title", [itemtitle, feedtitle])
+        |
+    307 |         , ("title", [itemtitle, feedtitle])
+        |                      ^^^^^^^^^
+
+    Command/ImportFeed.hs:307:33: error:
+        • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+        Expected type: Maybe String
+            Actual type: Maybe Data.Text.Internal.Text
+        • In the expression: feedtitle
+        In the expression: [itemtitle, feedtitle]
+        In the expression: ("title", [itemtitle, feedtitle])
+        |
+    307 |         , ("title", [itemtitle, feedtitle])
+        |                                 ^^^^^^^^^
+
+    Command/ImportFeed.hs:308:23: error:
+        • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+        Expected type: Maybe String
+            Actual type: Maybe Data.Text.Internal.Text
+        • In the expression: itemauthor
+        In the expression: [itemauthor, feedauthor]

(Diff truncated)
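
As noted in the comment at the top of this feed, this is already fixed in git head. Until a release with the fix is out, one possible workaround (assuming a cabal-install based build) is to constrain the feed library to the old, String-based API:

    cabal install --constraint='feed < 1.0.0.0' git-annex
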
Added a comment
diff --git a/doc/bugs/Data_loss_when_copying_files_with_running_assistant/comment_4_c16ecf8fabd0c63a7b59927fe0bb6e2c._comment b/doc/bugs/Data_loss_when_copying_files_with_running_assistant/comment_4_c16ecf8fabd0c63a7b59927fe0bb6e2c._comment
new file mode 100644
index 000000000..e55d4538e
--- /dev/null
+++ b/doc/bugs/Data_loss_when_copying_files_with_running_assistant/comment_4_c16ecf8fabd0c63a7b59927fe0bb6e2c._comment
@@ -0,0 +1,94 @@
+[[!comment format=mdwn
+ username="michalrus"
+ avatar="http://cdn.libravatar.org/avatar/83c0b6e7f9d20f09a892263c4903bbae"
+ subject="comment 4"
+ date="2017-09-21T12:56:30Z"
+ content="""
+Look at what’s happening here: the order is reversed. First a file is saved, and then the assistant running on other machines decides to remove it. Where does that come from? :o
+
+```
+commit cc61f6db3273c749ac2e4bdfb489457af919472c
+Author: Elzbieta Rus <rus.elzbieta@gmail.com>
+Date:   Sat Sep 2 14:23:13 2017 +0200
+
+    git-annex in elzbietarus
+
+ Dokumenty/Przepisy/SERNIK NA ZIMNO.docx | 1 -
+ 1 file changed, 1 deletion(-)
+
+commit 30965f29f756f02ad5d617d6fd07ec7013e01226
+Merge: d69d69901f ec07710223
+Author: Robert Rus <rusrob@poczta.onet.pl>
+Date:   Sat Sep 2 14:17:20 2017 +0200
+
+    Merge remote-tracking branch 'refs/remotes/origin/master'
+
+commit d69d69901fc2a4c958077cfbf8b27f595465627c
+Author: Robert Rus <rusrob@poczta.onet.pl>
+Date:   Sat Sep 2 14:17:17 2017 +0200
+
+    git-annex in robertrus-asus-1225c
+
+ Dokumenty/Przepisy/SERNIK NA ZIMNO.odt | 1 +
+ 1 file changed, 1 insertion(+)
+
+commit ec07710223770d1458de76004ec9baab5c20d508
+Merge: a60c530e9b 1cb58f2783
+Author: Robert Rus <rusrob@poczta.onet.pl>
+Date:   Sat Sep 2 14:17:17 2017 +0200
+
+    Merge remote-tracking branch 'refs/remotes/origin/master'
+
+commit 1e29dee284b5b9f4593cd7c0484c194ac2a9644c
+Merge: 1a36f09308 1cb58f2783
+Author: Robert Rus <rusrob@poczta.onet.pl>
+Date:   Sat Sep 2 14:17:16 2017 +0200
+
+    Merge remote-tracking branch 'refs/remotes/origin/master'
+
+commit a60c530e9b167c17b625e66f42e7282a1e275703
+Author: Robert Rus <rusrob@poczta.onet.pl>
+Date:   Sat Sep 2 14:17:16 2017 +0200
+
+    git-annex in robertrus-acer
+
+ Dokumenty/Przepisy/SERNIK NA ZIMNO.odt | 1 +
+ 1 file changed, 1 insertion(+)
+
+commit 1a36f09308c4134646b7074f2e827c909ee7033a
+Author: Robert Rus <rusrob@poczta.onet.pl>
+Date:   Sat Sep 2 14:17:14 2017 +0200
+
+    git-annex in robertrus-asus-1225c
+
+ Dokumenty/Przepisy/SERNIK NA ZIMNO.odt | 1 -
+ 1 file changed, 1 deletion(-)
+
+commit 1cb58f2783485e83d369ab9acb41f7e83cc3314c
+Author: Mikolaj Rus <mikolaj.rus6@gmail.com>
+Date:   Sat Sep 2 14:17:12 2017 +0200
+
+    git-annex in mikolajrus
+
+ Dokumenty/Przepisy/SERNIK NA ZIMNO.odt | 1 +
+ 1 file changed, 1 insertion(+)
+
+commit 3542add42a050c5331cbfa67bffee38a05ce6bc6
+Author: Mikolaj Rus <mikolaj.rus6@gmail.com>
+Date:   Sat Sep 2 14:17:11 2017 +0200
+
+    git-annex in mikolajrus
+
+ Dokumenty/Przepisy/SERNIK NA ZIMNO.odt | 1 -
+ 1 file changed, 1 deletion(-)
+
+commit 956243cab7b865f9e633f51d4376154a3478210a
+Author: Elzbieta Rus <rus.elzbieta@gmail.com>
+Date:   Sat Sep 2 14:17:00 2017 +0200
+
+    git-annex in elzbietarus
+
+ Dokumenty/Przepisy/SERNIK NA ZIMNO.odt | 1 +
+ 1 file changed, 1 insertion(+)
+```
+"""]]

devblog
diff --git a/doc/devblog/day_475__assistant_exports.mdwn b/doc/devblog/day_475__assistant_exports.mdwn
new file mode 100644
index 000000000..789fa0728
--- /dev/null
+++ b/doc/devblog/day_475__assistant_exports.mdwn
@@ -0,0 +1,4 @@
+Got the git-annex assistant updating exports. The assistant is pretty
+complicated, so that took most of the day.
+
+Exports are done!

split out todo for webapp export config UI; close export todo!
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index c33078668..7c3bf0ee6 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -13,14 +13,5 @@ exported data.
 Would this be able to reuse the existing `storeKey` interface, or would
 there need to be a new interface in supported remotes?
 
---[[Joey]]
+> [[done]]! --[[Joey]]
 
-Work is in progress. Todo list:
-
-* Support configuring export in the assistant
-  (when eg setting up a S3 special remote).
-
-  This is similar to the little-used preferreddir= preferred content
-  setting and the "public" repository group. The assistant uses
-  those for IA, which could be replaced with setting up an export
-  tracking branch.
diff --git a/doc/todo/webapp_export_remote_configuration_interface.mdwn b/doc/todo/webapp_export_remote_configuration_interface.mdwn
new file mode 100644
index 000000000..d6870a68e
--- /dev/null
+++ b/doc/todo/webapp_export_remote_configuration_interface.mdwn
@@ -0,0 +1,11 @@
+Support configuring export remotes in the webapp
+
+Remember the little-used preferreddir= preferred content
+setting and the "public" repository group. The webapp uses
+those for IA, which could be replaced with setting up an export
+tracking branch.
+
+The UI for S3, WebDAV, and directory special remote setup could also have a way
+to make it an export, and configure the directory to export. This would
+complicate the UI, so needs thought. --[[Joey]]
+

add exporter thread to assistant
This is similar to the pusher thread, but a separate thread because git
pushes can be done in parallel with exports, and updating a big export
should not prevent other git pushes going out in the meantime.
The exportThread only runs at most every 30 seconds, since updating an
export is more expensive than pushing. This may need to be tuned.
Added a separate channel for export commits; the committer records a
commit in that channel.
Also, reconnectRemotes records a dummy commit, to make the exporter
thread wake up and make sure all exports are up-to-date. So,
connecting a drive with a directory special remote export will
immediately update it, and getting online will automatically
update S3 and WebDAV exports.
The transfer queue is not involved in exports. Instead, failed
exports are retried much like failed pushes.
This commit was sponsored by Ewen McNeill.
diff --git a/Assistant.hs b/Assistant.hs
index ea9967610..81aa036f6 100644
--- a/Assistant.hs
+++ b/Assistant.hs
@@ -18,6 +18,7 @@ import Assistant.Threads.DaemonStatus
 import Assistant.Threads.Watcher
 import Assistant.Threads.Committer
 import Assistant.Threads.Pusher
+import Assistant.Threads.Exporter
 import Assistant.Threads.Merger
 import Assistant.Threads.TransferWatcher
 import Assistant.Threads.Transferrer
@@ -152,6 +153,8 @@ startDaemon assistant foreground startdelay cannotrun listenhost startbrowser =
 #endif
 				, assist pushThread
 				, assist pushRetryThread
+				, assist exportThread
+				, assist exportRetryThread
 				, assist mergeThread
 				, assist transferWatcherThread
 				, assist transferPollerThread
diff --git a/Assistant/Commits.hs b/Assistant/Commits.hs
index c82f8f4c7..255648c94 100644
--- a/Assistant/Commits.hs
+++ b/Assistant/Commits.hs
@@ -21,3 +21,12 @@ getCommits = (atomically . getTList) <<~ commitChan
 {- Records a commit in the channel. -}
 recordCommit :: Assistant ()
 recordCommit = (atomically . flip consTList Commit) <<~ commitChan
+
+{- Gets all unhandled export commits.
+ - Blocks until at least one export commit is made. -}
+getExportCommits :: Assistant [Commit]
+getExportCommits = (atomically . getTList) <<~ exportCommitChan
+
+{- Records an export commit in the channel. -}
+recordExportCommit :: Assistant ()
+recordExportCommit = (atomically . flip consTList Commit) <<~ exportCommitChan
diff --git a/Assistant/Monad.hs b/Assistant/Monad.hs
index e52983915..403ee16a8 100644
--- a/Assistant/Monad.hs
+++ b/Assistant/Monad.hs
@@ -62,7 +62,9 @@ data AssistantData = AssistantData
 	, transferSlots :: TransferSlots
 	, transferrerPool :: TransferrerPool
 	, failedPushMap :: FailedPushMap
+	, failedExportMap :: FailedPushMap
 	, commitChan :: CommitChan
+	, exportCommitChan :: CommitChan
 	, changePool :: ChangePool
 	, repoProblemChan :: RepoProblemChan
 	, branchChangeHandle :: BranchChangeHandle
@@ -80,6 +82,8 @@ newAssistantData st dstatus = AssistantData
 	<*> newTransferSlots
 	<*> newTransferrerPool (checkNetworkConnections dstatus)
 	<*> newFailedPushMap
+	<*> newFailedPushMap
+	<*> newCommitChan
 	<*> newCommitChan
 	<*> newChangePool
 	<*> newRepoProblemChan
diff --git a/Assistant/Pushes.hs b/Assistant/Pushes.hs
index 7b4de450f..61891ea28 100644
--- a/Assistant/Pushes.hs
+++ b/Assistant/Pushes.hs
@@ -17,24 +17,21 @@ import qualified Data.Map as M
 {- Blocks until there are failed pushes.
  - Returns Remotes whose pushes failed a given time duration or more ago.
  - (This may be an empty list.) -}
-getFailedPushesBefore :: NominalDiffTime -> Assistant [Remote]
-getFailedPushesBefore duration = do
-	v <- getAssistant failedPushMap
-	liftIO $ do
-		m <- atomically $ readTMVar v
-		now <- getCurrentTime
-		return $ M.keys $ M.filter (not . toorecent now) m
+getFailedPushesBefore :: NominalDiffTime -> FailedPushMap -> Assistant [Remote]
+getFailedPushesBefore duration v = liftIO $ do
+	m <- atomically $ readTMVar v
+	now <- getCurrentTime
+	return $ M.keys $ M.filter (not . toorecent now) m
   where
 	toorecent now time = now `diffUTCTime` time < duration
 
 {- Modifies the map. -}
-changeFailedPushMap :: (PushMap -> PushMap) -> Assistant ()
-changeFailedPushMap a = do
-	v <- getAssistant failedPushMap
-	liftIO $ atomically $ store v . a . fromMaybe M.empty =<< tryTakeTMVar v
+changeFailedPushMap :: FailedPushMap -> (PushMap -> PushMap) -> Assistant ()
+changeFailedPushMap v f = liftIO $ atomically $
+	store . f . fromMaybe M.empty =<< tryTakeTMVar v
   where
 	{- tryTakeTMVar empties the TMVar; refill it only if
 	 - the modified map is not itself empty -}
-	store v m
+	store m
 		| m == M.empty = noop
 		| otherwise = putTMVar v $! m
diff --git a/Assistant/Sync.hs b/Assistant/Sync.hs
index aba90f64c..c6460e9ed 100644
--- a/Assistant/Sync.hs
+++ b/Assistant/Sync.hs
@@ -33,6 +33,7 @@ import Assistant.Threads.Watcher (watchThread, WatcherControl(..))
 import Assistant.TransferSlots
 import Assistant.TransferQueue
 import Assistant.RepoProblem
+import Assistant.Commits
 import Types.Transfer
 
 import Data.Time.Clock
@@ -48,10 +49,10 @@ import Control.Concurrent
  - it's sufficient to requeue failed transfers.
  -
  - Also handles signaling any connectRemoteNotifiers, after the syncing is
- - done.
+ - done, and records an export commit to make any exports be updated.
  -}
 reconnectRemotes :: [Remote] -> Assistant ()
-reconnectRemotes [] = noop
+reconnectRemotes [] = recordExportCommit
 reconnectRemotes rs = void $ do
 	rs' <- liftIO $ filterM (Remote.checkAvailable True) rs
 	unless (null rs') $ do
@@ -60,6 +61,7 @@ reconnectRemotes rs = void $ do
 			whenM (liftIO $ Remote.checkAvailable False r) $
 				repoHasProblem (Remote.uuid r) (syncRemote r)
 		mapM_ signal $ filter (`notElem` failedrs) rs'
+	recordExportCommit
   where
 	gitremotes = filter (notspecialremote . Remote.repo) rs
 	(_xmppremotes, nonxmppremotes) = partition Remote.isXMPPRemote rs
@@ -143,9 +145,11 @@ pushToRemotes' now remotes = do
 				then retry currbranch g u failed
 				else fallback branch g u failed
 
-	updatemap succeeded failed = changeFailedPushMap $ \m ->
-		M.union (makemap failed) $
-			M.difference m (makemap succeeded)
+	updatemap succeeded failed = do
+		v <- getAssistant failedPushMap 
+		changeFailedPushMap v $ \m ->
+			M.union (makemap failed) $
+				M.difference m (makemap succeeded)
 	makemap l = M.fromList $ zip l (repeat now)
 
 	retry currbranch g u rs = do
diff --git a/Assistant/Threads/Committer.hs b/Assistant/Threads/Committer.hs
index 3680349be..aa57d26a8 100644
--- a/Assistant/Threads/Committer.hs
+++ b/Assistant/Threads/Committer.hs
@@ -67,6 +67,7 @@ commitThread = namedThread "Committer" $ do
 				void $ alertWhile commitAlert $
 					liftAnnex $ commitStaged msg
 				recordCommit
+				recordExportCommit
 				let numchanges = length readychanges
 				mapM_ checkChangeContent readychanges
 				return numchanges
diff --git a/Assistant/Threads/Exporter.hs b/Assistant/Threads/Exporter.hs
new file mode 100644
index 000000000..747e919da
--- /dev/null
+++ b/Assistant/Threads/Exporter.hs
@@ -0,0 +1,78 @@
+{- git-annex assistant export updating thread
+ -
+ - Copyright 2017 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU GPL version 3 or higher.
+ -}
+
+module Assistant.Threads.Exporter where
+
+import Assistant.Common
+import Assistant.Commits
+import Assistant.Pushes
+import Assistant.DaemonStatus
+import Annex.Concurrent
+import Utility.ThreadScheduler
+import qualified Annex
+import qualified Remote
+import qualified Types.Remote as Remote
+import qualified Command.Sync
+
+import Control.Concurrent.Async
+import Data.Time.Clock
+import qualified Data.Map as M
+
+{- This thread retries exports that failed before. -}
+exportRetryThread :: NamedThread
+exportRetryThread = namedThread "ExportRetrier" $ runEvery (Seconds halfhour) <~> do
+	-- We already waited half an hour, now wait until there are failed
+	-- exports to retry.
+	toexport <- getFailedPushesBefore (fromIntegral halfhour) 
+		=<< getAssistant failedExportMap
+	unless (null toexport) $ do
+		debug ["retrying", show (length toexport), "failed exports"]
+		void $ exportToRemotes toexport
+  where

(Diff truncated)
devblog
diff --git a/doc/devblog/day_474__tracking_exports.mdwn b/doc/devblog/day_474__tracking_exports.mdwn
new file mode 100644
index 000000000..7b06027c4
--- /dev/null
+++ b/doc/devblog/day_474__tracking_exports.mdwn
@@ -0,0 +1,26 @@
+Built a way to make an export track changes to a branch.
+
+	git annex export --tracking master --to myexport
+
+That ties in nicely with `git annex sync`:
+
+	joey@darkstar:~/tmp/bench/a> echo hello > foo
+	joey@darkstar:~/tmp/bench/a> git annex add
+	add foo ok
+	joey@darkstar:~/tmp/bench/a> git annex sync --content
+	commit  
+	[master 8edbc6f] git-annex in joey@darkstar:~/tmp/bench/a
+	 1 file changed, 1 insertion(+)
+	 create mode 120000 foo
+	ok
+	export myexport foo 
+	ok                          
+	joey@darkstar:~/tmp/bench/a> git mv foo bar
+	joey@darkstar:~/tmp/bench/a> git annex sync --content
+	commit  
+	[master 3ab6e73] git-annex in joey@darkstar:~/tmp/bench/a
+	 1 file changed, 0 insertions(+), 0 deletions(-)
+	 rename foo => bar (100%)
+	ok
+	rename myexport foo -> .git-annex-tmp-content-SHA256E-s6--5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 ok
+	rename myexport .git-annex-tmp-content-SHA256E-s6--5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 -> bar ok

export --fast sets up but does not populate export
sync --content finishes
diff --git a/Command/Export.hs b/Command/Export.hs
index 0afcc3af1..f2bbcaf01 100644
--- a/Command/Export.hs
+++ b/Command/Export.hs
@@ -82,7 +82,8 @@ seek o = do
 		db <- openDb (uuid r)
 		ea <- exportActions r
 		changeExport r ea db new
-		void $ fillExport r ea db new
+		unlessM (Annex.getState Annex.fast) $
+			void $ fillExport r ea db new
 		closeDb db
 
 -- | Changes what's exported to the remote. Does not upload any new
diff --git a/doc/git-annex-export.mdwn b/doc/git-annex-export.mdwn
index 98fb40c57..4807a6434 100644
--- a/doc/git-annex-export.mdwn
+++ b/doc/git-annex-export.mdwn
@@ -50,6 +50,12 @@ And, git-annex will never trust an export to retain the content of a key.
   the branch. `git annex sync --content` and the git-annex assistant
   will update exports when it commits to the branch they are tracking.
 
+* `--fast`
+
+  This sets up an export of a tree, but avoids any expensive file uploads to
+  the remote. You can later run `git annex sync --content` to upload
+  the files to the export.
+
 # EXAMPLE
 
 	git annex initremote myexport type=directory directory=/mnt/myexport \
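
Putting the new option together with tracking, resuming an export becomes a two-step process (a usage sketch, assuming the two options combine; `myexport` as in the example above):

    git annex export --fast --tracking master --to myexport   # set up the export, skipping uploads
    git annex sync --content                                  # later: upload the files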

git annex sync --content to exports
Assistant still todo.
This commit was sponsored by Boyd Stephen Smith Jr. on Patreon
diff --git a/Annex/Export.hs b/Annex/Export.hs
index 0afe3cdcc..6565c257b 100644
--- a/Annex/Export.hs
+++ b/Annex/Export.hs
@@ -10,8 +10,11 @@ module Annex.Export where
 import Annex
 import Annex.CatFile
 import Types.Key
+import Types.Remote
 import qualified Git
 
+import qualified Data.Map as M
+
 -- An export includes both annexed files and files stored in git.
 -- For the latter, a SHA1 key is synthesized.
 data ExportKey = AnnexKey Key | GitKey Key
@@ -33,3 +36,8 @@ exportKey sha = mk <$> catKey sha
 		, keyChunkSize = Nothing
 		, keyChunkNum = Nothing
 		}
+
+exportTree :: RemoteConfig -> Bool
+exportTree c = case M.lookup "exporttree" c of
+	Just "yes" -> True
+	_ -> False
diff --git a/Command/Export.hs b/Command/Export.hs
index 81013ad47..0afcc3af1 100644
--- a/Command/Export.hs
+++ b/Command/Export.hs
@@ -5,7 +5,7 @@
  - Licensed under the GNU GPL version 3 or higher.
  -}
 
-{-# LANGUAGE TupleSections #-}
+{-# LANGUAGE TupleSections, BangPatterns #-}
 
 module Command.Export where
 
@@ -33,6 +33,7 @@ import Utility.Tmp
 
 import qualified Data.ByteString.Lazy as L
 import qualified Data.Map as M
+import Control.Concurrent
 
 cmd :: Command
 cmd = command "export" SectionCommon
@@ -70,23 +71,27 @@ seek o = do
 	r <- getParsed (exportRemote o)
 	unlessM (isExportSupported r) $
 		giveup "That remote does not support exports."
-	withExclusiveLock (gitAnnexExportLock (uuid r)) (seek' o r)
-
-seek' :: ExportOptions -> Remote -> CommandSeek
-seek' o r = do
+	when (exportTracking o) $
+		setConfig (remoteConfig r "export-tracking")
+			(fromRef $ exportTreeish o)
 	new <- fromMaybe (giveup "unknown tree") <$>
 		-- Dereference the tree pointed to by the branch, commit,
 		-- or tag.
 		inRepo (Git.Ref.tree (exportTreeish o))
+	withExclusiveLock (gitAnnexExportLock (uuid r)) $ do
+		db <- openDb (uuid r)
+		ea <- exportActions r
+		changeExport r ea db new
+		void $ fillExport r ea db new
+		closeDb db
+
+-- | Changes what's exported to the remote. Does not upload any new
+-- files, but does delete and rename files already exported to the remote.
+changeExport :: Remote -> ExportActions Annex -> ExportHandle -> Git.Ref -> CommandSeek
+changeExport r ea db new = do
 	old <- getExport (uuid r)
-	db <- openDb (uuid r)
-	ea <- exportActions r
 	recordExportBeginning (uuid r) new
 	
-	when (exportTracking o) $
-		setConfig (remoteConfig r "export-tracking")
-			(fromRef $ exportTreeish o)
-
 	-- Clean up after incomplete export of a tree, in which
 	-- the next block of code below may have renamed some files to
 	-- temp files. Diff from the incomplete tree to the new tree,
@@ -150,13 +155,6 @@ seek' o r = do
 			{ oldTreeish = map exportedTreeish old
 			, newTreeish = new
 			}
-
-	-- Export everything that is not yet exported.
-	(l, cleanup') <- inRepo $ Git.LsTree.lsTree new
-	seekActions $ pure $ map (startExport r ea db) l
-	void $ liftIO cleanup'
-
-	closeDb db
   where
 	mapdiff a oldtreesha newtreesha = do
 		(diff, cleanup) <- inRepo $
@@ -187,11 +185,22 @@ mkDiffMap old new db = do
 		| sha == nullSha = return Nothing
 		| otherwise = Just <$> exportKey sha
 
-startExport :: Remote -> ExportActions Annex -> ExportHandle -> Git.LsTree.TreeItem -> CommandStart
-startExport r ea db ti = do
+-- | Upload all exported files that are not yet in the remote.
+-- Returns True when files were uploaded.
+fillExport :: Remote -> ExportActions Annex -> ExportHandle -> Git.Ref -> Annex Bool
+fillExport r ea db new = do
+	(l, cleanup) <- inRepo $ Git.LsTree.lsTree new
+	cvar <- liftIO $ newMVar False
+	seekActions $ pure $ map (startExport r ea db cvar) l
+	void $ liftIO $ cleanup
+	liftIO $ takeMVar cvar
+
+startExport :: Remote -> ExportActions Annex -> ExportHandle -> MVar Bool -> Git.LsTree.TreeItem -> CommandStart
+startExport r ea db cvar ti = do
 	ek <- exportKey (Git.LsTree.sha ti)
 	stopUnless (liftIO $ notElem loc <$> getExportedLocation db (asKey ek)) $ do
-		showStart "export" f
+		showStart ("export " ++ name r) f
+		liftIO $ modifyMVar_ cvar (pure . const True)
 		next $ performExport r ea db ek (Git.LsTree.sha ti) loc
   where
 	loc = mkExportLocation f
@@ -234,7 +243,7 @@ startUnexport r ea db f shas = do
 	if null eks
 		then stop
 		else do
-			showStart "unexport" f'
+			showStart ("unexport " ++ name r) f'
 			next $ performUnexport r ea db eks loc
   where
 	loc = mkExportLocation f'
@@ -242,7 +251,7 @@ startUnexport r ea db f shas = do
 
 startUnexport' :: Remote -> ExportActions Annex -> ExportHandle -> TopFilePath -> ExportKey -> CommandStart
 startUnexport' r ea db f ek = do
-	showStart "unexport" f'
+	showStart ("unexport " ++ name r) f'
 	next $ performUnexport r ea db [ek] loc
   where
 	loc = mkExportLocation f'
@@ -276,7 +285,7 @@ startRecoverIncomplete r ea db sha oldf
 	| otherwise = do
 		ek <- exportKey sha
 		let loc = exportTempName ek
-		showStart "unexport" (fromExportLocation loc)
+		showStart ("unexport " ++ name r) (fromExportLocation loc)
 		liftIO $ removeExportedLocation db (asKey ek) oldloc
 		next $ performUnexport r ea db [ek] loc
   where
@@ -285,7 +294,7 @@ startRecoverIncomplete r ea db sha oldf
 
 startMoveToTempName :: Remote -> ExportActions Annex -> ExportHandle -> TopFilePath -> ExportKey -> CommandStart
 startMoveToTempName r ea db f ek = do
-	showStart "rename" (f' ++ " -> " ++ fromExportLocation tmploc)
+	showStart ("rename " ++ name r) (f' ++ " -> " ++ fromExportLocation tmploc)
 	next $ performRename r ea db ek loc tmploc
   where
 	loc = mkExportLocation f'
@@ -296,7 +305,7 @@ startMoveFromTempName :: Remote -> ExportActions Annex -> ExportHandle -> Export
 startMoveFromTempName r ea db ek f = do
 	let tmploc = exportTempName ek
 	stopUnless (liftIO $ elem tmploc <$> getExportedLocation db (asKey ek)) $ do
-		showStart "rename" (fromExportLocation tmploc ++ " -> " ++ f')
+		showStart ("rename " ++ name r) (fromExportLocation tmploc ++ " -> " ++ f')
 		next $ performRename r ea db ek tmploc loc
   where
 	loc = mkExportLocation f'
diff --git a/Command/Sync.hs b/Command/Sync.hs
index d460679ba..3a838c8a9 100644
--- a/Command/Sync.hs
+++ b/Command/Sync.hs
@@ -46,14 +46,19 @@ import Annex.Wanted
 import Annex.Content
 import Command.Get (getKey')
 import qualified Command.Move
+import qualified Command.Export
 import Annex.Drop
 import Annex.UUID
 import Logs.UUID
+import Logs.Export
 import Annex.AutoMerge
 import Annex.AdjustedBranch
 import Annex.Ssh
 import Annex.BloomFilter
 import Annex.UpdateInstead
+import Annex.Export
+import Annex.LockFile
+import qualified Database.Export as Export
 import Utility.Bloom
 import Utility.OptParse
 
@@ -153,7 +158,8 @@ seek o = allowConcurrentOutput $ do
 
 	remotes <- syncRemotes (syncWith o)
 	let gitremotes = filter Remote.gitSyncableRemote remotes
-	dataremotes <- filter (\r -> Remote.uuid r /= NoUUID)
+	(exportremotes, dataremotes) <- partition (exportTree . Remote.config)
+		. filter (\r -> Remote.uuid r /= NoUUID)

(Diff truncated)
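
End to end, with this change a special remote configured with `exporttree=yes` (the setting the new `exportTree` predicate looks up) gets updated by sync. A sketch with placeholder names:

    git annex initremote myexport type=directory directory=/mnt/myexport \
        exporttree=yes encryption=none
    git annex export --tracking master --to myexport
    git annex sync --content    # re-exports as master changes
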
configuration and docs for tracking exports
Not yet handled by sync or assistant.
This commit was sponsored by Nick Daly on Patreon.
diff --git a/Command/Export.hs b/Command/Export.hs
index 02c64eadf..81013ad47 100644
--- a/Command/Export.hs
+++ b/Command/Export.hs
@@ -28,6 +28,7 @@ import Logs.Location
 import Logs.Export
 import Database.Export
 import Messages.Progress
+import Config
 import Utility.Tmp
 
 import qualified Data.ByteString.Lazy as L
@@ -41,16 +42,22 @@ cmd = command "export" SectionCommon
 data ExportOptions = ExportOptions
 	{ exportTreeish :: Git.Ref
 	, exportRemote :: DeferredParse Remote
+	, exportTracking :: Bool
 	}
 
 optParser :: CmdParamsDesc -> Parser ExportOptions
 optParser _ = ExportOptions
 	<$> (Git.Ref <$> parsetreeish)
 	<*> (parseRemoteOption <$> parseToOption)
+	<*> parsetracking
   where
 	parsetreeish = argument str
 		( metavar paramTreeish
 		)
+	parsetracking = switch
+		( long "tracking"
+		<> help ("track changes to the " ++ paramTreeish)
+		)
 
 -- To handle renames which swap files, the exported file is first renamed
 -- to a stable temporary name based on the key.
@@ -75,6 +82,10 @@ seek' o r = do
 	db <- openDb (uuid r)
 	ea <- exportActions r
 	recordExportBeginning (uuid r) new
+	
+	when (exportTracking o) $
+		setConfig (remoteConfig r "export-tracking")
+			(fromRef $ exportTreeish o)
 
 	-- Clean up after incomplete export of a tree, in which
 	-- the next block of code below may have renamed some files to
diff --git a/Config.hs b/Config.hs
index 783f07238..66808571a 100644
--- a/Config.hs
+++ b/Config.hs
@@ -18,6 +18,7 @@ import Config.Cost
 import Config.DynamicConfig
 import Types.Availability
 import Git.Types
+import qualified Types.Remote as Remote
 
 type UnqualifiedConfigKey = String
 data ConfigKey = ConfigKey String
@@ -55,6 +56,9 @@ instance RemoteNameable Git.Repo where
 instance RemoteNameable RemoteName where
 	 getRemoteName = id
 
+instance RemoteNameable Remote where
+	getRemoteName = Remote.name
+
 {- A per-remote config setting in git config. -}
 remoteConfig :: RemoteNameable r => r -> UnqualifiedConfigKey -> ConfigKey
 remoteConfig r key = ConfigKey $
diff --git a/Types/GitConfig.hs b/Types/GitConfig.hs
index d523c745a..05b5623a6 100644
--- a/Types/GitConfig.hs
+++ b/Types/GitConfig.hs
@@ -199,6 +199,7 @@ data RemoteGitConfig = RemoteGitConfig
 	, remoteAnnexPush :: Bool
 	, remoteAnnexReadOnly :: Bool
 	, remoteAnnexVerify :: Bool
+	, remoteAnnexExportTracking :: Maybe Git.Ref
 	, remoteAnnexTrustLevel :: Maybe String
 	, remoteAnnexStartCommand :: Maybe String
 	, remoteAnnexStopCommand :: Maybe String
@@ -247,6 +248,8 @@ extractRemoteGitConfig r remotename = do
 		, remoteAnnexPush = getbool "push" True
 		, remoteAnnexReadOnly = getbool "readonly" False
 		, remoteAnnexVerify = getbool "verify" True
+		, remoteAnnexExportTracking = Git.Ref
+			<$> notempty (getmaybe "export-tracking")
 		, remoteAnnexTrustLevel = notempty $ getmaybe "trustlevel"
 		, remoteAnnexStartCommand = notempty $ getmaybe "start-command"
 		, remoteAnnexStopCommand = notempty $ getmaybe "stop-command"
diff --git a/doc/git-annex-export.mdwn b/doc/git-annex-export.mdwn
index 8958e7233..98fb40c57 100644
--- a/doc/git-annex-export.mdwn
+++ b/doc/git-annex-export.mdwn
@@ -6,6 +6,8 @@ git-annex export - export content to a remote
 
 git annex export `treeish --to remote`
 
+git annex export `--tracking treeish --to remote`
+
 # DESCRIPTION
 
 Use this command to export a tree of files from a git-annex repository.
@@ -36,6 +38,18 @@ verification of content downloaded from an export. Some types of keys,
 that are not based on checksums, cannot be downloaded from an export.
 And, git-annex will never trust an export to retain the content of a key.
 
+# OPTIONS
+
+* `--to=remote`
+
+  Specify the special remote to export to.
+
+* `--tracking`
+
+  This makes the export track changes that are committed to
+  the branch. `git annex sync --content` and the git-annex assistant
+  will update exports when it commits to the branch they are tracking.
+
 # EXAMPLE
 
 	git annex initremote myexport type=directory directory=/mnt/myexport \
@@ -56,6 +70,10 @@ That updates /mnt/myexport to reflect the renamed file.
 That updates /mnt/myexport, to contain only the files in the "subdir"
 directory of the master branch.
 
+	git annex export --tracking master --to myexport
+
+That makes myexport track changes that are committed to the master branch.
+
 # EXPORT CONFLICTS
 
 If two different git-annex repositories are both exporting different trees
@@ -81,6 +99,8 @@ export`, it will detect the export conflict, and resolve it.
 
 [[git-annex-initremote]](1)
 
+[[git-annex-sync]](1)
+
 # AUTHOR
 
 Joey Hess <id@joeyh.name>
diff --git a/doc/git-annex-sync.mdwn b/doc/git-annex-sync.mdwn
index 2aa009cf8..7b03a2ed1 100644
--- a/doc/git-annex-sync.mdwn
+++ b/doc/git-annex-sync.mdwn
@@ -82,6 +82,10 @@ by running "git annex sync" on the remote.
   This behavior can be overridden by configuring the preferred content
   of a repository. See [[git-annex-preferred-content]](1).
 
+  When a special remote is configured as an export and is tracking a branch,
+  the export will be updated to the current content of the branch.
+  See [[git-annex-export]](1).
+
 * `--content-of=path` `-C path`
 
   While --content operates on all annexed files in the work tree,
diff --git a/doc/git-annex.mdwn b/doc/git-annex.mdwn
index 544baafa1..1e8155988 100644
--- a/doc/git-annex.mdwn
+++ b/doc/git-annex.mdwn
@@ -1210,6 +1210,14 @@ Here are all the supported configuration settings.
   from remotes. If you trust a remote and don't want the overhead
   of these checksums, you can set this to `false`.
 
+* `remote.<name>.annex-export-tracking`
+
+  When set to a branch name or other treeish, this makes what's exported
+  to the special remote track changes to the branch. See
+  [[git-annex-export]](1). `git-annex sync --content` and the 
+  git-annex assistant update exports when changes have been
+  committed to the tracking branch.
+
 * `remote.<name>.annexUrl`
 
   Can be used to specify a different url than the regular `remote.<name>.url`
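
Given the new documentation above, these two should amount to the same configuration (assuming a remote named `myexport`):

    git annex export --tracking master --to myexport
    # or, setting the underlying git config directly:
    git config remote.myexport.annex-export-tracking master

The difference is that the first form also performs the initial export immediately.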

break out separate todo for later
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index 4c707a779..876c54c77 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -26,15 +26,3 @@ Work is in progress. Todo list:
   setting and the "public" repository group. The assistant uses
   those for IA, which could be replaced with setting up an export
   tracking branch.
-
-Low priority:
-
-* When there are two pairs of duplicate files, and the filenames are
-  swapped around, the current rename handling renames both dups to a single
-  temp file, and so the other file in the pair gets re-uploaded
-  unncessarily. This could be improved.
-
-  Perhaps: Find pairs of renames that swap content between two files.
-  Run each pair in turn. Then run the current rename code. Although this
-  still probably misses cases, where eg, content cycles amoung 3 files, and
-  the same content amoung 3 other files. Is there a general algorythm?
diff --git a/doc/todo/export_paired_rename_innefficenctcy.mdwn b/doc/todo/export_paired_rename_innefficenctcy.mdwn
new file mode 100644
index 000000000..9284b74d9
--- /dev/null
+++ b/doc/todo/export_paired_rename_innefficenctcy.mdwn
@@ -0,0 +1,10 @@
+`git annex export` can efficiently handle renames, including renames that swap
+content between files. However, when there are two pairs of duplicate files,
+and the filenames are swapped around, the current rename handling renames both
+dups to a single temp file, and so the other file in the pair gets re-uploaded
+unnecessarily. This could be improved.
+
+Perhaps: Find pairs of renames that swap content between two files.
+Run each pair in turn. Then run the current rename code. Although this
+still probably misses cases, where eg, content cycles among 3 files, and
+the same content among 3 other files. Is there a general algorithm?

fix bug that prevented db being written to disk in SingleWriter mode
The bug occurred when closeDb was not called, and garbage collection of
the DbHandle didn't give the workerThread time to shut down. Fixed by
exiting the runSqlite action when a commit is made.
(MultiWriter mode already forked off a runSqlite action, so avoided the
problem.)
This commit was sponsored by Brock Spratlen on Patreon.
diff --git a/Database/Handle.hs b/Database/Handle.hs
index f5a0a5dda..5670acb99 100644
--- a/Database/Handle.hs
+++ b/Database/Handle.hs
@@ -142,11 +142,15 @@ data Job
 	| CloseJob
 
 workerThread :: T.Text -> TableName -> MVar Job -> IO ()
-workerThread db tablename jobs =
-	catchNonAsync (runSqliteRobustly tablename db loop) showerr
+workerThread db tablename jobs = go
   where
-  	showerr e = hPutStrLn stderr $
-		"sqlite worker thread crashed: " ++ show e
+	go = do
+		v <- tryNonAsync (runSqliteRobustly tablename db loop)
+		case v of
+			Left e -> hPutStrLn stderr $
+				"sqlite worker thread crashed: " ++ show e
+			Right True -> go
+			Right False -> return ()
 	
 	getjob :: IO (Either BlockedIndefinitelyOnMVar Job)
 	getjob = try $ takeMVar jobs
@@ -157,15 +161,21 @@ workerThread db tablename jobs =
 			-- Exception is thrown when the MVar is garbage
 			-- collected, which means the whole DbHandle
 			-- is not used any longer. Shutdown cleanly.
-			Left BlockedIndefinitelyOnMVar -> return ()
-			Right CloseJob -> return ()
+			Left BlockedIndefinitelyOnMVar -> return False
+			Right CloseJob -> return False
 			Right (QueryJob a) -> a >> loop
-			Right (ChangeJob a) -> a >> loop
+			Right (ChangeJob a) -> do
+				a
+				-- Exit this sqlite transaction so the
+				-- database gets updated on disk.
+				return True
 			-- Change is run in a separate database connection
 			-- since sqlite only supports a single writer at a
 			-- time, and it may crash the database connection
 			-- that the write is made to.
-			Right (RobustChangeJob a) -> liftIO (a (runSqliteRobustly tablename db)) >> loop
+			Right (RobustChangeJob a) -> do
+				liftIO (a (runSqliteRobustly tablename db))
+				loop
 	
 -- like runSqlite, but calls settle on the raw sql Connection.
 runSqliteRobustly :: TableName -> T.Text -> (SqlPersistM a) -> IO a
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index 6c6789a29..4c707a779 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -17,9 +17,6 @@ there need to be a new interface in supported remotes?
 
 Work is in progress. Todo list:
 
-* bug: export db update does not reash disk after Remote.Helper.Export calls
-  updateExportTree. 
-
 * tracking exports
 
 * Support configuring export in the assistant

devblog
diff --git a/doc/devblog/day_473__distributed_use_of_exports.mdwn b/doc/devblog/day_473__distributed_use_of_exports.mdwn
new file mode 100644
index 000000000..311329246
--- /dev/null
+++ b/doc/devblog/day_473__distributed_use_of_exports.mdwn
@@ -0,0 +1,17 @@
+The tricky part of the `git annex export` feature has definitely been
+making it work in a distributed situation. The last details of that seem to
+have been worked out now.
+
+I had to remove support for dropping individual files from export remotes.
+The [[design|design/exporting_trees_to_special_remotes]] has a scenario
+where that makes distributed use of exports inconsistent.
+
+But, what is working now is `git annex export` being run in one repository,
+and then another repository, after syncing, can get files from the export.
+
+Most of export is done now. The only thing I'm thinking about adding is
+a way to make an export track a branch, so `git annex sync` can update
+the export.
+
+Today's work was sponsored by Jake Vosloo on
+[Patreon](https://patreon.com/joeyh/).
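
In concrete terms, the distributed flow that works now is something like this (a sketch with placeholder names):

    # in repository A
    git annex export master --to myexport
    git annex sync
    # in repository B, after git annex sync
    git annex get somefile --from myexport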

merge changes made on other repos into ExportTree
Now when one repository has exported a tree, another repository can get
files from the export, after syncing.
There's a bug: while the database update works, somehow the database on
disk does not get updated, and so the database update is run again the next
time, etc. I wasn't able to figure out why yet.
This commit was sponsored by Ole-Morten Duesund on Patreon.
diff --git a/Annex/Locations.hs b/Annex/Locations.hs
index 947cceef9..f86dfc6f4 100644
--- a/Annex/Locations.hs
+++ b/Annex/Locations.hs
@@ -303,7 +303,7 @@ gitAnnexExportDbDir u r = gitAnnexExportDir u r </> "db"
 
 {- Lock file for export state for a special remote. -}
 gitAnnexExportLock :: UUID -> Git.Repo -> FilePath
-gitAnnexExportLock u r = gitAnnexExportDir u r ++ ".lck"
+gitAnnexExportLock u r = gitAnnexExportDbDir u r ++ ".lck"
 
 {- .git/annex/schedulestate is used to store information about when
  - scheduled jobs were last run. -}
diff --git a/Command/Export.hs b/Command/Export.hs
index 811e2351a..02c64eadf 100644
--- a/Command/Export.hs
+++ b/Command/Export.hs
@@ -27,7 +27,6 @@ import Annex.LockFile
 import Logs.Location
 import Logs.Export
 import Database.Export
-import Remote.Helper.Export
 import Messages.Progress
 import Utility.Tmp
 
@@ -129,7 +128,7 @@ seek' o r = do
 					(\diff -> startUnexport r ea db (Git.DiffTree.file diff) (unexportboth diff))
 					oldtreesha new
 			updateExportTree db emptyTree new
-	liftIO $ recordDataSource db new
+	liftIO $ recordExportTreeCurrent db new
 
 	-- Waiting until now to record the export guarantees that,
 	-- if this export is interrupted, there are no files left over
@@ -312,3 +311,28 @@ cleanupRename ea db ek src dest = do
 	if exportDirectories src /= exportDirectories dest
 		then removeEmptyDirectories ea db src [asKey ek]
 		else return True
+
+-- | Remove empty directories from the export. Call after removing an
+-- exported file, and after calling removeExportLocation and flushing the
+-- database.
+removeEmptyDirectories :: ExportActions Annex -> ExportHandle -> ExportLocation -> [Key] -> Annex Bool
+removeEmptyDirectories ea db loc ks
+	| null (exportDirectories loc) = return True
+	| otherwise = case removeExportDirectory ea of
+		Nothing -> return True
+		Just removeexportdirectory -> do
+			ok <- allM (go removeexportdirectory) 
+				(reverse (exportDirectories loc))
+			unless ok $ liftIO $ do
+				-- Add location back to export database, 
+				-- so this is tried again next time.
+				forM_ ks $ \k ->
+					addExportedLocation db k loc
+				flushDbQueue db
+			return ok
+  where
+	go removeexportdirectory d = 
+		ifM (liftIO $ isExportDirectoryEmpty db d)
+			( removeexportdirectory d
+			, return True
+			)
diff --git a/Database/Export.hs b/Database/Export.hs
index ad106f84e..322ab48fd 100644
--- a/Database/Export.hs
+++ b/Database/Export.hs
@@ -15,21 +15,21 @@ module Database.Export (
 	openDb,
 	closeDb,
 	flushDbQueue,
-	recordDataSource,
-	getDataSource,
 	addExportedLocation,
 	removeExportedLocation,
 	getExportedLocation,
 	isExportDirectoryEmpty,
+	getExportTreeCurrent,
+	recordExportTreeCurrent,
 	getExportTree,
 	addExportTree,
 	removeExportTree,
 	updateExportTree,
 	updateExportTree',
 	ExportedId,
-	ExportTreeId,
 	ExportedDirectoryId,
-	DataSourceId,
+	ExportTreeId,
+	ExportTreeCurrentId,
 ) where
 
 import Database.Types
@@ -50,29 +50,33 @@ import Database.Esqueleto hiding (Key)
 newtype ExportHandle = ExportHandle H.DbQueue
 
 share [mkPersist sqlSettings, mkMigrate "migrateExport"] [persistLowerCase|
--- Files that have been exported to the remote.
+-- Files that have been exported to the remote and are present on it.
 Exported
   key IKey
   file SFilePath
   ExportedIndex key file
--- The tree that has been exported to the remote. 
--- Not all of these files are necessarily present on the remote yet.
-ExportTree
-  key IKey
-  file SFilePath
-  ExportTreeIndex key file
 -- Directories that exist on the remote, and the files that are in them.
 ExportedDirectory
   subdir SFilePath
   file SFilePath
   ExportedDirectoryIndex subdir file
--- Record of what tree the current database content comes from.
-DataSource
+-- The content of the tree that has been exported to the remote.
+-- Not all of these files are necessarily present on the remote yet.
+ExportTree
+  key IKey
+  file SFilePath
+  ExportTreeIndex key file
+-- The tree stored in ExportTree
+ExportTreeCurrent
   tree SRef
   UniqueTree tree
 |]
 
-{- Opens the database, creating it if it doesn't exist yet. -}
+{- Opens the database, creating it if it doesn't exist yet.
+ -
+ - Only a single process should write to the export at a time, so guard
+ - any writes with the gitAnnexExportLock.
+ -}
 openDb :: UUID -> Annex ExportHandle
 openDb u = do
 	dbdir <- fromRepo (gitAnnexExportDbDir u)
@@ -97,19 +101,19 @@ queueDb (ExportHandle h) = H.queueDb h checkcommit
 flushDbQueue :: ExportHandle -> IO ()
 flushDbQueue (ExportHandle h) = H.flushDbQueue h
 
-recordDataSource :: ExportHandle -> Sha -> IO ()
-recordDataSource h s = queueDb h $ do
+recordExportTreeCurrent :: ExportHandle -> Sha -> IO ()
+recordExportTreeCurrent h s = queueDb h $ do
 	delete $ from $ \r -> do
-		where_ (r ^. DataSourceTree ==. r ^. DataSourceTree)
-	void $ insertUnique $ DataSource (toSRef s)
+		where_ (r ^. ExportTreeCurrentTree ==. r ^. ExportTreeCurrentTree)
+	void $ insertUnique $ ExportTreeCurrent $ toSRef s
 
-getDataSource :: ExportHandle -> IO (Maybe Sha)
-getDataSource (ExportHandle h) = H.queryDbQueue h $ do
+getExportTreeCurrent :: ExportHandle -> IO (Maybe Sha)
+getExportTreeCurrent (ExportHandle h) = H.queryDbQueue h $ do
 	l <- select $ from $ \r -> do
-		where_ (r ^. DataSourceTree ==. r ^. DataSourceTree)
-		return (r ^. DataSourceTree)
+		where_ (r ^. ExportTreeCurrentTree ==. r ^. ExportTreeCurrentTree)
+		return (r ^. ExportTreeCurrentTree)
 	case l of
-		(s:[]) -> return (Just (fromSRef (unValue s)))
+		(s:[]) -> return $ Just $ fromSRef $ unValue s
 		_ -> return Nothing
 
 addExportedLocation :: ExportHandle -> Key -> ExportLocation -> IO ()
@@ -167,7 +171,7 @@ getExportTree (ExportHandle h) k = H.queryDbQueue h $ do
 
 addExportTree :: ExportHandle -> Key -> ExportLocation -> IO ()
 addExportTree h k loc = queueDb h $
-	void $ insertUnique $ Exported ik ef
+	void $ insertUnique $ ExportTree ik ef
   where
 	ik = toIKey k
 	ef = toSFilePath (fromExportLocation loc)
diff --git a/Remote/Helper/Export.hs b/Remote/Helper/Export.hs
index 9b31baca3..d62c5a7e8 100644
--- a/Remote/Helper/Export.hs
+++ b/Remote/Helper/Export.hs
@@ -12,13 +12,16 @@ module Remote.Helper.Export where
 import Annex.Common
 import Types.Remote
 import Types.Backend
-import Types.Export
 import Types.Key
 import Backend
 import Remote.Helper.Encryptable (isEncrypted)
 import Database.Export
+import Logs.Export
+import Annex.LockFile
+import Git.Sha
 
 import qualified Data.Map as M
+import Control.Concurrent.STM
 
 -- | Use for remotes that do not support exports.
 class HasExportUnsupported a where
@@ -89,6 +92,33 @@ adjustExportable r = case M.lookup "exporttree" (config r) of
 		}
 	isexport = do

(Diff truncated)
update
diff --git a/doc/design/exporting_trees_to_special_remotes.mdwn b/doc/design/exporting_trees_to_special_remotes.mdwn
index c552fbc39..6cf738360 100644
--- a/doc/design/exporting_trees_to_special_remotes.mdwn
+++ b/doc/design/exporting_trees_to_special_remotes.mdwn
@@ -315,3 +315,38 @@ G is missing so gets uploaded.
 So, this works, as long as "delete all files that differ" means it
 deletes both old and new files. And as long as conflict resolution does not
 itself stash away files in the temp name for later renaming.
+
+## dropping from exports and copying to exports
+
+It might be nice for `git annex drop $file --from myexport` and
+`git annex copy $file --to myexport` to work. However, there are some
+very difficult issues in supporting those, and they don't really
+seem necessary to use exports. Re-running `git annex export`
+to resume an export handles all the cases that copying to an export
+would need to. And, deleting a file from a tree and exporting the new tree
+is the thing to do if a file should no longer be exported.
+
+Here's an example of the kind of problem that supporting these would need to deal with:
+
+1. In repo A, file F with content K is exported
+2. In repo B, file F with content K' is exported, since F changed in the
+   exported treeish.
+3. In repo A, file F is removed from the export, which results in
+   K being removed from the location log for the export.
+
+But... did #3 happen before or after #2?
+If #3 occurred before #2, then K' is present in the export
+and the location log is correct.
+If #3 occurred after #2, and A and B's git-annex branches
+were not synced, then K' was accidentally removed
+from the export, and the location log is now wrong.
+
+Is there any reason to allow removeKey from an export?
+Why would someone want to drop a single file from an export?
+Why not remove the file from a tree, and export the new tree?
+
+(Alternatively, removeKey could itself update the exported tree,
+removing the file from it, and update the export log accordingly.
+This would avoid the problem. But that's complication and it would be
+rather slow and bloat the git repo with a lot of intermediate trees
+when dropping multiple keys.)

add ExportTree table to export db
New table needed to look up what filenames are used in the currently
exported tree, for reasons explained in export.mdwn.
Also, added smart constructors for ExportLocation and ExportDirectory to
make sure they contain filepaths with the right direction slashes.
And some code refactoring.
This commit was sponsored by Francois Marier on Patreon.
diff --git a/Annex/Export.hs b/Annex/Export.hs
new file mode 100644
index 000000000..0afe3cdcc
--- /dev/null
+++ b/Annex/Export.hs
@@ -0,0 +1,35 @@
+{- git-annex exports
+ -
+ - Copyright 2017 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU GPL version 3 or higher.
+ -}
+
+module Annex.Export where
+
+import Annex
+import Annex.CatFile
+import Types.Key
+import qualified Git
+
+-- An export includes both annexed files and files stored in git.
+-- For the latter, a SHA1 key is synthesized.
+data ExportKey = AnnexKey Key | GitKey Key
+	deriving (Show, Eq, Ord)
+
+asKey :: ExportKey -> Key
+asKey (AnnexKey k) = k
+asKey (GitKey k) = k
+
+exportKey :: Git.Sha -> Annex ExportKey
+exportKey sha = mk <$> catKey sha
+  where
+	mk (Just k) = AnnexKey k
+	mk Nothing = GitKey $ Key
+		{ keyName = show sha
+		, keyVariety = SHA1Key (HasExt False)
+		, keySize = Nothing
+		, keyMtime = Nothing
+		, keyChunkSize = Nothing
+		, keyChunkNum = Nothing
+		}
diff --git a/Command/Export.hs b/Command/Export.hs
index a9f474a19..f898c9e0d 100644
--- a/Command/Export.hs
+++ b/Command/Export.hs
@@ -21,6 +21,7 @@ import Git.Sha
 import Types.Key
 import Types.Remote
 import Types.Export
+import Annex.Export
 import Annex.Content
 import Annex.CatFile
 import Annex.LockFile
@@ -53,28 +54,6 @@ optParser _ = ExportOptions
 		( metavar paramTreeish
 		)
 
--- An export includes both annexed files and files stored in git.
--- For the latter, a SHA1 key is synthesized.
-data ExportKey = AnnexKey Key | GitKey Key
-	deriving (Show, Eq, Ord)
-
-asKey :: ExportKey -> Key
-asKey (AnnexKey k) = k
-asKey (GitKey k) = k
-
-exportKey :: Git.Sha -> Annex ExportKey
-exportKey sha = mk <$> catKey sha
-  where
-	mk (Just k) = AnnexKey k
-	mk Nothing = GitKey $ Key
-		{ keyName = show sha
-		, keyVariety = SHA1Key (HasExt False)
-		, keySize = Nothing
-		, keyMtime = Nothing
-		, keyChunkSize = Nothing
-		, keyChunkNum = Nothing
-		}
-
 -- To handle renames which swap files, the exported file is first renamed
 -- to a stable temporary name based on the key.
 exportTempName :: ExportKey -> ExportLocation
@@ -153,7 +132,8 @@ seek' o r = do
 	-- if this export is interrupted, there are no files left over
 	-- from a previous export, that are not part of this export.
 	c <- Annex.getState Annex.errcounter
-	when (c == 0) $
+	when (c == 0) $ do
+		liftIO $ recordDataSource db new
 		recordExport (uuid r) $ ExportChange
 			{ oldTreeish = map exportedTreeish old
 			, newTreeish = new
@@ -184,24 +164,24 @@ mkDiffMap old new = do
   where
 	combinedm (srca, dsta) (srcb, dstb) = (srca <|> srcb, dsta <|> dstb)
 	mkdm i = do
-		srcek <- getk (Git.DiffTree.srcsha i)
-		dstek <- getk (Git.DiffTree.dstsha i)
+		srcek <- getek (Git.DiffTree.srcsha i)
+		dstek <- getek (Git.DiffTree.dstsha i)
 		return $ catMaybes
 			[ (, (Just (Git.DiffTree.file i), Nothing)) <$> srcek
 			, (, (Nothing, Just (Git.DiffTree.file i))) <$> dstek
 			]
-	getk sha
+	getek sha
 		| sha == nullSha = return Nothing
 		| otherwise = Just <$> exportKey sha
 
 startExport :: Remote -> ExportActions Annex -> ExportHandle -> Git.LsTree.TreeItem -> CommandStart
 startExport r ea db ti = do
 	ek <- exportKey (Git.LsTree.sha ti)
-	stopUnless (liftIO $ notElem loc <$> getExportLocation db (asKey ek)) $ do
+	stopUnless (liftIO $ notElem loc <$> getExportedLocation db (asKey ek)) $ do
 		showStart "export" f
 		next $ performExport r ea db ek (Git.LsTree.sha ti) loc
   where
-	loc = ExportLocation $ toInternalGitPath f
+	loc = mkExportLocation f
 	f = getTopFilePath $ Git.LsTree.file ti
 
 performExport :: Remote -> ExportActions Annex -> ExportHandle -> ExportKey -> Sha -> ExportLocation -> CommandPerform
@@ -231,7 +211,7 @@ performExport r ea db ek contentsha loc = do
 
 cleanupExport :: Remote -> ExportHandle -> ExportKey -> ExportLocation -> CommandCleanup
 cleanupExport r db ek loc = do
-	liftIO $ addExportLocation db (asKey ek) loc
+	liftIO $ addExportedLocation db (asKey ek) loc
 	logChange (asKey ek) (uuid r) InfoPresent
 	return True
 
@@ -244,7 +224,7 @@ startUnexport r ea db f shas = do
 			showStart "unexport" f'
 			next $ performUnexport r ea db eks loc
   where
-	loc = ExportLocation $ toInternalGitPath f'
+	loc = mkExportLocation f'
 	f' = getTopFilePath f
 
 startUnexport' :: Remote -> ExportActions Annex -> ExportHandle -> TopFilePath -> ExportKey -> CommandStart
@@ -252,7 +232,7 @@ startUnexport' r ea db f ek = do
 	showStart "unexport" f'
 	next $ performUnexport r ea db [ek] loc
   where
-	loc = ExportLocation $ toInternalGitPath f'
+	loc = mkExportLocation f'
 	f' = getTopFilePath f
 
 performUnexport :: Remote -> ExportActions Annex -> ExportHandle -> [ExportKey] -> ExportLocation -> CommandPerform
@@ -266,11 +246,11 @@ cleanupUnexport :: Remote -> ExportActions Annex -> ExportHandle -> [ExportKey]
 cleanupUnexport r ea db eks loc = do
 	liftIO $ do
 		forM_ eks $ \ek ->
-			removeExportLocation db (asKey ek) loc
+			removeExportedLocation db (asKey ek) loc
 		flushDbQueue db
 
 	remaininglocs <- liftIO $ 
-		concat <$> forM eks (\ek -> getExportLocation db (asKey ek))
+		concat <$> forM eks (\ek -> getExportedLocation db (asKey ek))
 	when (null remaininglocs) $
 		forM_ eks $ \ek ->
 			logChange (asKey ek) (uuid r) InfoMissing
@@ -282,31 +262,31 @@ startRecoverIncomplete r ea db sha oldf
 	| sha == nullSha = stop
 	| otherwise = do
 		ek <- exportKey sha
-		let loc@(ExportLocation f) = exportTempName ek
-		showStart "unexport" f
-		liftIO $ removeExportLocation db (asKey ek) oldloc
+		let loc = exportTempName ek
+		showStart "unexport" (fromExportLocation loc)
+		liftIO $ removeExportedLocation db (asKey ek) oldloc
 		next $ performUnexport r ea db [ek] loc
   where
-	oldloc = ExportLocation $ toInternalGitPath oldf'
+	oldloc = mkExportLocation oldf'
 	oldf' = getTopFilePath oldf
 
 startMoveToTempName :: Remote -> ExportActions Annex -> ExportHandle -> TopFilePath -> ExportKey -> CommandStart
 startMoveToTempName r ea db f ek = do
-	let tmploc@(ExportLocation tmpf) = exportTempName ek
-	showStart "rename" (f' ++ " -> " ++ tmpf)
+	showStart "rename" (f' ++ " -> " ++ fromExportLocation tmploc)
 	next $ performRename r ea db ek loc tmploc
   where
-	loc = ExportLocation $ toInternalGitPath f'
+	loc = mkExportLocation f'
 	f' = getTopFilePath f
+	tmploc = exportTempName ek
 
 startMoveFromTempName :: Remote -> ExportActions Annex -> ExportHandle -> ExportKey -> TopFilePath -> CommandStart
 startMoveFromTempName r ea db ek f = do
-	let tmploc@(ExportLocation tmpf) = exportTempName ek
-	stopUnless (liftIO $ elem tmploc <$> getExportLocation db (asKey ek)) $ do
-		showStart "rename" (tmpf ++ " -> " ++ f')
+	let tmploc = exportTempName ek
+	stopUnless (liftIO $ elem tmploc <$> getExportedLocation db (asKey ek)) $ do
+		showStart "rename" (fromExportLocation tmploc ++ " -> " ++ f')
 		next $ performRename r ea db ek tmploc loc

(Diff truncated)
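
With ExportKey moved into Annex.Export, callers outside Command.Export can
distinguish annexed content from content synthesized out of git. A small
illustrative sketch (the function itself is hypothetical; `keyName` is the
Key field shown above):

    describeTreeItem :: Git.Sha -> Annex String
    describeTreeItem sha = do
        ek <- exportKey sha
        return $ case ek of
            AnnexKey k -> "annexed content: " ++ keyName k
            GitKey k -> "stored in git, SHA1 key synthesized: " ++ keyName k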
lock to avoid more than one export to a remote at a time
This commit was sponsored by Jack Hill on Patreon.
diff --git a/Annex/Locations.hs b/Annex/Locations.hs
index a5de2e4ff..947cceef9 100644
--- a/Annex/Locations.hs
+++ b/Annex/Locations.hs
@@ -37,6 +37,7 @@ module Annex.Locations (
 	gitAnnexFsckDbLock,
 	gitAnnexFsckResultsLog,
 	gitAnnexExportDbDir,
+	gitAnnexExportLock,
 	gitAnnexScheduleState,
 	gitAnnexTransferDir,
 	gitAnnexCredsDir,
@@ -300,6 +301,10 @@ gitAnnexExportDir u r = gitAnnexDir r </> "export" </> fromUUID u
 gitAnnexExportDbDir :: UUID -> Git.Repo -> FilePath
 gitAnnexExportDbDir u r = gitAnnexExportDir u r </> "db"
 
+{- Lock file for export state for a special remote. -}
+gitAnnexExportLock :: UUID -> Git.Repo -> FilePath
+gitAnnexExportLock u r = gitAnnexExportDir u r ++ ".lck"
+
 {- .git/annex/schedulestate is used to store information about when
  - scheduled jobs were last run. -}
 gitAnnexScheduleState :: Git.Repo -> FilePath
diff --git a/Command/Export.hs b/Command/Export.hs
index 22ea72170..a9f474a19 100644
--- a/Command/Export.hs
+++ b/Command/Export.hs
@@ -23,6 +23,7 @@ import Types.Remote
 import Types.Export
 import Annex.Content
 import Annex.CatFile
+import Annex.LockFile
 import Logs.Location
 import Logs.Export
 import Database.Export
@@ -85,7 +86,10 @@ seek o = do
 	r <- getParsed (exportRemote o)
 	unlessM (isExportSupported r) $
 		giveup "That remote does not support exports."
+	withExclusiveLock (gitAnnexExportLock (uuid r)) (seek' o r)
 
+seek' :: ExportOptions -> Remote -> CommandSeek
+seek' o r = do
 	new <- fromMaybe (giveup "unknown tree") <$>
 		-- Dereference the tree pointed to by the branch, commit,
 		-- or tag.
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index f979cd0c0..f23ed6866 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -26,11 +26,11 @@ Work is in progress. Todo list:
   export database is not populated. So, seems that the export database needs
   to get populated based on the export log in these cases.  
 
-  This needs a (local) record of what treeish the (local) export db
+  This needs a (local) record of what tree the (local) export db
   was last updated for, which is updated at the same time as the export log. 
   One way to record that would be as a git ref.
 
-  When the export log contains a different treeish than the local
+  When the export log contains a different tree than the local
   record, the export was updated in another repository, and so the
   export db needs to be updated.
 

move tracking exports to design
diff --git a/doc/design/exporting_trees_to_special_remotes.mdwn b/doc/design/exporting_trees_to_special_remotes.mdwn
index 6e7cc68db..c552fbc39 100644
--- a/doc/design/exporting_trees_to_special_remotes.mdwn
+++ b/doc/design/exporting_trees_to_special_remotes.mdwn
@@ -50,12 +50,6 @@ tree.
 If an export is interrupted, running it again should resume where it left
 off.
 
-It would also be nice to have a way to say, "I want to export the master branch", 
-and have git-annex sync and the assistant automatically update the export.
-This could be done by recording the treeish in eg, refs/remotes/myexport/HEAD.
-git-annex export could do this by default (if the user doesn't want the export
-to track the branch, they could instead export a tree or a tag).
-
 ## updating an export
 
 The user can at any time re-run git-annex export with a new treeish
@@ -76,6 +70,26 @@ that swap names.
 If the special remote supports copying files, that would also make some
 updates more efficient.
 
+## tracking exports
+
+This lets the user say, "I want to export the master branch", 
+and have git-annex sync and the assistant automatically update the export
+when master changes.
+
+git-annex export could do this by default (if the user doesn't want the
+export to track the branch, they could instead export a tree or a tag). Or
+it could be a --tracking parameter.
+
+How to record the export tracking branch? It could be stored
+as refs/remotes/myexport/master. This says that the master branch
+is being exported to myexport, and the ref points to the last treeish
+that was exported.
+
+But.. master:subdir is a valid treeish, referring to the subdir
+of the current master tree. This is a useful thing to want to export.
+But, that's not a legal ref name. So, perhaps better to record
+the export tracking branch some other way. Perhaps in git config?
+
 ## changes to special remote interface
 
 This needs some additional methods added to special remotes, and to
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index c6d79f7a7..f979cd0c0 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -28,8 +28,7 @@ Work is in progress. Todo list:
 
   This needs a (local) record of what treeish the (local) export db
   was last updated for, which is updated at the same time as the export log. 
-  One way to record that would be as a git ref. (Which may also help
-  for tracking exports of eg the master branch, see below.)
+  One way to record that would be as a git ref.
 
   When the export log contains a different treeish than the local
   record, the export was updated in another repository, and so the
@@ -39,27 +38,10 @@ Work is in progress. Todo list:
   logged treeish. Add/delete exported files from the database to get
   it to the same state as the remote database.
 
-* git-annex sync to export and export tracking branch
+* tracking exports
 
-  This needs a way to configure an export tracking branch.
-  Eg, `git annex export --tracking master --to myexport`
-  
-  (There should only be one tracking branch per export remote.)
-  
-  Then running `git annex sync --content` would update the export with
-  any changes to master.
-
-  How to record the export tracking branch? It could be stored
-  as refs/remotes/myexport/master. This says that the master branch
-  is being exported to myexport, and the ref points to the last treeish
-  that was exported. 
-  
-  But.. master:subdir is a valid treeish, referring to the subdir
-  of the current master tree. This is a useful thing to want to export.
-  But, that's not a legal ref name. So, perhaps better to record
-  the export tracking branch some other way. Perhaps in git config?
-
-* Support export in the assistant (when eg setting up a S3 special remote).
+* Support configuring export in the assistant
+  (when eg setting up a S3 special remote).
 
   This is similar to the little-used preferreddir= preferred content
   setting and the "public" repository group. The assistant uses
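
For the "perhaps in git config?" question raised in the design text above,
here is a rough sketch of the git config option; the
`remote.<name>.annex-tracking` key name is a placeholder, not settled
design, and git-annex's `setConfig` helper is assumed:

    -- Sketch only: record the export tracking branch in git config.
    setTrackingBranch :: Remote -> Git.Ref -> Annex ()
    setTrackingBranch r b = setConfig
        (ConfigKey ("remote." ++ name r ++ ".annex-tracking"))
        (Git.fromRef b)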

diff --git a/doc/forum/sync_remote_repo_on_local_sync_upstream.mdwn b/doc/forum/sync_remote_repo_on_local_sync_upstream.mdwn
new file mode 100644
index 000000000..c2b566cc5
--- /dev/null
+++ b/doc/forum/sync_remote_repo_on_local_sync_upstream.mdwn
@@ -0,0 +1,21 @@
+I am looking for a shortcut for my workflow where I sync a (somewhat) central repo and a few client repos.
+
+So, I sync upstream 
+
+    laptop > git annex sync --content
+
+and ssh to the central repo and run another sync
+
+    server > git annex sync
+
+to bring it up to date so that I can sync/pull it again from my desktop.
+
+Is there an easy way to script/do both steps in one for different protocols? E.g., update an ssh repo and a USB-drive repo when syncing on the local one?
+
+At the moment, I would try to check for all known remotes
+
+    > git remote -v
+
+and, depending on the protocol, ssh/cd into each to trigger a sync.
+
+

don't support removing content from export with removeKey
There does not seem to be a use case for supporting that, and it would
need a lot of complication to support it in a way that allows eventual
consistency when two repositories are updating the same export.
This commit was sponsored by Henrik Riomar on Patreon.
diff --git a/Remote/Helper/Export.hs b/Remote/Helper/Export.hs
index edd0b96df..df75dacd0 100644
--- a/Remote/Helper/Export.hs
+++ b/Remote/Helper/Export.hs
@@ -117,20 +117,15 @@ adjustExportable r = case M.lookup "exporttree" (config r) of
 						warning $ "exported content cannot be verified due to using the " ++ formatKeyVariety (keyVariety k) ++ " backend"
 						return False
 			, retrieveKeyFileCheap = \_ _ _ -> return False
-			-- Remove all files a key was exported to.
-			, removeKey = \k -> do
-				locs <- liftIO $ getExportLocation db k
-				ea <- exportActions r
-				oks <- forM locs $ \loc ->
-					ifM (removeExport ea k loc)
-						( do
-							liftIO $ do
-								removeExportLocation db k loc
-								flushDbQueue db
-							removeEmptyDirectories ea db loc [k]
-						, return False
-						)
-				return (and oks)
+			-- Removing a key from an export would need to
+			-- change the tree in the export log to not include
+			-- the file. Otherwise, conflicts when removing
+			-- files would not be dealt with correctly.
+			-- There does not seem to be a good use case for
+			-- removing a key from an export in any case.
+			, removeKey = \_k -> do
+				warning "dropping content from an export is not supported; use `git annex export` to export a tree that lacks the files you want to remove"
+				return False
 			-- Can't lock content on exports, since they're
 			-- not key/value stores, and someone else could
 			-- change what's exported to a file at any time.
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index 0ed04d7e7..c6d79f7a7 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -38,38 +38,6 @@ Work is in progress. Todo list:
   Updating the export db could diff the last exported treeish with the
   logged treeish. Add/delete exported files from the database to get
   it to the same state as the remote database.
-  But, removeKey from an export makes the diff approach problematic;
-  see below.
-
-* removeKey from an export is problematic in distributed context
-
-  A file can be removed from an export via removeKey,
-  which updates the export db and location log, but does not update
-  the export log. This is problematic when multiple repos are updating
-  an export.
-
-  1. In repo A, file F with content K is exported
-  2. In repo B, file F with content K' is exported, since F changed in the
-     exported treeish.
-  3. In repo A, file F is removed from the export, which results in
-     K being removed from the location log for the export.
-
-  Did #3 happen before or after #2?  
-  If #3 occurred before #2, then K' is present in the export
-  and the location log is correct.  
-  If #3 occurred after #2, and A and B's git-annex branches
-  were not synced, then K' was accidentally removed
-  from the export, and the location log is now wrong.
-
-  Is there any reason to allow removeKey from an export?
-  Why would someone want to drop a single file from an export?
-  Why not remove the file from a tree, and export the new tree?
-
-  (Alternatively, removeKey could itself update the exported tree,
-  removing the file from it, and update the export log accordingly.
-  This would avoid the problem. But that's complication and it would be
-  rather slow and bloat the git repo with a lot of intermediate trees
-  when dropping multiple keys.)
 
 * git-annex sync to export and export tracking branch
 

clarification
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index b4cc9e7ff..0ed04d7e7 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -57,7 +57,8 @@ Work is in progress. Todo list:
   Did #3 happen before or after #2?  
   If #3 occurred before #2, then K' is present in the export
   and the location log is correct.  
-  If #3 occurred after #2, then K' was accidentally removed
+  If #3 occurred after #2, and A and B's git-annex branches
+  were not synced, then K' was accidentally removed
   from the export, and the location log is now wrong.
 
   Is there any reason to allow removeKey from an export?

design for next steps on exports
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index d40fd4c74..b4cc9e7ff 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -20,14 +20,82 @@ Work is in progress. Todo list:
 * `git annex get --from export` works in the repo that exported to it,
   but in another repo, the export db won't be populated, so it won't work.
   Maybe just show a useful error message in this case?  
+
   However, exporting from one repository and then trying to update the
   export from another repository also doesn't work right, because the
   export database is not populated. So, seems that the export database needs
-  to get populated based on the export log in these cases.
+  to get populated based on the export log in these cases.  
+
+  This needs a (local) record of what treeish the (local) export db
+  was last updated for, which is updated at the same time as the export log. 
+  One way to record that would be as a git ref. (Which may also help
+  for tracking exports of eg the master branch, see below.)
+
+  When the export log contains a different treeish than the local
+  record, the export was updated in another repository, and so the
+  export db needs to be updated.
+
+  Updating the export db could diff the last exported treeish with the
+  logged treeish. Add/delete exported files from the database to get
+  it to the same state as the remote database.
+  But, removeKey from an export makes the diff approach problematic;
+  see below.
+
+* removeKey from an export is problematic in distributed context
+
+  A file can be removed from an export via removeKey,
+  which updates the export db and location log, but does not update
+  the export log. This is problematic when multiple repos are updating
+  an export.
+
+  1. In repo A, file F with content K is exported
+  2. In repo B, file F with content K' is exported, since F changed in the
+     exported treeish.
+  3. In repo A, file F is removed from the export, which results in
+     K being removed from the location log for the export.
+
+  Did #3 happen before or after #2?  
+  If #3 occurred before #2, then K' is present in the export
+  and the location log is correct.  
+  If #3 occurred after #2, then K' was accidentally removed
+  from the export, and the location log is now wrong.
+
+  Is there any reason to allow removeKey from an export?
+  Why would someone want to drop a single file from an export?
+  Why not remove the file from a tree, and export the new tree?
+
+  (Alternatively, removeKey could itself update the exported tree,
+  removing the file from it, and update the export log accordingly.
+  This would avoid the problem. But that's complication and it would be
+  rather slow and bloat the git repo with a lot of intermediate trees
+  when dropping multiple keys.)
+
+* git-annex sync to export and export tracking branch
+
+  This needs a way to configure an export tracking branch.
+  Eg, `git annex export --tracking master --to myexport`
+  
+  (There should only be one tracking branch per export remote.)
+  
+  Then running `git annex sync --content` would update the export with
+  any changes to master.
+
+  How to record the export tracking branch? It could be stored
+  as refs/remotes/myexport/master. This says that the master branch
+  is being exported to myexport, and the ref points to the last treeish
+  that was exported. 
+  
+  But.. master:subdir is a valid treeish, referring to the subdir
+  of the current master tree. This is a useful thing to want to export.
+  But, that's not a legal ref name. So, perhaps better to record
+  the export tracking branch some other way. Perhaps in git config?
+
 * Support export in the assistant (when eg setting up a S3 special remote).
-  Would need git-annex sync to export to the master tree?
+
   This is similar to the little-used preferreddir= preferred content
-  setting and the "public" repository group.
+  setting and the "public" repository group. The assistant uses
+  those for IA, which could be replaced with setting up an export
+  tracking branch.
 
 Low priority:
 

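The "updating the export db could diff" todo item above is concrete enough
to sketch. Roughly, assuming `Git.DiffTree.diffTreeRecursive` and combining
the export db operations and exportKey helper that appear elsewhere in this
log:

    -- Sketch: bring the local export db from oldtree up to newtree.
    updateExportDb :: ExportHandle -> Sha -> Sha -> Annex ()
    updateExportDb db oldtree newtree = do
        (diff, cleanup) <- inRepo $ Git.DiffTree.diffTreeRecursive oldtree newtree
        forM_ diff $ \i -> do
            let loc = mkExportLocation $ getTopFilePath $ Git.DiffTree.file i
            -- a nullSha on one side means the file was added or deleted
            when (Git.DiffTree.srcsha i /= nullSha) $ do
                ek <- exportKey (Git.DiffTree.srcsha i)
                liftIO $ removeExportedLocation db (asKey ek) loc
            when (Git.DiffTree.dstsha i /= nullSha) $ do
                ek <- exportKey (Git.DiffTree.dstsha i)
                liftIO $ addExportedLocation db (asKey ek) loc
        liftIO $ flushDbQueue db
        void $ liftIO cleanup
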
add example, including use of branch:subdir to export only a subdir
diff --git a/doc/git-annex-export.mdwn b/doc/git-annex-export.mdwn
index 72319a8fc..8958e7233 100644
--- a/doc/git-annex-export.mdwn
+++ b/doc/git-annex-export.mdwn
@@ -19,8 +19,12 @@ so is not allowed. You have to configure a special remote with
 `exporttree=yes` when initially setting it up with
 [[git-annex-initremote]](1).
 
+The treeish to export can be the name of a git branch, or a tag, or any
+other treeish accepted by git, including eg master:subdir to only export a
+subdirectory from a branch.
+
 Repeated exports are done efficiently, by diffing the old and new tree,
-and transferring only the changed files.
+and transferring only the changed files, and renaming files as necessary.
 
 Exports can be interrupted and resumed. However, partially uploaded files
 will be re-started from the beginning.
@@ -32,6 +36,26 @@ verification of content downloaded from an export. Some types of keys,
 that are not based on checksums, cannot be downloaded from an export.
 And, git-annex will never trust an export to retain the content of a key.
 
+# EXAMPLE
+
+	git annex initremote myexport type=directory directory=/mnt/myexport \
+		exporttree=yes encryption=none
+	git annex export master --to myexport
+
+After that, /mnt/myexport will contain the same tree of files as the master
+branch does.
+
+	git mv myfile subdir/myfile
+	git commit -m renamed
+	git annex export master --to myexport
+
+That updates /mnt/myexport to reflect the renamed file.
+
+	git annex export master:subdir --to myexport
+
+That updates /mnt/myexport to contain only the files in the "subdir"
+directory of the master branch.
+
 # EXPORT CONFLICTS
 
 If two different git-annex repositories are both exporting different trees

add link to git-annex-remote-gvfs for smb / sftp
diff --git a/doc/special_remotes.mdwn b/doc/special_remotes.mdwn
index 23cb1cc33..bfb2f08d8 100644
--- a/doc/special_remotes.mdwn
+++ b/doc/special_remotes.mdwn
@@ -52,6 +52,7 @@ for using git-annex with various services:
 * [Openstack Swift / Rackspace cloud files / Memset Memstore](https://github.com/DanielDent/git-annex-remote-rclone)
 * [Microsoft One Drive](https://github.com/DanielDent/git-annex-remote-rclone)
 * [Yandex Disk](https://github.com/DanielDent/git-annex-remote-rclone)
+* [smb / sftp](https://github.com/grawity/code/blob/master/net/git-annex-remote-gvfs)
 
 Want to add support for something else? [[Write your own!|external]]
 

Added a comment: Thanks!
diff --git a/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6/comment_6_fce1ef0377e00fb9431d4c8b0215387b._comment b/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6/comment_6_fce1ef0377e00fb9431d4c8b0215387b._comment
new file mode 100644
index 000000000..5cd5cbf45
--- /dev/null
+++ b/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6/comment_6_fce1ef0377e00fb9431d4c8b0215387b._comment
@@ -0,0 +1,11 @@
+[[!comment format=mdwn
+ username="gleachkr@7c488e398809299a1100b93f8884de43dee83674"
+ nickname="gleachkr"
+ avatar="http://cdn.libravatar.org/avatar/c7ce6b5eae91547b25e9a05fc7c8cf22"
+ subject="Thanks!"
+ date="2017-09-16T16:46:48Z"
+ content="""
+Removing the db and running `fsck` seems to have fixed the problem. I really appreciate it. I do think that I unlocked and then re-locked some of the affected files, but I think only after I noticed that they were behaving strangely---in particular, I think I only did this with the affected files that were not `fsck`ing correctly (and this was before I realized they were not `fsck`ing correctly), not with e.g. the Car Seat Headrest song.
+
+I didn't get the chance to fill in the \"Have you had any luck using git-annex before\" part of the standard bug report, so I thought I should add that here: Yes, I've used git-annex regularly for quite a while now. It's an inspiring piece of open-source software. I'm always finding new things it can do. Thank you for creating it, and thanks for your help with this issue!
+"""]]

update
diff --git a/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6/comment_5_5f2e03340fb98bd146c4563e8e57dd30._comment b/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6/comment_5_5f2e03340fb98bd146c4563e8e57dd30._comment
new file mode 100644
index 000000000..811e60427
--- /dev/null
+++ b/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6/comment_5_5f2e03340fb98bd146c4563e8e57dd30._comment
@@ -0,0 +1,12 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 5"""
+ date="2017-09-16T16:07:38Z"
+ content="""
+Actually, you should `git annex lock` before moving the database out of
+the way, otherwise it might be confused. So:
+
+1. git annex lock
+2. move keys/db to a safe backup location
+3. git annex fsck
+"""]]

response for gleachkr
diff --git a/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6/comment_4_90062420b67a7f364f5ece947408798f._comment b/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6/comment_4_90062420b67a7f364f5ece947408798f._comment
new file mode 100644
index 000000000..af86bec35
--- /dev/null
+++ b/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6/comment_4_90062420b67a7f364f5ece947408798f._comment
@@ -0,0 +1,12 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 4"""
+ date="2017-09-16T16:04:24Z"
+ content="""
+Right, just back up the keys/db just in case, and you need to fsck at least
+any files that are not locked (to repopulate the keys/db for those), 
+so safest is to fsck the whole repository.
+
+Is it possible that you've run `git annex unlock` / `git annex lock` on
+some of the affected files in the past?
+"""]]

Added a comment
diff --git a/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6/comment_3_9efad60d2c43a902c7a0e2d6225be1c5._comment b/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6/comment_3_9efad60d2c43a902c7a0e2d6225be1c5._comment
new file mode 100644
index 000000000..20d832007
--- /dev/null
+++ b/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6/comment_3_9efad60d2c43a902c7a0e2d6225be1c5._comment
@@ -0,0 +1,23 @@
+[[!comment format=mdwn
+ username="gleachkr@7c488e398809299a1100b93f8884de43dee83674"
+ nickname="gleachkr"
+ avatar="http://cdn.libravatar.org/avatar/c7ce6b5eae91547b25e9a05fc7c8cf22"
+ subject="comment 3"
+ date="2017-09-16T16:02:41Z"
+ content="""
+The second file (`01 Fill in the Blank.m4a`) does still exist, although `annex get` always re-retrieves it. annex.thin is not set. Here's the git config, minus remotes:
+
+    [core]
+        repositoryformatversion = 0
+        filemode = true
+        bare = false
+        logallrefupdates = true
+    [annex]
+        uuid = e9731ab7-6a76-4eef-b337-2b8573380014
+        version = 6
+    [filter \"annex\"]
+        smudge = git-annex smudge %f
+        clean = git-annex smudge --clean %f
+
+Thanks for the fix. Just to make sure I understand before breaking anything further, the idea would be to move `.git/annex/keys/db` somewhere safe, `git annex lock` all the affected files, and then `git annex fsck` the whole repository? Or just the affected files?
+"""]]

followup for gleachkr
diff --git a/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6/comment_2_5e37bcb145880fe6074995f17c5aa7c3._comment b/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6/comment_2_5e37bcb145880fe6074995f17c5aa7c3._comment
new file mode 100644
index 000000000..94bada599
--- /dev/null
+++ b/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6/comment_2_5e37bcb145880fe6074995f17c5aa7c3._comment
@@ -0,0 +1,32 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 2"""
+ date="2017-09-16T14:54:49Z"
+ content="""
+The .dump that shows the file is in the "content" table but
+not the "associated" table seems to confirm my hypothesis.
+
+Aha -- I was able to replicate having "content" but no "associated" in the
+keys database, by first using `git annex add` on a file, then `git annex
+unlock`, then `git annex lock`. Any chance this is what you did? (Perhaps
+some of those commits that git log shows were the locking/unlocking; if you
+`git show` the commits and see that the mode of the file has changed, that
+would confirm it.)
+
+I've still not quite managed to replicate the problem, because the cached
+inodes were still right. Tried moving the file away to another repo, but
+it then removed the cached inodes and so avoided the problem.
+
+Very interesting about the second file with `git annex get` and `git annex
+fsck` behaving differently. Does the file 'Car Seat Headrest/Teens of Denial/01 Fill in the Blank.m4a'
+still exist in your git repository?
+
+Is annex.thin set in .git/config?
+
+----
+
+I probably have enough information to move on to getting your repository
+fixed so you can stop being bothered by the problem at least. I think you
+could probably move .git/annex/keys/db out of the way, and run `git annex
+lock` followed by `git annex fsck` to get into a non-broken state.
+"""]]

Added a comment: More data points
diff --git a/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6/comment_1_fa6b924d792613b60979a87deea2a66f._comment b/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6/comment_1_fa6b924d792613b60979a87deea2a66f._comment
new file mode 100644
index 000000000..6ace37334
--- /dev/null
+++ b/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6/comment_1_fa6b924d792613b60979a87deea2a66f._comment
@@ -0,0 +1,67 @@
+[[!comment format=mdwn
+ username="gleachkr@7c488e398809299a1100b93f8884de43dee83674"
+ nickname="gleachkr"
+ avatar="http://cdn.libravatar.org/avatar/c7ce6b5eae91547b25e9a05fc7c8cf22"
+ subject="More data points"
+ date="2017-09-16T01:16:59Z"
+ content="""
+the results of `git log` on one affected file are (the last commit being after I noticed the problem and tried to fix it)
+
+    commit 0e93d1c9c18cf7aead978e0ae453e66991d6e500
+    Author: Graham <alarm@localhost.localdomain>
+    Date:   Tue Sep 12 19:20:33 2017 +0000
+
+        attempt to fix references for Whitney
+
+    commit 02bf95566eb2dd6947f417019e1d601abcfb55c1
+    Author: Graham <gleachkr@gmail.com>
+    Date:   Tue Aug 8 18:45:06 2017 -0500
+    
+        cleanup
+    
+    commit 655b0fd419945d0ea32d9d178c551af0a64e6afd
+    Author: Graham <alarm@localhost.localdomain>
+    Date:   Sat Nov 19 19:21:10 2016 +0000
+    
+        database cleanup
+    
+    commit 54fa420bcaf33847eee77a30fbd9556beea28f77
+    Author: gleachkr <gleachkr@gmail.com>
+    Date:   Fri Sep 9 17:48:48 2016 -0500
+    
+        git-annex in graham@Descartes:~/music
+
+running `.dump` in sqlite3 yields the following two lines containing the key associated with one file suffering from the problem, with no lines containing both \"associated\" and the key (sorry, not a db expert):
+
+    INSERT INTO content VALUES(171,'SHA256E-s8350646--d9bbbd67402a1b7560ebc47bc7bafaf74115a99c628e7458ba4754d8a355908a.m4a','I \"237 8350646 1473527901\"');
+    INSERT INTO content VALUES(172,'SHA256E-s8350646--d9bbbd67402a1b7560ebc47bc7bafaf74115a99c628e7458ba4754d8a355908a.m4a','I \"1711652 8350646 1473527901\"');
+
+another affected file shows up on lines:
+
+    INSERT INTO associated VALUES(3,'SHA256E-s8789357--2bedeea689e7d7dda1e877989abd3e822f722c9bc45a14d1951d5dc104f4ad62.m4a','Car Seat Headrest/Teens of Denial/01 Fill in the Blank.m4a');
+    INSERT INTO content VALUES(31,'SHA256E-s8789357--2bedeea689e7d7dda1e877989abd3e822f722c9bc45a14d1951d5dc104f4ad62.m4a','I \"135750 8789357 1473525850\"');
+    INSERT INTO content VALUES(32,'SHA256E-s8789357--2bedeea689e7d7dda1e877989abd3e822f722c9bc45a14d1951d5dc104f4ad62.m4a','I \"1711014 8789357 1473525850\"');
+    INSERT INTO content VALUES(296,'SHA256E-s8789357--2bedeea689e7d7dda1e877989abd3e822f722c9bc45a14d1951d5dc104f4ad62.m4a','I \"1050021 8789357 1498915821\"');
+
+This second file has slightly different deviant behavior. It still registers as not present to `annex find` and `annex info`, is redownloaded by `annex get`, and shows up in `annex list`. But unlike the first, it does not register as not present when `annex fsck` is run. It just checks out, apparently.
+
+The second file gives `git log` results as follows:
+
+    commit 02bf95566eb2dd6947f417019e1d601abcfb55c1
+    Author: Graham <gleachkr@gmail.com>
+    Date:   Tue Aug 8 18:45:06 2017 -0500
+    
+        cleanup
+    
+    commit 655b0fd419945d0ea32d9d178c551af0a64e6afd
+    Author: Graham <alarm@localhost.localdomain>
+    Date:   Sat Nov 19 19:21:10 2016 +0000
+    
+        database cleanup
+    
+    commit 54fa420bcaf33847eee77a30fbd9556beea28f77
+    Author: gleachkr <gleachkr@gmail.com>
+    Date:   Fri Sep 9 17:48:48 2016 -0500
+    
+        git-annex in graham@Descartes:~/music
+"""]]

devblog
diff --git a/doc/devblog/day_472__removing_empty_directories.mdwn b/doc/devblog/day_472__removing_empty_directories.mdwn
new file mode 100644
index 000000000..563a347c3
--- /dev/null
+++ b/doc/devblog/day_472__removing_empty_directories.mdwn
@@ -0,0 +1,20 @@
+After doing some fine-tuning of webdav export on Wednesday, I noticed a
+problem: There seems to be no way in the webdav spec to delete a collection
+(directory) only when it's empty like `rmdir(2)` does. 
+It would be possible to check the contents of the collection before
+deleting it, but that's complex (involving XML parsing) and race-prone.
+
+So, I decided to add a remote method to delete a directory, and make
+git-annex keep track of when a directory in an export is empty, and delete
+it. While it does complicate the design some to need to do this, that seems
+better than complicating the implementation of remotes like webdav. And
+some remotes may not have a `rmdir(2)` equivalent or a way to check if a
+directory is empty.
+
+Spent most of today implementing that, including some rather hairy
+eSQL to maintain a table of exported directories.
+
+Still not quite done with export..
+
+Today's work was sponsored by Trenton Cronholm on
+[Patreon](https://patreon.com/joeyh)

empty directory removal working
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index c8438d942..d40fd4c74 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -40,9 +40,3 @@ Low priority:
   Run each pair in turn. Then run the current rename code. Although this
   still probably misses cases, where eg, content cycles among 3 files, and
   the same content among 3 other files. Is there a general algorithm?
-* webdav: When a file in a subdirectory gets deleted,
-  the webdav collection remains, empty. Could check if the collection is
-  empty and delete it if so, but that would have a race if another export
-  is running to the same webdav server.  
-  Probably best to add a remote method to delete a directory, and have
-  export use it on all directories it thinks should be empty.

implement removeExportDirectory
Not yet called by Command.Export.
WebDAV needs this to clean up empty collections. Also, example.sh turned
out to not be cleaning up directories when removing content
from them, so it made sense for it to use this.
Remote.Directory did not need it, and since its cleanup method for empty
directories is more efficient than what Command.Export will need to do
to find empty directories, it uses Nothing so that extra work can be
avoided.
This commit was sponsored by Thom May on Patreon.
diff --git a/Remote/Directory.hs b/Remote/Directory.hs
index c3ebeb899..24f35868b 100644
--- a/Remote/Directory.hs
+++ b/Remote/Directory.hs
@@ -47,26 +47,29 @@ gen r u c gc = do
 	let chunkconfig = getChunkConfig c
 	return $ Just $ specialRemote c
 		(prepareStore dir chunkconfig)
-		(retrieve dir chunkconfig)
-		(simplyPrepare $ remove dir)
-		(simplyPrepare $ checkKey dir chunkconfig)
+		(retrieveKeyFileM dir chunkconfig)
+		(simplyPrepare $ removeKeyM dir)
+		(simplyPrepare $ checkPresentM dir chunkconfig)
 		Remote
 			{ uuid = u
 			, cost = cst
 			, name = Git.repoDescribe r
 			, storeKey = storeKeyDummy
 			, retrieveKeyFile = retreiveKeyFileDummy
-			, retrieveKeyFileCheap = retrieveCheap dir chunkconfig
+			, retrieveKeyFileCheap = retrieveKeyFileCheapM dir chunkconfig
 			, removeKey = removeKeyDummy
 			, lockContent = Nothing
 			, checkPresent = checkPresentDummy
 			, checkPresentCheap = True
 			, exportActions = return $ ExportActions
-				{ storeExport = storeExportDirectory dir
-				, retrieveExport = retrieveExportDirectory dir
-				, removeExport = removeExportDirectory dir
-				, checkPresentExport = checkPresentExportDirectory dir
-				, renameExport = renameExportDirectory dir
+				{ storeExport = storeExportM dir
+				, retrieveExport = retrieveExportM dir
+				, removeExport = removeExportM dir
+				, checkPresentExport = checkPresentExportM dir
+				-- Not needed because removeExportLocation
+				-- auto-removes empty directories.
+				, removeExportDirectory = Nothing
+				, renameExport = renameExportM dir
 				}
 			, whereisKey = Nothing
 			, remoteFsck = Nothing
@@ -166,17 +169,17 @@ finalizeStoreGeneric tmp dest = do
 		mapM_ preventWrite =<< dirContents dest
 		preventWrite dest
 
-retrieve :: FilePath -> ChunkConfig -> Preparer Retriever
-retrieve d (LegacyChunks _) = Legacy.retrieve locations d
-retrieve d _ = simplyPrepare $ byteRetriever $ \k sink ->
+retrieveKeyFileM :: FilePath -> ChunkConfig -> Preparer Retriever
+retrieveKeyFileM d (LegacyChunks _) = Legacy.retrieve locations d
+retrieveKeyFileM d _ = simplyPrepare $ byteRetriever $ \k sink ->
 	sink =<< liftIO (L.readFile =<< getLocation d k)
 
-retrieveCheap :: FilePath -> ChunkConfig -> Key -> AssociatedFile -> FilePath -> Annex Bool
+retrieveKeyFileCheapM :: FilePath -> ChunkConfig -> Key -> AssociatedFile -> FilePath -> Annex Bool
 -- no cheap retrieval possible for chunks
-retrieveCheap _ (UnpaddedChunks _) _ _ _ = return False
-retrieveCheap _ (LegacyChunks _) _ _ _ = return False
+retrieveKeyFileCheapM _ (UnpaddedChunks _) _ _ _ = return False
+retrieveKeyFileCheapM _ (LegacyChunks _) _ _ _ = return False
 #ifndef mingw32_HOST_OS
-retrieveCheap d NoChunks k _af f = liftIO $ catchBoolIO $ do
+retrieveKeyFileCheapM d NoChunks k _af f = liftIO $ catchBoolIO $ do
 	file <- absPath =<< getLocation d k
 	ifM (doesFileExist file)
 		( do
@@ -185,11 +188,11 @@ retrieveCheap d NoChunks k _af f = liftIO $ catchBoolIO $ do
 		, return False
 		)
 #else
-retrieveCheap _ _ _ _ _ = return False
+retrieveKeyFileCheapM _ _ _ _ _ = return False
 #endif
 
-remove :: FilePath -> Remover
-remove d k = liftIO $ removeDirGeneric d (storeDir d k)
+removeKeyM :: FilePath -> Remover
+removeKeyM d k = liftIO $ removeDirGeneric d (storeDir d k)
 
 {- Removes the directory, which must be located under the topdir.
  -
@@ -216,9 +219,9 @@ removeDirGeneric topdir dir = do
 		then return ok
 		else doesDirectoryExist topdir <&&> (not <$> doesDirectoryExist dir)
 
-checkKey :: FilePath -> ChunkConfig -> CheckPresent
-checkKey d (LegacyChunks _) k = Legacy.checkKey d locations k
-checkKey d _ k = checkPresentGeneric d (locations d k)
+checkPresentM :: FilePath -> ChunkConfig -> CheckPresent
+checkPresentM d (LegacyChunks _) k = Legacy.checkKey d locations k
+checkPresentM d _ k = checkPresentGeneric d (locations d k)
 
 checkPresentGeneric :: FilePath -> [FilePath] -> Annex Bool
 checkPresentGeneric d ps = liftIO $
@@ -230,8 +233,8 @@ checkPresentGeneric d ps = liftIO $
 			)
 		)
 
-storeExportDirectory :: FilePath -> FilePath -> Key -> ExportLocation -> MeterUpdate -> Annex Bool
-storeExportDirectory d src _k loc p = liftIO $ catchBoolIO $ do
+storeExportM :: FilePath -> FilePath -> Key -> ExportLocation -> MeterUpdate -> Annex Bool
+storeExportM d src _k loc p = liftIO $ catchBoolIO $ do
 	createDirectoryIfMissing True (takeDirectory dest)
 	-- Write via temp file so that checkPresentGeneric will not
 	-- see it until it's fully stored.
@@ -240,27 +243,27 @@ storeExportDirectory d src _k loc p = liftIO $ catchBoolIO $ do
   where
 	dest = exportPath d loc
 
-retrieveExportDirectory :: FilePath -> Key -> ExportLocation -> FilePath -> MeterUpdate -> Annex Bool
-retrieveExportDirectory d _k loc dest p = liftIO $ catchBoolIO $ do
+retrieveExportM :: FilePath -> Key -> ExportLocation -> FilePath -> MeterUpdate -> Annex Bool
+retrieveExportM d _k loc dest p = liftIO $ catchBoolIO $ do
 	withMeteredFile src p (L.writeFile dest)
 	return True
   where
 	src = exportPath d loc
 
-removeExportDirectory :: FilePath -> Key -> ExportLocation -> Annex Bool
-removeExportDirectory d _k loc = liftIO $ do
+removeExportM :: FilePath -> Key -> ExportLocation -> Annex Bool
+removeExportM d _k loc = liftIO $ do
 	nukeFile src
 	removeExportLocation d loc
 	return True
   where
 	src = exportPath d loc
 
-checkPresentExportDirectory :: FilePath -> Key -> ExportLocation -> Annex Bool
-checkPresentExportDirectory d _k loc =
+checkPresentExportM :: FilePath -> Key -> ExportLocation -> Annex Bool
+checkPresentExportM d _k loc =
 	checkPresentGeneric d [exportPath d loc]
 
-renameExportDirectory :: FilePath -> Key -> ExportLocation -> ExportLocation -> Annex Bool
-renameExportDirectory d _k oldloc newloc = liftIO $ catchBoolIO $ do
+renameExportM :: FilePath -> Key -> ExportLocation -> ExportLocation -> Annex Bool
+renameExportM d _k oldloc newloc = liftIO $ catchBoolIO $ do
 	createDirectoryIfMissing True (takeDirectory dest)
 	renameFile src dest
 	removeExportLocation d oldloc
diff --git a/Remote/External.hs b/Remote/External.hs
index fd4fd0649..b1204f776 100644
--- a/Remote/External.hs
+++ b/Remote/External.hs
@@ -71,11 +71,12 @@ gen r u c gc
 		exportsupported <- checkExportSupported' external
 		let exportactions = if exportsupported
 			then return $ ExportActions
-				{ storeExport = storeExportExternal external
-				, retrieveExport = retrieveExportExternal external
-				, removeExport = removeExportExternal external
-				, checkPresentExport = checkPresentExportExternal external
-				, renameExport = renameExportExternal external
+				{ storeExport = storeExportM external
+				, retrieveExport = retrieveExportM external
+				, removeExport = removeExportM external
+				, checkPresentExport = checkPresentExportM external
+				, removeExportDirectory = Just $ removeExportDirectoryM external
+				, renameExport = renameExportM external
 				}
 			else exportUnsupported
 		-- Cheap exportSupported that replaces the expensive
@@ -84,13 +85,13 @@ gen r u c gc
 			then exportIsSupported
 			else exportUnsupported
 		mk cst avail
-			(store external)
-			(retrieve external)
-			(remove external)
-			(checkKey external)
-			(Just (whereis external))
-			(Just (claimurl external))
-			(Just (checkurl external))
+			(storeKeyM external)
+			(retrieveKeyFileM external)
+			(removeKeyM external)
+			(checkPresentM external)
+			(Just (whereisKeyM external))
+			(Just (claimUrlM external))
+			(Just (checkUrlM external))
 			exportactions
 			cheapexportsupported
   where
@@ -170,8 +171,8 @@ checkExportSupported' external = safely $
 		UNSUPPORTED_REQUEST -> Just $ return False
 		_ -> Nothing
 
-store :: External -> Storer
-store external = fileStorer $ \k f p ->
+storeKeyM :: External -> Storer
+storeKeyM external = fileStorer $ \k f p ->
 	handleRequestKey external (\sk -> TRANSFER Upload sk f) k (Just p) $ \resp ->
 		case resp of
 			TRANSFER_SUCCESS Upload k' | k == k' ->
@@ -182,8 +183,8 @@ store external = fileStorer $ \k f p ->
 					return False
 			_ -> Nothing

(Diff truncated)
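
For remotes that do implement removeExportDirectory, the wanted semantics
are rmdir(2)-like: delete the directory only when it is empty. A minimal
sketch for a filesystem-backed remote, assuming the method's signature is
`ExportDirectory -> Annex Bool` and a `fromExportDirectory` accessor (both
assumptions; Remote.Directory above deliberately uses Nothing instead):

    removeExportDirectorySketch :: FilePath -> ExportDirectory -> Annex Bool
    removeExportDirectorySketch topdir dir = liftIO $ catchBoolIO $ do
        -- removeDirectory fails when the directory is not empty,
        -- giving the rmdir(2) behavior wanted here
        removeDirectory (topdir </> fromExportDirectory dir)
        return True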
forwarded from irc
diff --git a/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6.mdwn b/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6.mdwn
new file mode 100644
index 000000000..60e145fa3
--- /dev/null
+++ b/doc/bugs/inAnnex_check_failed_repeatedly_for_present_content_v6.mdwn
@@ -0,0 +1,42 @@
+In a v6 repository, `git annex get` of a particular file re-downloaded it
+each time it was run. `git annex whereis` said the content was locally
+present. But, `git annex fsck` of the file said the content was
+missing, and removed it from the location log.
+
+The file was locked, and the repository was on ext4.
+
+Reported by gleachkr on IRC. Don't have enough information to reproduce
+the problem yet. --[[Joey]]
+
+> My initial analysis is that this must be a problem with
+> `Annex.Content.inAnnex`. Note that it checks the cached inode for the
+> content, and if it finds a mismatch, it behaves as if the content is not
+> present. That would be consistent with the problem as reported.
+> 
+> When I init a v6 repository and add some locked files, no inode cache is
+> recorded, which makes sense as they're locked.
+> 
+> Hypothesis: A cached inode for the key got into the keys database,
+> despite the file being locked, and that is messing up inAnnex.
+> 
+> Should inAnnex even be checking the inode cache for locked content?
+> This seems unnecessary, and note that it's done for v4 mode as well as v6.
+> 
+> How could a cached inode for a locked file leak in? Perhaps the file
+> was earlier unlocked. Or perhaps another, unlocked file, had the same
+> content. I tried both these scenarios, and was able to get a cached
+> inode to be listed for a file, but in my tests at least, it also cached
+> the inode of the locked file, and I did not replicate the problem.
+> 
+> --[[Joey]]
+
+## more information needed
+
+If gleachkr comes back to IRC, it would be good to find out:
+
+* Was this file previously unlocked? `git log` on the file would probably
+  tell, unless it was briefly unlocked in between commits.
+* Run `git annex info` on the file to see what its key is.  
+  Then, run `sqlite3 .git/annex/keys/db` and .dump and see
+  what is recorded for that key, in both the "content" and "associated"
+  tables.

update
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index 6a8c2687a..c8438d942 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -43,4 +43,6 @@ Low priority:
 * webdav: When a file in a subdirectory gets deleted,
   the webdav collection remains, empty. Could check if the collection is
   empty and delete it if so, but that would have a race if another export
-  is running to the same webdav server.
+  is running to the same webdav server.  
+  Probably best to add a remote method to delete a directory, and have
+  export use it on all directories it thinks should be empty.

diff --git a/doc/forum/Syncing_with___39__include__39___rule_for_duplicate_files.mdwn b/doc/forum/Syncing_with___39__include__39___rule_for_duplicate_files.mdwn
new file mode 100644
index 000000000..6121691b2
--- /dev/null
+++ b/doc/forum/Syncing_with___39__include__39___rule_for_duplicate_files.mdwn
@@ -0,0 +1,17 @@
+I have the following problem. I have a file in a git annex repo which is in two places in this repo.
+So there are two links to the file in the working tree.
+Let's say (for clarity) that its paths are a/foo and b/foo.
+
+And when I do:
+
+    git annex wanted . "include=a/*"
+    git annex sync --content
+
+git-annex downloads the file and then drops it
+(i.e. it tries to get a/foo and then drops b/foo).
+
+What should I do to avoid dropping the file when I include only
+one link to the file?
+
+The only solution to the problem I've found is to use the
+deprecated 'direct' mode, but let's say I want to do this the right way.

Added a comment: @joey: Sorry ...
diff --git a/doc/bugs/Too_difficult_if_not_impossible_to_explicitly_add__47__keep_file_under_git___40__not_annex__41___in_v6_without_employing_.gitattributes/comment_9_5a22839b8dc11965a879dd2654bd5d60._comment b/doc/bugs/Too_difficult_if_not_impossible_to_explicitly_add__47__keep_file_under_git___40__not_annex__41___in_v6_without_employing_.gitattributes/comment_9_5a22839b8dc11965a879dd2654bd5d60._comment
new file mode 100644
index 000000000..c2a1a3dc1
--- /dev/null
+++ b/doc/bugs/Too_difficult_if_not_impossible_to_explicitly_add__47__keep_file_under_git___40__not_annex__41___in_v6_without_employing_.gitattributes/comment_9_5a22839b8dc11965a879dd2654bd5d60._comment
@@ -0,0 +1,76 @@
+[[!comment format=mdwn
+ username="benjamin.poldrack@d09ccff6d42dd20277610b59867cf7462927b8e3"
+ nickname="benjamin.poldrack"
+ avatar="http://cdn.libravatar.org/avatar/5c1a901caa7c2cfeeb7e17e786c5230d"
+ subject="@joey: Sorry ..."
+ date="2017-09-14T12:00:09Z"
+ content="""
+... I somehow managed to miss your response. Now, since a somewhat related topic is emerging again with datalad, I looked into this one again.
+I reproduced, what I described before, but I noticed that it involves kind of an implicit upgrade from a V5 to V6 repository.
+
+First, let's have v5 repo with a file in git and a file in annex:
+
+
+    ben@tree /tmp % mkdir origin
+    ben@tree /tmp % cd origin
+    ben@tree /tmp/origin % git init
+    Initialized empty Git repository in /tmp/origin/.git/
+    ben@tree /tmp/origin % git annex init
+    init  ok
+    (recording state in git...)
+    ben@tree /tmp/origin % echo some > some
+    ben@tree /tmp/origin % git add some
+    ben@tree /tmp/origin % echo something different > annex
+    ben@tree /tmp/origin % git annex add annex
+    add annex ok
+    (recording state in git...)
+    ben@tree /tmp/origin % git commit -m \"initial\"
+    [master (root-commit) 8b96354] initial
+     2 files changed, 2 insertions(+)
+     create mode 120000 annex
+     create mode 100644 some
+    ben@tree /tmp/origin % ll
+    total 376
+    drwxr-xr-x  3 ben  ben    4096 Sep 14 13:34 .
+    drwxrwxrwt 24 root root 364544 Sep 14 13:33 ..
+    lrwxrwxrwx  1 ben  ben     180 Sep 14 13:34 annex -> .git/annex/objects/g7/4P/SHA256E-s20--b6105173f468fc7afa866aa469220cd56e5200db590be89922239a38631379c9/SHA256E-s20--b6105173f468fc7afa866aa469220cd56e5200db590be89922239a38631379c9
+    drwxr-xr-x  9 ben  ben    4096 Sep 14 13:34 .git
+    -rw-r--r--  1 ben  ben       5 Sep 14 13:34 some
+    ben@tree /tmp/origin % git ls-files
+    annex
+    some
+    ben@tree /tmp/origin % git annex find
+    annex
+
+Now, clone this repository:
+
+    ben@tree /tmp/origin % cd ..
+    ben@tree /tmp % git clone origin cloned
+    Cloning into 'cloned'...
+    done.
+    ben@tree /tmp % cd cloned
+
+And annex-init as a v6 repository:
+
+    ben@tree /tmp/cloned % git annex init --version=6
+    init  (merging origin/git-annex into git-annex...)
+    (recording state in git...)
+    (scanning for unlocked files...)
+    ok
+    (recording state in git...)
+    ben@tree /tmp/cloned % git status
+    On branch master
+    Your branch is up-to-date with 'origin/master'.
+
+    Changes not staged for commit:
+      (use \"git add <file>...\" to update what will be committed)
+      (use \"git checkout -- <file>...\" to discard changes in working directory)
+
+    	    modified:   some
+
+    no changes added to commit (use \"git add\" and/or \"git commit -a\")
+
+
+This kind of \"implicit\" upgrade might not be a common use case, but the result seems to be a bit weird nonetheless.
+
+"""]]

update
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index efa8b1c38..6a8c2687a 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -41,5 +41,6 @@ Low priority:
   still probably misses cases, where eg, content cycles among 3 files, and
   the same content among 3 other files. Is there a general algorithm?
 * webdav: When a file in a subdirectory gets deleted,
-  the webdav collection remains, empty. Need to check if the collection is
-  empty and delete it if so.
+  the webdav collection remains, empty. Could check if the collection is
+  empty and delete it if so, but that would have a race if another export
+  is running to the same webdav server.

work around box.com webdav rename bug
Apparently box.com renaming is just buggy. I tried a couple of fixes:
* In case the http Manager was opening multiple connections and reaching
different backend servers, I tried limiting the number of connections
to 1. Didn't help.
* To make sure it was not a http connection reuse problem, I tried
rewriting how exportAction works, so that the same http connection
is clearly open. Didn't help.
So, disable renaming of exports for box.com. It would be good to test it
with some other webdav server.
This commit was sponsored by John Peloquin on Patreon.
diff --git a/Remote/WebDAV.hs b/Remote/WebDAV.hs
index 5e853ae22..8c72365e6 100644
--- a/Remote/WebDAV.hs
+++ b/Remote/WebDAV.hs
@@ -201,9 +201,15 @@ checkPresentExportDav r mh _k loc = case mh of
 		either giveup return v
 
 renameExportDav :: Maybe DavHandle -> Key -> ExportLocation -> ExportLocation -> Annex Bool
-renameExportDav mh _k src dest = runExport mh $ \dav -> do
-	moveDAV (baseURL dav) (exportLocation src) (exportLocation dest)
-	return True
+renameExportDav Nothing _ _ _ = return False
+renameExportDav (Just h) _k src dest
+	-- box.com's DAV endpoint has buggy handling of renames,
+	-- so avoid renaming when using it.
+	| boxComUrl `isPrefixOf` baseURL h = return False
+	| otherwise = runExport (Just h) $ \dav -> do
+		maybe noop (void . mkColRecursive) (locationParent (exportLocation dest))
+		moveDAV (baseURL dav) (exportLocation src) (exportLocation dest)
+		return True
 
 runExport :: Maybe DavHandle -> (DavHandle -> DAVT IO Bool) -> Annex Bool
 runExport Nothing _ = return False
@@ -213,7 +219,10 @@ configUrl :: Remote -> Maybe URLString
 configUrl r = fixup <$> M.lookup "url" (config r)
   where
 	-- box.com DAV url changed
-	fixup = replace "https://www.box.com/dav/" "https://dav.box.com/dav/"
+	fixup = replace "https://www.box.com/dav/" boxComUrl
+
+boxComUrl :: URLString
+boxComUrl = "https://dav.box.com/dav/"
 
 type DavUser = B8.ByteString
 type DavPass = B8.ByteString
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index 45fc56995..efa8b1c38 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -40,12 +40,6 @@ Low priority:
   Run each pair in turn. Then run the current rename code. Although this
   still probably misses cases, where, e.g., content cycles among 3 files, and
   the same content among 3 other files. Is there a general algorithm?
-* Exporting to box.com via webdav, a rename of a file behaves
-  oddly. The rename to the temp file succeeds, but the rename of the temp
-  file to the final name fails.
-  Also, sometimes the delete of the temp file that's done as a fallback
-  fails to actually delete it.
-  Hypothesis: Those are done in separate http connections and it might be
-  talking to two different backend servers that are out of sync.
-  So, making export cache connections might help. Update: No, caching
-  connections did not solve it.
+* webdav: When a file in a subdirectory gets deleted,
+  the webdav collection remains behind, empty. Need to check if the
+  collection is empty, and delete it if so.

Added a comment
diff --git a/doc/bugs/fsck_does_not_detect__47__fix_some_key_directories_correctly/comment_3_c7d43b06f88d2000fcf574ebab971ae1._comment b/doc/bugs/fsck_does_not_detect__47__fix_some_key_directories_correctly/comment_3_c7d43b06f88d2000fcf574ebab971ae1._comment
new file mode 100644
index 000000000..162052573
--- /dev/null
+++ b/doc/bugs/fsck_does_not_detect__47__fix_some_key_directories_correctly/comment_3_c7d43b06f88d2000fcf574ebab971ae1._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="yarikoptic"
+ avatar="http://cdn.libravatar.org/avatar/f11e9c84cb18d26a1748c33b48c924b4"
+ subject="comment 3"
+ date="2017-09-13T17:58:54Z"
+ content="""
+yeap -- sharedrepository=1 is what I have.  Should be legit, right?
+"""]]

comment
diff --git a/doc/bugs/fsck_does_not_detect__47__fix_some_key_directories_correctly/comment_2_eb7c6445b1ca53d5552506d8ae93b5d4._comment b/doc/bugs/fsck_does_not_detect__47__fix_some_key_directories_correctly/comment_2_eb7c6445b1ca53d5552506d8ae93b5d4._comment
new file mode 100644
index 000000000..5d708f871
--- /dev/null
+++ b/doc/bugs/fsck_does_not_detect__47__fix_some_key_directories_correctly/comment_2_eb7c6445b1ca53d5552506d8ae93b5d4._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 2"""
+ date="2017-09-13T16:12:22Z"
+ content="""
+I think you must have core.sharedRepository set to group or all or
+something like that, otherwise fsck never complains about modes.
+"""]]

Added a comment
diff --git a/doc/bugs/fsck_does_not_detect__47__fix_some_key_directories_correctly/comment_1_1d627cb46d9ecdf0b526a9c8e9764011._comment b/doc/bugs/fsck_does_not_detect__47__fix_some_key_directories_correctly/comment_1_1d627cb46d9ecdf0b526a9c8e9764011._comment
new file mode 100644
index 000000000..92fbb3ba5
--- /dev/null
+++ b/doc/bugs/fsck_does_not_detect__47__fix_some_key_directories_correctly/comment_1_1d627cb46d9ecdf0b526a9c8e9764011._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="yarikoptic"
+ avatar="http://cdn.libravatar.org/avatar/f11e9c84cb18d26a1748c33b48c924b4"
+ subject="comment 1"
+ date="2017-09-13T14:32:57Z"
+ content="""
+also note that even though I am not the owner, I should have sufficient privileges to modify (I am a member of the group(s))!
+"""]]

crossed away comment on hash levels
diff --git a/doc/bugs/fsck_does_not_detect__47__fix_some_key_directories_correctly.mdwn b/doc/bugs/fsck_does_not_detect__47__fix_some_key_directories_correctly.mdwn
index 8ca57bc48..8d6e304e4 100644
--- a/doc/bugs/fsck_does_not_detect__47__fix_some_key_directories_correctly.mdwn
+++ b/doc/bugs/fsck_does_not_detect__47__fix_some_key_directories_correctly.mdwn
@@ -37,7 +37,7 @@ fsck sub003/anatomy/highres001.nii.gz
 
 """]]
 
-btw -- the same wrong permissions on the upper hash directories, and they do not get fixed/complained about at all
+~~btw -- the same wrong permissions on the upper hash directories, and they do not get fixed/complained about at all (that is ok)~~
 
 
 [[!meta author=yoh]]

initial finding about incorrect permissions ignored by fsck
diff --git a/doc/bugs/fsck_does_not_detect__47__fix_some_key_directories_correctly.mdwn b/doc/bugs/fsck_does_not_detect__47__fix_some_key_directories_correctly.mdwn
new file mode 100644
index 000000000..8ca57bc48
--- /dev/null
+++ b/doc/bugs/fsck_does_not_detect__47__fix_some_key_directories_correctly.mdwn
@@ -0,0 +1,43 @@
+### Please describe the problem.
+
+ATM I am chasing a problem where somehow one key "mutated", although I do not remember doing anything malicious; the file itself also seems to not be writable.
+So I decided to fsck, and only spotted a problem when some warnings started to appear that I am not the owner of the (key)file.  So I looked inside and found
+that all key dirs are writeable, BUT annex complains only about the ones where it can't change permissions, since they don't belong to me.
+
+
+### What version of git-annex are you using? On what operating system?
+
+6.20170815+gitg22da64d0f-1~ndall+1
+
+### Please provide any additional information below.
+
+[[!format sh """
+$> for f in sub00{1,3}/anatomy/highres001.nii.gz; do ls -ld $(realpath $f | xargs dirname ); git annex fsck $f; done 
+drwxrws--- 2 yoh famface 3 May 12 15:15 /data/famface/openfmri/data/.git/annex/objects/08/V3/SHA256E-s6498717--b850ac82ec9db2d399962609e9381d9c2bdf1f426012500b7005b173ea4d9102.nii.gz/
+fsck sub001/anatomy/highres001.nii.gz (checksum...) ok
+(recording state in git...)
+drwxrwsr-x 2 contematto famface 3 Jul 13  2015 /data/famface/openfmri/data/.git/annex/objects/x2/xX/SHA256E-s6592524--dccba651dc4cd104826a05a2efb6a257b39ca2d8b44d215027250221729f9434.nii.gz/
+fsck sub003/anatomy/highres001.nii.gz 
+  ** Unable to set correct write mode for .git/annex/objects/x2/xX/SHA256E-s6592524--dccba651dc4cd104826a05a2efb6a257b39ca2d8b44d215027250221729f9434.nii.gz/SHA256E-s6592524--dccba651dc4cd104826a05a2efb6a257b39ca2d8b44d215027250221729f9434.nii.gz ; perhaps you don't own that file
+(checksum...) ok
+(recording state in git...)
+
+# thought to see may be annex would complain then!?
+$> chmod a+rwx /data/famface/openfmri/data/.git/annex/objects/08/V3/SHA256E-s6498717--b850ac82ec9db2d399962609e9381d9c2bdf1f426012500b7005b173ea4d9102.nii.gz/
+
+$> for f in sub00{1,3}/anatomy/highres001.nii.gz; do ls -ld $(realpath $f | xargs dirname ); git annex fsck $f; done                                          
+drwxrwsrwx 2 yoh famface 3 May 12 15:15 /data/famface/openfmri/data/.git/annex/objects/08/V3/SHA256E-s6498717--b850ac82ec9db2d399962609e9381d9c2bdf1f426012500b7005b173ea4d9102.nii.gz/
+fsck sub001/anatomy/highres001.nii.gz (checksum...) ok
+(recording state in git...)
+drwxrwsr-x 2 contematto famface 3 Jul 13  2015 /data/famface/openfmri/data/.git/annex/objects/x2/xX/SHA256E-s6592524--dccba651dc4cd104826a05a2efb6a257b39ca2d8b44d215027250221729f9434.nii.gz/
+fsck sub003/anatomy/highres001.nii.gz 
+  ** Unable to set correct write mode for .git/annex/objects/x2/xX/SHA256E-s6592524--dccba651dc4cd104826a05a2efb6a257b39ca2d8b44d215027250221729f9434.nii.gz/SHA256E-s6592524--dccba651dc4cd104826a05a2efb6a257b39ca2d8b44d215027250221729f9434.nii.gz ; perhaps you don't own that file
+(checksum...) ok
+(recording state in git...)
+
+"""]]
+
+btw -- the same wrong permissions on the upper hash directories, and they do not get fixed/complained about at all
+
+
+[[!meta author=yoh]]

fix compaction of export.log
Old lines were not getting removed, because the tree graft confused
the updater, so it union merged from the previous git-annex branch,
which still contained the old lines. Fixed by carefully using setIndexSha.
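After a graft, the git-annex branch history should show the graft/cleanup commit pair this creates (a sketch; the shas are placeholders):

    % git log --oneline -2 git-annex
    d3adb3e graft cleanup
    c0ffee1 graft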
This commit was supported by the NSF-funded DataLad project.
diff --git a/Annex/Branch.hs b/Annex/Branch.hs
index 5214df627..5f3c71b1a 100644
--- a/Annex/Branch.hs
+++ b/Annex/Branch.hs
@@ -1,6 +1,6 @@
 {- management of the git-annex branch
  -
- - Copyright 2011-2016 Joey Hess <id@joeyh.name>
+ - Copyright 2011-2017 Joey Hess <id@joeyh.name>
  -
  - Licensed under the GNU GPL version 3 or higher.
  -}
@@ -23,8 +23,9 @@ module Annex.Branch (
 	forceCommit,
 	getBranch,
 	files,
-	withIndex,
+	graftTreeish,
 	performTransitions,
+	withIndex,
 ) where
 
 import qualified Data.ByteString.Lazy as L
@@ -46,6 +47,7 @@ import qualified Git.Sha
 import qualified Git.Branch
 import qualified Git.UnionMerge
 import qualified Git.UpdateIndex
+import qualified Git.Tree
 import Git.LsTree (lsTreeParams)
 import qualified Git.HashObject
 import Annex.HashObject
@@ -614,3 +616,25 @@ getMergedRefs' = do
 	parse l = 
 		let (s, b) = separate (== '\t') l
 		in (Ref s, Ref b)
+
+{- Grafts a treeish into the branch at the specified location,
+ - and then removes it. This ensures that the treeish won't get garbage
+ - collected, and will always be available as long as the git-annex branch
+ - is available. -}
+graftTreeish :: Git.Ref -> TopFilePath -> Annex ()
+graftTreeish treeish graftpoint = lockJournal $ \jl -> do
+	branchref <- getBranch
+	updateIndex jl branchref
+	Git.Tree.Tree t <- inRepo $ Git.Tree.getTree branchref
+	t' <- inRepo $ Git.Tree.recordTree $ Git.Tree.Tree $
+		Git.Tree.RecordedSubTree graftpoint treeish [] : t
+	c <- inRepo $ Git.Branch.commitTree Git.Branch.AutomaticCommit
+		"graft" [branchref] t'
+	origtree <- inRepo $ Git.Tree.recordTree (Git.Tree.Tree t)
+	c' <- inRepo $ Git.Branch.commitTree Git.Branch.AutomaticCommit
+		"graft cleanup" [c] origtree
+	inRepo $ Git.Branch.update' fullname c'
+	-- The tree in c' is the same as the tree in branchref,
+	-- and the index was updated to that above, so it's safe to
+	-- say that the index contains c'.
+	setIndexSha c'
diff --git a/Logs/Export.hs b/Logs/Export.hs
index 2bc1b1705..b0eddba7c 100644
--- a/Logs/Export.hs
+++ b/Logs/Export.hs
@@ -11,6 +11,7 @@ import qualified Data.Map as M
 
 import Annex.Common
 import qualified Annex.Branch
+import Annex.Journal
 import qualified Git
 import qualified Git.Branch
 import Git.Tree
@@ -97,7 +98,7 @@ recordExportBeginning remoteuuid newtree = do
 		showExportLog 
 			. changeMapLog c ep new
 			. parseExportLog
-	graftTreeish newtree
+	Annex.Branch.graftTreeish newtree (asTopFilePath "export.tree")
 
 parseExportLog :: String -> MapLog ExportParticipants Exported
 parseExportLog = parseMapLog parseExportParticipants parseExported
@@ -125,20 +126,3 @@ parseExported :: String -> Maybe Exported
 parseExported s = case words s of
 	(et:it) -> Just $ Exported (Git.Ref et) (map Git.Ref it)
 	_ -> Nothing
-
--- To prevent git-annex branch merge conflicts, the treeish is
--- first grafted in and then removed in a subsequent commit.
-graftTreeish :: Git.Ref -> Annex ()
-graftTreeish treeish = do
-	branchref <- Annex.Branch.getBranch
-	Tree t <- inRepo $ getTree branchref
-	t' <- inRepo $ recordTree $ Tree $
-		RecordedSubTree (asTopFilePath graftpoint) treeish [] : t
-	commit <- inRepo $ Git.Branch.commitTree Git.Branch.AutomaticCommit
-		"export tree" [branchref] t'
-	origtree <- inRepo $ recordTree (Tree t)
-	commit' <- inRepo $ Git.Branch.commitTree Git.Branch.AutomaticCommit
-		"export tree cleanup" [commit] origtree
-	inRepo $ Git.Branch.update' Annex.Branch.fullname commit'
-  where
-	graftpoint = "export.tree"
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index 69f3dd170..45fc56995 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -17,7 +17,6 @@ there need to be a new interface in supported remotes?
 
 Work is in progress. Todo list:
 
-* Compact the export.log to remove old entries.
 * `git annex get --from export` works in the repo that exported to it,
   but in another repo, the export db won't be populated, so it won't work.
   Maybe just show a useful error message in this case?  
@@ -25,7 +24,6 @@ Work is in progress. Todo list:
   export from another repository also doesn't work right, because the
   export database is not populated. So, seems that the export database needs
   to get populated based on the export log in these cases.
-* Support export to aditional special remotes (webdav etc)
 * Support export in the assistant (when eg setting up a S3 special remote).
   Would need git-annex sync to export to the master tree?
   This is similar to the little-used preferreddir= preferred content

devblog
diff --git a/doc/devblog/day_471__export_to_more_remotes.mdwn b/doc/devblog/day_471__export_to_more_remotes.mdwn
new file mode 100644
index 000000000..bf8c1e66a
--- /dev/null
+++ b/doc/devblog/day_471__export_to_more_remotes.mdwn
@@ -0,0 +1,13 @@
+Got `git annex export` working to webdav and rsync special remotes. Tested
+exporting to the Internet Archive via S3, and to box.com via webdav. Both
+had little weirdnesses in their handling of the protocols, which were
+worked around, and it's quite nice to be able to export trees to those
+services, as well as Amazon S3.
+
+Also added connection caching for exports, so S3 and webdav exports only
+make one http connection, instead of one per file.
+
+Had to change the format of `git-annex:export.log`; the old format didn't
+take into account that a repository can export to several different remotes.
+
+Today's work was supported by the NSF-funded DataLad project.

change export.log format to support multiple export remotes
This breaks backwards compatibility, but only with unreleased versions of
git-annex, which I think is acceptable.
This commit was supported by the NSF-funded DataLad project.
diff --git a/Logs/Export.hs b/Logs/Export.hs
index dc9952b86..2bc1b1705 100644
--- a/Logs/Export.hs
+++ b/Logs/Export.hs
@@ -17,7 +17,7 @@ import Git.Tree
 import Git.Sha
 import Git.FilePath
 import Logs
-import Logs.UUIDBased
+import Logs.MapLog
 import Annex.UUID
 
 data Exported = Exported
@@ -26,24 +26,30 @@ data Exported = Exported
 	}
 	deriving (Eq, Show)
 
+data ExportParticipants = ExportParticipants
+	{ exportFrom :: UUID
+	, exportTo :: UUID
+	}
+	deriving (Eq, Ord)
+
+data ExportChange = ExportChange
+	{ oldTreeish :: [Git.Ref]
+	, newTreeish :: Git.Ref
+	}
+
 -- | Get what's been exported to a special remote.
 --
 -- If the list contains multiple items, there was an export conflict,
 -- and different trees were exported to the same special remote.
 getExport :: UUID -> Annex [Exported]
-getExport remoteuuid = nub . mapMaybe get . M.elems . simpleMap 
-	. parseLogNew parseExportLog
+getExport remoteuuid = nub . mapMaybe get . M.toList . simpleMap 
+	. parseExportLog
 	<$> Annex.Branch.get exportLog
   where
-	get (ExportLog exported u)
-		| u == remoteuuid = Just exported
+	get (ep, exported)
+		| exportTo ep == remoteuuid = Just exported
 		| otherwise = Nothing
 
-data ExportChange = ExportChange
-	{ oldTreeish :: [Git.Ref]
-	, newTreeish :: Git.Ref
-	}
-
 -- | Record a change in what's exported to a special remote.
 --
 -- This is called before an export begins uploading new files to the
@@ -61,16 +67,17 @@ recordExport :: UUID -> ExportChange -> Annex ()
 recordExport remoteuuid ec = do
 	c <- liftIO currentVectorClock
 	u <- getUUID
-	let val = ExportLog (Exported (newTreeish ec) []) remoteuuid
+	let ep = ExportParticipants { exportFrom = u, exportTo = remoteuuid }
+	let exported = Exported (newTreeish ec) []
 	Annex.Branch.change exportLog $
-		showLogNew formatExportLog 
-			. changeLog c u val 
+		showExportLog
+			. changeMapLog c ep exported 
 			. M.mapWithKey (updateothers c u)
-			. parseLogNew parseExportLog
+			. parseExportLog
   where
-	updateothers c u theiru le@(LogEntry _ (ExportLog exported@(Exported { exportedTreeish = t }) remoteuuid'))
-		| u == theiru || remoteuuid' /= remoteuuid || t `notElem` oldTreeish ec = le
-		| otherwise = LogEntry c (ExportLog (exported { exportedTreeish = newTreeish ec }) theiru)
+	updateothers c u ep le@(LogEntry _ exported@(Exported { exportedTreeish = t }))
+		| u == exportFrom ep || remoteuuid /= exportTo ep || t `notElem` oldTreeish ec = le
+		| otherwise = LogEntry c (exported { exportedTreeish = newTreeish ec })
 
 -- | Record the beginning of an export, to allow cleaning up from
 -- interrupted exports.
@@ -80,29 +87,43 @@ recordExportBeginning :: UUID -> Git.Ref -> Annex ()
 recordExportBeginning remoteuuid newtree = do
 	c <- liftIO currentVectorClock
 	u <- getUUID
-	ExportLog old _ <- fromMaybe (ExportLog (Exported emptyTree []) remoteuuid)
-		. M.lookup u . simpleMap 
-		. parseLogNew parseExportLog
+	let ep = ExportParticipants { exportFrom = u, exportTo = remoteuuid }
+	old <- fromMaybe (Exported emptyTree [])
+		. M.lookup ep . simpleMap 
+		. parseExportLog
 		<$> Annex.Branch.get exportLog
 	let new = old { incompleteExportedTreeish = nub (newtree:incompleteExportedTreeish old) }
 	Annex.Branch.change exportLog $
-		showLogNew formatExportLog 
-			. changeLog c u (ExportLog new remoteuuid)
-			. parseLogNew parseExportLog
+		showExportLog 
+			. changeMapLog c ep new
+			. parseExportLog
 	graftTreeish newtree
 
-data ExportLog = ExportLog Exported UUID
+parseExportLog :: String -> MapLog ExportParticipants Exported
+parseExportLog = parseMapLog parseExportParticipants parseExported
+
+showExportLog :: MapLog ExportParticipants Exported -> String
+showExportLog = showMapLog formatExportParticipants formatExported
+
+formatExportParticipants :: ExportParticipants -> String
+formatExportParticipants ep = 
+	fromUUID (exportFrom ep) ++ ':' : fromUUID (exportTo ep)
 
-formatExportLog :: ExportLog -> String
-formatExportLog (ExportLog exported remoteuuid) = unwords $
-	[ Git.fromRef (exportedTreeish exported)
-	, fromUUID remoteuuid
-	] ++ map Git.fromRef (incompleteExportedTreeish exported)
+parseExportParticipants :: String -> Maybe ExportParticipants
+parseExportParticipants s = case separate (== ':') s of
+	("",_) -> Nothing
+	(_,"") -> Nothing
+	(f,t) -> Just $ ExportParticipants
+		{ exportFrom = toUUID f
+		, exportTo = toUUID t
+		}
+formatExported :: Exported -> String
+formatExported exported = unwords $ map Git.fromRef $
+	exportedTreeish exported : incompleteExportedTreeish exported
 
-parseExportLog :: String -> Maybe ExportLog
-parseExportLog s = case words s of
-	(et:u:it) -> Just $
-		ExportLog (Exported (Git.Ref et) (map Git.Ref it)) (toUUID u)
+parseExported :: String -> Maybe Exported
+parseExported s = case words s of
+	(et:it) -> Just $ Exported (Git.Ref et) (map Git.Ref it)
 	_ -> Nothing
 
 -- To prevent git-annex branch merge conflicts, the treeish is
diff --git a/doc/internals.mdwn b/doc/internals.mdwn
index ccf1e09b6..5abe7aa07 100644
--- a/doc/internals.mdwn
+++ b/doc/internals.mdwn
@@ -186,20 +186,22 @@ Tracks what trees have been exported to special remotes by
 [[git-annex-export]](1).
 
 Each line starts with a timestamp, then the uuid of the repository
-that exported to the special remote, followed by the sha1 of the tree
-that was exported, and then by the uuid of the special remote.
+that exported to the special remote, followed by a colon (`:`) and
+the uuid of the special remote. Then, separated by spaces,
+the sha1 of the tree that was exported, and optionally any number of
+subsequent sha1s, of trees that have started to be exported but whose
+export is not yet complete. 
 
-There can also be subsequent sha1s, of trees that have started to be
-exported but whose export is not yet complete. The sha1 of the exported
-tree can be the empty tree (4b825dc642cb6eb9a060e54bf8d69288fbee4904)
-in order to record the beginning of the first export.
+In order to record the beginning of the first export, where nothing
+has been exported yet, the sha1 of the exported tree can be
+the empty tree (4b825dc642cb6eb9a060e54bf8d69288fbee4904).
 
 For example:
 
-	1317929100.012345s e605dca6-446a-11e0-8b2a-002170d25c55 4b825dc642cb6eb9a060e54bf8d69288fbee4904 26339d22-446b-11e0-9101-002170d25c55 bb08b1abd207aeecccbc7060e523b011d80cb35b
-	1317929100.012345s e605dca6-446a-11e0-8b2a-002170d25c55 bb08b1abd207aeecccbc7060e523b011d80cb35b 26339d22-446b-11e0-9101-002170d25c55
-	1317929189.157237s e605dca6-446a-11e0-8b2a-002170d25c55 bb08b1abd207aeecccbc7060e523b011d80cb35b 26339d22-446b-11e0-9101-002170d25c55 7c7af825782b7c8706039b855c72709993542be4
-	1317923000.251111s e605dca6-446a-11e0-8b2a-002170d25c55 7c7af825782b7c8706039b855c72709993542be4 26339d22-446b-11e0-9101-002170d25c55
+	1317929100.012345s e605dca6-446a-11e0-8b2a-002170d25c55:26339d22-446b-11e0-9101-002170d25c55 4b825dc642cb6eb9a060e54bf8d69288fbee4904 bb08b1abd207aeecccbc7060e523b011d80cb35b
+	1317929100.012345s e605dca6-446a-11e0-8b2a-002170d25c55:26339d22-446b-11e0-9101-002170d25c55 bb08b1abd207aeecccbc7060e523b011d80cb35b 
+	1317929189.157237s e605dca6-446a-11e0-8b2a-002170d25c55:26339d22-446b-11e0-9101-002170d25c55 bb08b1abd207aeecccbc7060e523b011d80cb35b 7c7af825782b7c8706039b855c72709993542be4
+	1317923000.251111s e605dca6-446a-11e0-8b2a-002170d25c55:26339d22-446b-11e0-9101-002170d25c55 7c7af825782b7c8706039b855c72709993542be4 
 
 (The trees are also grafted into the git-annex branch, at
 `export.tree`, to prevent git from garbage collecting it. However, the head
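For reference, the log as stored on the git-annex branch can be inspected directly; a sketch, filtering for the exporter:remote pair from the example lines above:

    git cat-file -p git-annex:export.log | \
        grep 'e605dca6-446a-11e0-8b2a-002170d25c55:26339d22-446b-11e0-9101-002170d25c55'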
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index 0dde3a7d6..69f3dd170 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -17,9 +17,7 @@ there need to be a new interface in supported remotes?
 
 Work is in progress. Todo list:
 
-* The export.log parsing only works right when there's one export
-  remote. With 2, only the most recently exported-to remote is gotten
-  from the log.
+* Compact the export.log to remove old entries.
 * `git annex get --from export` works in the repo that exported to it,
   but in another repo, the export db won't be populated, so it won't work.
   Maybe just show a useful error message in this case?  

bug
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index cef5a749d..0dde3a7d6 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -17,6 +17,9 @@ there need to be a new interface in supported remotes?
 
 Work is in progress. Todo list:
 
+* The export.log parsing only works right when there's one export
+  remote. With 2, only the most recently exported-to remote is gotten
+  from the log.
 * `git annex get --from export` works in the repo that exported to it,
   but in another repo, the export db won't be populated, so it won't work.
   Maybe just show a useful error message in this case?  

export: cache connections for S3 and webdav
diff --git a/Command/Export.hs b/Command/Export.hs
index 8f1a6f149..611656581 100644
--- a/Command/Export.hs
+++ b/Command/Export.hs
@@ -89,15 +89,18 @@ seek o = do
 		-- or tag.
 		inRepo (Git.Ref.tree (exportTreeish o))
 	old <- getExport (uuid r)
-	recordExportBeginning (uuid r) new
 	db <- openDb (uuid r)
+	ea <- exportActions r
+	recordExportBeginning (uuid r) new
 	
+	liftIO $ print (old, new)
+
 	-- Clean up after incomplete export of a tree, in which
 	-- the next block of code below may have renamed some files to
 	-- temp files. Diff from the incomplete tree to the new tree,
 	-- and delete any temp files that the new tree can't use.
 	forM_ (concatMap incompleteExportedTreeish old) $ \incomplete ->
-		mapdiff (\diff -> startRecoverIncomplete r db (Git.DiffTree.srcsha diff) (Git.DiffTree.file diff))
+		mapdiff (\diff -> startRecoverIncomplete r ea db (Git.DiffTree.srcsha diff) (Git.DiffTree.file diff))
 			incomplete
 			new
 
@@ -115,15 +118,15 @@ seek o = do
 			seekdiffmap $ \(ek, (moldf, mnewf)) ->
 				case (moldf, mnewf) of
 					(Just oldf, Just _newf) ->
-						startMoveToTempName r db oldf ek
+						startMoveToTempName r ea db oldf ek
 					(Just oldf, Nothing) -> 
-						startUnexport' r db oldf ek
+						startUnexport' r ea db oldf ek
 					_ -> stop
 			-- Rename from temp to new files.
 			seekdiffmap $ \(ek, (moldf, mnewf)) ->
 				case (moldf, mnewf) of
 					(Just _oldf, Just newf) ->
-						startMoveFromTempName r db ek newf
+						startMoveFromTempName r ea db ek newf
 					_ -> stop
 		ts -> do
 			warning "Export conflict detected. Different trees have been exported to the same special remote. Resolving.."
@@ -139,7 +142,7 @@ seek o = do
 				-- Don't rename to temp, because the
 				-- content is unknown; delete instead.
 				mapdiff
-					(\diff -> startUnexport r db (Git.DiffTree.file diff) (unexportboth diff))
+					(\diff -> startUnexport r ea db (Git.DiffTree.file diff) (unexportboth diff))
 					oldtreesha new
 
 	-- Waiting until now to record the export guarantees that,
@@ -154,7 +157,7 @@ seek o = do
 
 	-- Export everything that is not yet exported.
 	(l, cleanup') <- inRepo $ Git.LsTree.lsTree new
-	seekActions $ pure $ map (startExport r db) l
+	seekActions $ pure $ map (startExport r ea db) l
 	void $ liftIO cleanup'
 
 	closeDb db
@@ -187,23 +190,24 @@ mkDiffMap old new = do
 		| sha == nullSha = return Nothing
 		| otherwise = Just <$> exportKey sha
 
-startExport :: Remote -> ExportHandle -> Git.LsTree.TreeItem -> CommandStart
-startExport r db ti = do
+startExport :: Remote -> ExportActions Annex -> ExportHandle -> Git.LsTree.TreeItem -> CommandStart
+startExport r ea db ti = do
 	ek <- exportKey (Git.LsTree.sha ti)
 	stopUnless (liftIO $ notElem loc <$> getExportLocation db (asKey ek)) $ do
 		showStart "export" f
-		next $ performExport r db ek (Git.LsTree.sha ti) loc
+		next $ performExport r ea db ek (Git.LsTree.sha ti) loc
   where
 	loc = ExportLocation $ toInternalGitPath f
 	f = getTopFilePath $ Git.LsTree.file ti
 
-performExport :: Remote -> ExportHandle -> ExportKey -> Sha -> ExportLocation -> CommandPerform
-performExport r db ek contentsha loc = do
-	let storer = storeExport $ exportActions r
+performExport :: Remote -> ExportActions Annex -> ExportHandle -> ExportKey -> Sha -> ExportLocation -> CommandPerform
+performExport r ea db ek contentsha loc = do
+	let storer = storeExport ea
 	sent <- case ek of
 		AnnexKey k -> ifM (inAnnex k)
 			( metered Nothing k $ \m -> do
-				let rollback = void $ performUnexport r db [ek] loc
+				let rollback = void $
+					performUnexport r ea db [ek] loc
 				sendAnnex k rollback
 					(\f -> storer f k loc m)
 			, do
@@ -227,29 +231,29 @@ cleanupExport r db ek loc = do
 	logChange (asKey ek) (uuid r) InfoPresent
 	return True
 
-startUnexport :: Remote -> ExportHandle -> TopFilePath -> [Git.Sha] -> CommandStart
-startUnexport r db f shas = do
+startUnexport :: Remote -> ExportActions Annex -> ExportHandle -> TopFilePath -> [Git.Sha] -> CommandStart
+startUnexport r ea db f shas = do
 	eks <- forM (filter (/= nullSha) shas) exportKey
 	if null eks
 		then stop
 		else do
 			showStart "unexport" f'
-			next $ performUnexport r db eks loc
+			next $ performUnexport r ea db eks loc
   where
 	loc = ExportLocation $ toInternalGitPath f'
 	f' = getTopFilePath f
 
-startUnexport' :: Remote -> ExportHandle -> TopFilePath -> ExportKey -> CommandStart
-startUnexport' r db f ek = do
+startUnexport' :: Remote -> ExportActions Annex -> ExportHandle -> TopFilePath -> ExportKey -> CommandStart
+startUnexport' r ea db f ek = do
 	showStart "unexport" f'
-	next $ performUnexport r db [ek] loc
+	next $ performUnexport r ea db [ek] loc
   where
 	loc = ExportLocation $ toInternalGitPath f'
 	f' = getTopFilePath f
 
-performUnexport :: Remote -> ExportHandle -> [ExportKey] -> ExportLocation -> CommandPerform
-performUnexport r db eks loc = do
-	ifM (allM (\ek -> removeExport (exportActions r) (asKey ek) loc) eks)
+performUnexport :: Remote -> ExportActions Annex -> ExportHandle -> [ExportKey] -> ExportLocation -> CommandPerform
+performUnexport r ea db eks loc = do
+	ifM (allM (\ek -> removeExport ea (asKey ek) loc) eks)
 		( next $ cleanupUnexport r db eks loc
 		, stop
 		)
@@ -269,47 +273,47 @@ cleanupUnexport r db eks loc = do
 			logChange (asKey ek) (uuid r) InfoMissing
 	return True
 
-startRecoverIncomplete :: Remote -> ExportHandle -> Git.Sha -> TopFilePath -> CommandStart
-startRecoverIncomplete r db sha oldf
+startRecoverIncomplete :: Remote -> ExportActions Annex -> ExportHandle -> Git.Sha -> TopFilePath -> CommandStart
+startRecoverIncomplete r ea db sha oldf
 	| sha == nullSha = stop
 	| otherwise = do
 		ek <- exportKey sha
 		let loc@(ExportLocation f) = exportTempName ek
 		showStart "unexport" f
 		liftIO $ removeExportLocation db (asKey ek) oldloc
-		next $ performUnexport r db [ek] loc
+		next $ performUnexport r ea db [ek] loc
   where
 	oldloc = ExportLocation $ toInternalGitPath oldf'
 	oldf' = getTopFilePath oldf
 
-startMoveToTempName :: Remote -> ExportHandle -> TopFilePath -> ExportKey -> CommandStart
-startMoveToTempName r db f ek = do
+startMoveToTempName :: Remote -> ExportActions Annex -> ExportHandle -> TopFilePath -> ExportKey -> CommandStart
+startMoveToTempName r ea db f ek = do
 	let tmploc@(ExportLocation tmpf) = exportTempName ek
 	showStart "rename" (f' ++ " -> " ++ tmpf)
-	next $ performRename r db ek loc tmploc
+	next $ performRename r ea db ek loc tmploc
   where
 	loc = ExportLocation $ toInternalGitPath f'
 	f' = getTopFilePath f
 
-startMoveFromTempName :: Remote -> ExportHandle -> ExportKey -> TopFilePath -> CommandStart
-startMoveFromTempName r db ek f = do
+startMoveFromTempName :: Remote -> ExportActions Annex -> ExportHandle -> ExportKey -> TopFilePath -> CommandStart
+startMoveFromTempName r ea db ek f = do
 	let tmploc@(ExportLocation tmpf) = exportTempName ek
 	stopUnless (liftIO $ elem tmploc <$> getExportLocation db (asKey ek)) $ do
 		showStart "rename" (tmpf ++ " -> " ++ f')
-		next $ performRename r db ek tmploc loc
+		next $ performRename r ea db ek tmploc loc
   where
 	loc = ExportLocation $ toInternalGitPath f'
 	f' = getTopFilePath f
 
-performRename :: Remote -> ExportHandle -> ExportKey -> ExportLocation -> ExportLocation -> CommandPerform
-performRename r db ek src dest = do
-	ifM (renameExport (exportActions r) (asKey ek) src dest)
+performRename :: Remote -> ExportActions Annex -> ExportHandle -> ExportKey -> ExportLocation -> ExportLocation -> CommandPerform
+performRename r ea db ek src dest = do
+	ifM (renameExport ea (asKey ek) src dest)
 		( next $ cleanupRename db ek src dest
 		-- In case the special remote does not support renaming,
 		-- unexport the src instead.
 		, do
 			warning "rename failed; deleting instead"
-			performUnexport r db [ek] src
+			performUnexport r ea db [ek] src
 		)
 
 cleanupRename :: ExportHandle -> ExportKey -> ExportLocation -> ExportLocation -> CommandCleanup
diff --git a/Logs/Export.hs b/Logs/Export.hs
index 2327d70d1..dc9952b86 100644
--- a/Logs/Export.hs
+++ b/Logs/Export.hs
@@ -24,7 +24,7 @@ data Exported = Exported
 	{ exportedTreeish :: Git.Ref

(Diff truncated)
more box.com strangeness
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index da69b3e66..9584d8904 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -44,6 +44,8 @@ Low priority:
 * Exporting to box.com via webdav, a rename of a file behaves
   oddly. The rename to the temp file succeeds, but the rename of the temp
   file to the final name fails.
+  Also, sometimes the delete of the temp file that's done as a fallback
+  fails to actually delete it.
   Hypothesis: Those are done in separate http connections and it might be
   talking to two different backend servers that are out of sync.
   So, making export cache connections might help.

document box.com rename problem
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index 43d4d0e8c..da69b3e66 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -41,3 +41,9 @@ Low priority:
   Run each pair in turn. Then run the current rename code. Although this
   still probably misses cases, where, e.g., content cycles among 3 files, and
   the same content among 3 other files. Is there a general algorithm?
+* Exporting to box.com via webdav, a rename of a file behaves
+  oddly. The rename to the temp file succeeds, but the rename of the temp
+  file to the final name fails.
+  Hypothesis: Those are done in separate http connections and it might be
+  talking to two different backend servers that are out of sync.
+  So, making export cache connections might help.

export to webdav
This basically works, but there's a bug when renaming a file that leaves
a .git-annex-temp-content-key file in the webdav store, which never gets
cleaned up.
Also, exporting files with spaces to box.com seems to fail; perhaps it
does not support them?
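A minimal repro sketch for the spaces problem (the remote name "box" is an assumption):

    echo test > 'file with spaces'
    git annex add 'file with spaces'
    git commit -m 'add file with spaces'
    git annex export master --to box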
This commit was supported by the NSF-funded DataLad project.
diff --git a/CHANGELOG b/CHANGELOG
index 4365ed9f9..e885d42f8 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -4,7 +4,7 @@ git-annex (6.20170819) UNRELEASED; urgency=medium
     exports of trees to special remotes.
   * Use git-annex initremote with exporttree=yes to set up a special remote
     for use by git-annex export.
-  * Implemented export to directory and S3 special remotes.
+  * Implemented export to directory, S3, and webdav special remotes.
   * External special remote protocol extended to support export.
   * Support building with feed-1.0, while still supporting older versions.
   * init: Display an additional message when it detects a filesystem that
diff --git a/Command/Export.hs b/Command/Export.hs
index d2ba53dd2..52355e69d 100644
--- a/Command/Export.hs
+++ b/Command/Export.hs
@@ -304,7 +304,9 @@ performRename r db ek src dest = do
 		( next $ cleanupRename db ek src dest
 		-- In case the special remote does not support renaming,
 		-- unexport the src instead.
-		, performUnexport r db [ek] src
+		, do
+			warning "rename failed; deleting instead"
+			performUnexport r db [ek] src
 		)
 
 cleanupRename :: ExportHandle -> ExportKey -> ExportLocation -> ExportLocation -> CommandCleanup
diff --git a/Remote/WebDAV.hs b/Remote/WebDAV.hs
index 4cc3c92e0..04eb35cef 100644
--- a/Remote/WebDAV.hs
+++ b/Remote/WebDAV.hs
@@ -1,6 +1,6 @@
 {- WebDAV remotes.
  -
- - Copyright 2012-2014 Joey Hess <id@joeyh.name>
+ - Copyright 2012-2017 Joey Hess <id@joeyh.name>
  -
  - Licensed under the GNU GPL version 3 or higher.
  -}
@@ -15,7 +15,7 @@ import qualified Data.Map as M
 import qualified Data.ByteString.Lazy as L
 import qualified Data.ByteString.UTF8 as B8
 import qualified Data.ByteString.Lazy.UTF8 as L8
-import Network.HTTP.Client (HttpException(..))
+import Network.HTTP.Client (HttpException(..), RequestBody)
 import Network.HTTP.Types
 import System.IO.Error
 import Control.Monad.Catch
@@ -46,7 +46,7 @@ remote = RemoteType
 	, enumerate = const (findSpecialRemotes "webdav")
 	, generate = gen
 	, setup = webdavSetup
-	, exportSupported = exportUnsupported
+	, exportSupported = exportIsSupported
 	}
 
 gen :: Git.Repo -> UUID -> RemoteConfig -> RemoteGitConfig -> Annex (Maybe Remote)
@@ -70,7 +70,13 @@ gen r u c gc = new <$> remoteCost gc expensiveRemoteCost
 			, lockContent = Nothing
 			, checkPresent = checkPresentDummy
 			, checkPresentCheap = False
-			, exportActions = exportUnsupported
+			, exportActions = ExportActions
+				{ storeExport = storeExportDav this
+				, retrieveExport = retrieveExportDav this
+				, removeExport = removeExportDav this
+				, checkPresentExport = checkPresentExportDav this
+				, renameExport = renameExportDav this
+				}
 			, whereisKey = Nothing
 			, remoteFsck = Nothing
 			, repairRepo = Nothing
@@ -114,17 +120,21 @@ store (LegacyChunks chunksize) (Just dav) = fileStorer $ \k f p -> liftIO $
 store _  (Just dav) = httpStorer $ \k reqbody -> liftIO $ goDAV dav $ do
 	let tmp = keyTmpLocation k
 	let dest = keyLocation k
+	storeHelper dav tmp dest reqbody
+	return True
+
+storeHelper :: DavHandle -> DavLocation -> DavLocation -> RequestBody -> DAVT IO ()
+storeHelper dav tmp dest reqbody = do
 	void $ mkColRecursive tmpDir
 	inLocation tmp $
 		putContentM' (contentType, reqbody)
-	finalizeStore (baseURL dav) tmp dest
-	return True
+	finalizeStore dav tmp dest
 
-finalizeStore :: URLString -> DavLocation -> DavLocation -> DAVT IO ()
-finalizeStore baseurl tmp dest = do
+finalizeStore :: DavHandle -> DavLocation -> DavLocation -> DAVT IO ()
+finalizeStore dav tmp dest = do
 	inLocation dest $ void $ safely $ delContentM
 	maybe noop (void . mkColRecursive) (locationParent dest)
-	moveDAV baseurl tmp dest
+	moveDAV (baseURL dav) tmp dest
 
 retrieveCheap :: Key -> AssociatedFile -> FilePath -> Annex Bool
 retrieveCheap _ _ _ = return False
@@ -133,26 +143,29 @@ retrieve :: ChunkConfig -> Maybe DavHandle -> Retriever
 retrieve _ Nothing = giveup "unable to connect"
 retrieve (LegacyChunks _) (Just dav) = retrieveLegacyChunked dav
 retrieve _ (Just dav) = fileRetriever $ \d k p -> liftIO $
-	goDAV dav $
-		inLocation (keyLocation k) $
-			withContentM $
-				httpBodyRetriever d p
+	goDAV dav $ retrieveHelper (keyLocation k) d p
+
+retrieveHelper :: DavLocation -> FilePath -> MeterUpdate -> DAVT IO ()
+retrieveHelper loc d p = inLocation loc $
+	withContentM $ httpBodyRetriever d p
 
 remove :: Maybe DavHandle -> Remover
 remove Nothing _ = return False
-remove (Just dav) k = liftIO $ do
+remove (Just dav) k = liftIO $ goDAV dav $
 	-- Delete the key's whole directory, including any
 	-- legacy chunked files, etc, in a single action.
-	let d = keyDir k
-	goDAV dav $ do
-		v <- safely $ inLocation d delContentM
-		case v of
-			Just _ -> return True
-			Nothing -> do
-				v' <- existsDAV d
-				case v' of
-					Right False -> return True
-					_ -> return False
+	removeHelper (keyDir k)
+
+removeHelper :: DavLocation -> DAVT IO Bool
+removeHelper d = do
+	v <- safely $ inLocation d delContentM
+	case v of
+		Just _ -> return True
+		Nothing -> do
+			v' <- existsDAV d
+			case v' of
+				Right False -> return True
+				_ -> return False
 
 checkKey :: Remote -> ChunkConfig -> Maybe DavHandle -> CheckPresent
 checkKey r _ Nothing _ = giveup $ name r ++ " not configured"
@@ -165,6 +178,38 @@ checkKey r chunkconfig (Just dav) k = do
 				existsDAV (keyLocation k)
 			either giveup return v
 
+storeExportDav :: Remote -> FilePath -> Key -> ExportLocation -> MeterUpdate -> Annex Bool
+storeExportDav r f _k loc p = runExport r $ \dav -> do
+	reqbody <- liftIO $ httpBodyStorer f p
+	storeHelper dav (exportTmpLocation loc) (exportLocation loc) reqbody
+	return True
+
+retrieveExportDav :: Remote -> Key -> ExportLocation -> FilePath -> MeterUpdate -> Annex Bool
+retrieveExportDav r _k loc d p = runExport r $ \_dav -> do
+	retrieveHelper (exportLocation loc) d p
+	return True
+
+removeExportDav :: Remote -> Key -> ExportLocation -> Annex Bool
+removeExportDav r _k loc = runExport r $ \_dav ->
+	removeHelper (exportLocation loc)
+
+checkPresentExportDav :: Remote -> Key -> ExportLocation -> Annex Bool
+checkPresentExportDav r _k loc = withDAVHandle r $ \mh -> case mh of
+	Nothing -> giveup $ name r ++ " not configured"
+	Just h -> liftIO $ do
+		v <- goDAV h $ existsDAV (exportLocation loc)
+		either giveup return v
+
+renameExportDav :: Remote -> Key -> ExportLocation -> ExportLocation -> Annex Bool
+renameExportDav r _k src dest = runExport r $ \dav -> do
+	moveDAV (baseURL dav) (exportLocation src) (exportLocation dest)
+	return True
+
+runExport :: Remote -> (DavHandle -> DAVT IO Bool) -> Annex Bool
+runExport r a = withDAVHandle r $ \mh -> case mh of
+	Nothing -> return False
+	Just h -> fromMaybe False <$> liftIO (goDAV h $ safely (a h))
+
 configUrl :: Remote -> Maybe URLString
 configUrl r = fixup <$> M.lookup "url" (config r)
   where
@@ -278,7 +323,6 @@ existsDAV l = inLocation l check `catchNonAsync` (\e -> return (Left $ show e))
 			(const $ ispresent False)
 	ispresent = return . Right
 
--- Ignores any exceptions when performing a DAV action.
 safely :: DAVT IO a -> DAVT IO (Maybe a)
 safely = eitherToMaybe <$$> tryNonAsync
 
@@ -351,7 +395,7 @@ storeLegacyChunked chunksize k dav b =
 	storer locs = Legacy.storeChunked chunksize locs storehttp b
 	recorder l s = storehttp l (L8.fromString s)
 	finalizer tmp' dest' = goDAV dav $ 
-		finalizeStore (baseURL dav) tmp' (fromJust $ locationParent dest')
+		finalizeStore dav tmp' (fromJust $ locationParent dest')
 
 	tmp = addTrailingPathSeparator $ keyTmpLocation k

(Diff truncated)
stop warning about removals from IA
In a test, I uploaded a pdf, and several files were derived from it.
After removing the pdf, the derived files went away after approximately
half an hour. This window does not seem worth warning about every time.
Documented it in the tip.
diff --git a/CHANGELOG b/CHANGELOG
index b4a80b2aa..4365ed9f9 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -9,8 +9,7 @@ git-annex (6.20170819) UNRELEASED; urgency=medium
   * Support building with feed-1.0, while still supporting older versions.
   * init: Display an additional message when it detects a filesystem that
     allows writing to files whose write bit is not set.
-  * S3: Allow removing files from IA, but warn about derived versions
-    potentially still existing there.
+  * S3: Allow removing files from IA.
 
  -- Joey Hess <id@joeyh.name>  Mon, 28 Aug 2017 12:20:59 -0400
 
diff --git a/Remote/S3.hs b/Remote/S3.hs
index bfb80b61f..c8092a4c9 100644
--- a/Remote/S3.hs
+++ b/Remote/S3.hs
@@ -278,18 +278,11 @@ retrieveCheap _ _ _ = return False
  - While it may remove the file, there are generally other files
  - derived from it that it does not remove. -}
 remove :: S3Info -> S3Handle -> Remover
-remove info h k = warnIARemoval info $ do
+remove info h k = do
 	res <- tryNonAsync $ sendS3Handle h $
 		S3.DeleteObject (T.pack $ bucketObject info k) (bucket info)
 	return $ either (const False) (const True) res
 
-warnIARemoval :: S3Info -> Annex a -> Annex a
-warnIARemoval info a
-	| isIA info = do
-		warning "Derived versions of removed file may still be present in the Internet Archive"
-		a
-	| otherwise = a
-
 checkKey :: Remote -> S3Info -> Maybe S3Handle -> CheckPresent
 checkKey r info Nothing k = case getpublicurl info of
 	Nothing -> do
@@ -345,7 +338,7 @@ retrieveExportS3 r info _k loc f p =
 		return True
 
 removeExportS3 :: Remote -> S3Info -> Key -> ExportLocation -> Annex Bool
-removeExportS3 r info _k loc = warnIARemoval info $
+removeExportS3 r info _k loc = 
 	catchNonAsync go (\e -> warning (show e) >> return False)
   where
 	go = withS3Handle (config r) (gitconfig r) (uuid r) $ \h -> do
diff --git a/doc/tips/Internet_Archive_via_S3.mdwn b/doc/tips/Internet_Archive_via_S3.mdwn
index be802b5b2..ba3c75891 100644
--- a/doc/tips/Internet_Archive_via_S3.mdwn
+++ b/doc/tips/Internet_Archive_via_S3.mdwn
@@ -51,15 +51,15 @@ Then you can annex files and copy them to the remote as usual:
 	# git annex copy photo1.jpeg --fast --to archive-panama
 	copy (to archive-panama...) ok
 
+## update lag
+
 It may take a while for archive.org to make files publicly visible after
 they've been uploaded.
 
-## removing files
-
 While files can be removed from the Internet Archive, 
 [derived versions](https://archive.org/help/derivatives.php)
-of some files may continue to be stored there after the originals
-were removed. git-annex warns about this problem.
+of some files may continue to be stored there for a while
+after the originals were removed.
 
 ## exporting trees
 

S3: Allow removing files from IA, but warn about derived versions potentially still existing there.
Removal works; only derived versions are a potential issue, so allow removing
with a warning. This way, unexporting a file works, and behavior is
consistent with IA remotes whether or not exporttree=yes.
Also tested exporting filenames containing unicode, spaces, underscores.
All worked, despite the IA's faq saying they are not supported.
This commit was sponsored by Trenton Cronholm on Patreon.
diff --git a/CHANGELOG b/CHANGELOG
index 137e4e970..b4a80b2aa 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -9,6 +9,8 @@ git-annex (6.20170819) UNRELEASED; urgency=medium
   * Support building with feed-1.0, while still supporting older versions.
   * init: Display an additional message when it detects a filesystem that
     allows writing to files whose write bit is not set.
+  * S3: Allow removing files from IA, but warn about derived versions
+    potentially still existing there.
 
  -- Joey Hess <id@joeyh.name>  Mon, 28 Aug 2017 12:20:59 -0400
 
diff --git a/Remote/S3.hs b/Remote/S3.hs
index c7b72def5..396d2c388 100644
--- a/Remote/S3.hs
+++ b/Remote/S3.hs
@@ -278,14 +278,17 @@ retrieveCheap _ _ _ = return False
  - While it may remove the file, there are generally other files
  - derived from it that it does not remove. -}
 remove :: S3Info -> S3Handle -> Remover
-remove info h k
+remove info h k = warnIARemoval info $ do
+	res <- tryNonAsync $ sendS3Handle h $
+		S3.DeleteObject (T.pack $ bucketObject info k) (bucket info)
+	return $ either (const False) (const True) res
+
+warnIARemoval :: S3Info -> Annex a -> Annex a
+warnIARemoval info a
 	| isIA info = do
-		warning "Cannot remove content from the Internet Archive"
-		return False
-	| otherwise = do
-		res <- tryNonAsync $ sendS3Handle h $
-			S3.DeleteObject (T.pack $ bucketObject info k) (bucket info)
-		return $ either (const False) (const True) res
+		warning "Derived versions of removed file may still be present in the Internet Archive"
+		a
+	| otherwise = a
 
 checkKey :: Remote -> S3Info -> Maybe S3Handle -> CheckPresent
 checkKey r info Nothing k = case getpublicurl info of
@@ -342,7 +345,7 @@ retrieveExportS3 r info _k loc f p =
 		return True
 
 removeExportS3 :: Remote -> S3Info -> Key -> ExportLocation -> Annex Bool
-removeExportS3 r info _k loc = 
+removeExportS3 r info _k loc = warnIARemoval info $
 	catchNonAsync go (\e -> warning (show e) >> return False)
   where
 	go = withS3Handle (config r) (gitconfig r) (uuid r) $ \h -> do
@@ -620,9 +623,9 @@ getBucketObject c = munge . key2file
 getBucketExportLocation :: RemoteConfig -> ExportLocation -> FilePath
 getBucketExportLocation c (ExportLocation loc) = getFilePrefix c ++ loc
 
-{- Internet Archive limits filenames to a subset of ascii,
- - with no whitespace. Other characters are xml entity
- - encoded. -}
+{- Internet Archive documentation limits filenames to a subset of ascii.
+ - While other characters seem to work now, this entity-encodes everything
+ - else to avoid problems. -}
 iaMunge :: String -> String
 iaMunge = (>>= munge)
   where
diff --git a/doc/tips/Internet_Archive_via_S3.mdwn b/doc/tips/Internet_Archive_via_S3.mdwn
index 20d14bdec..be802b5b2 100644
--- a/doc/tips/Internet_Archive_via_S3.mdwn
+++ b/doc/tips/Internet_Archive_via_S3.mdwn
@@ -11,9 +11,10 @@ comply with their [terms of service](http://www.archive.org/about/terms.php).
 A nice added feature is that whenever git-annex sends a file to the
 Internet Archive, it records its url, the same as if you'd run `git annex
 addurl`. So any users who can clone your repository can download the files
-from archive.org, without needing any login or password info. This makes
-the Internet Archive a nice way to publish the large files associated with
-a public git repository.
+from archive.org, without needing any login or password info. 
+The url to the content in the Internet Archive is also displayed by
+`git annex whereis`. This makes the Internet Archive a nice way to
+publish the large files associated with a public git repository.
 
 ## webapp setup
 
@@ -50,10 +51,15 @@ Then you can annex files and copy them to the remote as usual:
 	# git annex copy photo1.jpeg --fast --to archive-panama
 	copy (to archive-panama...) ok
 
-Once a file has been stored on archive.org, it cannot be (easily) removed
-from it. Also, git-annex whereis will tell you a public url for the file
-on archive.org. (It may take a while for archive.org to make the file
-publicly visible.)
+It may take a while for archive.org to make files publicly visible after
+they've been uploaded.
+
+## removing files
+
+While files can be removed from the Internet Archive, 
+[derived versions](https://archive.org/help/derivatives.php)
+of some files may continued to be stored there after the originals
+were removed. git-annex warns about this problem.
 
 ## exporting trees
 
@@ -63,6 +69,7 @@ are important, you can run `git annex initremote` with an additional
 parameter "exporttree=yes", and then use [[git-annex-export]] to publish
 a tree of files to the Internet Archive.
 
-Note that the Internet Archive does not support filenames containing
-whitespace and some other characters. Exporting such problem filenames will
-fail; you can rename the file and re-export.
+Note that the Internet Archive may not support certain characters
+in filenames ([see FAQ](http://archive.org/about/faqs.php#1099)).
+If exporting a filename fails due to such limitations, you would need
+to rename it in your git annex repository in order to export it.
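The rename-and-retry flow looks roughly like this (a sketch; the remote name matches the example above):

    git mv 'bad file name' bad_file_name
    git commit -m 'rename for export'
    git annex export master --to archive-panama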
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index ac77b3d72..43d4d0e8c 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -29,8 +29,6 @@ Work is in progress. Todo list:
   Would need git-annex sync to export to the master tree?
   This is similar to the little-used preferreddir= preferred content
   setting and the "public" repository group.
-* Test export to IA via S3. In particualar, does removing an exported file
-  work?
 
 Low priority:
 

Added a comment: Issue also affects Samsung devices, git unaffected
diff --git a/doc/bugs/android__58___cannot_link_executable/comment_4_1e86ba33f6b709bf8bc72b70adbc73dd._comment b/doc/bugs/android__58___cannot_link_executable/comment_4_1e86ba33f6b709bf8bc72b70adbc73dd._comment
new file mode 100644
index 000000000..ea86f4d99
--- /dev/null
+++ b/doc/bugs/android__58___cannot_link_executable/comment_4_1e86ba33f6b709bf8bc72b70adbc73dd._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ username="https://christian.amsuess.com/chrysn"
+ avatar="http://christian.amsuess.com/avatar/c6c0d57d63ac88f3541522c4b21198c3c7169a665a2f2d733b4f78670322ffdc"
+ subject="Issue also affects Samsung devices, git unaffected"
+ date="2017-09-11T18:07:51Z"
+ content="""
+I'm experiencing this on a Samsung SM-T813 (arm64) with Android 7.0.
+
+Running `git` commands or busybox commands in the shipped shell works, this seems to affect the git-annex binary only.
+"""]]

Added a comment: The initremote command appears to hang due to low entropy
diff --git a/doc/tips/using_Amazon_S3/comment_15_d8cc20706debc17f4f738d2019577dea._comment b/doc/tips/using_Amazon_S3/comment_15_d8cc20706debc17f4f738d2019577dea._comment
new file mode 100644
index 000000000..1bc4ed866
--- /dev/null
+++ b/doc/tips/using_Amazon_S3/comment_15_d8cc20706debc17f4f738d2019577dea._comment
@@ -0,0 +1,25 @@
+[[!comment format=mdwn
+ username="NathanCollins"
+ avatar="http://cdn.libravatar.org/avatar/8354544a22bb5a0ac8005ca008f94ad1"
+ subject="The initremote command appears to hang due to low entropy"
+ date="2017-09-10T02:32:26Z"
+ content="""
+For me, the `git annex initremote amazon-s3 encryption=shared embedcreds=yes` [1] command hung for several minutes after printing
+
+    initremote amazon-s3 (encryption setup)
+
+Turns out the problem was that I was low on entropy. Figured this out by running
+
+    gpg --gen-random 2
+
+per [this bug comment](https://github.com/DanielDent/git-annex-remote-rclone/issues/6#issuecomment-231347642). According to [this blog post](https://delightlylinux.wordpress.com/2015/07/01/is-gpg-hanging-when-generating-a-key/) a solution is to
+
+    sudo aptitude install rng-tools
+    sudo rngd -r /dev/urandom
+
+The `git annex initremote` command had finished by the time I found that solution, but I verified that it made `gpg --gen-random 2` work.
+
+System: Ubuntu 16.04 with Git Annex 5.20151208-1build1 installed via package manager.
+
+[1] I'm using AWS credentials that are restricted to a specific bucket, so I'm not worried about the conjunction `encryption=shared` and `embedcreds=yes`.
+"""]]

diff --git a/doc/bugs/git_annex_test_fails.mdwn b/doc/bugs/git_annex_test_fails.mdwn
new file mode 100644
index 000000000..aba378b41
--- /dev/null
+++ b/doc/bugs/git_annex_test_fails.mdwn
@@ -0,0 +1,28 @@
+### Please describe the problem.
+git annex test fails outside a git repository.
+
+[[!format sh """
+$ git annex test
+git-annex: Not in a git repository.
+"""]]
+
+and then some tests fail once you work around that.
+[[!format sh """
+Exception: getCurrentDirectory:getWorkingDirectory: resource exhausted (Too many open files)
+"""]]
+
+### What steps will reproduce the problem?
+Run `git annex test`.
+
+### What version of git-annex are you using? On what operating system?
+HEAD at 425a3a1 built with GHC 8.2.1.
+
+### Please provide any additional information below.
+
+Full log is here: https://gist.github.com/ilovezfs/1ed886b43d534b239be25f4aa8b7394e
+
+### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
+
+Yes!
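A workaround sketch for the first failure is running the tests from inside a throwaway git repository; for the open-files failure, raising the descriptor limit may help (an assumption):

    git init /tmp/annex-test && cd /tmp/annex-test
    ulimit -n 4096
    git annex test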

close
diff --git a/doc/bugs/Build_failure_with_feed___62____61___1.0.0.0.mdwn b/doc/bugs/Build_failure_with_feed___62____61___1.0.0.0.mdwn
index e11c98268..2b7bc53de 100644
--- a/doc/bugs/Build_failure_with_feed___62____61___1.0.0.0.mdwn
+++ b/doc/bugs/Build_failure_with_feed___62____61___1.0.0.0.mdwn
@@ -178,3 +178,4 @@ https://gist.githubusercontent.com/anonymous/dcc8d9823bd50b7fca10d5cf8961e75d/ra
 ### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
 Yes!
 
+> I fixed this last week, in [[!commit ee2f096e3ba3aea24445ff2093b426b68e000cc2]]. [[done]]

diff --git a/doc/bugs/Build_failure_with_feed___62____61___1.0.0.0.mdwn b/doc/bugs/Build_failure_with_feed___62____61___1.0.0.0.mdwn
new file mode 100644
index 000000000..e11c98268
--- /dev/null
+++ b/doc/bugs/Build_failure_with_feed___62____61___1.0.0.0.mdwn
@@ -0,0 +1,180 @@
+### Please describe the problem.
+git-annex cannot build with feed 1.0.0.0, which was uploaded to Hackage on Sat Aug 26 23:56:18 UTC 2017 by AdamBergmark.
+
+### What steps will reproduce the problem?
+Try to build git-annex with cabal install in a cabal sandbox.
+
+### What version of git-annex are you using? On what operating system?
+6.20170818
+
+### Please provide any additional information below.
+
+[[!format sh """
+[416 of 562] Compiling Command.ImportFeed ( Command/ImportFeed.hs, dist/dist-sandbox-b4eb11e/build/git-annex/git-annex-tmp/Command/ImportFeed.o )
+
+Command/ImportFeed.hs:139:61: error:
+    • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+      Expected type: URLString
+        Actual type: Text.Atom.Feed.URI
+    • In the first argument of ‘Enclosure’, namely ‘enclosureurl’
+      In the second argument of ‘($)’, namely ‘Enclosure enclosureurl’
+      In the second argument of ‘($)’, namely
+        ‘ToDownload f u i $ Enclosure enclosureurl’
+
+Command/ImportFeed.hs:142:49: error:
+    • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+      Expected type: URLString
+        Actual type: Data.Text.Internal.Text
+    • In the first argument of ‘quviSupported’, namely ‘link’
+      In the first argument of ‘ifM’, namely ‘(quviSupported link)’
+      In the expression:
+        ifM
+          (quviSupported link)
+          (return $ Just $ ToDownload f u i $ QuviLink link, return Nothing)
+
+Command/ImportFeed.hs:143:71: error:
+    • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+      Expected type: URLString
+        Actual type: Data.Text.Internal.Text
+    • In the first argument of ‘QuviLink’, namely ‘link’
+      In the second argument of ‘($)’, namely ‘QuviLink link’
+      In the second argument of ‘($)’, namely
+        ‘ToDownload f u i $ QuviLink link’
+
+Command/ImportFeed.hs:214:54: error:
+    • Couldn't match type ‘[Char]’ with ‘Data.Text.Internal.Text’
+      Expected type: S.Set Data.Text.Internal.Text
+        Actual type: S.Set ItemId
+    • In the second argument of ‘S.member’, namely ‘(knownitems cache)’
+      In the expression: S.member itemid (knownitems cache)
+      In a case alternative:
+          Just (_, itemid) -> S.member itemid (knownitems cache)
+
+Command/ImportFeed.hs:279:42: error:
+    • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+      Expected type: Maybe [Char]
+        Actual type: Maybe Text.RSS.Syntax.DateString
+    • In the second argument of ‘(<$>)’, namely
+        ‘getItemPublishDateString itm’
+      In the expression: replace "/" "-" <$> getItemPublishDateString itm
+      In a case alternative:
+          _ -> replace "/" "-" <$> getItemPublishDateString itm
+
+Command/ImportFeed.hs:293:44: error:
+    • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+      Expected type: String
+        Actual type: Data.Text.Internal.Text
+    • In the first argument of ‘toMetaValue’, namely ‘itemid’
+      In the second argument of ‘($)’, namely ‘toMetaValue itemid’
+      In the second argument of ‘M.singleton’, namely
+        ‘(S.singleton $ toMetaValue itemid)’
+
+Command/ImportFeed.hs:299:26: error:
+    • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+      Expected type: Maybe String
+        Actual type: Maybe Data.Text.Internal.Text
+    • In the expression: feedtitle
+      In the expression: [feedtitle]
+      In the expression: ("feedtitle", [feedtitle])
+
+Command/ImportFeed.hs:300:26: error:
+    • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+      Expected type: Maybe String
+        Actual type: Maybe Data.Text.Internal.Text
+    • In the expression: itemtitle
+      In the expression: [itemtitle]
+      In the expression: ("itemtitle", [itemtitle])
+
+Command/ImportFeed.hs:301:27: error:
+    • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+      Expected type: Maybe String
+        Actual type: Maybe Data.Text.Internal.Text
+    • In the expression: feedauthor
+      In the expression: [feedauthor]
+      In the expression: ("feedauthor", [feedauthor])
+
+Command/ImportFeed.hs:302:27: error:
+    • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+      Expected type: Maybe String
+        Actual type: Maybe Data.Text.Internal.Text
+    • In the expression: itemauthor
+      In the expression: [itemauthor]
+      In the expression: ("itemauthor", [itemauthor])
+
+Command/ImportFeed.hs:303:28: error:
+    • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+      Expected type: Maybe String
+        Actual type: Maybe Data.Text.Internal.Text
+    • In the expression: getItemSummary $ item i
+      In the expression: [getItemSummary $ item i]
+      In the expression: ("itemsummary", [getItemSummary $ item i])
+
+Command/ImportFeed.hs:304:32: error:
+    • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+      Expected type: Maybe String
+        Actual type: Maybe Data.Text.Internal.Text
+    • In the expression: getItemDescription $ item i
+      In the expression: [getItemDescription $ item i]
+      In the expression:
+        ("itemdescription", [getItemDescription $ item i])
+
+Command/ImportFeed.hs:305:27: error:
+    • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+      Expected type: Maybe String
+        Actual type: Maybe Data.Text.Internal.Text
+    • In the expression: getItemRights $ item i
+      In the expression: [getItemRights $ item i]
+      In the expression: ("itemrights", [getItemRights $ item i])
+
+Command/ImportFeed.hs:306:23: error:
+    • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+      Expected type: Maybe String
+        Actual type: Maybe Data.Text.Internal.Text
+    • In the expression: snd <$> getItemId (item i)
+      In the expression: [snd <$> getItemId (item i)]
+      In the expression: ("itemid", [snd <$> getItemId (item i)])
+
+Command/ImportFeed.hs:307:22: error:
+    • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+      Expected type: Maybe String
+        Actual type: Maybe Data.Text.Internal.Text
+    • In the expression: itemtitle
+      In the expression: [itemtitle, feedtitle]
+      In the expression: ("title", [itemtitle, feedtitle])
+
+Command/ImportFeed.hs:307:33: error:
+    • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+      Expected type: Maybe String
+        Actual type: Maybe Data.Text.Internal.Text
+    • In the expression: feedtitle
+      In the expression: [itemtitle, feedtitle]
+      In the expression: ("title", [itemtitle, feedtitle])
+
+Command/ImportFeed.hs:308:23: error:
+    • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+      Expected type: Maybe String
+        Actual type: Maybe Data.Text.Internal.Text
+    • In the expression: itemauthor
+      In the expression: [itemauthor, feedauthor]
+      In the expression: ("author", [itemauthor, feedauthor])
+
+Command/ImportFeed.hs:308:35: error:
+    • Couldn't match type ‘Data.Text.Internal.Text’ with ‘[Char]’
+      Expected type: Maybe String
+        Actual type: Maybe Data.Text.Internal.Text
+    • In the expression: feedauthor
+      In the expression: [itemauthor, feedauthor]
+      In the expression: ("author", [itemauthor, feedauthor])
+cabal: Leaving directory '.'
+cabal: Error: some packages failed to install:
+git-annex-6.20170818-ATXJn9dQzZj9avYQidLOBq failed during the building phase.
+The exception was:
+ExitFailure 1
+"""]]
+
+Full build log is here:
+https://gist.githubusercontent.com/anonymous/dcc8d9823bd50b7fca10d5cf8961e75d/raw/c6500526bbad0a94e067816b1af2c9e8717a3419/08.cabal
+
+### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
+Yes!
+

very minor typo
diff --git a/doc/tips/publishing_your_files_to_the_public.mdwn b/doc/tips/publishing_your_files_to_the_public.mdwn
index aac735d6c..aa6062847 100644
--- a/doc/tips/publishing_your_files_to_the_public.mdwn
+++ b/doc/tips/publishing_your_files_to_the_public.mdwn
@@ -24,7 +24,7 @@ for public downloads from that bucket.
 
 # Indexes
 
-By default, there is no index.ntml file exported, so if you open
+By default, there is no index.html file exported, so if you open
 `http://$BUCKET.s3.amazonaws.com/` in a web browser, you'll see an
 XML document listing the files.
 

very delayed response now that feature is added
diff --git a/doc/special_remotes/S3/comment_29_37430a66a9f39a635b32f04ff82db194._comment b/doc/special_remotes/S3/comment_29_37430a66a9f39a635b32f04ff82db194._comment
new file mode 100644
index 000000000..c85693de3
--- /dev/null
+++ b/doc/special_remotes/S3/comment_29_37430a66a9f39a635b32f04ff82db194._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 29"""
+ date="2017-09-08T20:46:28Z"
+ content="""
+@David_K @Joe, it's finally possible to publish annexed files to S3 while
+preserving filenames, using the new `git annex export` command!
+See [[tips/publishing_your_files_to_the_public]].
+"""]]

consistency
diff --git a/doc/tips/publishing_your_files_to_the_public.mdwn b/doc/tips/publishing_your_files_to_the_public.mdwn
index f7d332d57..aac735d6c 100644
--- a/doc/tips/publishing_your_files_to_the_public.mdwn
+++ b/doc/tips/publishing_your_files_to_the_public.mdwn
@@ -17,7 +17,7 @@ You can run that command again to update the export. See
 [[git-annex-export]] for details.
 
 Each exported file will be available to the public from
-`http://$BUCKET.s3.amazonaws.com/$file`
+`http://$BUCKET.s3.amazonaws.com/$FILE`
 
 Note: Bear in mind that Amazon will charge the owner of the bucket
 for public downloads from that bucket.

devblog
diff --git a/doc/devblog/day_470__export_to_external_and_S3.mdwn b/doc/devblog/day_470__export_to_external_and_S3.mdwn
new file mode 100644
index 000000000..df27b06ed
--- /dev/null
+++ b/doc/devblog/day_470__export_to_external_and_S3.mdwn
@@ -0,0 +1,12 @@
+Got `git annex export` working to external special remotes. Each external
+special remote will need some modifications to allow exporting. Exporting
+to some things doesn't make sense, but often there's a way to browse a tree
+of files stored on the special remote and so export is worth supporting.
+Now would be a good time to contact the author of your favorite special
+remote about supporting export..
+
+Also had time to get `git annex export` working to S3. The tip 
+[[tips/publishing_your_files_to_the_public]] had a clumsy method for
+publishing files via S3 before, and is now quite simple!
+
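+Exports can already be tried out with the directory special remote;
+a quick sketch, with a made-up remote name and path:
+
+	git annex initremote usbdrive type=directory directory=/media/usb exporttree=yes encryption=none
+	git annex export master --to usbdrive
+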
+Today's work was supported by the NSF-funded DataLad project.

S3 export finalization
Fixed ACL issue, and updated some documentation.
diff --git a/Remote/S3.hs b/Remote/S3.hs
index 96d24d00e..f80a08bb2 100644
--- a/Remote/S3.hs
+++ b/Remote/S3.hs
@@ -357,14 +357,16 @@ checkPresentExportS3 r info _k loc =
 	go = withS3Handle (config r) (gitconfig r) (uuid r) $ \h -> do
 		checkKeyHelper info h (T.pack $ bucketExportLocation info loc)
 
+-- S3 has no move primitive; copy and delete.
 renameExportS3 :: Remote -> S3Info -> Key -> ExportLocation -> ExportLocation -> Annex Bool
 renameExportS3 r info _k src dest = catchNonAsync go (\e -> warning (show e) >> return False)
   where
 	go = withS3Handle (config r) (gitconfig r) (uuid r) $ \h -> do
-		-- S3 has no move primitive; copy and delete.
-		void $ sendS3Handle h $ S3.copyObject (bucket info) dstobject
+		let co = S3.copyObject (bucket info) dstobject
 			(S3.ObjectId (bucket info) srcobject Nothing)
 			S3.CopyMetadata
+		-- ACL is not preserved by copy.
+		void $ sendS3Handle h $ co { S3.coAcl = acl info }
 		void $ sendS3Handle h $ S3.DeleteObject srcobject (bucket info)
 		return True
 	srcobject = T.pack $ bucketExportLocation info src
diff --git a/doc/tips/public_Amazon_S3_remote.mdwn b/doc/tips/public_Amazon_S3_remote.mdwn
index d362fd75d..ce484adfb 100644
--- a/doc/tips/public_Amazon_S3_remote.mdwn
+++ b/doc/tips/public_Amazon_S3_remote.mdwn
@@ -2,6 +2,9 @@ Here's how to create a Amazon [[S3 special remote|special_remotes/S3]] that
 can be read by anyone who gets a clone of your git-annex repository,
 without them needing Amazon AWS credentials.
 
+If you want to publish files to S3 so they can be accessed without using
+git-annex, see [[publishing_your_files_to_the_public]].
+
 Note: Bear in mind that Amazon will charge the owner of the bucket
 for public downloads from that bucket.
 
@@ -52,6 +55,3 @@ who are not using git-annex. To find the url, use `git annex whereis`.
 ----
 
 See [[special_remotes/S3]] for details about configuring S3 remotes.
-
-See [[publishing_your_files_to_the_public]] for other ways to use a public
-S3 bucket.
diff --git a/doc/tips/publishing_your_files_to_the_public.mdwn b/doc/tips/publishing_your_files_to_the_public.mdwn
index 5409dda0d..f7d332d57 100644
--- a/doc/tips/publishing_your_files_to_the_public.mdwn
+++ b/doc/tips/publishing_your_files_to_the_public.mdwn
@@ -1,88 +1,39 @@
 # Creating a special S3 remote to hold files shareable by URL
 
-(In this example, I'll assume you'll be creating a bucket in S3 named **public-annex** and a special remote in git-annex, which will store its files in the previous bucket, named **public-s3**, but change these names if you are going to do the thing for real)
+In this example, I'll assume you'll be creating a bucket in Amazon S3 named
+$BUCKET and a special remote named public-s3. Be sure to replace $BUCKET
+with something like "public-bucket-joey" when you follow along in your
+shell.
 
-Set up your special [S3](http://git-annex.branchable.com/special_remotes/S3/) remote with (at least) these options:
+Set up your special [[S3 remote|special_remotes/S3]] with (at least) these options:
 
-    git annex initremote public-s3 type=s3 encryption=none bucket=public-annex chunk=0 public=yes
+	git annex initremote public-s3 type=s3 encryption=none bucket=$BUCKET exporttree=yes public=yes
 
-This way git-annex will upload the files to this repo, (when you call `git
-annex copy [FILES...] --to public-s3`) without encrypting them and without
-chunking them. And, thanks to the public=yes, they will be
-accessible by anyone with the link.
+Then export the files in the master branch to the remote:
 
-(Note that public=yes was added in git-annex version 5.20150605.
-If you have an older version, it will be silently ignored, and you
-will instead need to use the AWS dashboard to configure a public get policy
-for the bucket.)
+	git annex export master --to public-s3
 
-Following the example, the files will be accessible at `http://public-annex.s3.amazonaws.com/KEY` where `KEY` is the file key created by git-annex and which you can discover running
+You can run that command again to update the export. See
+[[git-annex-export]] for details.
 
-    git annex lookupkey FILEPATH
+Each exported file will be available to the public from
+`http://$BUCKET.s3.amazonaws.com/$file`
 
-This way you can share a link to each file you have at your S3 remote.
+Note: Bear in mind that Amazon will charge the owner of the bucket
+for public downloads from that bucket.
 
-## Sharing all links in a folder
+# Indexes
 
-To share all the links in a given folder, for example, you can go to that folder and run (this is an example with the _fish_ shell, but I'm sure you can do the same in _bash_, I just don't know exactly):
+By default, there is no index.ntml file exported, so if you open
+`http://$BUCKET.s3.amazonaws.com/` in a web browser, you'll see an
+XML document listing the files.
 
-    for filename in (ls)
-        echo $filename": https://public-annex.s3.amazonaws.com/"(git annex lookupkey $filename)
-    end
+For a nicer list of files, you can make an index.html file, check it into
+git, and export it to the bucket. You'll need to configure the bucket to
+use index.html as its index document, as
+[explained here](https://stackoverflow.com/questions/27899/is-there-a-way-to-have-index-html-functionality-with-content-hosted-on-s3).
 
-## Sharing all links matching certain metadata
+# Old method
 
-The same applies to all the filters you can do with git-annex.
-
-For example, let's share links to all the files whose _author_'s name starts with "Mario" and are, in fact, stored at your public-s3 remote.
-However, instead of just a list of links we will output a markdown-formatted list of the filenames linked to their S3 urls:
-
-    for filename in (git annex find --metadata "author=Mario*" --and --in public-s3)
-       echo "* ["$filename"](https://public-annex.s3.amazonaws.com/"(git annex lookupkey $filename)")"
-    end
-
-Very useful.
-
-## Sharing links with time-limited URLs
-
-By using pre-signed URLs it is possible to create limits on how long a URL is valid for retrieving an object. 
-To enable use a private S3 bucket for the remotes and then pre-sign actual URL with the script in [AWS-Tools](https://github.com/gdbtek/aws-tools).
-Example:
-
-    key=`git annex lookupkey "$fname"`;  sign_s3_url.bash --region 'eu-west-1' --bucket 'mybuck' --file-path $key --aws-access-key-id XX --aws-secret-access-key XX --method 'GET' --minute-expire 10
-
-## Adding the S3 URL as a source
-
-Assuming all files in the current directory are available on S3, this will register the public S3 url for the file in git-annex, making it available for everyone *through git-annex*:
-
-<pre>
-git annex find --in public-s3 | while read file ; do
-  key=$(git annex lookupkey $file)
-  echo $key https://public-annex.s3.amazonaws.com/$key
-done | git annex registerurl
-</pre>
-
-`registerurl` was introduced in `5.20150317`.
-
-## Manually configuring a public get policy
-
-Here is how to manually configure a public get policy
-for a bucket, in the AWS dashboard.
-
-    {
-      "Version": "2008-10-17",
-      "Statement": [
-        {
-          "Sid": "AllowPublicRead",
-          "Effect": "Allow",
-          "Principal": {
-            "AWS": "*"
-          },
-          "Action": "s3:GetObject",
-          "Resource": "arn:aws:s3:::public-annex/*"
-        }
-      ]
-    }
-
-This should not be necessary if using a new enough version
-of git-annex, which can instead be configured with public=yet.
+To use `git annex export`, you need git-annex version 6.20170909 or
+newer. Before we had `git annex export`, an [[old_method]] was used instead.
diff --git a/doc/tips/publishing_your_files_to_the_public/old_method.mdwn b/doc/tips/publishing_your_files_to_the_public/old_method.mdwn
new file mode 100644
index 000000000..5409dda0d
--- /dev/null
+++ b/doc/tips/publishing_your_files_to_the_public/old_method.mdwn
@@ -0,0 +1,88 @@
+# Creating a special S3 remote to hold files shareable by URL
+
+(In this example, I'll assume you'll be creating a bucket in S3 named **public-annex** and a special remote in git-annex, which will store its files in the previous bucket, named **public-s3**, but change these names if you are going to do the thing for real)
+
+Set up your special [S3](http://git-annex.branchable.com/special_remotes/S3/) remote with (at least) these options:
+
+    git annex initremote public-s3 type=s3 encryption=none bucket=public-annex chunk=0 public=yes
+
+This way git-annex will upload the files to this repo, (when you call `git
+annex copy [FILES...] --to public-s3`) without encrypting them and without
+chunking them. And, thanks to the public=yes, they will be
+accessible by anyone with the link.
+
+(Note that public=yes was added in git-annex version 5.20150605.
+If you have an older version, it will be silently ignored, and you
+will instead need to use the AWS dashboard to configure a public get policy
+for the bucket.)
+
+Following the example, the files will be accessible at `http://public-annex.s3.amazonaws.com/KEY` where `KEY` is the file key created by git-annex and which you can discover running
+
+    git annex lookupkey FILEPATH
+
+This way you can share a link to each file you have at your S3 remote.
+
+## Sharing all links in a folder
+
+To share all the links in a given folder, for example, you can go to that folder and run (this is an example with the _fish_ shell, but I'm sure you can do the same in _bash_, I just don't know exactly):
+
+    for filename in (ls)
+        echo $filename": https://public-annex.s3.amazonaws.com/"(git annex lookupkey $filename)
+    end
+

(Diff truncated)
S3 export (untested)
It opens an http connection per file exported, but then so does git
annex copy --to s3.
Decided not to munge exported filenames for IA. Too large a chance of
the munging having confusing results. Instead, export of files not
supported by IA, eg with spaces in their name, will fail.
This commit was supported by the NSF-funded DataLad project.
diff --git a/CHANGELOG b/CHANGELOG
index b1701082c..137e4e970 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -4,7 +4,7 @@ git-annex (6.20170819) UNRELEASED; urgency=medium
     exports of trees to special remotes.
   * Use git-annex initremote with exporttree=yes to set up a special remote
     for use by git-annex export.
-  * Implemented export to directory special remotes.
+  * Implemented export to directory and S3 special remotes.
   * External special remote protocol extended to support export.
   * Support building with feed-1.0, while still supporting older versions.
   * init: Display an additional message when it detects a filesystem that
diff --git a/Remote/S3.hs b/Remote/S3.hs
index 4b56cce29..96d24d00e 100644
--- a/Remote/S3.hs
+++ b/Remote/S3.hs
@@ -59,7 +59,7 @@ remote = RemoteType
 	, enumerate = const (findSpecialRemotes "s3")
 	, generate = gen
 	, setup = s3Setup
-	, exportSupported = exportUnsupported
+	, exportSupported = exportIsSupported
 	}
 
 gen :: Git.Repo -> UUID -> RemoteConfig -> RemoteGitConfig -> Annex (Maybe Remote)
@@ -86,7 +86,13 @@ gen r u c gc = do
 			, lockContent = Nothing
 			, checkPresent = checkPresentDummy
 			, checkPresentCheap = False
-			, exportActions = exportUnsupported
+			, exportActions = ExportActions
+				{ storeExport = storeExportS3 this info
+				, retrieveExport = retrieveExportS3 this info
+				, removeExport = removeExportS3 this info
+				, checkPresentExport = checkPresentExportS3 this info
+				, renameExport = renameExportS3 this info
+				}
 			, whereisKey = Just (getWebUrls info)
 			, remoteFsck = Nothing
 			, repairRepo = Nothing
@@ -107,6 +113,7 @@ s3Setup :: SetupStage -> Maybe UUID -> Maybe CredPair -> RemoteConfig -> RemoteG
 s3Setup ss mu mcreds c gc = do
 	u <- maybe (liftIO genUUID) return mu
 	s3Setup' ss u mcreds c gc
+
 s3Setup' :: SetupStage -> UUID -> Maybe CredPair -> RemoteConfig -> RemoteGitConfig -> Annex (RemoteConfig, UUID)
 s3Setup' ss u mcreds c gc
 	| configIA c = archiveorg
@@ -170,25 +177,26 @@ prepareS3HandleMaybe r = resourcePrepare $ const $
 
 store :: Remote -> S3Info -> S3Handle -> Storer
 store _r info h = fileStorer $ \k f p -> do
-	case partSize info of
-		Just partsz | partsz > 0 -> do
-			fsz <- liftIO $ getFileSize f
-			if fsz > partsz
-				then multipartupload fsz partsz k f p
-				else singlepartupload k f p
-		_ -> singlepartupload k f p	
+	storeHelper info h f (T.pack $ bucketObject info k) p
 	-- Store public URL to item in Internet Archive.
 	when (isIA info && not (isChunkKey k)) $
 		setUrlPresent webUUID k (iaPublicKeyUrl info k)
 	return True
+
+storeHelper :: S3Info -> S3Handle -> FilePath -> S3.Object -> MeterUpdate -> Annex ()
+storeHelper info h f object p = case partSize info of
+	Just partsz | partsz > 0 -> do
+		fsz <- liftIO $ getFileSize f
+		if fsz > partsz
+			then multipartupload fsz partsz
+			else singlepartupload
+	_ -> singlepartupload
   where
-	singlepartupload k f p = do
+	singlepartupload = do
 		rbody <- liftIO $ httpBodyStorer f p
-		void $ sendS3Handle h $ putObject info (T.pack $ bucketObject info k) rbody
-	multipartupload fsz partsz k f p = do
+		void $ sendS3Handle h $ putObject info object rbody
+	multipartupload fsz partsz = do
 #if MIN_VERSION_aws(0,10,6)
-		let object = T.pack (bucketObject info k)
-
 		let startreq = (S3.postInitiateMultipartUpload (bucket info) object)
 				{ S3.imuStorageClass = Just (storageClass info)
 				, S3.imuMetadata = metaHeaders info
@@ -227,16 +235,27 @@ store _r info h = fileStorer $ \k f p -> do
 			(bucket info) object uploadid (zip [1..] etags)
 #else
 		warning $ "Cannot do multipart upload (partsize " ++ show partsz ++ ") of large file (" ++ show fsz ++ "); built with too old a version of the aws library."
-		singlepartupload k f p
+		singlepartupload
 #endif
 
 {- Implemented as a fileRetriever, that uses conduit to stream the chunks
  - out to the file. Would be better to implement a byteRetriever, but
  - that is difficult. -}
 retrieve :: Remote -> S3Info -> Maybe S3Handle -> Retriever
-retrieve _ info (Just h) = fileRetriever $ \f k p -> liftIO $ runResourceT $ do
+retrieve _ info (Just h) = fileRetriever $ \f k p ->
+	retrieveHelper info h (T.pack $ bucketObject info k) f p
+retrieve r info Nothing = case getpublicurl info of
+	Nothing -> \_ _ _ -> do
+		warnMissingCredPairFor "S3" (AWS.creds $ uuid r)
+		return False
+	Just geturl -> fileRetriever $ \f k p ->
+		unlessM (downloadUrl k p [geturl k] f) $
+			giveup "failed to download content"
+
+retrieveHelper :: S3Info -> S3Handle -> S3.Object -> FilePath -> MeterUpdate -> Annex ()
+retrieveHelper info h object f p = liftIO $ runResourceT $ do
 	(fr, fh) <- allocate (openFile f WriteMode) hClose
-	let req = S3.getObject (bucket info) (T.pack $ bucketObject info k)
+	let req = S3.getObject (bucket info) object
 	S3.GetObjectResponse { S3.gorResponse = rsp } <- sendS3Handle' h req
 	responseBody rsp $$+- sinkprogressfile fh p zeroBytesProcessed
 	release fr
@@ -251,13 +270,6 @@ retrieve _ info (Just h) = fileRetriever $ \f k p -> liftIO $ runResourceT $ do
 					void $ meterupdate sofar'
 					S.hPut fh bs
 				sinkprogressfile fh meterupdate sofar'
-retrieve r info Nothing = case getpublicurl info of
-	Nothing -> \_ _ _ -> do
-		warnMissingCredPairFor "S3" (AWS.creds $ uuid r)
-		return False
-	Just geturl -> fileRetriever $ \f k p ->
-		unlessM (downloadUrl k p [geturl k] f) $
-			giveup "failed to download content"
 
 retrieveCheap :: Key -> AssociatedFile -> FilePath -> Annex Bool
 retrieveCheap _ _ _ = return False
@@ -276,8 +288,19 @@ remove info h k
 		return $ either (const False) (const True) res
 
 checkKey :: Remote -> S3Info -> Maybe S3Handle -> CheckPresent
+checkKey r info Nothing k = case getpublicurl info of
+	Nothing -> do
+		warnMissingCredPairFor "S3" (AWS.creds $ uuid r)
+		giveup "No S3 credentials configured"
+	Just geturl -> do
+		showChecking r
+		withUrlOptions $ checkBoth (geturl k) (keySize k)
 checkKey r info (Just h) k = do
 	showChecking r
+	checkKeyHelper info h (T.pack $ bucketObject info k)
+
+checkKeyHelper :: S3Info -> S3Handle -> S3.Object -> Annex Bool
+checkKeyHelper info h object = do
 #if MIN_VERSION_aws(0,10,0)
 	rsp <- go
 	return (isJust $ S3.horMetadata rsp)
@@ -287,8 +310,7 @@ checkKey r info (Just h) k = do
 		return True
 #endif
   where
-	go = sendS3Handle h $
-		S3.headObject (bucket info) (T.pack $ bucketObject info k)
+	go = sendS3Handle h $ S3.headObject (bucket info) object
 
 #if ! MIN_VERSION_aws(0,10,0)
 	{- Catch exception headObject returns when an object is not present
@@ -303,13 +325,50 @@ checkKey r info (Just h) k = do
 			| otherwise = Nothing
 #endif
 
-checkKey r info Nothing k = case getpublicurl info of
-	Nothing -> do
-		warnMissingCredPairFor "S3" (AWS.creds $ uuid r)
-		giveup "No S3 credentials configured"
-	Just geturl -> do
-		showChecking r
-		withUrlOptions $ checkBoth (geturl k) (keySize k)
+storeExportS3 :: Remote -> S3Info -> FilePath -> Key -> ExportLocation -> MeterUpdate -> Annex Bool
+storeExportS3 r info f _k loc p = 
+	catchNonAsync go (\e -> warning (show e) >> return False)
+  where
+	go = withS3Handle (config r) (gitconfig r) (uuid r) $ \h -> do
+		storeHelper info h f (T.pack $ bucketExportLocation info loc) p
+		return True
+
+retrieveExportS3 :: Remote -> S3Info -> Key -> ExportLocation -> FilePath -> MeterUpdate -> Annex Bool
+retrieveExportS3 r info _k loc f p =
+	catchNonAsync go (\e -> warning (show e) >> return False)
+  where
+	go = withS3Handle (config r) (gitconfig r) (uuid r) $ \h -> do
+		retrieveHelper info h (T.pack $ bucketExportLocation info loc) f p
+		return True
+
+removeExportS3 :: Remote -> S3Info -> Key -> ExportLocation -> Annex Bool
+removeExportS3 r info _k loc = 
+	catchNonAsync go (\e -> warning (show e) >> return False)
+  where
+	go = withS3Handle (config r) (gitconfig r) (uuid r) $ \h -> do
+		res <- tryNonAsync $ sendS3Handle h $
+			S3.DeleteObject (T.pack $ bucketExportLocation info loc) (bucket info)
+		return $ either (const False) (const True) res
+
+checkPresentExportS3 :: Remote -> S3Info -> Key -> ExportLocation -> Annex Bool

(Diff truncated)
External special remote protocol extended to support export.
Also updated example.sh to support export.
This commit was supported by the NSF-funded DataLad project.
diff --git a/CHANGELOG b/CHANGELOG
index 3e168de9f..b1701082c 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -5,6 +5,7 @@ git-annex (6.20170819) UNRELEASED; urgency=medium
   * Use git-annex initremote with exporttree=yes to set up a special remote
     for use by git-annex export.
   * Implemented export to directory special remotes.
+  * External special remote protocol extended to support export.
   * Support building with feed-1.0, while still supporting older versions.
   * init: Display an additional message when it detects a filesystem that
     allows writing to files whose write bit is not set.
diff --git a/Remote/Directory.hs b/Remote/Directory.hs
index 22413b7e9..c17ed80a5 100644
--- a/Remote/Directory.hs
+++ b/Remote/Directory.hs
@@ -240,8 +240,8 @@ storeExportDirectory d src _k loc p = liftIO $ catchBoolIO $ do
   where
 	dest = exportPath d loc
 
-retrieveExportDirectory :: FilePath -> Key -> ExportLocation -> FilePath -> MeterUpdate -> Annex (Bool, Verification)
-retrieveExportDirectory d _k loc dest p = unVerified $ liftIO $ catchBoolIO $ do
+retrieveExportDirectory :: FilePath -> Key -> ExportLocation -> FilePath -> MeterUpdate -> Annex Bool
+retrieveExportDirectory d _k loc dest p = liftIO $ catchBoolIO $ do
 	withMeteredFile src p (L.writeFile dest)
 	return True
   where
diff --git a/Remote/External.hs b/Remote/External.hs
index 71a07d3ea..ed00cc93f 100644
--- a/Remote/External.hs
+++ b/Remote/External.hs
@@ -45,7 +45,7 @@ remote = RemoteType
 	, enumerate = const (findSpecialRemotes "externaltype")
 	, generate = gen
 	, setup = externalSetup
-	, exportSupported = exportUnsupported
+	, exportSupported = checkExportSupported
 	}
 
 gen :: Git.Repo -> UUID -> RemoteConfig -> RemoteGitConfig -> Annex (Maybe Remote)
@@ -61,11 +61,28 @@ gen r u c gc
 			Nothing
 			Nothing
 			Nothing
+			exportUnsupported
+			exportUnsupported
 	| otherwise = do
 		external <- newExternal externaltype u c gc
 		Annex.addCleanup (RemoteCleanup u) $ stopExternal external
 		cst <- getCost external r gc
 		avail <- getAvailability external r gc
+		exportsupported <- checkExportSupported' external
+		let exportactions = if exportsupported
+			then ExportActions
+				{ storeExport = storeExportExternal external
+				, retrieveExport = retrieveExportExternal external
+				, removeExport = removeExportExternal external
+				, checkPresentExport = checkPresentExportExternal external
+				, renameExport = renameExportExternal external
+				}
+			else exportUnsupported
+		-- Cheap exportSupported that replaces the expensive
+		-- checkExportSupported now that we've already checked it.
+		let cheapexportsupported = if exportsupported
+			then exportIsSupported
+			else exportUnsupported
 		mk cst avail
 			(store external)
 			(retrieve external)
@@ -74,8 +91,10 @@ gen r u c gc
 			(Just (whereis external))
 			(Just (claimurl external))
 			(Just (checkurl external))
+			exportactions
+			cheapexportsupported
   where
-	mk cst avail tostore toretrieve toremove tocheckkey towhereis toclaimurl tocheckurl = do
+	mk cst avail tostore toretrieve toremove tocheckkey towhereis toclaimurl tocheckurl exportactions cheapexportsupported = do
 		let rmt = Remote
 			{ uuid = u
 			, cost = cst
@@ -87,7 +106,7 @@ gen r u c gc
 			, lockContent = Nothing
 			, checkPresent = checkPresentDummy
 			, checkPresentCheap = False
-			, exportActions = exportUnsupported
+			, exportActions = exportactions
 			, whereisKey = towhereis
 			, remoteFsck = Nothing
 			, repairRepo = Nothing
@@ -97,7 +116,8 @@ gen r u c gc
 			, gitconfig = gc
 			, readonly = False
 			, availability = avail
-			, remotetype = remote
+			, remotetype = remote 
+				{ exportSupported = cheapexportsupported }
 			, mkUnavailable = gen r u c $
 				gc { remoteAnnexExternalType = Just "!dne!" }
 			, getInfo = return [("externaltype", externaltype)]
@@ -135,6 +155,21 @@ externalSetup _ mu _ c gc = do
 	gitConfigSpecialRemote u c'' "externaltype" externaltype
 	return (c'', u)
 
+checkExportSupported :: RemoteConfig -> RemoteGitConfig -> Annex Bool
+checkExportSupported c gc = do
+	let externaltype = fromMaybe (giveup "Specify externaltype=") $
+		remoteAnnexExternalType gc <|> M.lookup "externaltype" c
+	checkExportSupported' 
+		=<< newExternal externaltype NoUUID c gc
+
+checkExportSupported' :: External -> Annex Bool
+checkExportSupported' external = safely $
+	handleRequest external EXPORTSUPPORTED Nothing $ \resp -> case resp of
+		EXPORTSUPPORTED_SUCCESS -> Just $ return True
+		EXPORTSUPPORTED_FAILURE -> Just $ return False
+		UNSUPPORTED_REQUEST -> Just $ return False
+		_ -> Nothing
+
 store :: External -> Storer
 store external = fileStorer $ \k f p ->
 	handleRequestKey external (\sk -> TRANSFER Upload sk f) k (Just p) $ \resp ->
@@ -189,6 +224,78 @@ whereis external k = handleRequestKey external WHEREIS k Nothing $ \resp -> case
 	UNSUPPORTED_REQUEST -> Just $ return []
 	_ -> Nothing
 
+storeExportExternal :: External -> FilePath -> Key -> ExportLocation -> MeterUpdate -> Annex Bool
+storeExportExternal external f k loc p = safely $
+	handleRequestExport external loc req k (Just p) $ \resp -> case resp of
+		TRANSFER_SUCCESS Upload k' | k == k' ->
+			Just $ return True
+		TRANSFER_FAILURE Upload k' errmsg | k == k' ->
+			Just $ do
+				warning errmsg
+				return False
+		UNSUPPORTED_REQUEST -> Just $ do
+			warning "TRANSFEREXPORT not implemented by external special remote"
+			return False
+		_ -> Nothing
+  where
+	req sk = TRANSFEREXPORT Upload sk f
+
+retrieveExportExternal :: External -> Key -> ExportLocation -> FilePath -> MeterUpdate -> Annex Bool
+retrieveExportExternal external k loc d p = safely $
+	handleRequestExport external loc req k (Just p) $ \resp -> case resp of
+		TRANSFER_SUCCESS Download k'
+			| k == k' -> Just $ return True
+		TRANSFER_FAILURE Download k' errmsg
+			| k == k' -> Just $ do
+				warning errmsg
+				return False
+		UNSUPPORTED_REQUEST -> Just $ do
+			warning "TRANSFEREXPORT not implemented by external special remote"
+			return False
+		_ -> Nothing
+  where
+	req sk = TRANSFEREXPORT Download sk d
+
+removeExportExternal :: External -> Key -> ExportLocation -> Annex Bool
+removeExportExternal external k loc = safely $
+	handleRequestExport external loc REMOVEEXPORT k Nothing $ \resp -> case resp of
+		REMOVE_SUCCESS k'
+			| k == k' -> Just $ return True
+		REMOVE_FAILURE k' errmsg
+			| k == k' -> Just $ do
+				warning errmsg
+				return False
+		UNSUPPORTED_REQUEST -> Just $ do
+			warning "REMOVEEXPORT not implemented by external special remote"
+			return False
+		_ -> Nothing
+
+checkPresentExportExternal :: External -> Key -> ExportLocation -> Annex Bool
+checkPresentExportExternal external k loc = either giveup id <$> go
+  where
+	go = handleRequestExport external loc CHECKPRESENTEXPORT k Nothing $ \resp -> case resp of
+		CHECKPRESENT_SUCCESS k'
+			| k' == k -> Just $ return $ Right True
+		CHECKPRESENT_FAILURE k'
+			| k' == k -> Just $ return $ Right False
+		CHECKPRESENT_UNKNOWN k' errmsg
+			| k' == k -> Just $ return $ Left errmsg
+		UNSUPPORTED_REQUEST -> Just $ return $
+			Left "CHECKPRESENTEXPORT not implemented by external special remote"
+		_ -> Nothing
+
+renameExportExternal :: External -> Key -> ExportLocation -> ExportLocation -> Annex Bool
+renameExportExternal external k src dest = safely $
+	handleRequestExport external src req k Nothing $ \resp -> case resp of
+		RENAMEEXPORT_SUCCESS k'
+			| k' == k -> Just $ return True
+		RENAMEEXPORT_FAILURE k' 
+			| k' == k -> Just $ return False
+		UNSUPPORTED_REQUEST -> Just $ return False
+		_ -> Nothing
+  where
+	req sk = RENAMEEXPORT sk dest
+
 safely :: Annex Bool -> Annex Bool
 safely a = go =<< tryNonAsync a

(Diff truncated)
Added a comment
diff --git a/doc/forum/Is_there_a_way_to_unannex_some_file___63__/comment_2_3b05e9eaa554b70594f0195dfcef831d._comment b/doc/forum/Is_there_a_way_to_unannex_some_file___63__/comment_2_3b05e9eaa554b70594f0195dfcef831d._comment
new file mode 100644
index 000000000..d911c5efe
--- /dev/null
+++ b/doc/forum/Is_there_a_way_to_unannex_some_file___63__/comment_2_3b05e9eaa554b70594f0195dfcef831d._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="karel-de-macil"
+ avatar="http://cdn.libravatar.org/avatar/46688332185c941113a9c9827cb093e9"
+ subject="comment 2"
+ date="2017-09-08T08:31:36Z"
+ content="""
+yes exactly, thanks a lot...
+"""]]

devblog
diff --git a/doc/devblog/day_469__export_merged.mdwn b/doc/devblog/day_469__export_merged.mdwn
new file mode 100644
index 000000000..5cd54d878
--- /dev/null
+++ b/doc/devblog/day_469__export_merged.mdwn
@@ -0,0 +1,8 @@
+I've merged the `export` branch, after fixing most of the remaining known
+warts, and testing clean-up from interrupted exports and export conflicts.
+
+The main thing remaining to be done is adding the new commands to the
+external special remote interface, and adding export support to S3, webdav,
+and rsync special remotes.
+
+Today's work was supported by the NSF-funded DataLad project.

mention git-annex export
diff --git a/doc/forum/original_filename_on_s3/comment_3_0aea0eef336dd648013a9cf7789fc445._comment b/doc/forum/original_filename_on_s3/comment_3_0aea0eef336dd648013a9cf7789fc445._comment
new file mode 100644
index 000000000..1d7568730
--- /dev/null
+++ b/doc/forum/original_filename_on_s3/comment_3_0aea0eef336dd648013a9cf7789fc445._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 3"""
+ date="2017-09-07T20:13:23Z"
+ content="""
+The new `git-annex export` feature allows you to export a tree
+to a special remote, with the original filenames being visible there.
+
+S3 should support it soon.
+"""]]
diff --git a/doc/forum/syncing_music_to_my_android_device/comment_2_e49c9e5da5c5d1db228b9b753bed53ff._comment b/doc/forum/syncing_music_to_my_android_device/comment_2_e49c9e5da5c5d1db228b9b753bed53ff._comment
new file mode 100644
index 000000000..c14c9e923
--- /dev/null
+++ b/doc/forum/syncing_music_to_my_android_device/comment_2_e49c9e5da5c5d1db228b9b753bed53ff._comment
@@ -0,0 +1,18 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 2"""
+ date="2017-09-07T20:15:10Z"
+ content="""
+The new `git annex export` command can be used for this. It exports the
+current tree of files to a special remote. If your Android device can be
+mounted as a filesystem, you can initremote a directory special remote on
+it, and then export to it.
+
+It would also be possible to make a special remote that uses `adb` or some
+other method to manipulate the files on the Android device, rather than
+going via the directory special remote.
+
+The main downside is, if you modify files on the Android device, or add new
+files, there's no way to commit the changes from there back to
+your git repository.
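+
+For the mountable case, a rough sketch (the mount point is hypothetical):
+
+	git annex initremote android type=directory directory=/mnt/android/Music exporttree=yes encryption=none
+	git annex export master --to android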
+"""]]
diff --git a/doc/todo/Facilitate_public_pretty_S3_URLs.mdwn b/doc/todo/Facilitate_public_pretty_S3_URLs.mdwn
index f5b925498..69d09e3fa 100644
--- a/doc/todo/Facilitate_public_pretty_S3_URLs.mdwn
+++ b/doc/todo/Facilitate_public_pretty_S3_URLs.mdwn
@@ -17,4 +17,7 @@ Thank you!
 
 > I don't think this is something git-annex can usefully do.
 > Instead, see
-> <http://git-annex.branchable.com/tips/public_Amazon_S3_remote/>. [[done]] --[[Joey]]
+> <http://git-annex.branchable.com/tips/public_Amazon_S3_remote/>. --[[Joey]]
+
+> [[done]]; the new `git-annex export` feature allows you to export a tree
+> to a special remote. --[[Joey]]
diff --git a/doc/todo/dumb__44___unsafe__44___human-readable_backend.mdwn b/doc/todo/dumb__44___unsafe__44___human-readable_backend.mdwn
index b73467f4d..6bb1b5bf6 100644
--- a/doc/todo/dumb__44___unsafe__44___human-readable_backend.mdwn
+++ b/doc/todo/dumb__44___unsafe__44___human-readable_backend.mdwn
@@ -10,3 +10,6 @@ I know this would have several downsides:
 * much more exposed to corruption (no checksum to check against recorded? or can this be put somewhere else?)
 
 The main advantage, for me, is much better interoperability: any remote becomes usable by other non-git-annex clients... It would also be great as it would allow me to store only a *part* of my git-annex files on a remote without having a forest of empty files (on broken filesystems) or symlinks (on real filesystems) for files that are missing, something that is a massive source of confusion for users I work with. It could, for example, allow me to create thumb drives that would solve the [[hide missing files]] problem. -- [[anarcat]]
+
+> [[done]]; the new `git-annex export` feature allows you to export a tree
+> to a special remote. --[[Joey]]

update
diff --git a/doc/devblog/day_468__export_renames/comment_2_7fbbc5beb80acf32eff617ec704d5466._comment b/doc/devblog/day_468__export_renames/comment_2_7fbbc5beb80acf32eff617ec704d5466._comment
index fcd1eb66c..1dfdfe6de 100644
--- a/doc/devblog/day_468__export_renames/comment_2_7fbbc5beb80acf32eff617ec704d5466._comment
+++ b/doc/devblog/day_468__export_renames/comment_2_7fbbc5beb80acf32eff617ec704d5466._comment
@@ -13,7 +13,14 @@ efficiently, by moving to the single temp file and copying. Although it
 might still involve the special remote doing more work than strictly
 necessary depending on how it implements copy.
 
-At some point you have to pick simplicity and ability to recover from
-problems over totally optimal speed though, and I think your case is a
-reasonable place to draw the line.
+Anyway, if the user is exporting copies of files, they're probably going to
+care more about that being somewhat more efficient than about renames of
+pairs of those copies being optimally efficient..
+
+Handling it fully optimally, with only one temp file per key,
+would require analyzing the change and finding pairs of renames
+that swap filenames and handling each pair in turn. I suppose that
+is doable, just needs a better data structure than I have now.
+I've added a note to my todo list and the design document, but
+no promises.
 """]]
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index c4e57bd1c..7a94cd1c8 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -26,3 +26,15 @@ Work is in progress. Todo list:
   to get populated based on the export log in these cases.
 * Support export to additional special remotes (S3 etc)
 * Support export to external special remotes.
+
+Low priority:
+
+* When there are two pairs of duplicate files, and the filenames are
+  swapped around, the current rename handling renames both dups to a single
+  temp file, and so the other file in the pair gets re-uploaded
+  unnecessarily. This could be improved.
+
+  Perhaps: Find pairs of renames that swap content between two files.
+  Run each pair in turn. Then run the current rename code. Although this
+  still probably misses cases, where eg, content cycles among 3 files, and
+  the same content among 3 other files. Is there a general algorithm?

correction
diff --git a/doc/devblog/day_468__export_renames/comment_2_7fbbc5beb80acf32eff617ec704d5466._comment b/doc/devblog/day_468__export_renames/comment_2_7fbbc5beb80acf32eff617ec704d5466._comment
index f9c5663a0..fcd1eb66c 100644
--- a/doc/devblog/day_468__export_renames/comment_2_7fbbc5beb80acf32eff617ec704d5466._comment
+++ b/doc/devblog/day_468__export_renames/comment_2_7fbbc5beb80acf32eff617ec704d5466._comment
@@ -3,6 +3,17 @@
  subject="""comment 2"""
  date="2017-09-07T19:54:41Z"
  content="""
-Yes, I considered such cases, also cycles with multiple files, etc.
-All will work! :)
+Did not consider such a case. However, that's closely related to exporting
+files with the same content being inefficient. There's a move
+operation but no copy operation. I might add a copy operation eventually,
+unsure.
+
+If a copy operation is added, then that rename case can be handled more
+efficiently, by moving to the single temp file and copying. Although it
+might still involve the special remote doing more work than strictly
+necessary depending on how it implements copy.
+
+At some point you have to pick simplicity and ability to recover from
+problems over totally optimal speed though, and I think your case is a
+reasonable place to draw the line.
 """]]

comment
diff --git a/doc/devblog/day_468__export_renames/comment_2_7fbbc5beb80acf32eff617ec704d5466._comment b/doc/devblog/day_468__export_renames/comment_2_7fbbc5beb80acf32eff617ec704d5466._comment
new file mode 100644
index 000000000..f9c5663a0
--- /dev/null
+++ b/doc/devblog/day_468__export_renames/comment_2_7fbbc5beb80acf32eff617ec704d5466._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 2"""
+ date="2017-09-07T19:54:41Z"
+ content="""
+Yes, I considered such cases, also cycles with multiple files, etc.
+All will work! :)
+"""]]

avoid renaming to temp files before deleting
Only rename when actually necessary.
The diff gets buffered in memory. Probably git has to buffer a diff in
memory when generating it as well, so this memory usage should not be a
problem, even when the diff is very large. I hope.
This commit was supported by the NSF-funded DataLad project.
diff --git a/Command/Export.hs b/Command/Export.hs
index 2cf453ea1..09878dabf 100644
--- a/Command/Export.hs
+++ b/Command/Export.hs
@@ -5,6 +5,8 @@
  - Licensed under the GNU GPL version 3 or higher.
  -}
 
+{-# LANGUAGE TupleSections #-}
+
 module Command.Export where
 
 import Command
@@ -26,6 +28,7 @@ import Messages.Progress
 import Utility.Tmp
 
 import qualified Data.ByteString.Lazy as L
+import qualified Data.Map as M
 
 cmd :: Command
 cmd = command "export" SectionCommon
@@ -49,7 +52,7 @@ optParser _ = ExportOptions
 -- An export includes both annexed files and files stored in git.
 -- For the latter, a SHA1 key is synthesized.
 data ExportKey = AnnexKey Key | GitKey Key
-	deriving (Show)
+	deriving (Show, Eq, Ord)
 
 asKey :: ExportKey -> Key
 asKey (AnnexKey k) = k
@@ -103,17 +106,22 @@ seek o = do
 	case map exportedTreeish old of
 		[] -> return ()
 		[oldtreesha] -> do
-			-- Rename all old files to temp.
-			mapdiff
-				(\diff -> startMoveToTempName r db (Git.DiffTree.file diff) (Git.DiffTree.srcsha diff))
-				oldtreesha new
+			diffmap <- mkDiffMap oldtreesha new
+			let seekdiffmap a = seekActions $ pure $ map a (M.toList diffmap)
+			-- Rename old files to temp, or delete.
+			seekdiffmap $ \(ek, (moldf, mnewf)) ->
+				case (moldf, mnewf) of
+					(Just oldf, Just _newf) ->
+						startMoveToTempName r db oldf ek
+					(Just oldf, Nothing) -> 
+						startUnexport' r db oldf ek
+					_ -> stop
 			-- Rename from temp to new files.
-			mapdiff (\diff -> startMoveFromTempName r db (Git.DiffTree.dstsha diff) (Git.DiffTree.file diff))
-				oldtreesha new
-			-- Remove all remaining temps.
-			mapdiff
-				(startUnexportTempName r db . Git.DiffTree.srcsha)
-				oldtreesha new
+			seekdiffmap $ \(ek, (moldf, mnewf)) ->
+				case (moldf, mnewf) of
+					(Just _oldf, Just newf) ->
+						startMoveFromTempName r db ek newf
+					_ -> stop
 		ts -> do
 			warning "Export conflict detected. Different trees have been exported to the same special remote. Resolving.."
 			forM_ ts $ \oldtreesha -> do
@@ -126,7 +134,7 @@ seek o = do
 					, Git.DiffTree.dstsha d
 					]
 				-- Don't rename to temp, because the
-				-- content is unknown; unexport instead.
+				-- content is unknown; delete instead.
 				mapdiff
 					(\diff -> startUnexport r db (Git.DiffTree.file diff) (unexportboth diff))
 					oldtreesha new
@@ -152,6 +160,28 @@ seek o = do
 		seekActions $ pure $ map a diff
 		void $ liftIO cleanup
 
+-- Map of old and new filenames for each changed ExportKey in a diff.
+type DiffMap = M.Map ExportKey (Maybe TopFilePath, Maybe TopFilePath)
+
+mkDiffMap :: Git.Ref -> Git.Ref -> Annex DiffMap
+mkDiffMap old new = do
+	(diff, cleanup) <- inRepo $ Git.DiffTree.diffTreeRecursive old new
+	diffmap <- M.fromListWith combinedm . concat <$> forM diff mkdm
+	void $ liftIO cleanup
+	return diffmap
+  where
+	combinedm (srca, dsta) (srcb, dstb) = (srca <|> srcb, dsta <|> dstb)
+	mkdm i = do
+		srcek <- getk (Git.DiffTree.srcsha i)
+		dstek <- getk (Git.DiffTree.dstsha i)
+		return $ catMaybes
+			[ (, (Just (Git.DiffTree.file i), Nothing)) <$> srcek
+			, (, (Nothing, Just (Git.DiffTree.file i))) <$> dstek
+			]
+	getk sha
+		| sha == nullSha = return Nothing
+		| otherwise = Just <$> exportKey sha
+
 startExport :: Remote -> ExportHandle -> Git.LsTree.TreeItem -> CommandStart
 startExport r db ti = do
 	ek <- exportKey (Git.LsTree.sha ti)
@@ -204,6 +234,14 @@ startUnexport r db f shas = do
 	loc = ExportLocation $ toInternalGitPath f'
 	f' = getTopFilePath f
 
+startUnexport' :: Remote -> ExportHandle -> TopFilePath -> ExportKey -> CommandStart
+startUnexport' r db f ek = do
+	showStart "unexport" f'
+	next $ performUnexport r db [ek] loc
+  where
+	loc = ExportLocation $ toInternalGitPath f'
+	f' = getTopFilePath f
+
 performUnexport :: Remote -> ExportHandle -> [ExportKey] -> ExportLocation -> CommandPerform
 performUnexport r db eks loc = do
 	ifM (allM (\ek -> removeExport (exportActions r) (asKey ek) loc) eks)
@@ -236,27 +274,21 @@ startUnexportTempName r db sha
 			showStart "unexport" f
 			next $ performUnexport r db [ek] loc
 
-startMoveToTempName :: Remote -> ExportHandle -> TopFilePath -> Git.Sha -> CommandStart
-startMoveToTempName r db f sha
-	| sha == nullSha = stop
-	| otherwise = do
-		ek <- exportKey sha
-		let tmploc@(ExportLocation tmpf) = exportTempName ek
-		showStart "rename" (f' ++ " -> " ++ tmpf)
-		next $ performRename r db ek loc tmploc
+startMoveToTempName :: Remote -> ExportHandle -> TopFilePath -> ExportKey -> CommandStart
+startMoveToTempName r db f ek = do
+	let tmploc@(ExportLocation tmpf) = exportTempName ek
+	showStart "rename" (f' ++ " -> " ++ tmpf)
+	next $ performRename r db ek loc tmploc
   where
 	loc = ExportLocation $ toInternalGitPath f'
 	f' = getTopFilePath f
 
-startMoveFromTempName :: Remote -> ExportHandle -> Git.Sha -> TopFilePath -> CommandStart
-startMoveFromTempName r db sha f
-	| sha == nullSha = stop
-	| otherwise = do
-		ek <- exportKey sha
-		let tmploc@(ExportLocation tmpf) = exportTempName ek
-		stopUnless (liftIO $ elem tmploc <$> getExportLocation db (asKey ek)) $ do
-			showStart "rename" (tmpf ++ " -> " ++ f')
-			next $ performRename r db ek tmploc loc
+startMoveFromTempName :: Remote -> ExportHandle -> ExportKey -> TopFilePath -> CommandStart
+startMoveFromTempName r db ek f = do
+	let tmploc@(ExportLocation tmpf) = exportTempName ek
+	stopUnless (liftIO $ elem tmploc <$> getExportLocation db (asKey ek)) $ do
+		showStart "rename" (tmpf ++ " -> " ++ f')
+		next $ performRename r db ek tmploc loc
   where
 	loc = ExportLocation $ toInternalGitPath f'
 	f' = getTopFilePath f
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index 8f5c3f8f1..c4e57bd1c 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -24,8 +24,5 @@ Work is in progress. Todo list:
   export from another repository also doesn't work right, because the
   export database is not populated. So, seems that the export database needs
   to get populated based on the export log in these cases.
-* Currently all modified/deleted files are renamed to temp files,
-  even when they won't be used. Avoid doing this unless the
-  temp file will be renamed to the new filename.
 * Support export to additional special remotes (S3 etc)
 * Support export to external special remotes.

prevent exporttree=yes on remotes that don't support exports
Don't allow "exporttree=yes" to be set when the special remote
does not support exports. That would be confusing since the user would
set up a special remote for exports, but `git annex export` to it would
later fail.
This commit was supported by the NSF-funded DataLad project.
diff --git a/Annex/Content.hs b/Annex/Content.hs
index b74b39753..0b665d4dc 100644
--- a/Annex/Content.hs
+++ b/Annex/Content.hs
@@ -359,7 +359,7 @@ shouldVerify (RemoteVerify r) =
 			<&&> pure (remoteAnnexVerify (Types.Remote.gitconfig r)))
 	-- Export remotes are not key/value stores, so always verify
 	-- content from them even when verification is disabled.
-	<||> Types.Remote.exportSupported (Types.Remote.exportActions r)
+	<||> Types.Remote.isExportSupported r
 
 {- Checks if there is enough free disk space to download a key
  - to its temp file.
diff --git a/Command/Export.hs b/Command/Export.hs
index d397b2def..2cf453ea1 100644
--- a/Command/Export.hs
+++ b/Command/Export.hs
@@ -77,7 +77,7 @@ exportTempName ek = ExportLocation $
 seek :: ExportOptions -> CommandSeek
 seek o = do
 	r <- getParsed (exportRemote o)
-	unlessM (exportSupported (exportActions r)) $
+	unlessM (isExportSupported r) $
 		giveup "That remote does not support exports."
 
 	new <- fromMaybe (giveup "unknown tree") <$>
diff --git a/Logs/Trust.hs b/Logs/Trust.hs
index 85b62ed74..54cafc9f4 100644
--- a/Logs/Trust.hs
+++ b/Logs/Trust.hs
@@ -67,7 +67,7 @@ trustMapLoad = do
 	overrides <- Annex.getState Annex.forcetrust
 	l <- remoteList
 	-- Exports are never trusted, since they are not key/value stores.
-	exports <- filterM (Types.Remote.exportSupported . Types.Remote.exportActions) l
+	exports <- filterM Types.Remote.isExportSupported l
 	let exportoverrides = M.fromList $
 		map (\r -> (Types.Remote.uuid r, UnTrusted)) exports
 	logged <- trustMapRaw
diff --git a/Remote.hs b/Remote.hs
index 877c9f37d..8d826712c 100644
--- a/Remote.hs
+++ b/Remote.hs
@@ -53,6 +53,7 @@ module Remote (
 	checkAvailable,
 	isXMPPRemote,
 	claimingUrl,
+	isExportSupported,
 ) where
 
 import Data.Ord
diff --git a/Remote/BitTorrent.hs b/Remote/BitTorrent.hs
index 9a1be1c0e..37594bd11 100644
--- a/Remote/BitTorrent.hs
+++ b/Remote/BitTorrent.hs
@@ -36,12 +36,13 @@ import qualified Data.ByteString.Lazy as B
 #endif
 
 remote :: RemoteType
-remote = RemoteType {
-	typename = "bittorrent",
-	enumerate = list,
-	generate = gen,
-	setup = error "not supported"
-}
+remote = RemoteType
+	{ typename = "bittorrent"
+	, enumerate = list
+	, generate = gen
+	, setup = error "not supported"
+	, exportSupported = exportUnsupported
+	}
 
 -- There is only one bittorrent remote, and it always exists.
 list :: Bool -> Annex [Git.Repo]
diff --git a/Remote/Bup.hs b/Remote/Bup.hs
index 6ff2aa885..4180cbb7d 100644
--- a/Remote/Bup.hs
+++ b/Remote/Bup.hs
@@ -35,12 +35,13 @@ import Utility.Metered
 type BupRepo = String
 
 remote :: RemoteType
-remote = RemoteType {
-	typename = "bup",
-	enumerate = const (findSpecialRemotes "buprepo"),
-	generate = gen,
-	setup = bupSetup
-}
+remote = RemoteType
+	{ typename = "bup"
+	, enumerate = const (findSpecialRemotes "buprepo")
+	, generate = gen
+	, setup = bupSetup
+	, exportSupported = exportUnsupported
+	}
 
 gen :: Git.Repo -> UUID -> RemoteConfig -> RemoteGitConfig -> Annex (Maybe Remote)
 gen r u c gc = do
diff --git a/Remote/Ddar.hs b/Remote/Ddar.hs
index c5d02a4e6..3949bf569 100644
--- a/Remote/Ddar.hs
+++ b/Remote/Ddar.hs
@@ -30,12 +30,13 @@ data DdarRepo = DdarRepo
 	}
 
 remote :: RemoteType
-remote = RemoteType {
-	typename = "ddar",
-	enumerate = const (findSpecialRemotes "ddarrepo"),
-	generate = gen,
-	setup = ddarSetup
-}
+remote = RemoteType
+	{ typename = "ddar"
+	, enumerate = const (findSpecialRemotes "ddarrepo")
+	, generate = gen
+	, setup = ddarSetup
+	, exportSupported = exportUnsupported
+	}
 
 gen :: Git.Repo -> UUID -> RemoteConfig -> RemoteGitConfig -> Annex (Maybe Remote)
 gen r u c gc = do
diff --git a/Remote/Directory.hs b/Remote/Directory.hs
index 512ba1cef..22413b7e9 100644
--- a/Remote/Directory.hs
+++ b/Remote/Directory.hs
@@ -33,18 +33,19 @@ import Utility.Metered
 import Utility.Tmp
 
 remote :: RemoteType
-remote = RemoteType {
-	typename = "directory",
-	enumerate = const (findSpecialRemotes "directory"),
-	generate = gen,
-	setup = exportableRemoteSetup directorySetup
-}
+remote = RemoteType
+	{ typename = "directory"
+	, enumerate = const (findSpecialRemotes "directory")
+	, generate = gen
+	, setup = directorySetup
+	, exportSupported = exportIsSupported
+	}
 
 gen :: Git.Repo -> UUID -> RemoteConfig -> RemoteGitConfig -> Annex (Maybe Remote)
 gen r u c gc = do
 	cst <- remoteCost gc cheapRemoteCost
 	let chunkconfig = getChunkConfig c
-	exportableRemote $ specialRemote c
+	return $ Just $ specialRemote c
 		(prepareStore dir chunkconfig)
 		(retrieve dir chunkconfig)
 		(simplyPrepare $ remove dir)
@@ -61,8 +62,7 @@ gen r u c gc = do
 			, checkPresent = checkPresentDummy
 			, checkPresentCheap = True
 			, exportActions = ExportActions
-				{ exportSupported = return True
-				, storeExport = storeExportDirectory dir
+				{ storeExport = storeExportDirectory dir
 				, retrieveExport = retrieveExportDirectory dir
 				, removeExport = removeExportDirectory dir
 				, checkPresentExport = checkPresentExportDirectory dir
diff --git a/Remote/External.hs b/Remote/External.hs
index fca60a995..71a07d3ea 100644
--- a/Remote/External.hs
+++ b/Remote/External.hs
@@ -40,12 +40,13 @@ import System.Log.Logger (debugM)
 import qualified Data.Map as M
 
 remote :: RemoteType
-remote = RemoteType {
-	typename = "external",
-	enumerate = const (findSpecialRemotes "externaltype"),
-	generate = gen,
-	setup = externalSetup
-}
+remote = RemoteType
+	{ typename = "external"
+	, enumerate = const (findSpecialRemotes "externaltype")
+	, generate = gen
+	, setup = externalSetup
+	, exportSupported = exportUnsupported
+	}
 
 gen :: Git.Repo -> UUID -> RemoteConfig -> RemoteGitConfig -> Annex (Maybe Remote)
 gen r u c gc
diff --git a/Remote/GCrypt.hs b/Remote/GCrypt.hs
index dd681a75c..3270a1dc7 100644
--- a/Remote/GCrypt.hs
+++ b/Remote/GCrypt.hs
@@ -52,14 +52,15 @@ import Utility.Gpg
 import Utility.SshHost
 
 remote :: RemoteType
-remote = RemoteType {
-	typename = "gcrypt",
+remote = RemoteType
+	{ typename = "gcrypt"

(Diff truncated)
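The truncated diff above repeats one mechanical change across every remote backend: the `RemoteType` record gains an `exportSupported` field. As a rough sketch of the resulting pattern (the backend name and setup function below are hypothetical, not from the source), a backend that cannot export trees now declares:

	remote :: RemoteType
	remote = RemoteType
		{ typename = "myremote"  -- hypothetical backend name
		, enumerate = const (findSpecialRemotes "myremote")
		, generate = gen
		, setup = myremoteSetup  -- hypothetical setup action
		, exportSupported = exportUnsupported
		}

A backend that does support exports, like Remote.Directory above, instead declares `exportSupported = exportIsSupported` and fills in `ExportActions`, which no longer carries a per-remote `exportSupported` field.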
document new stuff for external special remotes
Got rid of RENAMEEXPORT-UNSUPPORTED, no reason not to use
RENAMEEXPORT-FAILURE for that.
This commit was supported by the NSF-funded DataLad project.
diff --git a/doc/design/exporting_trees_to_special_remotes.mdwn b/doc/design/exporting_trees_to_special_remotes.mdwn
index 2b5217d95..6e7cc68db 100644
--- a/doc/design/exporting_trees_to_special_remotes.mdwn
+++ b/doc/design/exporting_trees_to_special_remotes.mdwn
@@ -114,9 +114,8 @@ Here's the changes to the latter:
 * `RENAMEEXPORT Key NewName`  
   Requests the remote rename a file stored on it from the previously
   provided Name to the NewName.  
-  The remote responds with `RENAMEEXPORT-SUCCESS`,
-  `RENAMEEXPORT-FAILURE`, or with `RENAMEEXPORT-UNSUPPORTED` if an efficient
-  rename cannot be done.
+  The remote responds with `RENAMEEXPORT-SUCCESS` or with
+  `RENAMEEXPORT-FAILURE` if an efficient rename cannot be done.
 
 To support old external special remote programs that have not been updated
 to support exports, git-annex will need to handle an `ERROR` response
diff --git a/doc/design/external_special_remote_protocol.mdwn b/doc/design/external_special_remote_protocol.mdwn
index 87a838bd4..8a34bb2d7 100644
--- a/doc/design/external_special_remote_protocol.mdwn
+++ b/doc/design/external_special_remote_protocol.mdwn
@@ -43,7 +43,8 @@ the version of the protocol it is using.
 
 Once it knows the version, git-annex will generally 
 send a message telling the special remote to start up.
-(Or it might send a INITREMOTE, so don't hardcode this order.)
+(Or it might send an INITREMOTE or EXPORTSUPPORTED,
+so don't hardcode this order.)
 
 	PREPARE
 
@@ -102,7 +103,7 @@ The following requests *must* all be supported by the special remote.
   So any one-time setup tasks should be done idempotently.
 * `PREPARE`  
   Tells the remote that it's time to prepare itself to be used.  
-  Only INITREMOTE can come before this.
+  Only INITREMOTE or EXPORTSUPPORTED can come before this.
 * `TRANSFER STORE|RETRIEVE Key File`  
   Requests the transfer of a key. For STORE, the File is the file to upload;
   for RETRIEVE the File is where to store the download.  
@@ -143,6 +144,46 @@ replying with `UNSUPPORTED-REQUEST` is acceptable.
   network access.  
   This is not needed when `SETURIPRESENT` is used, since such uris are
   automatically displayed by `git annex whereis`.  
+* `EXPORTSUPPORTED`  
+  Used to check if a special remote supports exports. The remote
+  responds with either `EXPORTSUPPORTED-SUCCESS` or
+  `EXPORTSUPPORTED-FAILURE`. Note that this request may be made before
+  or after `PREPARE`.
+* `EXPORT Name`  
+  Comes immediately before each of the following export-related requests, 
+  specifying the name of the exported file. It will be in the form
+  of a relative path, and may contain path separators, whitespace,
+  and other special characters.
+* `TRANSFEREXPORT STORE|RETRIEVE Key File`  
+  Requests the transfer of a File on local disk to or from the previously 
+  provided Name on the special remote.  
+  Note that it's important that, while a file is being stored,
+  CHECKPRESENTEXPORT not indicate it's present until all the data has
+  been transferred.  
+  The remote responds with either `TRANSFER-SUCCESS` or
+  `TRANSFER-FAILURE`, and a remote where exports do not make sense
+  may always fail.
+* `CHECKPRESENTEXPORT Key`  
+  Requests the remote to check if the previously provided Name is present
+  in it.  
+  The remote responds with `CHECKPRESENT-SUCCESS`, `CHECKPRESENT-FAILURE`,
+  or `CHECKPRESENT-UNKNOWN`.
+* `REMOVEEXPORT Key`  
+  Requests the remote to remove content stored by `TRANSFEREXPORT`
+  with the previously provided Name.  
+  The remote responds with either `REMOVE-SUCCESS` or
+  `REMOVE-FAILURE`.  
+  If the content was already not present in the remote, it should
+  respond with `REMOVE-SUCCESS`.
+* `RENAMEEXPORT Key NewName`  
+  Requests the remote rename a file stored on it from the previously
+  provided Name to the NewName.  
+  The remote responds with `RENAMEEXPORT-SUCCESS` or
+  `RENAMEEXPORT-FAILURE`.
+
+To support old external special remote programs that have not been updated
+to support exports, git-annex will need to handle an `ERROR` response
+when using any of the above.
 
 More optional requests may be added, without changing the protocol version,
 so if an unknown request is seen, reply with `UNSUPPORTED-REQUEST`.
@@ -210,6 +251,15 @@ while it's handling a request.
   stored in the special remote.
 * `WHEREIS-FAILURE`  
   Indicates that no location is known for a key.
+* `EXPORTSUPPORTED-SUCCESS`  
+  Indicates that it makes sense to use this special remote as an export.
+* `EXPORTSUPPORTED-FAILURE`  
+  Indicates that it does not make sense to use this special remote as an
+  export.
+* `RENAMEEXPORT-SUCCESS`  
+  Indicates that a `RENAMEEXPORT` was done successfully.
+* `RENAMEEXPORT-FAILURE`  
+  Indicates that a `RENAMEEXPORT` failed for whatever reason.
 * `UNSUPPORTED-REQUEST`  
   Indicates that the special remote does not know how to handle a request.
 

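To make the new requests concrete, here is a minimal sketch of a dispatch loop for an external special remote answering the export requests. This is not from git-annex or any real remote: error handling is omitted, the naive `words`-based parsing would mishandle file names containing whitespace, and the reply argument formats (e.g. the key echoed after `TRANSFER-SUCCESS STORE`) follow the wider protocol rather than the excerpt above.

	import Control.Monad (forever)
	import Data.IORef
	import System.IO

	main :: IO ()
	main = do
		hSetBuffering stdout LineBuffering
		putStrLn "VERSION 1"
		-- Name supplied by the EXPORT request that precedes each
		-- export-related request.
		namev <- newIORef (Nothing :: Maybe String)
		forever $ do
			l <- getLine
			case words l of
				["EXPORTSUPPORTED"] ->
					putStrLn "EXPORTSUPPORTED-SUCCESS"
				("EXPORT":name) ->
					writeIORef namev (Just (unwords name))
				["TRANSFEREXPORT", "STORE", key, _file] ->
					-- A real remote would copy the file to the
					-- name recorded in namev before replying.
					putStrLn ("TRANSFER-SUCCESS STORE " ++ key)
				["CHECKPRESENTEXPORT", key] ->
					putStrLn ("CHECKPRESENT-FAILURE " ++ key)
				["RENAMEEXPORT", _key, _newname] ->
					putStrLn "RENAMEEXPORT-FAILURE"
				_ -> putStrLn "UNSUPPORTED-REQUEST"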
diff --git a/doc/forum/Shared_directory_with_non_git-annex_users.mdwn b/doc/forum/Shared_directory_with_non_git-annex_users.mdwn
new file mode 100644
index 000000000..b6e9d5aaa
--- /dev/null
+++ b/doc/forum/Shared_directory_with_non_git-annex_users.mdwn
@@ -0,0 +1,15 @@
+Hello,
+
+I want to use git-annex on a directory that other people, who do not use git-annex, also use, i.e. they modify, add, and remove files in it.
+
+What do I need to consider when doing that?
+
+ * Use indirect mode
+ * Put the .git directory somewhere else?
+ * Be prepared that a ```git annex get``` can always turn up checksum problems because someone modified the file
+
+I assume git-annex checksums before pushing to the repo, so that I don't overwrite someone else's changes.
+
+What else?
+
+Thanks!

Added a comment
diff --git a/doc/forum/Is_there_a_way_to_unannex_some_file___63__/comment_1_82095cef37dd815bcd1d101a1715e3c7._comment b/doc/forum/Is_there_a_way_to_unannex_some_file___63__/comment_1_82095cef37dd815bcd1d101a1715e3c7._comment
new file mode 100644
index 000000000..0f87f1460
--- /dev/null
+++ b/doc/forum/Is_there_a_way_to_unannex_some_file___63__/comment_1_82095cef37dd815bcd1d101a1715e3c7._comment
@@ -0,0 +1,12 @@
+[[!comment format=mdwn
+ username="Horus"
+ avatar="http://cdn.libravatar.org/avatar/8f0ee08b98ea5bba76c3fe112c08851c"
+ subject="comment 1"
+ date="2017-09-07T09:30:53Z"
+ content="""
+Well, there is ```git annex unannex ...``` 
+
+<https://git-annex.branchable.com/git-annex-unannex/>
+
+Is that what you're looking for?
+"""]]

Added a comment
diff --git a/doc/devblog/day_468__export_renames/comment_1_4c37e6f9ac1e1571495b0d355c8af9ff._comment b/doc/devblog/day_468__export_renames/comment_1_4c37e6f9ac1e1571495b0d355c8af9ff._comment
new file mode 100644
index 000000000..b9bc07ee4
--- /dev/null
+++ b/doc/devblog/day_468__export_renames/comment_1_4c37e6f9ac1e1571495b0d355c8af9ff._comment
@@ -0,0 +1,11 @@
+[[!comment format=mdwn
+ username="anthony@ad39673d230d75cbfd19d2757d754030049c7673"
+ nickname="anthony"
+ avatar="http://cdn.libravatar.org/avatar/05b48b72766177b3b0a6ff4afdb70790"
+ subject="comment 1"
+ date="2017-09-06T22:01:57Z"
+ content="""
+Looking forward to the export support! 
+
+Just curious (and I apologize in advance for any nightmares induced) did you consider the case of four files, A–D, where (content-wise) A=B and C=D and the change is to swap A/C and B/D? That'd potentially be an issue, since four files want to share two temporary names. Unless of course it's all only done with pairwise swaps.
+"""]]

devblog
diff --git a/doc/devblog/day_468__export_renames.mdwn b/doc/devblog/day_468__export_renames.mdwn
new file mode 100644
index 000000000..e40ebac90
--- /dev/null
+++ b/doc/devblog/day_468__export_renames.mdwn
@@ -0,0 +1,23 @@
+I knew that making `git annex export` handle renames efficiently would take
+a whole day somehow.
+
+Indeed, thinking it over, it is a seriously super hairy thing. Renames can swap
+contents between two or more files, and so temp files are needed. It has to
+handle cleaning up temp files after interrupted exports, which may be
+resumed with the same or a different tree. It also has to recover from
+export conflicts, which could cause the wrong content to be renamed to a file.
+
+I think I've thought through everything and found a way to deal with it all.
+Here's how it looks in operation swapping two files:
+
+	git annex export master --to dir
+	rename bar -> .git-annex-tmp-content-SHA256E-s30--472b01bf6234c98ce03d1386483ae578f6e58033974a1363da2606f9fa0e222a ok
+	rename foo -> .git-annex-tmp-content-SHA256E-s4--b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c ok
+	rename .git-annex-tmp-content-SHA256E-s4--b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c -> bar ok
+	rename .git-annex-tmp-content-SHA256E-s30--472b01bf6234c98ce03d1386483ae578f6e58033974a1363da2606f9fa0e222a -> foo ok
+	(recording state in git...)
+
+The export todo list is only getting longer.. But the branch may
+be close to being merged.
+
+Today's work was supported by the NSF-funded DataLad project.

fix consistency bug reading from export database
The export database has writes made to it and then expects to read back
the same data immediately. But, the way that Database.Handle does
writes, in order to support multiple writers, makes that not work, due
to caching issues. This resulted in export re-uploading files it had
already successfully renamed into place.
Fixed by allowing databases to be opened in MultiWriter or SingleWriter
mode. The export database only needs to support a single writer; it does
not make sense for multiple exports to run at the same time to the same
special remote.
All other databases still use MultiWriter mode. And by inspection,
nothing else in git-annex seems to be relying on being able to
immediately query for changes that were just written to the database.
This commit was supported by the NSF-funded DataLad project.
diff --git a/Database/Export.hs b/Database/Export.hs
index dcef88854..00c6ab251 100644
--- a/Database/Export.hs
+++ b/Database/Export.hs
@@ -48,7 +48,7 @@ openDb u = do
 	unlessM (liftIO $ doesFileExist db) $ do
 		initDb db $ void $
 			runMigrationSilent migrateExport
-	h <- liftIO $ H.openDbQueue db "exported"
+	h <- liftIO $ H.openDbQueue H.SingleWriter db "exported"
 	return $ ExportHandle h
 
 closeDb :: ExportHandle -> Annex ()
diff --git a/Database/Fsck.hs b/Database/Fsck.hs
index 9affeac85..1ce513dcf 100644
--- a/Database/Fsck.hs
+++ b/Database/Fsck.hs
@@ -63,7 +63,7 @@ openDb u = do
 		initDb db $ void $
 			runMigrationSilent migrateFsck
 	lockFileCached =<< fromRepo (gitAnnexFsckDbLock u)
-	h <- liftIO $ H.openDbQueue db "fscked"
+	h <- liftIO $ H.openDbQueue H.MultiWriter db "fscked"
 	return $ FsckHandle h u
 
 closeDb :: FsckHandle -> Annex ()
diff --git a/Database/Handle.hs b/Database/Handle.hs
index 7827be749..f5a0a5dda 100644
--- a/Database/Handle.hs
+++ b/Database/Handle.hs
@@ -9,6 +9,7 @@
 
 module Database.Handle (
 	DbHandle,
+	DbConcurrency(..),
 	openDb,
 	TableName,
 	queryDb,
@@ -35,27 +36,49 @@ import System.IO
 
 {- A DbHandle is a reference to a worker thread that communicates with
  - the database. It has a MVar which Jobs are submitted to. -}
-data DbHandle = DbHandle (Async ()) (MVar Job)
+data DbHandle = DbHandle DbConcurrency (Async ()) (MVar Job)
 
 {- Name of a table that should exist once the database is initialized. -}
 type TableName = String
 
+{- Sqlite only allows a single write to a database at a time; a concurrent
+ - write will crash. 
+ -
+ - A DbHandle serializes concurrent writes from multiple threads.
+ - But when a database can be written to by multiple processes
+ - concurrently, use MultiWriter so that writes to the database
+ - are done robustly.
+ - 
+ - The downside of using MultiWriter is that after writing a change to the
+ - database, a query using the same DbHandle will not immediately see
+ - the change! This is because the change is actually written using a
+ - separate database connection, and caching can prevent seeing the change.
+ - Also, consider that if multiple processes are writing to a database,
+ - you can't rely on seeing values you've just written anyway, as another
+ - process may change them.
+ -
+ - When a database can only be written to by a single process, use
+ - SingleWriter. Changes written to the database will always be immediately
+ - visible then.
+ -}
+data DbConcurrency = SingleWriter | MultiWriter
+
 {- Opens the database, but does not perform any migrations. Only use
- - if the database is known to exist and have the right tables. -}
-openDb :: FilePath -> TableName -> IO DbHandle
-openDb db tablename = do
+ - once the database is known to exist and have the right tables. -}
+openDb :: DbConcurrency -> FilePath -> TableName -> IO DbHandle
+openDb dbconcurrency db tablename = do
 	jobs <- newEmptyMVar
 	worker <- async (workerThread (T.pack db) tablename jobs)
 	
 	-- work around https://github.com/yesodweb/persistent/issues/474
 	liftIO $ fileEncoding stderr
 
-	return $ DbHandle worker jobs
+	return $ DbHandle dbconcurrency worker jobs
 
 {- This is optional; when the DbHandle gets garbage collected it will
  - auto-close. -}
 closeDb :: DbHandle -> IO ()
-closeDb (DbHandle worker jobs) = do
+closeDb (DbHandle _ worker jobs) = do
 	putMVar jobs CloseJob
 	wait worker
 
@@ -68,9 +91,12 @@ closeDb (DbHandle worker jobs) = do
  - Only one action can be run at a time against a given DbHandle.
  - If called concurrently in the same process, this will block until
  - it is able to run.
+ -
+ - Note that when the DbHandle was opened in MultiWriter mode, recent
+ - writes may not be seen by queryDb.
  -}
 queryDb :: DbHandle -> SqlPersistM a -> IO a
-queryDb (DbHandle _ jobs) a = do
+queryDb (DbHandle _ _ jobs) a = do
 	res <- newEmptyMVar
 	putMVar jobs $ QueryJob $
 		liftIO . putMVar res =<< tryNonAsync a
@@ -79,9 +105,9 @@ queryDb (DbHandle _ jobs) a = do
 
 {- Writes a change to the database.
  -
- - If a database is opened multiple times and there's a concurrent writer,
- - the write could fail. Retries repeatedly for up to 10 seconds, 
- - which should avoid all but the most exceptional problems.
+ - In MultiWriter mode, catches failure to write to the database,
+ - and retries repeatedly for up to 10 seconds,  which should avoid
+ - all but the most exceptional problems.
  -}
 commitDb :: DbHandle -> SqlPersistM () -> IO ()
 commitDb h wa = robustly Nothing 100 (commitDb' h wa)
@@ -97,15 +123,22 @@ commitDb h wa = robustly Nothing 100 (commitDb' h wa)
 				robustly (Just e) (n-1) a
 
 commitDb' :: DbHandle -> SqlPersistM () -> IO (Either SomeException ())
-commitDb' (DbHandle _ jobs) a = do
+commitDb' (DbHandle MultiWriter _ jobs) a = do
 	res <- newEmptyMVar
-	putMVar jobs $ ChangeJob $ \runner ->
+	putMVar jobs $ RobustChangeJob $ \runner ->
 		liftIO $ putMVar res =<< tryNonAsync (runner a)
 	takeMVar res
+commitDb' (DbHandle SingleWriter _ jobs) a = do
+	res <- newEmptyMVar
+	putMVar jobs $ ChangeJob $
+		liftIO . putMVar res =<< tryNonAsync a
+	takeMVar res
+		`catchNonAsync` (const $ error "sqlite commit crashed")
 
 data Job
 	= QueryJob (SqlPersistM ())
-	| ChangeJob ((SqlPersistM () -> IO ()) -> IO ())
+	| ChangeJob (SqlPersistM ())
+	| RobustChangeJob ((SqlPersistM () -> IO ()) -> IO ())
 	| CloseJob
 
 workerThread :: T.Text -> TableName -> MVar Job -> IO ()
@@ -127,10 +160,12 @@ workerThread db tablename jobs =
 			Left BlockedIndefinitelyOnMVar -> return ()
 			Right CloseJob -> return ()
 			Right (QueryJob a) -> a >> loop
-			-- change is run in a separate database connection
+			Right (ChangeJob a) -> a >> loop
+			-- Change is run in a separate database connection
 			-- since sqlite only supports a single writer at a
 			-- time, and it may crash the database connection
-			Right (ChangeJob a) -> liftIO (a (runSqliteRobustly tablename db)) >> loop
+			-- that the write is made to.
+			Right (RobustChangeJob a) -> liftIO (a (runSqliteRobustly tablename db)) >> loop
 	
 -- like runSqlite, but calls settle on the raw sql Connection.
 runSqliteRobustly :: TableName -> T.Text -> (SqlPersistM a) -> IO a
diff --git a/Database/Keys.hs b/Database/Keys.hs
index b9440ac1a..282da9f94 100644
--- a/Database/Keys.hs
+++ b/Database/Keys.hs
@@ -124,7 +124,7 @@ openDb createdb _ = catchPermissionDenied permerr $ withExclusiveLock gitAnnexKe
 			open db
 		(False, False) -> return DbUnavailable
   where
-	open db = liftIO $ DbOpen <$> H.openDbQueue db SQL.containedTable
+	open db = liftIO $ DbOpen <$> H.openDbQueue H.MultiWriter db SQL.containedTable
 	-- If permissions don't allow opening the database, treat it as if
 	-- it does not exist.
 	permerr e = case createdb of
diff --git a/Database/Queue.hs b/Database/Queue.hs
index 143871079..f0a2d2b65 100644
--- a/Database/Queue.hs
+++ b/Database/Queue.hs
@@ -9,6 +9,7 @@
 
 module Database.Queue (
 	DbQueue,
+	DbConcurrency(..),
 	openDbQueue,
 	queryDbQueue,
 	closeDbQueue,
@@ -35,9 +36,9 @@ data DbQueue = DQ DbHandle (MVar Queue)
 {- Opens the database queue, but does not perform any migrations. Only use
  - if the database is known to exist and have the right tables; ie after
  - running initDb. -}
-openDbQueue :: FilePath -> TableName -> IO DbQueue
-openDbQueue db tablename = DQ
-	<$> openDb db tablename
+openDbQueue :: DbConcurrency -> FilePath -> TableName -> IO DbQueue
+openDbQueue dbconcurrency db tablename = DQ
+	<$> openDb dbconcurrency db tablename
 	<*> (newMVar =<< emptyQueue)
 
 {- This or flushDbQueue must be called, eg at program exit to ensure

(Diff truncated)
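As a usage sketch of the new mode parameter (assuming `Database.Queue` is what the `H` qualifier in the diff above refers to, as the surrounding code suggests):

	import qualified Database.Queue as H

	-- The export database only ever has one writer, so SingleWriter
	-- keeps writes immediately visible to queries on the same handle.
	openExportQueue :: FilePath -> IO H.DbQueue
	openExportQueue db = H.openDbQueue H.SingleWriter db "exported"

	-- The fsck database can be written by multiple processes, so it
	-- stays in the robust (but cached) MultiWriter mode.
	openFsckQueue :: FilePath -> IO H.DbQueue
	openFsckQueue db = H.openDbQueue H.MultiWriter db "fscked"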
todo
diff --git a/doc/git-annex-export.mdwn b/doc/git-annex-export.mdwn
index e3cbcbd7a..72319a8fc 100644
--- a/doc/git-annex-export.mdwn
+++ b/doc/git-annex-export.mdwn
@@ -15,8 +15,9 @@ keys. That is great for reliable data storage, but your filenames are
 obscured. Exporting replicates the tree to the special remote as-is.
 
 Mixing key/value storage and exports in the same remote would be a mess and
-so is not allowed. You have to configure a remote with `exporttree=yes`
-when initially setting it up with [[git-annex-initremote]](1).
+so is not allowed. You have to configure a special remote with
+`exporttree=yes` when initially setting it up with
+[[git-annex-initremote]](1).
 
 Repeated exports are done efficiently, by diffing the old and new tree,
 and transferring only the changed files.
@@ -54,7 +55,7 @@ export`, it will detect the export conflict, and resolve it.
 
 [[git-annex]](1)
 
-[[git-annex-export]](1)
+[[git-annex-initremote]](1)
 
 # AUTHOR
 
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index f345534e8..24b49ca85 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -17,6 +17,10 @@ there need to be a new interface in supported remotes?
 
 Work is in progress. Todo list:
 
+* initremote: Don't allow "exporttree=yes" to be set when the special remote
+  does not support exports. That would be confusing since the user would
+  set up a special remote for exports, but `git annex export` to it would
+  later fail..
 * `git annex get --from export` works in the repo that exported to it,
   but in another repo, the export db won't be populated, so it won't work.
   Maybe just show a useful error message in this case?  
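For concreteness, the workflow required by the man page change above would look something like this (a sketch; the remote name `mydir`, the directory path, and the `encryption=none` choice are illustrative, not taken from the docs above):

	git annex initremote mydir type=directory directory=/mnt/export encryption=none exporttree=yes
	git annex export master --to mydir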

export file renaming
This is seriously super hairy. It has to handle interrupted exports,
which may be resumed with the same or a different tree. It also has to
recover from export conflicts, which could cause the wrong content
to be renamed to a file.
I think this works, or is close to working. See the update to the design
for how it works.
This is definitely not optimal, in that it does more renames than are
necessary. It would probably be worth finding the keys that are really
renamed and only renaming those. But let's get the "simple" approach to
work first..
This commit was supported by the NSF-funded DataLad project.
diff --git a/Command/Export.hs b/Command/Export.hs
index 878cda8e3..6090b2603 100644
--- a/Command/Export.hs
+++ b/Command/Export.hs
@@ -67,6 +67,12 @@ exportKey sha = mk <$> catKey sha
 		, keyChunkNum = Nothing
 		}
 
+-- To handle renames which swap files, the exported file is first renamed
+-- to a stable temporary name based on the key.
+exportTempName :: ExportKey -> ExportLocation
+exportTempName ek = ExportLocation $ 
+	".git-annex-tmp-content-" ++ key2file (asKey (ek))
+
 seek :: ExportOptions -> CommandSeek
 seek o = do
 	r <- getParsed (exportRemote o)
@@ -78,23 +84,51 @@ seek o = do
 		-- or tag.
 		inRepo (Git.Ref.tree (exportTreeish o))
 	old <- getExport (uuid r)
-
 	recordExportBeginning (uuid r) new
-	when (length old > 1) $
-		warning "Export conflict detected. Different trees have been exported to the same special remote. Resolving.."
-
 	db <- openDb (uuid r)
 	
-	-- First, diff the old and new trees and delete all changed
-	-- files in the export. Every file that remains in the export will
-	-- have the content from the new treeish.
+	-- Clean up after incomplete export of a tree, in which
+	-- the next block of code below may have renamed some files to
+	-- temp files. Diff from the incomplete tree to the new tree,
+	-- and delete any temp files that the new tree can't use.
+	forM_ (concatMap incompleteExportedTreeish old) $ \incomplete ->
+		mapdiff (startUnexportTempName r db . Git.DiffTree.srcsha) incomplete new
+
+	-- Diff the old and new trees, and delete or rename to new name all
+	-- changed files in the export. After this, every file that remains
+	-- in the export will have the content from the new treeish.
 	-- 
 	-- (Also, when there was an export conflict, this resolves it.)
-	forM_ (map exportedTreeish old) $ \oldtreesha -> do
-		(diff, cleanup) <- inRepo $
-			Git.DiffTree.diffTreeRecursive oldtreesha new
-		seekActions $ pure $ map (startUnexport r db) diff
-		void $ liftIO cleanup
+	case map exportedTreeish old of
+		[] -> return ()
+		[oldtreesha] -> do
+			-- Rename all old files to temp.
+			mapdiff
+				(\diff -> startMoveToTempName r db (Git.DiffTree.file diff) (Git.DiffTree.srcsha diff))
+				oldtreesha new
+			-- Rename from temp to new files.
+			mapdiff (\diff -> startMoveFromTempName r db (Git.DiffTree.dstsha diff) (Git.DiffTree.file diff))
+				new oldtreesha
+			-- Remove all remaining temps.
+			mapdiff
+				(startUnexportTempName r db . Git.DiffTree.srcsha)
+				oldtreesha new
+		ts -> do
+			warning "Export conflict detected. Different trees have been exported to the same special remote. Resolving.."
+			forM_ ts $ \oldtreesha -> do
+				-- Unexport both the srcsha and the dstsha,
+				-- because the wrong content may have
+				-- been renamed to the dstsha due to the
+				-- export conflict.
+				let unexportboth d = 
+					[ Git.DiffTree.srcsha d 
+					, Git.DiffTree.dstsha d
+					]
+				-- Don't rename to temp, because the
+				-- content is unknown; unexport instead.
+				mapdiff
+					(\diff -> startUnexport r db (Git.DiffTree.file diff) (unexportboth diff))
+					oldtreesha new
 
 	-- Waiting until now to record the export guarantees that,
 	-- if this export is interrupted, there are no files left over
@@ -110,6 +144,12 @@ seek o = do
 	void $ liftIO cleanup'
 
 	closeDb db
+  where
+	mapdiff a oldtreesha newtreesha = do
+		(diff, cleanup) <- inRepo $
+			Git.DiffTree.diffTreeRecursive oldtreesha newtreesha
+		seekActions $ pure $ map a diff
+		void $ liftIO cleanup
 
 startExport :: Remote -> ExportHandle -> Git.LsTree.TreeItem -> CommandStart
 startExport r db ti = do
@@ -127,7 +167,7 @@ performExport r db ek contentsha loc = do
 	sent <- case ek of
 		AnnexKey k -> ifM (inAnnex k)
 			( metered Nothing k $ \m -> do
-				let rollback = void $ performUnexport r db ek loc
+				let rollback = void $ performUnexport r db [ek] loc
 				sendAnnex k rollback
 					(\f -> storer f k loc m)
 			, do
@@ -151,32 +191,89 @@ cleanupExport r db ek loc = do
 	logChange (asKey ek) (uuid r) InfoPresent
 	return True
 
-startUnexport :: Remote -> ExportHandle -> Git.DiffTree.DiffTreeItem -> CommandStart
-startUnexport r db diff
-	| Git.DiffTree.srcsha diff /= nullSha = do
-		showStart "unexport" f
-		ek <- exportKey (Git.DiffTree.srcsha diff)
-		next $ performUnexport r db ek loc
-	| otherwise = stop
+startUnexport :: Remote -> ExportHandle -> TopFilePath -> [Git.Sha] -> CommandStart
+startUnexport r db f shas = do
+	eks <- forM (filter (/= nullSha) shas) exportKey
+	if null eks
+		then stop
+		else do
+			showStart "unexport" f'
+			next $ performUnexport r db eks loc
   where
-	loc = ExportLocation $ toInternalGitPath f
-	f = getTopFilePath $ Git.DiffTree.file diff
-
-performUnexport :: Remote -> ExportHandle -> ExportKey -> ExportLocation -> CommandPerform
-performUnexport r db ek loc = do
-	let remover = removeExport $ exportActions r
-	ok <- remover (asKey ek) loc
-	if ok
-		then next $ cleanupUnexport r db ek loc
-		else stop
+	loc = ExportLocation $ toInternalGitPath f'
+	f' = getTopFilePath f
+
+performUnexport :: Remote -> ExportHandle -> [ExportKey] -> ExportLocation -> CommandPerform
+performUnexport r db eks loc = do
+	ifM (allM (\ek -> removeExport (exportActions r) (asKey ek) loc) eks)
+		( next $ cleanupUnexport r db eks loc
+		, stop
+		)
 
-cleanupUnexport :: Remote -> ExportHandle -> ExportKey -> ExportLocation -> CommandCleanup
-cleanupUnexport r db ek loc = do
+cleanupUnexport :: Remote -> ExportHandle -> [ExportKey] -> ExportLocation -> CommandCleanup
+cleanupUnexport r db eks loc = do
 	liftIO $ do
-		removeExportLocation db (asKey ek) loc
+		forM_ eks $ \ek ->
+			removeExportLocation db (asKey ek) loc
 		-- Flush so that getExportLocation sees this and any
 		-- other removals of the key.
 		flushDbQueue db
-	whenM (liftIO $ null <$> getExportLocation db (asKey ek)) $
-		logChange (asKey ek) (uuid r) InfoMissing
+	remaininglocs <- liftIO $ 
+		concat <$> forM eks (\ek -> getExportLocation db (asKey ek))
+	when (null remaininglocs) $
+		forM_ eks $ \ek ->
+			logChange (asKey ek) (uuid r) InfoMissing
+	return True
+
+startUnexportTempName :: Remote -> ExportHandle -> Git.Sha -> CommandStart
+startUnexportTempName r db sha
+	| sha == nullSha = stop
+	| otherwise = do
+		ek <- exportKey sha
+		let loc@(ExportLocation f) = exportTempName ek
+		stopUnless (liftIO $ elem loc <$> getExportLocation db (asKey ek)) $ do
+			showStart "unexport" f
+			next $ performUnexport r db [ek] loc
+
+startMoveToTempName :: Remote -> ExportHandle -> TopFilePath -> Git.Sha -> CommandStart
+startMoveToTempName r db f sha
+	| sha == nullSha = stop
+	| otherwise = do
+		ek <- exportKey sha
+		let tmploc@(ExportLocation tmpf) = exportTempName ek
+		showStart "rename" (f' ++ " -> " ++ tmpf)
+		next $ performRename r db ek loc tmploc
+  where
+	loc = ExportLocation $ toInternalGitPath f'
+	f' = getTopFilePath f
+
+startMoveFromTempName :: Remote -> ExportHandle -> Git.Sha -> TopFilePath -> CommandStart
+startMoveFromTempName r db sha f
+	| sha == nullSha = stop
+	| otherwise = do
+		ek <- exportKey sha
+		stopUnless (liftIO $ elem loc <$> getExportLocation db (asKey ek)) $ do
+			let tmploc@(ExportLocation tmpf) = exportTempName ek
+			showStart "rename" (tmpf ++ " -> " ++ f')
+			next $ performRename r db ek tmploc loc
+  where
+	loc = ExportLocation $ toInternalGitPath f'
+	f' = getTopFilePath f
+
+performRename :: Remote -> ExportHandle -> ExportKey -> ExportLocation -> ExportLocation -> CommandPerform
+performRename r db ek src dest = do

(Diff truncated)
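The diff is cut off in the middle of `performRename`. Purely as a hypothetical sketch of how the body presumably continues, modeled on the `performUnexport`/`removeExport` pattern above (`renameExport` and `cleanupRename` are guesses, not confirmed by the truncated diff):

	performRename r db ek src dest = do
		-- Hypothetical: ask the remote to rename src to dest, then
		-- update the export database to match.
		ifM (renameExport (exportActions r) (asKey ek) src dest)
			( next $ cleanupRename db ek src dest
			, stop
			)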
record incomplete exports in export.log
Not yet used, but essential for resuming cleanly.
Note that, in normal operation, only one commit is made to export.log
during an export; the incomplete version only gets to the journal and
is then overwritten.
This commit was supported by the NSF-funded DataLad project.
diff --git a/Command/Export.hs b/Command/Export.hs
index 3387a14ad..878cda8e3 100644
--- a/Command/Export.hs
+++ b/Command/Export.hs
@@ -79,9 +79,10 @@ seek o = do
 		inRepo (Git.Ref.tree (exportTreeish o))
 	old <- getExport (uuid r)
 
+	recordExportBeginning (uuid r) new
 	when (length old > 1) $
 		warning "Export conflict detected. Different trees have been exported to the same special remote. Resolving.."
-	
+
 	db <- openDb (uuid r)
 	
 	-- First, diff the old and new trees and delete all changed
@@ -89,7 +90,7 @@ seek o = do
 	-- have the content from the new treeish.
 	-- 
 	-- (Also, when there was an export conflict, this resolves it.)
-	forM_ old $ \oldtreesha -> do
+	forM_ (map exportedTreeish old) $ \oldtreesha -> do
 		(diff, cleanup) <- inRepo $
 			Git.DiffTree.diffTreeRecursive oldtreesha new
 		seekActions $ pure $ map (startUnexport r db) diff
@@ -99,7 +100,7 @@ seek o = do
 	-- if this export is interrupted, there are no files left over
 	-- from a previous export, that are not part of this export.
 	recordExport (uuid r) $ ExportChange
-		{ oldTreeish = old
+		{ oldTreeish = map exportedTreeish old
 		, newTreeish = new
 		}
 
diff --git a/Logs/Export.hs b/Logs/Export.hs
index 1fd1460fc..3ba77cd24 100644
--- a/Logs/Export.hs
+++ b/Logs/Export.hs
@@ -14,22 +14,29 @@ import qualified Annex.Branch
 import qualified Git
 import qualified Git.Branch
 import Git.Tree
+import Git.Sha
 import Git.FilePath
 import Logs
 import Logs.UUIDBased
 import Annex.UUID
 
--- | Get the treeish that was exported to a special remote.
+data Exported = Exported
+	{ exportedTreeish :: Git.Ref
+	, incompleteExportedTreeish :: [Git.Ref]
+	}
+	deriving (Eq)
+
+-- | Get what's been exported to a special remote.
 --
 -- If the list contains multiple items, there was an export conflict,
 -- and different trees were exported to the same special remote.
-getExport :: UUID -> Annex [Git.Ref]
+getExport :: UUID -> Annex [Exported]
 getExport remoteuuid = nub . mapMaybe get . M.elems . simpleMap 
 	. parseLogNew parseExportLog
 	<$> Annex.Branch.get exportLog
   where
-	get (ExportLog t u)
-		| u == remoteuuid = Just t
+	get (ExportLog exported u)
+		| u == remoteuuid = Just exported
 		| otherwise = Nothing
 
 data ExportChange = ExportChange
@@ -39,6 +46,10 @@ data ExportChange = ExportChange
 
 -- | Record a change in what's exported to a special remote.
 --
+-- This is called before an export begins uploading new files to the
+-- remote, but after it's cleaned up any files that need to be deleted
+-- from the old treeish.
+--
 -- Any entries in the log for the oldTreeish will be updated to the
 -- newTreeish. This way, when multiple repositories are exporting to
 -- the same special remote, there's no conflict as long as they move
@@ -50,27 +61,48 @@ recordExport :: UUID -> ExportChange -> Annex ()
 recordExport remoteuuid ec = do
 	c <- liftIO currentVectorClock
 	u <- getUUID
-	let val = ExportLog (newTreeish ec) remoteuuid
+	let val = ExportLog (Exported (newTreeish ec) []) remoteuuid
 	Annex.Branch.change exportLog $
 		showLogNew formatExportLog 
 			. changeLog c u val 
 			. M.mapWithKey (updateothers c u)
 			. parseLogNew parseExportLog
-	graftTreeish (newTreeish ec)
   where
-	updateothers c u theiru le@(LogEntry _ (ExportLog t remoteuuid'))
+	updateothers c u theiru le@(LogEntry _ (ExportLog exported@(Exported { exportedTreeish = t }) remoteuuid'))
 		| u == theiru || remoteuuid' /= remoteuuid || t `notElem` oldTreeish ec = le
-		| otherwise = LogEntry c (ExportLog (newTreeish ec) theiru)
+		| otherwise = LogEntry c (ExportLog (exported { exportedTreeish = newTreeish ec }) theiru)
+
+-- | Record the beginning of an export, to allow cleaning up from
+-- interrupted exports.
+--
+-- This is called before any changes are made to the remote.
+recordExportBeginning :: UUID -> Git.Ref -> Annex ()
+recordExportBeginning remoteuuid newtree = do
+	c <- liftIO currentVectorClock
+	u <- getUUID
+	ExportLog old _ <- fromMaybe (ExportLog (Exported emptyTree []) remoteuuid)
+		. M.lookup u . simpleMap 
+		. parseLogNew parseExportLog
+		<$> Annex.Branch.get exportLog
+	let new = old { incompleteExportedTreeish = newtree:incompleteExportedTreeish old }
+	Annex.Branch.change exportLog $
+		showLogNew formatExportLog 
+			. changeLog c u (ExportLog new remoteuuid)
+			. parseLogNew parseExportLog
+	graftTreeish newtree
 
-data ExportLog = ExportLog Git.Ref UUID
+data ExportLog = ExportLog Exported UUID
 
 formatExportLog :: ExportLog -> String
-formatExportLog (ExportLog treeish remoteuuid) =
-	Git.fromRef treeish ++ " " ++ fromUUID remoteuuid
+formatExportLog (ExportLog exported remoteuuid) = unwords $
+	[ Git.fromRef (exportedTreeish exported)
+	, fromUUID remoteuuid
+	] ++ map Git.fromRef (incompleteExportedTreeish exported)
 
 parseExportLog :: String -> Maybe ExportLog
 parseExportLog s = case words s of
-	(t:u:[]) -> Just $ ExportLog (Git.Ref t) (toUUID u)
+	(et:u:it) -> Just $
+		ExportLog (Exported (Git.Ref et) (map Git.Ref it)) (toUUID u)
 	_ -> Nothing
 
 -- To prevent git-annex branch merge conflicts, the treeish is
diff --git a/doc/internals.mdwn b/doc/internals.mdwn
index 4b24ce443..ccf1e09b6 100644
--- a/doc/internals.mdwn
+++ b/doc/internals.mdwn
@@ -187,12 +187,21 @@ Tracks what trees have been exported to special remotes by
 
 Each line starts with a timestamp, then the uuid of the repository
 that exported to the special remote, followed by the sha1 of the tree
-that was exported, and then by the uuid of the special remote. For example:
+that was exported, and then by the uuid of the special remote.
 
-	1317929189.157237s e605dca6-446a-11e0-8b2a-002170d25c55 bb08b1abd207aeecccbc7060e523b011d80cb35b 26339d22-446b-11e0-9101-002170d25c55
+There can also be subsequent sha1s, of trees that have started to be
+exported but whose export is not yet complete. The sha1 of the exported
+tree can be the empty tree (4b825dc642cb6eb9a060e54bf8d69288fbee4904)
+in order to record the beginning of the first export.
+
+For example:
+
+	1317929100.012345s e605dca6-446a-11e0-8b2a-002170d25c55 4b825dc642cb6eb9a060e54bf8d69288fbee4904 26339d22-446b-11e0-9101-002170d25c55 bb08b1abd207aeecccbc7060e523b011d80cb35b
+	1317929100.012345s e605dca6-446a-11e0-8b2a-002170d25c55 bb08b1abd207aeecccbc7060e523b011d80cb35b 26339d22-446b-11e0-9101-002170d25c55
+	1317929189.157237s e605dca6-446a-11e0-8b2a-002170d25c55 bb08b1abd207aeecccbc7060e523b011d80cb35b 26339d22-446b-11e0-9101-002170d25c55 7c7af825782b7c8706039b855c72709993542be4
 	1317923000.251111s e605dca6-446a-11e0-8b2a-002170d25c55 7c7af825782b7c8706039b855c72709993542be4 26339d22-446b-11e0-9101-002170d25c55
 
-(The exported tree is also grafted into the git-annex branch, at
+(The trees are also grafted into the git-annex branch, at
 `export.tree`, to prevent git from garbage collecting it. However, the head
 of the git-annex branch should never contain such a grafted in tree;
 the grafted tree is removed in the same commit that updates `export.log`.)

thoughts on handling renames efficiently
This gets complicated, but I think this design will work!
This commit was supported by the NSF-funded DataLad project.
diff --git a/doc/design/exporting_trees_to_special_remotes.mdwn b/doc/design/exporting_trees_to_special_remotes.mdwn
index 7ff1df870..0469a4fcc 100644
--- a/doc/design/exporting_trees_to_special_remotes.mdwn
+++ b/doc/design/exporting_trees_to_special_remotes.mdwn
@@ -237,11 +237,37 @@ for the current treeish. (Unless a conflicting export was made from
 elsewhere, but in that case, the conflict resolution will have to fix up
 later.)
 
-Efficient resuming can then first check if the location log says the
-export contains the content. (If not, transfer a copy.) If the location
-log says the export contains the content, use CHECKPRESENTEXPORT to see if
-the file exists, and if not transfer a copy. The CHECKPRESENTEXPORT check
-deals with the case where the treeish has two files with the same content.
-If we have a key-to-files map for the export, then we can skip the 
-CHECKPRESENTEXPORT check when there's only one file using a key. So,
-resuming can be quite efficient.
+## handling renames efficiently
+
+To handle two files that swap names, a temp name is required.
+
+Difficulty with a temp name is picking a name that won't ever be used by
+any exported file.
+
+Interrupted exports also complicate this. While a name could be picked that
+is in neither the old nor the new tree, an export could be interrupted,
+leaving the file at the temp name. There needs to be something to clean
+that up when the export is resumed, even if it's resumed with a different 
+tree.
+
+Could use something like ".git-annex-tmp-content-$key" as the temp name.
+This hides it from casual view, which is good, and it's not dependent on the
+tree, so no state needs to be maintained to clean it up. Also, using the
+key in the name simplifies calculation of complicated renames (eg, renaming
+A to B, B to C, C to A).
+
+Export can first try to rename the temp name of all keys
+whose files are added in the diff. Followed by deleting the temp name
+of all keys whose files are removed in the diff. That is more renames and
+deletes than strictly necessary, but it will statelessly clean up 
+an interrupted export as long as it's run again with the same new tree.
+
+But, an export of tree B should clean up after 
+an interrupted export of tree A. Some state is needed to handle this.
+Before starting the export of tree A, record it somewhere. Then when
+resuming, diff A..B, and rename/delete the temp names of the keys in the
+diff. As well as diffing from the last fully exported tree to B and doing
+the same rename/delete. 
+
+So, before an export does anything, need to record the tree that's about
+to be exported to export.log, not as an exported tree, but as a goal.
diff --git a/doc/todo/export.mdwn b/doc/todo/export.mdwn
index 5813cd869..f345534e8 100644
--- a/doc/todo/export.mdwn
+++ b/doc/todo/export.mdwn
@@ -19,7 +19,11 @@ Work is in progress. Todo list:
 
 * `git annex get --from export` works in the repo that exported to it,
   but in another repo, the export db won't be populated, so it won't work.
-  Maybe just show a useful error message in this case?
+  Maybe just show a useful error message in this case?  
+  However, exporting from one repository and then trying to update the
+  export from another repository also doesn't work right, because the
+  export database is not populated. So, seems that the export database needs
+  to get populated based on the export log in these cases.
 * Efficient handling of renames.
 * Support export to additional special remotes (S3 etc)
 * Support export to external special remotes.
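Condensed, the scheme sketched in the design doc above is three diff passes, mirroring the `Command.Export` change earlier in this log (an outline only, with the `Git.DiffTree` qualification dropped for readability; not compilable on its own):

	resolveRenames r db oldtree newtree = do
		-- 1. Move every changed file aside to its key-derived temp name.
		mapdiff (\d -> startMoveToTempName r db (file d) (srcsha d)) oldtree newtree
		-- 2. Move content from temp names to the names the new tree wants.
		mapdiff (\d -> startMoveFromTempName r db (dstsha d) (file d)) newtree oldtree
		-- 3. Delete whatever temp names remain.
		mapdiff (startUnexportTempName r db . srcsha) oldtree newtree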

update
diff --git a/doc/thanks/list b/doc/thanks/list
index e56ecc917..e586794f4 100644
--- a/doc/thanks/list
+++ b/doc/thanks/list
@@ -74,3 +74,4 @@ Lukas Platz,
 Sergey Karpukhin, 
 Silvio Ankermann, 
 Paul Tötterman, 
+Erik Bjäreholt, 

move line break to fix broken link
diff --git a/doc/todo/export/comment_1_cb87f7518da252b950d70c60352e848e._comment b/doc/todo/export/comment_1_cb87f7518da252b950d70c60352e848e._comment
index 9158d123c..c07acc5ca 100644
--- a/doc/todo/export/comment_1_cb87f7518da252b950d70c60352e848e._comment
+++ b/doc/todo/export/comment_1_cb87f7518da252b950d70c60352e848e._comment
@@ -4,8 +4,8 @@
  subject="sounds like the dumb backend, except not dumb"
  date="2017-04-08T20:21:41Z"
  content="""
-This sounds a lot like what i was trying to do in [[todo/dumb, unsafe,
-human-readable_backend]], except done properly. :)
+This sounds a lot like what i was trying to do in
+[[todo/dumb, unsafe, human-readable_backend]], except done properly. :)
 
 I was wondering about that asymmetry recently, and it would seem like
 a good idea to fix this. The `--to remote` flag could especially be

diff --git a/doc/forum/Is_there_a_way_to_unannex_some_file___63__.mdwn b/doc/forum/Is_there_a_way_to_unannex_some_file___63__.mdwn
new file mode 100644
index 000000000..78aa6744a
--- /dev/null
+++ b/doc/forum/Is_there_a_way_to_unannex_some_file___63__.mdwn
@@ -0,0 +1,3 @@
+Having annexed a full repo by mistake, I am looking for a way to unannex some files to make them
+managed by the "standard" git process again - mostly some source code files.
+Is there a way to do that?