Recent changes to this wiki:

Added a comment: User expectations and what git annex unannex does.
diff --git a/doc/bugs/unannex__58___Cannot_proceed_with_uncommitted_changes_staged_in_the_index/comment_3_6c99a97d56b1a3b12092c15fafcf8761._comment b/doc/bugs/unannex__58___Cannot_proceed_with_uncommitted_changes_staged_in_the_index/comment_3_6c99a97d56b1a3b12092c15fafcf8761._comment
new file mode 100644
index 0000000..47d417c
--- /dev/null
+++ b/doc/bugs/unannex__58___Cannot_proceed_with_uncommitted_changes_staged_in_the_index/comment_3_6c99a97d56b1a3b12092c15fafcf8761._comment
@@ -0,0 +1,35 @@
+[[!comment format=mdwn
+ username="https://launchpad.net/~stephane-gourichon-lpad"
+ nickname="stephane-gourichon-lpad"
+ avatar="http://cdn.libravatar.org/avatar/02d4a0af59175f9123720b4481d55a769ba954e20f6dd9b2792217d9fa0c6089"
+ subject="User expectations and what git annex unannex does."
+ date="2017-07-24T08:06:55Z"
+ content="""
+# Where we are
+
+@joey thank you for these explanations, more detailed than when I reported the same problem 8 months ago in a comment on https://git-annex.branchable.com/git-annex-unannex/ (@tom.prince had already written this page but I did not find it).
+
+Yet all this happens in a git world, where private history can be rewritten, so *there must be a simpler way*.
+
+# What people expect from \"undo accidental add command\"
+
+@tom.prince thanks for explaining what you expected `unannex` to do. Looks like I expected exactly the same behavior.
+
+IMHO current behavior of `git annex unannex` does not match what people expect of \"undo accidental add command\".
+
+# What current `git-annex unannex` actually does
+
+If the behavior does not match the words, perhaps the behavior is useful but should be described with different words?
+
+Looking at what `git-annex unannex` does, here are the words that came to mind:
+
+> git-annex unannex - turn a path which points to annexed content into a plain file that is stored in regular git.
+
+Notice that:
+
+* `git-annex` retains history
+* other paths may still refer to the same content, so the annex may still contain a copy of the same data.  Otherwise it becomes unused content subject to `git-annex dropunused`.
+
+Thank you for your attention.
+
+"""]]

Added a comment
diff --git a/doc/tips/googledriveannex/comment_12_365af6d37e8d391d5e59993bae0cc915._comment b/doc/tips/googledriveannex/comment_12_365af6d37e8d391d5e59993bae0cc915._comment
new file mode 100644
index 0000000..8004d21
--- /dev/null
+++ b/doc/tips/googledriveannex/comment_12_365af6d37e8d391d5e59993bae0cc915._comment
@@ -0,0 +1,11 @@
+[[!comment format=mdwn
+ username="lykos@d125a37d89b1cfac20829f12911656c40cb70018"
+ nickname="lykos"
+ avatar="http://cdn.libravatar.org/avatar/085df7b04d3408ba23c19f9c49be9ea2"
+ subject="comment 12"
+ date="2017-07-21T23:56:12Z"
+ content="""
+I couldn't get this to run and had a lot of performance issues with rclone on Google Drive, so I adapted the rclone wrapper to [gdrive](https://github.com/prasmussen/gdrive). It's running fine so far, so I thought I'd share it:
+
+[[https://github.com/Lykos153/git-annex-remote-gdrive]]
+"""]]

diff --git a/doc/special_remotes.mdwn b/doc/special_remotes.mdwn
index 1dc3d87..23cb1cc 100644
--- a/doc/special_remotes.mdwn
+++ b/doc/special_remotes.mdwn
@@ -33,7 +33,8 @@ for using git-annex with various services:
 * [Amazon Cloud drive](https://github.com/DanielDent/git-annex-remote-rclone)
 * [[tips/Internet_Archive_via_S3]]
 * [[Box.com|tips/using_box.com_as_a_special_remote]]
-* [[Google drive|tips/googledriveannex]]
+* [Google Drive](https://github.com/Lykos153/git-annex-remote-gdrive)
+* [[Google Drive (old)|tips/googledriveannex]]
 * [[Google Cloud Storage|tips/using_Google_Cloud_Storage]]
 * [[Mega.co.nz|tips/megaannex]]
 * [[SkyDrive|tips/skydriveannex]]

Added a comment
diff --git a/doc/bugs/unannex__58___Cannot_proceed_with_uncommitted_changes_staged_in_the_index/comment_2_4609f9a1545b08e08021bf786561f5e5._comment b/doc/bugs/unannex__58___Cannot_proceed_with_uncommitted_changes_staged_in_the_index/comment_2_4609f9a1545b08e08021bf786561f5e5._comment
new file mode 100644
index 0000000..1c3169e
--- /dev/null
+++ b/doc/bugs/unannex__58___Cannot_proceed_with_uncommitted_changes_staged_in_the_index/comment_2_4609f9a1545b08e08021bf786561f5e5._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ username="tom.prince@6bf26c878bf6103667f28d70cf49d4fb74d33df7"
+ nickname="tom.prince"
+ avatar="http://cdn.libravatar.org/avatar/e81edff3af564b86f4c9d780a8023299"
+ subject="comment 2"
+ date="2017-07-21T18:22:28Z"
+ content="""
+As far as I can tell, from looking at the code, the pre-commit hook only looks at files in the index. Thus, if unannexing an uncommitted file removed it from the index, the pre-commit hook would do the right thing. This would be nice to have, at least for the case where the files have never been committed.
+"""]]

initial whining
diff --git a/doc/todo/git_annex_info___60__remote__62___does_not_list_all_the_parameters_for_the_remote.mdwn b/doc/todo/git_annex_info___60__remote__62___does_not_list_all_the_parameters_for_the_remote.mdwn
new file mode 100644
index 0000000..d83d3b4
--- /dev/null
+++ b/doc/todo/git_annex_info___60__remote__62___does_not_list_all_the_parameters_for_the_remote.mdwn
@@ -0,0 +1,29 @@
+[[!format sh """
+$> git annex info  gdrive2 --verbose    
+remote: gdrive2   
+description: [gdrive2]
+uuid: d7e13bf3-0c0e-44c9-a626-c7af6a628df7
+trust: semitrusted
+cost: 200.0
+type: external
+externaltype: rclone
+encryption: none
+chunking: 52.43 megabyteschunks
+remote annex keys: 3
+remote annex size: 112.51 megabytes
+(dev) 2 29865.....................................:Fri 21 Jul 2017 12:35:35 PM EDT:.
+hopa:/tmp/testds1
+$> git co git-annex
+Switched to branch 'git-annex'
+W: git-annex repositories not (yet) supported in the prompt                                                     
+(dev) 2 29866.....................................:Fri 21 Jul 2017 12:35:40 PM EDT:.
+hopa:/tmp/testds1
+$> cat remote.log 
+ace2983e-5e2b-4c6a-8251-5344392d563c chunk=50MiB encryption=none externaltype=rclone mac=HMACSHA512 name=gdrive1 prefix=git-annex/testds1 rclone_layout=lower target=google-drive1 type=external timestamp=1500494328.845312147s
+d7e13bf3-0c0e-44c9-a626-c7af6a628df7 chunk=50MiB encryption=none externaltype=rclone mac=HMACSHA512 name=gdrive2 prefix=git-annex/raiders2 rclone_layout=lower target=google-drive1 type=external timestamp=1500654564.923893997s
+
+"""]]
+
+I needed to see what the prefix is -- it is stored in remote.log -- but it is not printed by 'git annex info' in either --verbose or --json mode.
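+
+In the meantime, remote.log can be read without switching to the git-annex branch:
+
+    $> git show git-annex:remote.log | grep name=gdrive2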
+
+[[!meta author=yoh]]

diff --git a/doc/todo/git_annex_repair__58___performance_can_be_abysmal__44___huge_improvements_possible.mdwn b/doc/todo/git_annex_repair__58___performance_can_be_abysmal__44___huge_improvements_possible.mdwn
new file mode 100644
index 0000000..407e099
--- /dev/null
+++ b/doc/todo/git_annex_repair__58___performance_can_be_abysmal__44___huge_improvements_possible.mdwn
@@ -0,0 +1,28 @@
+# Situation
+
+Since yesterday evening (18 hours ago), `git annex repair --verbose` has been repairing a repository from a remote.  This is on a fast machine (i7, 4 physical cores, 8 logical CPUs @2.6GHz sitting idle, 16GB RAM mostly unused, hard drive with a measured 111MB/s sustained throughput). The `.git` folder being repaired grew to 8GB while the remote was only about 640MB.
+
+# What `git annex repair` does
+
+Currently, `git annex repair` appears to:
+
+* make a complete local clone from the remotes it finds,
+* expand all packs into individual object files,
+* then pour (with rsync) all those objects into the repository,
+* and I guess it ends with a git fsck/gc/whatever to glue things back together.
+
+The expected result (a complete repaired repo) is great but it didn't work without help and the performance is disappointing.
+
+# Suggested room for improvements
+
+I would be willing to contribute some patches, and although I have respectable experience in programming, including some functional programming, I'm not savvy enough in Haskell at this time.
+
+(1) a complete clone in this case means *between one and two hours* of download and easily gets interrupted, losing all effort (just like a plain git clone). Actually I tried several times and it never completed.  I worked around this by doing a `git clone` on the server, `rsync`ing that to local storage, and adding that locally as a git remote.
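+
+A rough sketch of that workaround (host and path names are placeholders):
+
+    server$ git clone --bare /path/to/repo /tmp/repo-copy.git
+    laptop$ rsync -aP server:/tmp/repo-copy.git /local/storage/
+    laptop$ git remote add localcopy /local/storage/repo-copy.git
+    laptop$ git annex repair --verbose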
+
+(2) Even when a local "git remote" is available, `git annex repair` tried the network one first instead. Perhaps it would be better if it sorted git remotes and first tried the ones that appear to be available locally (no URL, or a file:// URL scheme)? Workaround: manually interrupt the transfer so that `git annex repair` switches to the next remote.
+
+(3) does `git annex repair` really have to explode the repository into individual objects? In my case it took about one hour to create 1454978 (about one and a half million) object files for a total of 6.8GB. (As a workaround I should have pointed `TEMP` and `TMPDIR` to SSD-based storage, or even dared a tmpfs.) `git-annex` then runs an `rsync` that has been thrashing the disk (I can hear and feel it) for seven and a half hours, with an expected total time of 8 hours 20 minutes. That's a very inefficient way to copy 6.8GB (incidentally, rsync does it in alphabetical order of hash, as shown by `strace` and confirmed by the man page and [here](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=640492) and [there](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=160982)). There must be a more efficient way, right?
+
+As a sidenote, I don't know how a repo containing about 300k files jumped to 1400k git objects within the last 2 months.
+
+Any feedback welcome, thanks.

Added a comment
diff --git a/doc/forum/Rebuild_a_set_of_repos_from_checked-out_trees_and_.git__47__annex__47__objects__63__/comment_1_456772b5b0deb0c121d7feb5dcf41979._comment b/doc/forum/Rebuild_a_set_of_repos_from_checked-out_trees_and_.git__47__annex__47__objects__63__/comment_1_456772b5b0deb0c121d7feb5dcf41979._comment
new file mode 100644
index 0000000..a6389d9
--- /dev/null
+++ b/doc/forum/Rebuild_a_set_of_repos_from_checked-out_trees_and_.git__47__annex__47__objects__63__/comment_1_456772b5b0deb0c121d7feb5dcf41979._comment
@@ -0,0 +1,12 @@
+[[!comment format=mdwn
+ username="CandyAngel"
+ avatar="http://cdn.libravatar.org/avatar/15c0aade8bec5bf004f939dd73cf9ed8"
+ subject="comment 1"
+ date="2017-07-21T09:25:25Z"
+ content="""
+My solution is very roundabout but preserves a lot of information; it did involve buying another drive (and exclusively using v5 indirect mode!).
+
+I create a new repository (on the new drive) into which I import all the contents of the \"keyfiles\" (the contents of .git/annex/objects). Then I create another repository with the filelinks (symlinks pointing into .git/annex/objects). After adding the keyfiles repository as a remote to this one, I can see which content is present and valid, which got corrupted, which is missing, etc.
+
+Then I can move the valid content from this recovery annex into a proper annex and try to repair/find the corrupted/missing content.
+"""]]

Added a comment
diff --git a/doc/forum/Workflow_for_using_git-annex_only_on_external_drives/comment_2_f29471761e95b25e35e4cb09ad89737a._comment b/doc/forum/Workflow_for_using_git-annex_only_on_external_drives/comment_2_f29471761e95b25e35e4cb09ad89737a._comment
new file mode 100644
index 0000000..6dbe05f
--- /dev/null
+++ b/doc/forum/Workflow_for_using_git-annex_only_on_external_drives/comment_2_f29471761e95b25e35e4cb09ad89737a._comment
@@ -0,0 +1,15 @@
+[[!comment format=mdwn
+ username="CandyAngel"
+ avatar="http://cdn.libravatar.org/avatar/15c0aade8bec5bf004f939dd73cf9ed8"
+ subject="comment 2"
+ date="2017-07-21T09:15:40Z"
+ content="""
+1) Have a look at the 'source' repository group, which will drop files once enough copies (numcopies) are available elsewhere
+2) This is pretty much why git-annex exists :)
+3) Set numcopies to 2 and use 'git annex fsck' to find out when there aren't enough copies
+4) You can use 'import' or 'reinject --known' to clean up known content outside of git-annex (rough sketch below)
+5) git-annex will run on 'crippled' filesystems like FAT32. I would recommend avoiding this if possible, though
+6) This is presumed :)
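+
+For example, for 3) and 4), a rough sketch (the path is a placeholder):
+
+    git annex numcopies 2
+    git annex fsck
+    git annex reinject --known /path/to/stray/files/*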
+
+Sorry for brevity, but this should give some direction/keywords.
+"""]]

Added a comment
diff --git a/doc/forum/Fix_wrong_rsync_with_git_annex_reinit__63__/comment_2_14ddf5ddf2b85c9d5ed5ca070744f1fe._comment b/doc/forum/Fix_wrong_rsync_with_git_annex_reinit__63__/comment_2_14ddf5ddf2b85c9d5ed5ca070744f1fe._comment
new file mode 100644
index 0000000..aee6435
--- /dev/null
+++ b/doc/forum/Fix_wrong_rsync_with_git_annex_reinit__63__/comment_2_14ddf5ddf2b85c9d5ed5ca070744f1fe._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ username="https://launchpad.net/~stephane-gourichon-lpad"
+ nickname="stephane-gourichon-lpad"
+ avatar="http://cdn.libravatar.org/avatar/02d4a0af59175f9123720b4481d55a769ba954e20f6dd9b2792217d9fa0c6089"
+ subject="comment 2"
+ date="2017-07-20T17:55:03Z"
+ content="""
+More specific information is in https://git-annex.branchable.com/forum/Fix_duplicate_UUID/
+
+"""]]

Added a comment
diff --git a/doc/forum/Workflow_for_using_git-annex_only_on_external_drives/comment_1_c3c495efb7cdd242b51cc7fa7719d95f._comment b/doc/forum/Workflow_for_using_git-annex_only_on_external_drives/comment_1_c3c495efb7cdd242b51cc7fa7719d95f._comment
new file mode 100644
index 0000000..95154d3
--- /dev/null
+++ b/doc/forum/Workflow_for_using_git-annex_only_on_external_drives/comment_1_c3c495efb7cdd242b51cc7fa7719d95f._comment
@@ -0,0 +1,21 @@
+[[!comment format=mdwn
+ username="https://launchpad.net/~stephane-gourichon-lpad"
+ nickname="stephane-gourichon-lpad"
+ avatar="http://cdn.libravatar.org/avatar/02d4a0af59175f9123720b4481d55a769ba954e20f6dd9b2792217d9fa0c6089"
+ subject="comment 1"
+ date="2017-07-20T17:53:04Z"
+ content="""
+(Another user here.)
+
+I can't take you by the hand but all this seems like regular use of git-annex.
+
+I was overwhelmed at the beginning; I followed http://git-annex.branchable.com/walkthrough and visited a number of pages.
+I finally had something that worked.
+
+In some cases, instead of moving files and then using git annex add, you might consider using https://git-annex.branchable.com/git-annex-import/ with or without some of its options.
+
+If you have specific problems, ask a more specific question.
+
+Happy journey!
+
+"""]]

Added a comment
diff --git a/doc/forum/Create_lightweight_checkouts_on_the_same_filesystem/comment_1_dc219eec54b62803831c854a620aceae._comment b/doc/forum/Create_lightweight_checkouts_on_the_same_filesystem/comment_1_dc219eec54b62803831c854a620aceae._comment
new file mode 100644
index 0000000..2c49a61
--- /dev/null
+++ b/doc/forum/Create_lightweight_checkouts_on_the_same_filesystem/comment_1_dc219eec54b62803831c854a620aceae._comment
@@ -0,0 +1,20 @@
+[[!comment format=mdwn
+ username="https://launchpad.net/~stephane-gourichon-lpad"
+ nickname="stephane-gourichon-lpad"
+ avatar="http://cdn.libravatar.org/avatar/02d4a0af59175f9123720b4481d55a769ba954e20f6dd9b2792217d9fa0c6089"
+ subject="comment 1"
+ date="2017-07-20T17:47:42Z"
+ content="""
+Symbolic links point to `......./.git/annex/objects/.....`
+
+So, you can have them work by making your `.git/annex/objects` a link to the main repo's `.git/annex/objects`.
+
+    cd $mylightweightclone/.git/annex
+    mv objects objects.empty # move away but keep, just in case
+    ln -s $centralrepository/.git/annex/objects
+
+If the lightweight clone only performs read operations, I would expect things to work fine.
+
+I don't know if it can be dangerous to the health of your central repository besides that, so be careful.
+
+"""]]

diff --git a/doc/forum/Rebuild_a_set_of_repos_from_checked-out_trees_and_.git__47__annex__47__objects__63__.mdwn b/doc/forum/Rebuild_a_set_of_repos_from_checked-out_trees_and_.git__47__annex__47__objects__63__.mdwn
new file mode 100644
index 0000000..9757465
--- /dev/null
+++ b/doc/forum/Rebuild_a_set_of_repos_from_checked-out_trees_and_.git__47__annex__47__objects__63__.mdwn
@@ -0,0 +1,69 @@
+# Situation/trouble
+
+I have a set of big repos.  Each full replica manages about 172.000 annexed files plus a number of small regular files in git history, for a total of about 1.8TB. At filesystem level, `find` reports about 924.000 entries (directories, files, symlinks).
+
+That worked rather well for a while, except that a number of operations are slow (even outside git-annex, e.g. a plain `find` takes more than an hour the first time).  Also, the git part got rather heavy.  Hundreds of megabytes that should have been annexed were committed as regular files, and vice versa.
+
+The whole setup even survived some catastrophe rather well, for example 6 months ago when the first 1.5GB of one hard drive was accidentally overwritten.  `fsck` with an alternate superblock fixed the lower level, while `git annex repair` fixed the rest nicely.
+
+Last night, though, the git parts got corrupted and I'm struggling to get things back to a sane state.  `git log` shows only recent history, then fails.  Various attempts with `git annex repair` have failed so far; I'll try again, adding a new local "bare" git clone of the server as a remote for `git annex repair` to use.
+
+Storage media and filesystems still seem sane.  Software has been unchanged for a long time:
+
+* the client runs Ubuntu 16.04 with locally compiled git-annex version 6.20161001-gade6ab4
+* the server runs with locally compiled (in a Debian unstable chroot) git-annex version 6.20161011-g3135d35
+
+
+# Considered solution
+
+I'm considering:
+
+* recreating a new set of replicas
+* each replica on same filesystem as old one
+* recreated only from the checked-out tree and the `.git/annex/objects` tree.
+* *without* copying data (re-reading on import is okay, but no room on a 2TB disk for duplicating 1.8TB)
+* non-constraint: this will lose detailed history which is an inconvenience we can live with.
+
+# Solution, practically
+
+## (1) Assuming `git annex fsck` can take into account objects manually placed into `.git/annex/objects`
+
+	mkdir $newrepo
+	cd $newrepo
+	git init
+	git annex init
+	cp -al $oldrepo/* $newrepo/ # ignores .git and other .* (dotfiles)
+	cp -al $oldrepo/.git/annex/objects/* $newrepo/.git/annex/objects/ # destination is the objects directory itself
+	git annex fsck # will this find and use result of cp ?
+	# git annex fsck will also tell if some checked-out files lack their annexed data
+	git remote add ...(other replicas)...
+	git annex sync ...(other replicas)...
+	git annex unused # will this tell if some objects don't appear in the checked-out tree?
+
+## (2) Else...
+
+	mkdir $newrepo
+	cd $newrepo
+	git init
+	git annex init
+	cp -al $oldrepo/* $newrepo/ # ignores .git and other .* (dotfiles)
+	cp -al $oldrepo/.git/annex/objects/* ${newrepo}.objectdup
+	git annex reinject --known ${newrepo}.objectdup  # will that perform a copy? It must not in this case.
+	# or something like: find "${newrepo}.objectdup" -type f -exec git annex reinject --known {} \;
+	git annex fsck # will tell if some checked-out files lack their annexed data
+	git remote add ...(other replicas)...
+	git annex sync ...(other replicas)...
+	# if some files in $oldrepo/.git/annex/objects/* don't appear in the checked-out tree, they won't be picked up by reinject and will remain in ${newrepo}.objectdup
+
+# Questions
+
+No one wins when a lot of time is spent on dead ends. :-) Before I
+spend time testing if solutions 1 and 2 can work, is there any caveat
+to mention?
+
+For example, perhaps one must clone from a common empty ancestor
+instead of creating independent annexes and then syncing?
+
+What else? Is the whole approach sane? Doomed? Anything simpler/better?
+
+Thanks a lot.

Added a comment
diff --git a/doc/forum/Fix_duplicate_UUID/comment_2_b0f06b58ef63dbb770773f28dfa1a29d._comment b/doc/forum/Fix_duplicate_UUID/comment_2_b0f06b58ef63dbb770773f28dfa1a29d._comment
new file mode 100644
index 0000000..19323dc
--- /dev/null
+++ b/doc/forum/Fix_duplicate_UUID/comment_2_b0f06b58ef63dbb770773f28dfa1a29d._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ username="https://launchpad.net/~stephane-gourichon-lpad"
+ nickname="stephane-gourichon-lpad"
+ avatar="http://cdn.libravatar.org/avatar/02d4a0af59175f9123720b4481d55a769ba954e20f6dd9b2792217d9fa0c6089"
+ subject="comment 2"
+ date="2017-07-20T16:33:02Z"
+ content="""
+Thanks for sharing.  I stumbled upon a similar problem and (IIRC) ended up doing at least one new clone.  I don't know if your solution is fully clean but it seems to have lower cost.
+"""]]

Added a comment
diff --git a/doc/bugs/git-annex_sucking_up_all_available_RAM_after_startup/comment_7_2b0ba2a15af04731a966a029be9b81aa._comment b/doc/bugs/git-annex_sucking_up_all_available_RAM_after_startup/comment_7_2b0ba2a15af04731a966a029be9b81aa._comment
new file mode 100644
index 0000000..39e0e44
--- /dev/null
+++ b/doc/bugs/git-annex_sucking_up_all_available_RAM_after_startup/comment_7_2b0ba2a15af04731a966a029be9b81aa._comment
@@ -0,0 +1,11 @@
+[[!comment format=mdwn
+ username="https://launchpad.net/~stephane-gourichon-lpad"
+ nickname="stephane-gourichon-lpad"
+ avatar="http://cdn.libravatar.org/avatar/02d4a0af59175f9123720b4481d55a769ba954e20f6dd9b2792217d9fa0c6089"
+ subject="comment 7"
+ date="2017-07-20T06:16:46Z"
+ content="""
+Hello.
+
+Any progress on this? (With regular git annex I now have a big repository with some corruption and `git annex repair` causes oom-kill.)
+"""]]

Added a comment: How about find .git/annex/transfer .git/annex/bad -type f -print0 | xargs -0 rm -fv
diff --git a/doc/forum/Can_I_purge_.git__47__annex__47__transfer_directory___63__/comment_2_bfd829aed28817a5aada3f237496c865._comment b/doc/forum/Can_I_purge_.git__47__annex__47__transfer_directory___63__/comment_2_bfd829aed28817a5aada3f237496c865._comment
new file mode 100644
index 0000000..cd4a3bf
--- /dev/null
+++ b/doc/forum/Can_I_purge_.git__47__annex__47__transfer_directory___63__/comment_2_bfd829aed28817a5aada3f237496c865._comment
@@ -0,0 +1,25 @@
+[[!comment format=mdwn
+ username="https://launchpad.net/~stephane-gourichon-lpad"
+ nickname="stephane-gourichon-lpad"
+ avatar="http://cdn.libravatar.org/avatar/02d4a0af59175f9123720b4481d55a769ba954e20f6dd9b2792217d9fa0c6089"
+ subject="How about find .git/annex/transfer .git/annex/bad -type f -print0 | xargs -0 rm -fv"
+ date="2017-07-19T19:42:25Z"
+ content="""
+> I'm not certain, but I think
+> 
+> $> git annex unused --fast
+> $> git annex drop --unused
+> 
+> will work.
+
+This (quickly) finds only two entries: one regarding a file in `.git/annex/transfer` and another in `.git/annex/bad/`.
+
+Anyway, `git annex drop --unused` is too general and would potentially involve a number of other files.
+
+I'm just considering something that should probably be safe:
+
+    find .git/annex/transfer .git/annex/bad -type f -print0 | xargs -0 rm -fv
+
+unless people tell me it's dangerous somehow.  Having to do a `git annex fsck` in the future is okay.  Having important features broken until I do a `git annex fsck` (or worse) is less cool. :-)
+
+"""]]

Added a comment
diff --git a/doc/forum/Can_I_purge_.git__47__annex__47__transfer_directory___63__/comment_1_3510190e7c5d08f906b24e5743245f87._comment b/doc/forum/Can_I_purge_.git__47__annex__47__transfer_directory___63__/comment_1_3510190e7c5d08f906b24e5743245f87._comment
new file mode 100644
index 0000000..2a0bfa1
--- /dev/null
+++ b/doc/forum/Can_I_purge_.git__47__annex__47__transfer_directory___63__/comment_1_3510190e7c5d08f906b24e5743245f87._comment
@@ -0,0 +1,13 @@
+[[!comment format=mdwn
+ username="supernaught"
+ avatar="http://cdn.libravatar.org/avatar/55f92a50f2617099e2dc7509130ce158"
+ subject="comment 1"
+ date="2017-07-18T17:03:54Z"
+ content="""
+I'm not certain, but I think 
+    
+    $> git annex unused --fast
+    $> git annex drop --unused
+
+will work.
+"""]]

Feature request: invert remote selection.
diff --git a/doc/todo/Invert_remote_selection.mdwn b/doc/todo/Invert_remote_selection.mdwn
new file mode 100644
index 0000000..a091569
--- /dev/null
+++ b/doc/todo/Invert_remote_selection.mdwn
@@ -0,0 +1,30 @@
+Say I have:
+
+    $> git remote
+    Alpha
+    Beta
+    Gamma
+    Delta
+
+It is easy to sync with all repos via:
+
+    $> git annex sync
+
+Or specific repos via:
+
+    $> git annex sync Alpha Beta
+
+However, it is currently awkward to exclude specific repos. Is it possible to 'invert' or 'negate' specific remotes, so that the following are equivalent?
+
+    $> git annex sync \! Gamma
+    $> git annex sync Alpha Beta Delta
+
+This problem comes up surprisingly often due to:
+
+  1. An external host being temporarily down (which causes sync to hang for a while),
+  2. Repos being available, but the machine is under heavy IO load or memory pressure,
+  3. Repos on external drives that are swapped out and mounted to a specific location (e.g., /mnt/),
+  4. Wanting to roll out repo-wide changes in stages, or keeping 1-2 repos untouched for whatever reason, or
+  5. Some repos being too large for a machine (e.g., repacking fails due to low memory), but which can still act like a dumb file-store.
+
+The problem gets worse when you have a lot of remotes or a lot of repos to manage (I have both). My impression is that this feature would require a syntax addition for git-annex-sync only. I like '!' because it behaves the same in GNU find and sh.
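+
+In the meantime, a shell workaround is possible (a rough sketch; it assumes
+remote names contain no whitespace):
+
+    $> git annex sync $(git remote | grep -vx Gamma)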

Added a comment: Android
diff --git a/doc/bugs/_impossible_to_switch_repositories_on_android__in_webapp/comment_6_ee75ed3c65e0c3a0aaf22ea2ad7e9832._comment b/doc/bugs/_impossible_to_switch_repositories_on_android__in_webapp/comment_6_ee75ed3c65e0c3a0aaf22ea2ad7e9832._comment
new file mode 100644
index 0000000..1ada17d
--- /dev/null
+++ b/doc/bugs/_impossible_to_switch_repositories_on_android__in_webapp/comment_6_ee75ed3c65e0c3a0aaf22ea2ad7e9832._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ username="Crystalvonwerder@a141a1e27afcd463daccce74ba4df918a01dfd9e"
+ nickname="Crystalvonwerder"
+ avatar="http://cdn.libravatar.org/avatar/508da5fc6a8669b0c7dc674259f78075"
+ subject="Android"
+ date="2017-07-18T14:43:43Z"
+ content="""
+Not working
+"""]]

removed
diff --git a/doc/tuning/comment_3_7ffb2734b8a6c8fe6b1aa01d48ef6d10._comment b/doc/tuning/comment_3_7ffb2734b8a6c8fe6b1aa01d48ef6d10._comment
deleted file mode 100644
index e218f7c..0000000
--- a/doc/tuning/comment_3_7ffb2734b8a6c8fe6b1aa01d48ef6d10._comment
+++ /dev/null
@@ -1,14 +0,0 @@
-[[!comment format=mdwn
- username="https://launchpad.net/~stephane-gourichon-lpad"
- nickname="stephane-gourichon-lpad"
- avatar="http://cdn.libravatar.org/avatar/02d4a0af59175f9123720b4481d55a769ba954e20f6dd9b2792217d9fa0c6089"
- subject="Syncing between untuned and tuned repo?"
- date="2017-07-18T11:25:42Z"
- content="""
-> Also, it's not safe to merge two separate git repositories that have been tuned differently (or one tuned and the other one not). git-annex will prevent merging their git-annex branches together, but it cannot prevent git merge remote/master merging two branches, and the result will be ugly at best (git annex fix can fix up the mess somewhat).
-
-I'm highly interested in the lowercase option since current behavior has lead to a number of file duplications that are still not solved.  My main use repo is 1.7TB large and holds 172.000+ annexed files and I'm definitely not the only one with a big repo.
-
-Does migrating to a tuned repository mean unannexing everything and reimporting into a newly created annex, replica by replica then sync again? That's a high price in some setup.  Or is there a way to somehow `git annex sync` between a newly created repo and an old, untuned one?
-
-"""]]

Added a comment: Syncing between untuned and tuned repo?
diff --git a/doc/tuning/comment_4_5b782975263480a405c5e8dcfe058007._comment b/doc/tuning/comment_4_5b782975263480a405c5e8dcfe058007._comment
new file mode 100644
index 0000000..d7ee5b4
--- /dev/null
+++ b/doc/tuning/comment_4_5b782975263480a405c5e8dcfe058007._comment
@@ -0,0 +1,17 @@
+[[!comment format=mdwn
+ username="https://launchpad.net/~stephane-gourichon-lpad"
+ nickname="stephane-gourichon-lpad"
+ avatar="http://cdn.libravatar.org/avatar/02d4a0af59175f9123720b4481d55a769ba954e20f6dd9b2792217d9fa0c6089"
+ subject="Syncing between untuned and tuned repo?"
+ date="2017-07-18T11:49:10Z"
+ content="""
+> Also, it's not safe to merge two separate git repositories that have been tuned differently (or one tuned and the other one not). git-annex will prevent merging their git-annex branches together, but it cannot prevent git merge remote/master merging two branches, and the result will be ugly at best (git annex fix can fix up the mess somewhat).
+
+My main use repo is 1.7TB large and holds 172.000+ annexed files.
+Variations in filename case have led to a number of file duplications that are still not solved (I have base scripts that can be used to flatten filename case and fix references in other files, but it will probably mean handling some corner cases, and there are more urgent matters for now).
+
+For these reasons I'm highly interested in the lowercase option and I'm probably not the only one in a similar case.
+
+Does migrating to a tuned repository mean unannexing everything and reimporting into a newly created annex, replica by replica, then syncing again? That's a high price in some setups.  Or is there a way to somehow `git annex sync` between a newly created repo and an old, untuned one?
+
+"""]]

Added a comment: Syncing between untuned and tuned repo?
diff --git a/doc/tuning/comment_3_7ffb2734b8a6c8fe6b1aa01d48ef6d10._comment b/doc/tuning/comment_3_7ffb2734b8a6c8fe6b1aa01d48ef6d10._comment
new file mode 100644
index 0000000..e218f7c
--- /dev/null
+++ b/doc/tuning/comment_3_7ffb2734b8a6c8fe6b1aa01d48ef6d10._comment
@@ -0,0 +1,14 @@
+[[!comment format=mdwn
+ username="https://launchpad.net/~stephane-gourichon-lpad"
+ nickname="stephane-gourichon-lpad"
+ avatar="http://cdn.libravatar.org/avatar/02d4a0af59175f9123720b4481d55a769ba954e20f6dd9b2792217d9fa0c6089"
+ subject="Syncing between untuned and tuned repo?"
+ date="2017-07-18T11:25:42Z"
+ content="""
+> Also, it's not safe to merge two separate git repositories that have been tuned differently (or one tuned and the other one not). git-annex will prevent merging their git-annex branches together, but it cannot prevent git merge remote/master merging two branches, and the result will be ugly at best (git annex fix can fix up the mess somewhat).
+
+I'm highly interested in the lowercase option since the current behavior has led to a number of file duplications that are still not solved.  My main use repo is 1.7TB large and holds 172.000+ annexed files, and I'm definitely not the only one with a big repo.
+
+Does migrating to a tuned repository mean unannexing everything and reimporting into a newly created annex, replica by replica, then syncing again? That's a high price in some setups.  Or is there a way to somehow `git annex sync` between a newly created repo and an old, untuned one?
+
+"""]]

diff --git a/doc/forum/Can_I_purge_.git__47__annex__47__transfer_directory___63__.mdwn b/doc/forum/Can_I_purge_.git__47__annex__47__transfer_directory___63__.mdwn
new file mode 100644
index 0000000..9650f26
--- /dev/null
+++ b/doc/forum/Can_I_purge_.git__47__annex__47__transfer_directory___63__.mdwn
@@ -0,0 +1,11 @@
+Hello.
+
+There is a git-annex repository managing about 172.000 annexed files (plus a number of small regular files in git history), for a total of about 1.7TB. At filesystem level, `find` reports about 924.000 entries (directories, files, symlinks).
+
+Inspecting it, I see that `.git/annex/transfer` contains over 86000 entries (mostly files).
+
+I tried to find more information about it, and only https://git-annex.branchable.com/design/assistant/syncing/ seemed to provide some information, but not enough.
+
+Are these entries needed permanently? Can I just delete `.git/annex/transfer` without damage?
+
+Thanks.

removed
diff --git a/doc/design/exporting_trees_to_special_remotes/comment_11_fca19e7c7abcb75adec624ba7ddc461d._comment b/doc/design/exporting_trees_to_special_remotes/comment_11_fca19e7c7abcb75adec624ba7ddc461d._comment
deleted file mode 100644
index 3d2e788..0000000
--- a/doc/design/exporting_trees_to_special_remotes/comment_11_fca19e7c7abcb75adec624ba7ddc461d._comment
+++ /dev/null
@@ -1,11 +0,0 @@
-[[!comment format=mdwn
- username="timothy.sanders@a7ce3a8bae11a60e0c4cda9cb4aef24ec459bbab"
- nickname="timothy.sanders"
- avatar="http://cdn.libravatar.org/avatar/3bcbe0c9e9825637ad7efa70f458640d"
- subject="Google Drive and Archive.org"
- date="2017-07-17T21:00:39Z"
- content="""
-I just wanted to throw out there, that I am playing with some stuff in the vein of this. I've been wanting to be able to export to Google Drive, but take advantage of Google Drive's api for file revisions. And I also think that being able to preserve the file structure and filenames in a special remote has a lot of uses, such as with collections exported to archive.org.
-
-I will share my findings.
-"""]]

Added a comment: Google Drive and Archive.org
diff --git a/doc/design/exporting_trees_to_special_remotes/comment_11_fca19e7c7abcb75adec624ba7ddc461d._comment b/doc/design/exporting_trees_to_special_remotes/comment_11_fca19e7c7abcb75adec624ba7ddc461d._comment
new file mode 100644
index 0000000..3d2e788
--- /dev/null
+++ b/doc/design/exporting_trees_to_special_remotes/comment_11_fca19e7c7abcb75adec624ba7ddc461d._comment
@@ -0,0 +1,11 @@
+[[!comment format=mdwn
+ username="timothy.sanders@a7ce3a8bae11a60e0c4cda9cb4aef24ec459bbab"
+ nickname="timothy.sanders"
+ avatar="http://cdn.libravatar.org/avatar/3bcbe0c9e9825637ad7efa70f458640d"
+ subject="Google Drive and Archive.org"
+ date="2017-07-17T21:00:39Z"
+ content="""
+I just wanted to throw out there that I am playing with some things along these lines. I've been wanting to be able to export to Google Drive, but take advantage of Google Drive's API for file revisions. I also think that being able to preserve the file structure and filenames in a special remote has a lot of uses, such as with collections exported to archive.org.
+
+I will share my findings.
+"""]]

Added a comment
diff --git a/doc/forum/Fix_duplicate_UUID/comment_1_1186caf5aa6830c2412f8584a43d8d86._comment b/doc/forum/Fix_duplicate_UUID/comment_1_1186caf5aa6830c2412f8584a43d8d86._comment
new file mode 100644
index 0000000..e224f1d
--- /dev/null
+++ b/doc/forum/Fix_duplicate_UUID/comment_1_1186caf5aa6830c2412f8584a43d8d86._comment
@@ -0,0 +1,18 @@
+[[!comment format=mdwn
+ username="https://launchpad.net/~liori"
+ nickname="liori"
+ avatar="http://cdn.libravatar.org/avatar/e1d0fdc746b3d21bb147160d40815e37b257b9119774d21784939b2d3ba95a91"
+ subject="comment 1"
+ date="2017-07-16T13:08:41Z"
+ content="""
+So, the procedure that worked for me was:
+
+ 1. edit the annex.uuid configuration setting in one of the repositories that had a duplicate
+
+ 2. edit the remote.*.annex-uuid configuration setting in all repositories that had the repository edited in (1) as a remote
+
+ 3. `git annex fsck` in both repositories that had the duplicate uuid—this is because these repositories did not have correct information as to which files they contained, due to my previous syncing efforts
+
+ 4. `git annex sync` till convergence
+
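+The steps above as commands, roughly (paths and the remote name are
+placeholders; git-annex keeps the remote UUID under remote.<name>.annex-uuid):
+
+    NEWUUID=$(uuidgen)
+    git -C /path/to/duplicated/repo config annex.uuid $NEWUUID   # step 1
+    git config remote.duplicated.annex-uuid $NEWUUID             # step 2, in each repo using it
+    git -C /path/to/duplicated/repo annex fsck                   # step 3
+    git annex fsck
+    git annex sync                                               # step 4, repeat till convergence
+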
+"""]]

diff --git a/doc/forum/Fix_duplicate_UUID.mdwn b/doc/forum/Fix_duplicate_UUID.mdwn
new file mode 100644
index 0000000..f1f7ee8
--- /dev/null
+++ b/doc/forum/Fix_duplicate_UUID.mdwn
@@ -0,0 +1,5 @@
+Hi,
+
+For some reason, two of my repositories share the same UUID. Honestly I don't remember how this happened, as I set up these remotes years ago and have not used them for a long time; maybe I just accidentally made a copy instead of a git clone… no idea. Anyway, I'd like to use these remotes now. What's the best way to correct this problem? Is just editing the UUID manually enough?
+
+Thank you,

Added a comment: export "each revision" -- thinking about quiltdata
diff --git a/doc/design/exporting_trees_to_special_remotes/comment_10_75ba45174d3c4b927113a6908061b742._comment b/doc/design/exporting_trees_to_special_remotes/comment_10_75ba45174d3c4b927113a6908061b742._comment
new file mode 100644
index 0000000..15a28e7
--- /dev/null
+++ b/doc/design/exporting_trees_to_special_remotes/comment_10_75ba45174d3c4b927113a6908061b742._comment
@@ -0,0 +1,15 @@
+[[!comment format=mdwn
+ username="yarikoptic"
+ avatar="http://cdn.libravatar.org/avatar/f11e9c84cb18d26a1748c33b48c924b4"
+ subject="export "each revision" -- thinking about quiltdata"
+ date="2017-07-14T20:10:42Z"
+ content="""
+In some cases, if the remote supports versioning, it might be cool to be able to export all versions (from the previously exported point, assuming linear progression).
+I had a chat with the [https://quiltdata.com/](https://quiltdata.com/) folks, a project which I just got to know about.
+1. They claim/hope to provide infinite storage for public datasets
+2. They do support a \"File\" model, so a dataset could simply contain files.  If we could (ab)use that -- sounds like a lovely free ride
+3. They do support versioning. If we could export all the versions -- super lovely.
+
+It might also help to establish interoperability between the tools.
+
+"""]]

Added a comment: Link broken?
diff --git a/doc/devblog/day_462__the_feature_youve_all_been_waiting_for/comment_1_6b13defcdf76691049b9bcd7ec675da6._comment b/doc/devblog/day_462__the_feature_youve_all_been_waiting_for/comment_1_6b13defcdf76691049b9bcd7ec675da6._comment
new file mode 100644
index 0000000..a4638c1
--- /dev/null
+++ b/doc/devblog/day_462__the_feature_youve_all_been_waiting_for/comment_1_6b13defcdf76691049b9bcd7ec675da6._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ username="https://openid.stackexchange.com/user/3ee5cf54-f022-4a71-8666-3c2b5ee231dd"
+ nickname="Anthony DeRobertis"
+ avatar="http://cdn.libravatar.org/avatar/1007bfece547e9f2d86fafa142cd240a62456c37f104a9d96ba9db5bf18e1934"
+ subject="Link broken?"
+ date="2017-07-14T07:37:40Z"
+ content="""
+The link isn't working for me, but I found it at [[http://git-annex.branchable.com/design/exporting_trees_to_special_remotes/]]
+"""]]

initial concern about timeouts
diff --git a/doc/bugs/could_webdav_be_more_resilient_to_timeouts__63__.mdwn b/doc/bugs/could_webdav_be_more_resilient_to_timeouts__63__.mdwn
new file mode 100644
index 0000000..083f135
--- /dev/null
+++ b/doc/bugs/could_webdav_be_more_resilient_to_timeouts__63__.mdwn
@@ -0,0 +1,44 @@
+### Please describe the problem.
+
+Uploads to box.com over WebDAV (a corporate account, so file sizes could be large; chunking to 50MB anyway) time out quite frequently... I wonder if annex could retry more times, etc.?
+
+
+### What version of git-annex are you using? On what operating system?
+
+6.20170525+gitge1cf095ae-1~ndall+1
+
+### Please provide any additional information below.
+
+[[!format sh """
+hopa:/tmp/testbox
+$> git annex copy -J4 --to=box-dartm3.com --json --json-progress video.mp4 
+...
+{"byte-progress":13166304,"action":{"command":"copy","note":"to box-dartm3.com...","key":"MD5E-s1073741824--e06da14afc6face003121641e60593bb.mp4","file":"video.mp4"},"total-size":1073741824,"percent-progress":"1.23%"}
+{"command":"copy","note":"checking box-dartm3.com...","success":false,"key":"MD5E-s1073741824--e06da14afc6face003121641e60593bb.mp4","file":"video.mp4"}
+  DAV failure: 408 "Request Timeout"
+  CallStack (from HasCallStack):
+    error, called at ./Remote/WebDAV.hs:324:47 in main:Remote.WebDAV
+git-annex: copy: 1 failed
+
+$> git annex copy -J4 --to=box-dartm3.com --json --json-progress video.mp4
+{"byte-progress":0,"action":{"command":"copy","note":"to box-dartm3.com...","key":"MD5E-s1073741824--e06da14afc6face003121641e60593bb.mp4","file":"video.mp4"},"total-size":1073741824,"percent-progress":"0%"}
+{"byte-progress":32752,"action":{"command":"copy","note":"to box-dartm3.com...","key":"MD5E-s1073741824--e06da14afc6face003121641e60593bb.mp4","file":"video.mp4"},"total-size":1073741824,"percent-progress":"0%"}
+
+...
+{"byte-progress":225153536,"action":{"command":"copy","note":"to box-dartm3.com...","key":"MD5E-s1073741824--e06da14afc6face003121641e60593bb.mp4","file":"video.mp4"},"total-size":1073741824,"percent-progress":"20.97%"}
+{"command":"copy","note":"checking box-dartm3.com...","success":false,"key":"MD5E-s1073741824--e06da14afc6face003121641e60593bb.mp4","file":"video.mp4"}
+  DAV failure: 408 "Request Timeout"
+  CallStack (from HasCallStack):
+    error, called at ./Remote/WebDAV.hs:324:47 in main:Remote.WebDAV
+git-annex: copy: 1 failed
+
+$> git annex copy -J4 --to=box-dartm3.com --json --json-progress video.mp4
+{"byte-progress":200000000,"action":{"command":"copy","note":"to box-dartm3.com...","key":"MD5E-s1073741824--e06da14afc6face003121641e60593bb.mp4","file":"video.mp4"},"total-size":1073741824,"percent-progress":"18.63%"}
+...
+
+"""]]
+
+Apparently it is actually timing out on checking (I guess after chunk completion?), not even while copying... could there be multiple attempts, and some grace period for the WebDAV server to "finish receiving" a particular file?
+
+
+[[!meta author="yoh"]]

initial whining about init --backend
diff --git a/doc/bugs/--backend_for_init_is_in_no_effect__63__.mdwn b/doc/bugs/--backend_for_init_is_in_no_effect__63__.mdwn
new file mode 100644
index 0000000..a039cbe
--- /dev/null
+++ b/doc/bugs/--backend_for_init_is_in_no_effect__63__.mdwn
@@ -0,0 +1,75 @@
+### Please describe the problem.
+
+`annex init --help` talks about supporting `--backend`, so I expected either a modification of .gitattributes to add the backend, or at least a non-persistent setting in .git/config ... but it seems nothing is actually done?
+
+
+### What version of git-annex are you using? On what operating system?
+
+6.20170525+gitge1cf095ae-1~ndall+1
+
+### Please provide any additional information below.
+
+[[!format sh """
+
+$> git annex init --backend=MD5E
+init  ok
+(recording state in git...)
+(dev) 2 27354 [1].....................................:Wed 12 Jul 2017 07:08:21 PM EDT:.
+(git)hopa:/tmp/1233[master]
+$> cat .git/config 
+[core]
+	repositoryformatversion = 0
+	filemode = true
+	bare = false
+	logallrefupdates = true
+[annex]
+	uuid = 8a045a11-af4a-43b6-922f-cf402efab619
+	version = 5
+(dev) 2 27355 [1].....................................:Wed 12 Jul 2017 07:08:26 PM EDT:.
+(git)hopa:/tmp/1233[master]
+$> git annex init --help        
+git-annex init - initialize git-annex
+
+Usage: git-annex init [DESC] [--version VALUE]
+
+Available options:
+  --version VALUE          Override default annex.version
+  --force                  allow actions that may lose annexed data
+  -F,--fast                avoid slow operations
+  -q,--quiet               avoid verbose output
+  -v,--verbose             allow verbose output (default)
+  -d,--debug               show debug messages
+  --no-debug               don't show debug messages
+  -b,--backend NAME        specify key-value backend to use
+  -N,--numcopies NUMBER    override default number of copies
+  --trust REMOTE           override trust setting
+  --semitrust REMOTE       override trust setting back to default
+  --untrust REMOTE         override trust setting to untrusted
+  -c,--config NAME=VALUE   override git configuration setting
+  --user-agent NAME        override default User-Agent
+  --trust-glacier          Trust Amazon Glacier inventory
+  --notify-finish          show desktop notification after transfer finishes
+  --notify-start           show desktop notification after transfer starts
+  -h,--help                Show this help text
+
+For details, run: git-annex help init
+(dev) 2 27356 [1].....................................:Wed 12 Jul 2017 07:08:31 PM EDT:.
+(git)hopa:/tmp/1233[master]
+$> ls
+(dev) 2 27357 [1].....................................:Wed 12 Jul 2017 07:08:39 PM EDT:.
+(git)hopa:/tmp/1233[master]
+$> ls -lta
+total 312
+drwx------   9 yoh  yoh    4096 Jul 12 19:08 .git/
+drwx------   3 yoh  yoh    4096 Jul 12 19:08 ./
+drwxrwxrwt 143 root root 307200 Jul 12 19:08 ../
+(dev) 2 27358 [1].....................................:Wed 12 Jul 2017 07:08:40 PM EDT:.
+(git)hopa:/tmp/1233[master]
+$> grep -r MD5E .
+
+
+"""]]
+
+### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
+
+kinda

Added a comment: comments on protocol
diff --git a/doc/design/exporting_trees_to_special_remotes/comment_9_6c588170f0b53c74c3c28ff08ed3509d._comment b/doc/design/exporting_trees_to_special_remotes/comment_9_6c588170f0b53c74c3c28ff08ed3509d._comment
new file mode 100644
index 0000000..54b1f8a
--- /dev/null
+++ b/doc/design/exporting_trees_to_special_remotes/comment_9_6c588170f0b53c74c3c28ff08ed3509d._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ username="yarikoptic"
+ avatar="http://cdn.libravatar.org/avatar/f11e9c84cb18d26a1748c33b48c924b4"
+ subject="comments on protocol"
+ date="2017-07-12T22:09:54Z"
+ content="""
+- `TRANSFEREXPORT STORE|RETRIEVE Key File Name`  -- note that File could also contain spaces etc (not only the Name), so it should be encoded somehow?
+- `old external special remote programs ... need to handle an ERROR response` -- why not just bump the protocol `VERSION` to e.g. `2`, so those which implement this would reply with the new version number?
+"""]]

Added a comment: regarding setting a URL by custom special remote
diff --git a/doc/design/exporting_trees_to_special_remotes/comment_8_7e512ef81c529b0392071b8a6dfe853b._comment b/doc/design/exporting_trees_to_special_remotes/comment_8_7e512ef81c529b0392071b8a6dfe853b._comment
new file mode 100644
index 0000000..eef7d08
--- /dev/null
+++ b/doc/design/exporting_trees_to_special_remotes/comment_8_7e512ef81c529b0392071b8a6dfe853b._comment
@@ -0,0 +1,11 @@
+[[!comment format=mdwn
+ username="yarikoptic"
+ avatar="http://cdn.libravatar.org/avatar/f11e9c84cb18d26a1748c33b48c924b4"
+ subject="regarding setting a URL by custom special remote"
+ date="2017-07-12T22:04:38Z"
+ content="""
+I also wonder if `SETURLPRESENT Key Url` could be extended to `SETURLPRESENT Key Url Remote`, i.e. so that a custom remote could register a URL with the web remote?
+In many cases I expect a \"custom uploader/exporter\", but a public URL then becomes available, so demanding that the custom external remote fetch it would be a bit of an overkill.
+
+N.B. I was already burnt once on a large scale by our custom remote truthfully replying to CLAIMURL for public URLs (since it can handle them if needed), thus absorbing them into itself instead of relaying responsibility to the 'Web' remote. I had to traverse dozens of datasets and duplicate urls from the 'datalad' remote to the 'Web' remote.
+"""]]

Added a comment: side-note about WebDAV&DeltaV
diff --git a/doc/design/exporting_trees_to_special_remotes/comment_7_43a98b4b9d9eb54720a9c92cd8bb3a30._comment b/doc/design/exporting_trees_to_special_remotes/comment_7_43a98b4b9d9eb54720a9c92cd8bb3a30._comment
new file mode 100644
index 0000000..0ba8327
--- /dev/null
+++ b/doc/design/exporting_trees_to_special_remotes/comment_7_43a98b4b9d9eb54720a9c92cd8bb3a30._comment
@@ -0,0 +1,14 @@
+[[!comment format=mdwn
+ username="yarikoptic"
+ avatar="http://cdn.libravatar.org/avatar/f11e9c84cb18d26a1748c33b48c924b4"
+ subject="side-note about WebDAV&DeltaV"
+ date="2017-07-12T21:54:49Z"
+ content="""
+DAV = “Distributed Authoring and Versioning”, but versioning was left out of the original RFC. Only some servers/clients implement the DeltaV spec (RFC 3253), which came later to fill that gap.
+But in principle, any DeltaV-compliant WebDAV special remote could then be used for \"export\" while retaining access to all the versions.
+References: 
+- [WebDAV and Autoversioning - Version Control with Subversion](http://archive.oreilly.com/pub/a/opensource/excerpts/9780596510336/webdav-and-autoversioning.html)
+- [RFC 3253](http://www.webdav.org/specs/rfc3253.html)
+
+I got interested when I saw that box.com is supported through WebDAV, but I am not sure whether DeltaV is supported at all, and apparently the number of versions stored per file depends on the type of account (no versions for a free personal one): https://community.box.com/t5/How-to-Guides-for-Managing/How-To-Track-Your-Files-and-File-Versions-Version-History/ta-p/329
+"""]]

comment
diff --git a/doc/design/exporting_trees_to_special_remotes/comment_6_3217c2f852e5d9b1e4be2adff995dd24._comment b/doc/design/exporting_trees_to_special_remotes/comment_6_3217c2f852e5d9b1e4be2adff995dd24._comment
new file mode 100644
index 0000000..4b1265b
--- /dev/null
+++ b/doc/design/exporting_trees_to_special_remotes/comment_6_3217c2f852e5d9b1e4be2adff995dd24._comment
@@ -0,0 +1,19 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 6"""
+ date="2017-07-12T18:09:00Z"
+ content="""
+That would almost work without any smarts on the git-annex side.
+When it tells the special remote to `REMOVEEXPORT`, the special remote
+could remove the file from the HEAD equivalent but retain the content in its
+versioned snapshots, and keep the url to that registered. But, that
+doesn't actually work, because the url is registered for that special
+remote, not the web special remote. Once git-annex thinks the file has been
+removed from the special remote, it will never try to use the url
+registered for that special remote.
+
+So, to support versioning-capable special remotes, there would need to be
+an additional response to `REMOVEEXPORT` that says "I removed it from HEAD,
+but I still have a copy in this url, which can be accessed using
+the web special remote".
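+
+For illustration only (the response name here is made up, not part of the
+protocol):
+
+    REMOVEEXPORT Key Name
+    REMOVE-SUCCESS-KEPTURL Key Url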
+"""]]

devblog
diff --git a/doc/devblog/day_462__the_feature_youve_all_been_waiting_for.mdwn b/doc/devblog/day_462__the_feature_youve_all_been_waiting_for.mdwn
new file mode 100644
index 0000000..8f78fd8
--- /dev/null
+++ b/doc/devblog/day_462__the_feature_youve_all_been_waiting_for.mdwn
@@ -0,0 +1,12 @@
+Have been working on a design for [[exporting_trees_to_special_remotes]].
+As well as being handy for publishing scientific data sets out of git-annex
+repositories, that covers long-requested features like
+[[todo/dumb, unsafe, human-readable_backend]].
+
+I had not been optimistic about such requests, which seemed half-baked, but
+Yoh came up with the idea of exporting a git treeish, and remembering the last
+exported treeish so a subsequent export can be done incrementally, and can
+fully sync the exported tree.
+
+Please take a look at the design if you've wanted to use git-annex for some
+sort of tree export before, and see if it meets your needs.

Added a comment: For Debian/Ubuntu users -- get git-annex-standalone from NeuroDebian
diff --git a/doc/install/comment_2_ba3985a5cbd9f5682807d2bdbb9874e2._comment b/doc/install/comment_2_ba3985a5cbd9f5682807d2bdbb9874e2._comment
new file mode 100644
index 0000000..4db93ac
--- /dev/null
+++ b/doc/install/comment_2_ba3985a5cbd9f5682807d2bdbb9874e2._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="yarikoptic"
+ avatar="http://cdn.libravatar.org/avatar/f11e9c84cb18d26a1748c33b48c924b4"
+ subject="For Debian/Ubuntu users -- get git-annex-standalone from NeuroDebian"
+ date="2017-07-12T17:57:21Z"
+ content="""
+We provide quite an up-to-date standalone backport build of git-annex (package name [git-annex-standalone](http://neuro.debian.net/pkgs/git-annex-standalone.html)) through NeuroDebian for all Debian/Ubuntu releases, so you might want to enable the NeuroDebian repository (`apt-get install neurodebian` on a recent Debian/Ubuntu, or follow the [NeuroDebian website](http://neuro.debian.net) for instructions).
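+
+For example, on a recent Debian/Ubuntu this is roughly:
+
+    sudo apt-get install neurodebian
+    sudo apt-get update
+    sudo apt-get install git-annex-standalone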
+"""]]

Added a comment: special remotes with versioning support
diff --git a/doc/design/exporting_trees_to_special_remotes/comment_5_fcd9890013371dae6ffcd00561b8c625._comment b/doc/design/exporting_trees_to_special_remotes/comment_5_fcd9890013371dae6ffcd00561b8c625._comment
new file mode 100644
index 0000000..fb73f06
--- /dev/null
+++ b/doc/design/exporting_trees_to_special_remotes/comment_5_fcd9890013371dae6ffcd00561b8c625._comment
@@ -0,0 +1,28 @@
+[[!comment format=mdwn
+ username="yarikoptic"
+ avatar="http://cdn.libravatar.org/avatar/f11e9c84cb18d26a1748c33b48c924b4"
+ subject="special remotes with versioning support"
+ date="2017-07-12T17:30:33Z"
+ content="""
+thanks -- I will check those all out!
+
+Meanwhile a quick one regarding \"I'm not clear about what you're suggesting be done with versioning support in external special remotes?\".
+
+I meant that in some cases no custom/special tracking per exported file would be needed -- upon export we could just register a unique URL for that particular version of the file for the corresponding KEY, so later on it could be 'annex get'ed even if a new version of the file gets uploaded or removed.  So annex could just store the hexsha of the treeish(es) that were exported last, without any explicit additional tracking per file.  The URL might be some custom one to be handled by the special remote backend.
+
+E.g. here is a list of versions (and corresponding urls) for a sample file on the s3 bucket
+
+[[!format sh \"\"\"
+$> datalad ls -aL s3://datalad-test0-versioned/3versions-allversioned.txt 
+Connecting to bucket: datalad-test0-versioned
+[INFO   ] S3 session: Connecting to the bucket datalad-test0-versioned 
+Bucket info:
+  Versioning: {'MfaDelete': 'Disabled', 'Versioning': 'Enabled'}
+     Website: datalad-test0-versioned.s3-website-us-east-1.amazonaws.com
+         ACL: <Policy: yoh@cs.unm.edu (owner) = FULL_CONTROL>
+3versions-allversioned.txt            ...  http://datalad-test0-versioned.s3.amazonaws.com/3versions-allversioned.txt?versionId=Kvuind11HZh._dCPaDAb0OY9dRrQoTMn [OK]
+3versions-allversioned.txt            ...  http://datalad-test0-versioned.s3.amazonaws.com/3versions-allversioned.txt?versionId=b.qCuh7Sg58VIYj8TVHzbRS97EvejzEl [OK]
+3versions-allversioned.txt            ...  http://datalad-test0-versioned.s3.amazonaws.com/3versions-allversioned.txt?versionId=pNsV5jJrnGATkmNrP8.i_xNH6CY4Mo5s [OK]
+3versions-allversioned.txt_sameprefix ...  http://datalad-test0-versioned.s3.amazonaws.com/3versions-allversioned.txt_sameprefix?versionId=Mvsc4FgJWc6gExwSw1d6wsLrnk6wdDVa [OK]
+\"\"\"]]
+"""]]

comment
diff --git a/doc/design/exporting_trees_to_special_remotes/comment_4_126ee5332ff88b3993d33d59328d4148._comment b/doc/design/exporting_trees_to_special_remotes/comment_4_126ee5332ff88b3993d33d59328d4148._comment
new file mode 100644
index 0000000..89bc18d
--- /dev/null
+++ b/doc/design/exporting_trees_to_special_remotes/comment_4_126ee5332ff88b3993d33d59328d4148._comment
@@ -0,0 +1,24 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 4"""
+ date="2017-07-12T16:45:51Z"
+ content="""
+I've added a section with changes to the external special remote protocol.
+I included the Key in each of the new protocol commands, although it's not
+strictly needed, to allow the implementation to use SETURLPRESENT, SETSTATE,
+etc.
+
+`git annex copy $file --to myexport` could perhaps work; the difficulty
+though is, what if you've exported branch foo, and then checked out bar,
+and so you told it to export one version of the file, and are running
+git-annex copy on a different version? It seems that git-annex would have
+to cross-check in this and similar commands, to detect such a situation. 
+Unsure how much more work that would be, both CPU time and implementation
+time.
+
+I do think that `git annex get` could download files from exports easily
+enough, but see the "location tracking" section for trust caveats.
+
+I'm not clear about what you're suggesting be done with versioning support
+in external special remotes?
+"""]]

protocol design
diff --git a/doc/design/exporting_trees_to_special_remotes.mdwn b/doc/design/exporting_trees_to_special_remotes.mdwn
index 5864e21..773fe68 100644
--- a/doc/design/exporting_trees_to_special_remotes.mdwn
+++ b/doc/design/exporting_trees_to_special_remotes.mdwn
@@ -4,6 +4,8 @@ and content from the tree.
 
 (See also [[todo/export]] and [[todo/dumb, unsafe, human-readable_backend]])
 
+[[!toc ]]
+
 ## configuring a special remote for tree export
 
 If a special remote already has files stored in it, switching it to be a
@@ -62,6 +64,11 @@ To efficiently update an export, git-annex can diff the tree
 that was exported with the new tree. The naive approach is to upload
 new and modified files and remove deleted files.
 
+Note that a file may have been partially uploaded to an export, and then
+the export updated to a tree without that file. So, need to try to delete
+all removed files, even if location tracking does not say that the special
+remote contains them.
+
 With rename detection, if the special remote supports moving files,
 more efficient updates can be done. It gets complicated; consider two files
 that swap names.
@@ -93,7 +100,45 @@ if the file instead still has the old key's content. Instead, the whole
 file needs to be re-uploaded.
 
 Alternative: Keep an index file that's the current state of the export.
-See comment #4 of [[todo/export]]. Not sure if that works?
+See comment #4 of [[todo/export]]. Not sure if that works? Perhaps it
+would be overkill if it's only used to support resuming partial uploads.
+
+## changes to special remote interface
+
+This needs some additional methods added to special remotes, and to
+the [[external_special_remote_protocol]].
+
+* `TRANSFEREXPORT STORE|RETRIEVE Key File Name`  
+  Requests the transfer of a File on local disk to or from a given
+  Name on the special remote.  
+  The Name will be in the form of a relative path, and may contain
+  path separators, whitespace, and other special characters.  
+  The Key is provided in case the special remote wants to use eg
+  `SETURIPRESENT`.  
+  The remote responds with either `TRANSFER-SUCCESS` or
+  `TRANSFER-FAILURE`, and a remote where exports do not make sense
+  may always fail.
+* `CHECKPRESENTEXPORT Key Name`
+  Requests the remote to check if a Name is present in it.  
+  The remote responds with `CHECKPRESENT-SUCCESS`, `CHECKPRESENT-FAILURE`,
+  or `CHECKPRESENT-UNKNOWN`.
+* `REMOVEEXPORT Key Name`
+  Requests the remote to remove content stored by `TRANSFEREXPORT`.  
+  The Key is provided in case the remote wants to use eg
+  `SETURIMISSING`.
+  The remote responds with either `REMOVE-SUCCESS` or
+  `REMOVE-FAILURE`.
+* `RENAMEEXPORT Key OldName NewName`
+  Requests the remote rename a file stored on it from OldName to NewName.  
+  The Key is provided in case the remote wants to use eg 
+  `SETURIMISSING` and `SETURIPRESENT`.  
+  The remote responds with `RENAMEEXPORT-SUCCESS`,
+  `RENAMEEXPORT-FAILURE`, or with `RENAMEEXPORT-UNSUPPORTED` if an efficient
+  rename cannot be done.
+
+To support old external special remote programs that have not been updated
+to support exports, git-annex will need to handle an `ERROR` response
+when using any of the above.
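+
+As a rough illustration only (not part of the protocol spec), an external
+special remote written in shell might dispatch these proposed messages
+along the following lines. The reply arguments, the whitespace handling,
+and the `UNSUPPORTED-REQUEST` fallback are assumptions based on existing
+protocol conventions; a real remote must also implement the existing
+messages (INITREMOTE, PREPARE, TRANSFER, ...).
+
+	#!/bin/sh
+	# Hypothetical sketch: serve exports out of a plain directory.
+	# Name may contain spaces (it is always the remainder of the line);
+	# File and OldName are assumed not to, to keep the parsing simple.
+	dir=/var/tmp/myexport
+	while read -r cmd rest; do
+		case "$cmd" in
+		TRANSFEREXPORT)
+			# rest is: STORE|RETRIEVE Key File Name
+			op=${rest%% *}; rest=${rest#* }
+			key=${rest%% *}; rest=${rest#* }
+			file=${rest%% *}; name=${rest#* }
+			if [ "$op" = STORE ]; then
+				mkdir -p "$dir/$(dirname "$name")"
+				src="$file"; dst="$dir/$name"
+			else
+				src="$dir/$name"; dst="$file"
+			fi
+			cp "$src" "$dst" && echo "TRANSFER-SUCCESS $op $key" \
+				|| echo "TRANSFER-FAILURE $op $key"
+			;;
+		CHECKPRESENTEXPORT)
+			key=${rest%% *}; name=${rest#* }
+			[ -e "$dir/$name" ] && echo "CHECKPRESENT-SUCCESS $key" \
+				|| echo "CHECKPRESENT-FAILURE $key"
+			;;
+		REMOVEEXPORT)
+			key=${rest%% *}; name=${rest#* }
+			rm -f "$dir/$name" && echo "REMOVE-SUCCESS $key" \
+				|| echo "REMOVE-FAILURE $key"
+			;;
+		RENAMEEXPORT)
+			key=${rest%% *}; rest=${rest#* }
+			old=${rest%% *}; new=${rest#* }
+			mv "$dir/$old" "$dir/$new" && echo "RENAMEEXPORT-SUCCESS $key" \
+				|| echo "RENAMEEXPORT-FAILURE $key"
+			;;
+		*)
+			echo "UNSUPPORTED-REQUEST"
+			;;
+		esac
+	done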
 
 ## location tracking
 
@@ -141,7 +186,8 @@ find what changes need to be made to the special remote.
 If the filenames are stored in the location tracking log, the exported tree
 could be reconstructed, but it would take O(N) queries to git, where N is
 the total number of keys git-annex knows about; updating exports of small
-subsets of large repositories would be expensive.
+subsets of large repositories would be expensive. So grafting in the
+exported tree seems the better approach.
 
 ## export conflicts
 

Added a comment: does it really need to be a new command ("export") or could be the same old "copy"?
diff --git a/doc/design/exporting_trees_to_special_remotes/comment_3_cb063cdc66df79c40039bce247b7170c._comment b/doc/design/exporting_trees_to_special_remotes/comment_3_cb063cdc66df79c40039bce247b7170c._comment
new file mode 100644
index 0000000..377d167
--- /dev/null
+++ b/doc/design/exporting_trees_to_special_remotes/comment_3_cb063cdc66df79c40039bce247b7170c._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ username="yarikoptic"
+ avatar="http://cdn.libravatar.org/avatar/f11e9c84cb18d26a1748c33b48c924b4"
+ subject="does it really need to be a new command (&quot;export&quot;) or could be the same old &quot;copy&quot;?"
+ date="2017-07-11T22:14:39Z"
+ content="""
+Or it could just be a mode of operation of the special remote, depending on whether \"exporttree=true\" is set: in the old case it would operate on the keys associated with the files given on the command line (or just on keys, for --auto or when selected by metadata), whereas with \"exporttree=true\" it would operate on the filenames given on the command line (or on the files found to be associated with the keys selected by --auto or by metadata)?
+Then the same 'copy --to' could be used in both cases, streamlining the user experience ;)
+"""]]

Added a comment: couldn't STATE be used for KEY -> FILENAME(s) mapping?
diff --git a/doc/design/exporting_trees_to_special_remotes/comment_2_d414fb575845770e003a3c8ca4a986be._comment b/doc/design/exporting_trees_to_special_remotes/comment_2_d414fb575845770e003a3c8ca4a986be._comment
new file mode 100644
index 0000000..f0fc092
--- /dev/null
+++ b/doc/design/exporting_trees_to_special_remotes/comment_2_d414fb575845770e003a3c8ca4a986be._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ username="yarikoptic"
+ avatar="http://cdn.libravatar.org/avatar/f11e9c84cb18d26a1748c33b48c924b4"
+ subject="couldn't STATE be used for KEY -> FILENAME(s) mapping?"
+ date="2017-07-11T22:05:49Z"
+ content="""
+Just wondering...
+At least in my attempt at a zenodo special remote, I stored zenodo's file deposition ID within the state, to be able to request it back later on.
+An alternative would be URL(s), I guess.  Could be something like exported:UUID/filename.
+"""]]

Added a comment: note that some remotes could support files versioning "natively"
diff --git a/doc/design/exporting_trees_to_special_remotes/comment_1_ea84ee9de604e05b8e02483ba8452186._comment b/doc/design/exporting_trees_to_special_remotes/comment_1_ea84ee9de604e05b8e02483ba8452186._comment
new file mode 100644
index 0000000..c6e7c7a
--- /dev/null
+++ b/doc/design/exporting_trees_to_special_remotes/comment_1_ea84ee9de604e05b8e02483ba8452186._comment
@@ -0,0 +1,15 @@
+[[!comment format=mdwn
+ username="yarikoptic"
+ avatar="http://cdn.libravatar.org/avatar/f11e9c84cb18d26a1748c33b48c924b4"
+ subject="note that some remotes could support files versioning &quot;natively&quot;"
+ date="2017-07-11T21:59:49Z"
+ content="""
+E.g. when exporting to an S3 bucket with versioning turned on, or to OSF (AFAIK).  So upon a successful upload, the special remote could use SETURLPRESENT to signal the availability of any particular key (associated with the file).
+
+I have yet to grasp the cases you outlined well enough to see whether I can spot any other applicable use case.
+
+I hope that export would be implemented by extending the external special remote protocol? ;)
+
+[[!meta author=yoh]]
+
+"""]]

improve
diff --git a/doc/design/exporting_trees_to_special_remotes.mdwn b/doc/design/exporting_trees_to_special_remotes.mdwn
index 6ded07b..5864e21 100644
--- a/doc/design/exporting_trees_to_special_remotes.mdwn
+++ b/doc/design/exporting_trees_to_special_remotes.mdwn
@@ -156,14 +156,13 @@ but clocks vary too much to trust it.
 Also, if the exported tree is grafted in to the git-annex branch, 
 there would be a merge conflict. Union merging would *scramble* the exported 
 tree, so even if a smart merge is added, old versions of git-annex would 
-corrupt the exported tree. To avoid this problem, add a log file 
-`exported/uuid.log` that lists the sha1 of the exported tree and the uuid
-of the repository that exported it. Still graft in the exported tree at 
-`exported/uuid/`  (so it gets transferred to remotes and is not GCed).
-When looking up the exported tree, read the sha1 from the log file,
-and use it rather than what's currently grafted into the git-annex branch.
-(Old versions of git-annex would still union merge the exported tree,
-and the resulting junk would waste some space.)
+corrupt the exported tree.
+
+To avoid that problem, add a log file  `exported/uuid.log` that lists
+the sha1 of the exported tree and the uuid of the repository that exported it.
+To avoid the exported tree being GCed, do graft it in to the git-annex
+branch, but follow that with a commit that removes the tree again,
+and only update `refs/heads/git-annex` after making both commits.
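+
+A sketch of that two-commit dance in git plumbing, using a temporary
+index (here `$UUID` and `$EXPORTED_TREE` are assumed to already be
+known):
+
+	export GIT_INDEX_FILE=$(mktemp -u)
+	base=$(git rev-parse refs/heads/git-annex)
+	git read-tree refs/heads/git-annex
+	basetree=$(git write-tree)
+	# graft the exported tree in under exported/$UUID/ ...
+	git read-tree --prefix=exported/$UUID/ "$EXPORTED_TREE"
+	graft=$(git commit-tree $(git write-tree) -p "$base" -m "graft in exported tree")
+	# ... and immediately remove it again, so the branch stays clean while
+	# the grafting commit keeps the exported tree reachable (not GCed)
+	clean=$(git commit-tree "$basetree" -p "$graft" -m "remove exported tree graft")
+	# only move the branch once both commits exist
+	git update-ref refs/heads/git-annex "$clean"
+	rm -f "$GIT_INDEX_FILE"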
 
 If `exported/uuid.log` contains multiple active exports, there was an
 export conflict. Short of downloading the whole export to checksum it,

add design
diff --git a/doc/design/exporting_trees_to_special_remotes.mdwn b/doc/design/exporting_trees_to_special_remotes.mdwn
new file mode 100644
index 0000000..6ded07b
--- /dev/null
+++ b/doc/design/exporting_trees_to_special_remotes.mdwn
@@ -0,0 +1,181 @@
+For publishing content from a git-annex repository, it would be useful to
+be able to export a tree of files to a special remote, using the filenames
+and content from the tree.
+
+(See also [[todo/export]] and [[todo/dumb, unsafe, human-readable_backend]])
+
+## configuring a special remote for tree export
+
+If a special remote already has files stored in it, switching it to be a
+tree export would result in a mix of files named by key and by filename.
+That's not desirable. So, the user should set up a new special remote
+when they want to export a tree. (It would also be possible to drop all content
+from an existing special remote and reuse it, but there does not seem much
+benefit in doing so.)
+
+Add a new `initremote` configuration `exporttree=true`, that cannot be
+changed by `enableremote`:
+
+	git annex initremote myexport type=... exporttree=true
+
+It does not make sense to encrypt an export, so exporttree=true requires
+(and can even imply) encryption=false.
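+
+For instance (a hypothetical invocation, assuming the directory special
+remote gains export support; `encryption=none` is the existing way to
+disable encryption at initremote time):
+
+	git annex initremote myexport type=directory directory=/mnt/export \
+		encryption=none exporttree=true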
+
+Note that the particular tree to export is not specified yet. This is
+because the tree that is exported to a special remote may change.
+
+## exporting a treeish
+
+To export a treeish, the user can run:
+
+	git annex export $treeish --to myexport
+
+That does all necessary uploads etc to make the special remote contain
+the tree of files. The treeish can be a tag, a branch, or a tree.
+
+Users may sometimes want to export multiple treeishes to a single special 
+remote. For example, exporting several tags. This interface could be
+complicated to support that, putting the treeishes in subdirectories on the
+special remote etc. But that's not necessary, because the user can use git
+commands to graft trees together into a larger tree, and export that larger
+tree.
+
+If an export is interrupted, running it again should resume where it left
+off.
+
+It would also be nice to have a way to say, "I want to export the master branch", 
+and have git-annex sync and the assistant automatically update the export.
+This could be done by recording the treeish in eg, refs/remotes/myexport/HEAD.
+git-annex export could do this by default (if the user doesn't want the export
+to track the branch, they could instead export a tree or a tag).
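+
+As an illustration of the bookkeeping (plain git, not a proposed
+git-annex interface), such a ref could simply follow the branch:
+
+	# make refs/remotes/myexport/HEAD track the master branch
+	git symbolic-ref refs/remotes/myexport/HEAD refs/heads/master
+	# later, resolve it to find the treeish to export
+	git rev-parse refs/remotes/myexport/HEAD^{tree}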
+
+## updating an export
+
+The user can at any time re-run git-annex export with a new treeish
+to change what's exported. While some use cases for git annex export
+involve publishing datasets that are intended to remain immutable,
+other use cases include eg, making a tree of files available to a computer
+that can't run git-annex, and in such use cases, the tree needs to be able
+to be updated.
+
+To efficiently update an export, git-annex can diff the tree
+that was exported with the new tree. The naive approach is to upload
+new and modified files and remove deleted files.
+
+With rename detection, if the special remote supports moving files,
+more efficient updates can be done. It gets complicated; consider two files
+that swap names.
+
+If the special remote supports copying files, that would also make some
+updates more efficient.
+
+## resuming exports
+
+Resuming an interrupted export needs to work well.
+
+There are two cases here:
+
+1. Some of the files in the tree have been uploaded; others have not.
+2. A file has been partially uploaded.
+
+These two cases need to be disentangled somehow in order to handle
+them. One way is to use the location log as follows:
+
+* Before a file is uploaded, look up what key is currently exported
+  using that filename. If there is one, update the location log,
+  saying it's not present in the special remote.
+* Upload the file.
+* Update the location log for the newly exported key.
+
+Note that this method does not allow resuming a partial upload by appending to
+a file, because we don't know if the file actually started to be uploaded, or
+if the file instead still has the old key's content. Instead, the whole
+file needs to be re-uploaded.
+
+Alternative: Keep an index file that's the current state of the export.
+See comment #4 of [[todo/export]]. Not sure if that works?
+
+## location tracking
+
+Does a copy of a file exported to a special remote count as a copy
+of a file as far as [[numcopies]] goes? Should git-annex get download
+a file from an export? Or should exporting not update location tracking?
+
+The problem is that special remotes with exports are not
+key/value stores. The content of a file can change, and if multiple
+repositories can export a special remote, they can be out of sync about
+what files are exported to it. 
+
+To avoid such problems, when updating an exported file on a special remote,
+the key could be recorded there too. But, this would have to be done
+atomically, and checked atomically when downloading the file. Special
+remotes lack atomicity guarantees for file storage, let alone for file
+retrieval.
+
+Possible solution: Make exporttree=true cause the special remote to
+be untrusted, and rely on annex.verify to catch cases where the content
+of a file on a special remote has changed. This would work well enough
+except for when the WORM or URL backend is used. So, prevent the user
+from exporting such keys. Also, force verification on for such special
+remotes, don't let it be turned off.
+
+## recording exported filenames in git-annex branch
+
+In order to download the content of a key from a file exported
+to a special remote, the filename that was exported needs to somehow
+be recorded in the git-annex branch. How to do this? The filename could
+be included in the location tracking log or a related log file, or 
+the exported tree could be grafted into the git-annex branch
+(under eg, `exported/uuid/`). Which way uses less space in the git repository?
+
+Grafting in the exported tree records the necessary data, but the
+file-to-key map needs to be reversed to support downloading from an export.
+It would be too expensive to traverse the tree each time to hunt for a key;
+instead would need a database that gets populated once by traversing the
+tree.
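+
+A rough sketch of populating such a map, assuming locked (symlinked)
+annexed files whose symlink target ends in the key, with `$TREE` being
+the grafted exported tree:
+
+	git ls-tree -r "$TREE" | while read -r mode type sha path; do
+		[ "$mode" = 120000 ] || continue  # only annexed symlinks
+		key=$(basename "$(git cat-file blob "$sha")")
+		printf '%s\t%s\n' "$key" "$path"
+	done > key-to-file.map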
+
+On the other hand, for updating what's exported, having access to the old
+exported tree seems perfect, because it and the new tree can be diffed to
+find what changes need to be made to the special remote. 
+
+If the filenames are stored in the location tracking log, the exported tree
+could be reconstructed, but it would take O(N) queries to git, where N is
+the total number of keys git-annex knows about; updating exports of small
+subsets of large repositories would be expensive.
+
+## export conflicts
+
+What if different repositories can access the same special remote,
+and different trees get exported to it concurrently?
+
+This would be very hard to untangle, because it's hard to know what
+content was exported to a file last, and thus what content the file
+actually has. The location log's timestamps might give a hint, 
+but clocks vary too much to trust it.
+
+Also, if the exported tree is grafted in to the git-annex branch, 
+there would be a merge conflict. Union merging would *scramble* the exported 
+tree, so even if a smart merge is added, old versions of git-annex would 
+corrupt the exported tree. To avoid this problem, add a log file 
+`exported/uuid.log` that lists the sha1 of the exported tree and the uuid
+of the repository that exported it. Still graft in the exported tree at 
+`exported/uuid/`  (so it gets transferred to remotes and is not GCed).
+When looking up the exported tree, read the sha1 from the log file,
+and use it rather than what's currently grafted into the git-annex branch.
+(Old versions of git-annex would still union merge the exported tree,
+and the resulting junk would waste some space.)
+
+If `exported/uuid.log` contains multiple active exports, there was an
+export conflict. Short of downloading the whole export to checksum it,
+or deleting the whole export, what can be done to resolve it?
+
+In this case, git-annex knows both exported trees. Have the user provide
+a tree that resolves the conflict as they desire (it could be the same as
+one of the exported trees, or some merge of them). Then diff each exported
+tree in turn against the resolving tree. If a file differs, re-export that
+file. In some cases this will do unnecessary re-uploads, but it's reasonably
+efficient.
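+
+Sketched with git plumbing, where `$RESOLVING` is the tree the user
+provided and `$EXPORTED_A`/`$EXPORTED_B` are the conflicting exported
+trees recorded in `exported/uuid.log`:
+
+	for old in "$EXPORTED_A" "$EXPORTED_B"; do
+		git diff-tree -r --name-only "$old" "$RESOLVING"
+	done | sort -u > files-to-reexport
+	# every file listed needs to be re-uploaded from $RESOLVING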
+
+The documentation should suggest strongly only exporting to a given special
+remote from a single repository, or having some other rule that avoids
+export conflicts.
diff --git a/doc/todo/export/comment_6_88529583c38bc0c13dbe9a097e97b744._comment b/doc/todo/export/comment_6_88529583c38bc0c13dbe9a097e97b744._comment
new file mode 100644
index 0000000..376e5ad
--- /dev/null
+++ b/doc/todo/export/comment_6_88529583c38bc0c13dbe9a097e97b744._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 6"""
+ date="2017-07-11T15:32:07Z"
+ content="""
+I've started a more detailed/coherent design document at
+[[design/exporting_trees_to_special_remotes]].

(Diff truncated)
note
diff --git a/doc/todo/export/comment_5_e5ac435b73818d9002f5ada84062e933._comment b/doc/todo/export/comment_5_e5ac435b73818d9002f5ada84062e933._comment
new file mode 100644
index 0000000..0377548
--- /dev/null
+++ b/doc/todo/export/comment_5_e5ac435b73818d9002f5ada84062e933._comment
@@ -0,0 +1,21 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 5"""
+ date="2017-07-10T18:15:02Z"
+ content="""
+Using trees like that would not work well in a distributed setting
+where two repositories could be storing content on the same special remote.
+
+The per-remote trees could be stored, by eg grafting them into the
+git-annex branches's tree under the uuid of the special remote.
+
+But, there could then be merge conflicts, when different trees have been
+exported to the same special remote concurrently. And there's no way to
+resolve such a merge: If repo A uploaded F containing K and B uploaded F
+containing L, we don't know which file the special remote ended up with.
+If that happened it would have to delete and re-export from scratch.
+
+I think it's fine for exporting to only be able to be done from one
+repository. But, if a user tries to do the above, it needs to fail in some
+reasonable way, not leave a mess.
+"""]]

thoughts
diff --git a/doc/todo/export/comment_4_da8b5b8c9fc371fe138590744b2adce3._comment b/doc/todo/export/comment_4_da8b5b8c9fc371fe138590744b2adce3._comment
new file mode 100644
index 0000000..1d694e0
--- /dev/null
+++ b/doc/todo/export/comment_4_da8b5b8c9fc371fe138590744b2adce3._comment
@@ -0,0 +1,51 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 4"""
+ date="2017-07-10T17:30:23Z"
+ content="""
+Yoh suggested storing a treeish associated with the export to a special
+remote. Pointed out that the diff from that treeish to the next one could
+be used to update the export.
+
+(That does seem very close to [[todo/dumb, unsafe, human-readable_backend]].)
+
+Would need to somehow deal with partial uploads. There are two ways
+an upload can be partial:
+
+* Some of the files in the treeish have been uploaded; others have not.
+* A file has been partially uploaded.
+
+These two cases need to be disentangled somehow in order to handle
+them. It could use the location log, so once a key gets uploaded
+to the special remote, its content is marked present. However, using the
+location log does not seem sufficient to handle all cases (eg two files
+swapping names between two treeishes, where one of the files has been
+uploaded only partially to the special remote).
+
+It seems promising to keep track of two separate treeishes:
+
+1. The treeish that is the current goal to have exported to the special
+   remote.
+2. The treeish for the current state of the special remote. Note that
+   after even an interrupted transfer, this treeish needs to be updated to
+   contain the current state of the special remote, which would mean
+   constructing a new treeish. (Perhaps a per-remote index file could be
+   used.)
+
+Having these two treeishes allows:
+
+* Detecting renames of exported files, which some special remotes can do
+  more efficiently.
+* Determining the key that a given file on the special remote is
+  storing the content of.
+* Resuming an interrupted export, without re-uploading all the files.
+* Detecting a partially uploaded file, because the current state treeish
+  for the remote should not contain that file.
+* Knowing what key was in the process of being sent to a partially uploaded
+  file, and so resuming that upload. Look at the goal treeish and see what
+  key it has for the file; as long as the goal treeish is the same goal
+  that was used for the interrupted export, that's the key. (This needs a
+  way to track if the goal has changed.)
+* Optionally, making `git annex sync` and the assistant upload as needed
+  to satisfy goal treeishes for special remotes.
+"""]]

Added a comment
diff --git a/doc/special_remotes/ipfs/comment_9_630964c11465751773d1082e35737e70._comment b/doc/special_remotes/ipfs/comment_9_630964c11465751773d1082e35737e70._comment
new file mode 100644
index 0000000..719f5e1
--- /dev/null
+++ b/doc/special_remotes/ipfs/comment_9_630964c11465751773d1082e35737e70._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="fiatjaf"
+ avatar="http://cdn.libravatar.org/avatar/b760f503c84d1bf47322f401066c753f"
+ subject="comment 9"
+ date="2017-07-10T13:08:53Z"
+ content="""
+There's now a service that pins your IPFS files for a low price: [https://www.eternum.io/](https://www.eternum.io/). I was looking for something like this to start using IPFS with git-annex. Maybe I'll try it now.
+"""]]

How to use git-annex for content on external drives only?
diff --git a/doc/forum/Workflow_for_using_git-annex_only_on_external_drives.mdwn b/doc/forum/Workflow_for_using_git-annex_only_on_external_drives.mdwn
new file mode 100644
index 0000000..734be10
--- /dev/null
+++ b/doc/forum/Workflow_for_using_git-annex_only_on_external_drives.mdwn
@@ -0,0 +1,8 @@
+I am starting to use annex and feel a bit overwhelmed. I would appreciate guidance on this use case.
+
+1. Content must exist on multiple external drives only. I do not want those files on my internal HDD.
+2. Not all drives will have all files. Probably none will. Drives might be smaller than the full set.
+3. I should always have two copies of each file somewhere (and be warned if not).
+4. Some files already exist on some of the drives I intend to use annex with in this way. When setting up annex on them, I would rather avoid having to transfer in files from another already-set-up drive if those files already exist on the new drive.
+5. Some drives will necessarily be using FAT32.
+6. I will be adding new content over time.

Added a comment
diff --git a/doc/forum/Best_way_to_remove_old_version_of_specific_file/comment_3_c1f34ec908d7c4ad05c4360e29e68fb4._comment b/doc/forum/Best_way_to_remove_old_version_of_specific_file/comment_3_c1f34ec908d7c4ad05c4360e29e68fb4._comment
new file mode 100644
index 0000000..6fd7012
--- /dev/null
+++ b/doc/forum/Best_way_to_remove_old_version_of_specific_file/comment_3_c1f34ec908d7c4ad05c4360e29e68fb4._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="memeplex"
+ avatar="http://cdn.libravatar.org/avatar/84a611000e819ef825421de06c9bca90"
+ subject="comment 3"
+ date="2017-07-08T15:47:47Z"
+ content="""
+I mean, as I understand it I unlock the file, modify it, re-add it, commit, go to the previous revision and drop the annexed old version there. I see this as an instance of the remove-old-annexed-version general case, which AFAIK is valid. What am I missing here? Should I lock instead of adding the new file?
+"""]]

Added a comment
diff --git a/doc/forum/Best_way_to_remove_old_version_of_specific_file/comment_2_6580f6790404694b55d07a0e57d83336._comment b/doc/forum/Best_way_to_remove_old_version_of_specific_file/comment_2_6580f6790404694b55d07a0e57d83336._comment
new file mode 100644
index 0000000..77b3947
--- /dev/null
+++ b/doc/forum/Best_way_to_remove_old_version_of_specific_file/comment_2_6580f6790404694b55d07a0e57d83336._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="memeplex"
+ avatar="http://cdn.libravatar.org/avatar/84a611000e819ef825421de06c9bca90"
+ subject="comment 2"
+ date="2017-07-08T15:37:13Z"
+ content="""
+Hi Joey, thank you for answering. I don't understand why alternative b won't work. Doesn't the add after the unlock revert to locked state?
+"""]]

Please compile an Intel 64 bit package for linux.
diff --git a/doc/bugs/no_prebuilt_package_for_intel_64_architecture.mdwn b/doc/bugs/no_prebuilt_package_for_intel_64_architecture.mdwn
new file mode 100644
index 0000000..e3caf60
--- /dev/null
+++ b/doc/bugs/no_prebuilt_package_for_intel_64_architecture.mdwn
@@ -0,0 +1,22 @@
+### Please describe the problem.
+On the download page for linux, https://downloads.kitenet.net/git-annex/linux/current/ (from http://git-annex.branchable.com/install/) there are links to amd64 and Intel 386 versions, but there is no package for an Intel 64 bit version.
+
+### What steps will reproduce the problem?
+Visit the site.
+
+### What version of git-annex are you using? On what operating system?
+
+
+### Please provide any additional information below.
+
+[[!format sh """
+# If you can, paste a complete transcript of the problem occurring here.
+# If the problem is with the git-annex assistant, paste in .git/annex/daemon.log
+
+
+# End of transcript or log.
+"""]]
+
+### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
+
+

name update
diff --git a/doc/thanks.mdwn b/doc/thanks.mdwn
index 6d23292..01729bf 100644
--- a/doc/thanks.mdwn
+++ b/doc/thanks.mdwn
@@ -105,7 +105,7 @@ Wiberg, Sam Kleinman, Vincent Demeester, Tristan Helmich, Zero Art Radio,
 Bruno Bigras, Ævar Arnfjörð Bjarmason, Stanley Yamane, Christopher Browne,
 David Whittington, Fredrik Gustafsson, Peter Hogg, Tom Francart, Wouter
 Verhelst, Christian Savard, wundersolutions, Andreas Fuchs, Eric Kidd,
-Georg Lehner, Berin Martini, Stewart Wright, Bence Albertini, Stefan
+Georg Lehner, Berin Martini, Stewart Wright, Bence parhuzamos, Stefan
 Schmitt, Antoine Boegli, jscit, Christopher Kernahan, A Marshall, Jürgen
 Peters, Aaron Whitehouse, Jouni K Seppanen, Michael Albertson, Andreas
 Laas, Thomas Herok, Aurelien Gazagne, Bryan W Stitt, anonymous, Chris

update
diff --git a/doc/thanks/list b/doc/thanks/list
index 80dd6de..3873ec5 100644
--- a/doc/thanks/list
+++ b/doc/thanks/list
@@ -70,3 +70,4 @@ Lars Wallenborn,
 tj, 
 Carlos Pita, 
 Lee Hinman, 
+Lukas Platz, 

Added a comment
diff --git a/doc/todo/tracking_changes_to_metadata/comment_2_03a631beaa20a479f5016def7585d363._comment b/doc/todo/tracking_changes_to_metadata/comment_2_03a631beaa20a479f5016def7585d363._comment
new file mode 100644
index 0000000..96120ce
--- /dev/null
+++ b/doc/todo/tracking_changes_to_metadata/comment_2_03a631beaa20a479f5016def7585d363._comment
@@ -0,0 +1,12 @@
+[[!comment format=mdwn
+ username="glasserc"
+ avatar="http://cdn.libravatar.org/avatar/8e865c04033751520601e9c15e58ddc4"
+ subject="comment 2"
+ date="2017-07-02T02:56:19Z"
+ content="""
+I agree that a human-readable commit message is probably the wrong place to put it. I didn't know about `git annex log` but that sounds helpful. I wouldn't need it to be fast -- it would be for rare interactive use.
+
+I'd like to not just be able to see what changed with the metadata but store a description of why as well, which is why I thought of commit messages. Would your design support that?
+
+Thanks for your consideration.
+"""]]

Added a comment: v6 & manual annexation
diff --git a/doc/tips/largefiles/comment_9_24a5f5ef626109b788e3dfa055d49004._comment b/doc/tips/largefiles/comment_9_24a5f5ef626109b788e3dfa055d49004._comment
new file mode 100644
index 0000000..6930d9c
--- /dev/null
+++ b/doc/tips/largefiles/comment_9_24a5f5ef626109b788e3dfa055d49004._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ username="hoxu"
+ avatar="http://cdn.libravatar.org/avatar/95e33a0073f6c06477b3a202f0301dde"
+ subject="v6 & manual annexation"
+ date="2017-06-29T07:25:31Z"
+ content="""
+With v6, is there any way to retain the old usage of `git add` and `git annex add` to manually choose which files are kept in plain git and which are annexed?
+
+I'm aware of the `-c annex.largefiles=foo` parameter, but that's pretty cumbersome.
+"""]]

diff --git a/doc/forum/Create_lightweight_checkouts_on_the_same_filesystem.mdwn b/doc/forum/Create_lightweight_checkouts_on_the_same_filesystem.mdwn
new file mode 100644
index 0000000..162185f
--- /dev/null
+++ b/doc/forum/Create_lightweight_checkouts_on_the_same_filesystem.mdwn
@@ -0,0 +1,7 @@
+I'm sorry if this has been answered before, I did my best searching and couldn't find a fitting solution.
+
+I envision the following setup. There is one central git-annex repository where all modifications are to be performed. At the same time, I want to create lightweight clones of that repository on the same machine, the same filesystem, that would contain all git metadata (so that I can navigate the history inside the child repository), but would reuse binary objects from the parent repository. The child repositories can be read-only, I don't plan to use them for anything else but checking out the specific version of the parent repository.
+
+I found out about the --shared flag and it seemed like exactly what I needed. However, after cloning the parent repository with --shared, the symlinks in the child repository still pointed to nowhere. After I ran `git annex sync --content`, the binary files were copied into the child repository's .git/ directory.
+
+Is it possible to achieve what I want? Thanks in advance!

Added a comment
diff --git a/doc/forum/git-annex_6.20170520_doesn__39__t_build_with_QuickCheck_2.10/comment_1_0e25ab25c0fa3d41336e6b7551ef9e47._comment b/doc/forum/git-annex_6.20170520_doesn__39__t_build_with_QuickCheck_2.10/comment_1_0e25ab25c0fa3d41336e6b7551ef9e47._comment
new file mode 100644
index 0000000..00aa8e5
--- /dev/null
+++ b/doc/forum/git-annex_6.20170520_doesn__39__t_build_with_QuickCheck_2.10/comment_1_0e25ab25c0fa3d41336e6b7551ef9e47._comment
@@ -0,0 +1,11 @@
+[[!comment format=mdwn
+ username="https://launchpad.net/~felixonmars"
+ nickname="felixonmars"
+ avatar="http://cdn.libravatar.org/avatar/17284a3bb2e4ad9d3be8fab31f49865be9c1dc22143c728de731fe800a335d38"
+ subject="comment 1"
+ date="2017-06-28T17:45:48Z"
+ content="""
+This is the same as https://git-annex.branchable.com/bugs/Build_failure__58___Utility__47__QuickCheck.hs__58__38__58__10__58___error__58___Duplicate_instance_declarations/
+
+Somehow I missed it. Sorry for the duplicated report!
+"""]]

diff --git a/doc/forum/git-annex_6.20170520_doesn__39__t_build_with_QuickCheck_2.10.mdwn b/doc/forum/git-annex_6.20170520_doesn__39__t_build_with_QuickCheck_2.10.mdwn
new file mode 100644
index 0000000..ce19721
--- /dev/null
+++ b/doc/forum/git-annex_6.20170520_doesn__39__t_build_with_QuickCheck_2.10.mdwn
@@ -0,0 +1,8 @@
+    [  9 of 561] Compiling Utility.QuickCheck ( Utility/QuickCheck.hs, dist/build/git-annex/git-annex-tmp/Utility/QuickCheck.dyn_o )
+    
+    Utility/QuickCheck.hs:38:10: error:
+        Duplicate instance declarations:
+          instance Arbitrary EpochTime
+            -- Defined at Utility/QuickCheck.hs:38:10
+          instance [safe] Arbitrary Foreign.C.Types.CTime
+            -- Defined in ‘Test.QuickCheck.Arbitrary’

Added a comment
diff --git a/doc/forum/git-repair_doesn__39__t_build_with_GHC_8.0.2/comment_2_bb490f71aed0cfe119beac66cecae99c._comment b/doc/forum/git-repair_doesn__39__t_build_with_GHC_8.0.2/comment_2_bb490f71aed0cfe119beac66cecae99c._comment
new file mode 100644
index 0000000..b9cb6a9
--- /dev/null
+++ b/doc/forum/git-repair_doesn__39__t_build_with_GHC_8.0.2/comment_2_bb490f71aed0cfe119beac66cecae99c._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ username="https://launchpad.net/~felixonmars"
+ nickname="felixonmars"
+ avatar="http://cdn.libravatar.org/avatar/17284a3bb2e4ad9d3be8fab31f49865be9c1dc22143c728de731fe800a335d38"
+ subject="comment 2"
+ date="2017-06-27T15:57:40Z"
+ content="""
+Works nice here. Thanks a lot!
+"""]]

Added a comment: git-annex support dropped
diff --git a/doc/tips/centralized_git_repository_tutorial/comment_5_c6e3468c95bc875e17724ee4a3a283a3._comment b/doc/tips/centralized_git_repository_tutorial/comment_5_c6e3468c95bc875e17724ee4a3a283a3._comment
new file mode 100644
index 0000000..d9a62cc
--- /dev/null
+++ b/doc/tips/centralized_git_repository_tutorial/comment_5_c6e3468c95bc875e17724ee4a3a283a3._comment
@@ -0,0 +1,11 @@
+[[!comment format=mdwn
+ username="spalax@b201acef21dca7798b874036bbbaa9e0079a0b7e"
+ nickname="spalax"
+ avatar="http://cdn.libravatar.org/avatar/3f1353e4135221fc25bfecd1b812bcc8"
+ subject="git-annex support dropped"
+ date="2017-06-27T09:26:11Z"
+ content="""
+For information, gitlab is going to drop support for git-annex (see issue [#1648](https://gitlab.com/gitlab-org/gitlab-ee/issues/1648)).
+
+-- Louis
+"""]]

diff --git a/doc/forum/finding_out_why_git_annex_import_failed.mdwn b/doc/forum/finding_out_why_git_annex_import_failed.mdwn
new file mode 100644
index 0000000..664cf85
--- /dev/null
+++ b/doc/forum/finding_out_why_git_annex_import_failed.mdwn
@@ -0,0 +1,43 @@
+Hi,
+
+I keep importing data to my git annex, which gets bigger.
+
+Today, while importing an old archive (with lots of files), git annex import failed with this unhelpful message:
+
+    $ git annex import /Users/public/Documents
+    import Documents/.DS_Store
+      not importing Documents/.DS_Store which is .gitignored (use --force to override)
+    failed
+    ...
+    import Documents/Recettes/recettes nicoises/20130511_201509.jpg ok
+    (recording state in git...)
+    git-annex: import: 10 failed
+    CallStack (from HasCallStack):
+      error, called at ./CmdLine/Action.hs:41:28 in main:CmdLine.Action
+
+It seems no data was lost: I could finish the import with "git commit" (and diff -R against the backup shows no difference). But I would like to know what happened.
+Also, what does the "10" mean in this context, and how can I get more information?
+
+Note that I might have a real problem on this system:
+
+* many filenames have extended characters
+* I'm not sure the mac OS locale setup is OK
+* I'm quite low on disk space and inodes:
+
+      $ df
+      Filesystem    512-blocks       Used Available Capacity   iused   ifree %iused  Mounted on
+      /dev/disk1    1462744832 1441994016  20238816    99% 180313250 2529852   99%   /
+
+The version is
+
+    $ git annex version
+    git-annex version: 6.20161111
+    build flags: Assistant Webapp Pairing Testsuite S3(multipartupload)(storageclasses) WebDAV FsEvents XMPP ConcurrentOutput TorrentParser MagicMime Feeds Quvi
+    key/value backends: SHA256E SHA256 SHA512E SHA512 SHA224E SHA224 SHA384E SHA384 SHA3_256E SHA3_256 SHA3_512E SHA3_512 SHA3_224E SHA3_224 SHA3_384E SHA3_384 SKEIN256E SKEIN256 SKEIN512E SKEIN512 SHA1E SHA1 MD5E MD5 WORM URL
+    remote types: git gcrypt S3 bup directory rsync web bittorrent webdav tahoe glacier ddar hook external
+    local repository version: 5
+    supported repository versions: 3 5 6
+    upgrade supported from repository versions: 0 1 2 3 4 5
+    operating system: darwin x86_64
+
+Thank you!

fsck: Support --json.
One use case is to get a list of files that fsck fails on, in order to eg,
drop them from a remote.
This commit was sponsored by Nick Daly on Patreon.
diff --git a/CHANGELOG b/CHANGELOG
index 9dbb235..20d277a 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -1,6 +1,7 @@
 git-annex (6.20170521) UNRELEASED; urgency=medium
 
   * Fix build with QuickCheck 2.10.
+  * fsck: Support --json.
 
  -- Joey Hess <id@joeyh.name>  Sat, 17 Jun 2017 13:02:24 -0400
 
diff --git a/Command/Fsck.hs b/Command/Fsck.hs
index 3dfb45e..e38a108 100644
--- a/Command/Fsck.hs
+++ b/Command/Fsck.hs
@@ -42,7 +42,7 @@ import Data.Time.Clock.POSIX
 import System.Posix.Types (EpochTime)
 
 cmd :: Command
-cmd = withGlobalOptions (jobsOption : annexedMatchingOptions) $
+cmd = withGlobalOptions (jobsOption : jsonOption : annexedMatchingOptions) $
 	command "fsck" SectionMaintenance
 		"find and fix problems"
 		paramPaths (seek <$$> optParser)
diff --git a/doc/bugs/Delete_data__47__update_location_log_when_a_special_remote_fails_to_fsck/comment_1_203ebe6fa1bb8d3c6e7c0b948fc7dd6b._comment b/doc/bugs/Delete_data__47__update_location_log_when_a_special_remote_fails_to_fsck/comment_1_203ebe6fa1bb8d3c6e7c0b948fc7dd6b._comment
new file mode 100644
index 0000000..60fd9b8
--- /dev/null
+++ b/doc/bugs/Delete_data__47__update_location_log_when_a_special_remote_fails_to_fsck/comment_1_203ebe6fa1bb8d3c6e7c0b948fc7dd6b._comment
@@ -0,0 +1,33 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 1"""
+ date="2017-06-26T17:14:56Z"
+ content="""
+`fsck --from remote` is supposed to update the location log when it
+determines that the remote does not contain the file.
+
+But in your case, the decryption failure appears to fsck as a transfer
+failure, which as you note can be transient. So it doesn't update the
+location log.
+
+It seems that what's needed is different errors to be returned when
+download fails, vs when download succeeds but decryption/verification fails.
+Then fsck could mark the file as not being present in the remote
+in the latter case.
+
+Although, that would leave the presumably corrupted encrypted data in the
+remote. (Unless fsck also tried to delete it.)
+
+Also, decryption can fail for other reasons, eg missing gpg keys,
+and in such a case, it would be bad for fsck to decide that the remote
+didn't contain any content! (And super bad for it to delete it from the
+remote!!)
+
+So hmm, I'm not sure about that idea.
+
+Your idea of getting a list of files that fsck failed to download
+is certainly useful. Perhaps a good way would be to make `git annex fsck
+--from remote --json` work, then the json output could be parsed to get a list of
+files, and you could use `git annex drop --from remote` to remove the bad
+data. That was the easiest possible thing, so I've made that change.
+"""]]
diff --git a/doc/git-annex-fsck.mdwn b/doc/git-annex-fsck.mdwn
index 2500ba9..a320bb8 100644
--- a/doc/git-annex-fsck.mdwn
+++ b/doc/git-annex-fsck.mdwn
@@ -93,6 +93,11 @@ With parameters, only the specified files are checked.
 
   Runs multiple fsck jobs in parallel. For example: `-J4`
 
+* `--json`
+
+  Enable JSON output. This is intended to be parsed by programs that use
+  git-annex. Each line of output is a JSON object.
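+
+  For example, assuming the JSON objects include `file` and `success`
+  fields and that `jq` is available, a list of files that failed fsck
+  on a remote could be extracted with:
+  `git annex fsck --from myremote --json | jq -r 'select(.success == false) | .file'`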
+
 # OPTIONS
 
 # SEE ALSO

followup
diff --git a/doc/forum/Problem_with_corrupt_SQLite_DB/comment_4_6a5f8590e6a60be5afe6bc6ffce1c94e._comment b/doc/forum/Problem_with_corrupt_SQLite_DB/comment_4_6a5f8590e6a60be5afe6bc6ffce1c94e._comment
index e1febc1..fd0a05f 100644
--- a/doc/forum/Problem_with_corrupt_SQLite_DB/comment_4_6a5f8590e6a60be5afe6bc6ffce1c94e._comment
+++ b/doc/forum/Problem_with_corrupt_SQLite_DB/comment_4_6a5f8590e6a60be5afe6bc6ffce1c94e._comment
@@ -10,4 +10,8 @@ safely.
 I've done some experimentation with deleting the sqlite database when there
 are unlocked files, and it seems that running `git annex fsck` manages to
 recover the deleted state and restores the database.
+
+Also, since git-annex uses sqlite in WAL mode, it may be possible to
+recover the database to the last good state by deleting
+`.git/annex/keys/db-wal`. You'd still want to run `git annex fsck`.
 """]]
diff --git a/doc/todo/tracking_changes_to_metadata/comment_1_825c15ba36324aad58faf643057b256a._comment b/doc/todo/tracking_changes_to_metadata/comment_1_825c15ba36324aad58faf643057b256a._comment
new file mode 100644
index 0000000..2942dbb
--- /dev/null
+++ b/doc/todo/tracking_changes_to_metadata/comment_1_825c15ba36324aad58faf643057b256a._comment
@@ -0,0 +1,28 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 1"""
+ date="2017-06-26T16:58:17Z"
+ content="""
+I don't see the benefit to having custom commit messages for metadata
+changes. The changes are there in machine-readable format, so why
+involve a human?
+
+I agree that it would be useful to have a way to look at the metadata
+history, much as `git annex log` looks at the location history.
+
+Indeed, a lot of `git annex log` could be reused; `getAllLog` and
+`getKeyLog` are the hard part and would be reusable for metadata logs.
+The result might be something like this, when run on a file "foo":
+
+	+ Thu, 22 Jun 2017 17:07:43 EST foo | author=foo
+	- Thu, 22 Jun 2017 17:07:43 EST foo | author=bar
+	+ Thu, 11 Jun 2017 11:11:11 EST foo | author=bar
+
+Note that git-annex log is necessarily slow when run on a lot of files,
+because it has to run a git command per file to get the log. `git-annex log
+--all` shows a fast stream of changes from newest first, but displays the
+git-annex key that was changed, not a filename. A version of `git annex
+log` for metadata would have these same limitations.
+
+Would this help with your use case?
+"""]]

response
diff --git a/doc/forum/Best_way_to_remove_old_version_of_specific_file/comment_1_1663dd9118cb1fee92ae3f8d812df79c._comment b/doc/forum/Best_way_to_remove_old_version_of_specific_file/comment_1_1663dd9118cb1fee92ae3f8d812df79c._comment
new file mode 100644
index 0000000..8e075df
--- /dev/null
+++ b/doc/forum/Best_way_to_remove_old_version_of_specific_file/comment_1_1663dd9118cb1fee92ae3f8d812df79c._comment
@@ -0,0 +1,17 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 1"""
+ date="2017-06-26T16:52:25Z"
+ content="""
+Your alternative B does not actually remove the old version of the file,
+because git-annex unlock keeps a copy in .git/annex/objects
+(except when using v6 mode with annex.thin).
+
+Alternative A is fine, if you just want to remove the old version
+of the file from the local repository. Other clones of the repository
+may still have the content of that file though.
+
+I use alternative A fairly frequently in my own repositories, and then
+sometimes followup with `git annex unused` in clones and dropping all
+unused files.
+"""]]

followup
diff --git a/doc/forum/Problem_with_corrupt_SQLite_DB/comment_4_6a5f8590e6a60be5afe6bc6ffce1c94e._comment b/doc/forum/Problem_with_corrupt_SQLite_DB/comment_4_6a5f8590e6a60be5afe6bc6ffce1c94e._comment
new file mode 100644
index 0000000..e1febc1
--- /dev/null
+++ b/doc/forum/Problem_with_corrupt_SQLite_DB/comment_4_6a5f8590e6a60be5afe6bc6ffce1c94e._comment
@@ -0,0 +1,13 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 4"""
+ date="2017-06-26T16:20:29Z"
+ content="""
+In a v6 repository, the sqlite database is used to keep track of unlocked
+files. If all files are locked, the sqlite database can still be deleted
+safely.
+
+I've done some experimentation with deleting the sqlite database when there
+are unlocked files, and it seems that running `git annex fsck` manages to
+recover the deleted state and restores the database.
+"""]]

followup
diff --git a/doc/forum/git-repair_doesn__39__t_build_with_GHC_8.0.2/comment_1_0a6ea834f6515b888e5c40fef08f4e2d._comment b/doc/forum/git-repair_doesn__39__t_build_with_GHC_8.0.2/comment_1_0a6ea834f6515b888e5c40fef08f4e2d._comment
new file mode 100644
index 0000000..873e473
--- /dev/null
+++ b/doc/forum/git-repair_doesn__39__t_build_with_GHC_8.0.2/comment_1_0a6ea834f6515b888e5c40fef08f4e2d._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 1"""
+ date="2017-06-26T16:08:20Z"
+ content="""
+AFAIK, this was fixed already in git-repair's master branch.
+
+I've released a new git-repair with the fixes.
+"""]]

diff --git a/doc/bugs/Encoding_error_in_webapp.mdwn b/doc/bugs/Encoding_error_in_webapp.mdwn
new file mode 100644
index 0000000..ee57e8b
--- /dev/null
+++ b/doc/bugs/Encoding_error_in_webapp.mdwn
@@ -0,0 +1,24 @@
+### Please describe the problem.
+
+In the webapp, file names with UTF-8 characters are displayed correctly when they are in the queue but not when they are being transferred.
+
+
+### What steps will reproduce the problem?
+
+- Create a file with accented characters (éèà...) in their path
+- Run the webapp :)
+
+
+### What version of git-annex are you using? On what operating system?
+
+git-annex version: 6.20170612-ge4100fd60e (on Arch Linux)
+
+
+### Please provide any additional information below.
+
+![Screenshot](http://i.imgur.com/xHgdVKc.png)
+
+
+### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
+
+Oh yes! I love git-annex :) I've written the hubiC special remote for git-annex, the zsh completion, contributed to the crowdfunding campaigns, and I'm a supporter on Patreon :)

Added a comment
diff --git a/doc/bugs/sync_claims_data_loss_but_seems_to_just_lose_tracking/comment_4_a6074b68754b0d773385a1e406043952._comment b/doc/bugs/sync_claims_data_loss_but_seems_to_just_lose_tracking/comment_4_a6074b68754b0d773385a1e406043952._comment
new file mode 100644
index 0000000..9eb06cf
--- /dev/null
+++ b/doc/bugs/sync_claims_data_loss_but_seems_to_just_lose_tracking/comment_4_a6074b68754b0d773385a1e406043952._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ username="olaf"
+ avatar="http://cdn.libravatar.org/avatar/4ae498d3d6ee558d6b65caa658f72572"
+ subject="comment 4"
+ date="2017-06-22T04:55:35Z"
+ content="""
+So, you're suggesting manually running `git annex sync --content`?  I can `cron` that up, so I can live with it.
+
+Do you need any more info to identify the issue you found with the assistant?
+"""]]

Added a comment
diff --git a/doc/bugs/MacOSX__58___archive_folders_not_working_as_expected/comment_4_45c1e00e64f916ef8a5f9200144861eb._comment b/doc/bugs/MacOSX__58___archive_folders_not_working_as_expected/comment_4_45c1e00e64f916ef8a5f9200144861eb._comment
new file mode 100644
index 0000000..4d33e50
--- /dev/null
+++ b/doc/bugs/MacOSX__58___archive_folders_not_working_as_expected/comment_4_45c1e00e64f916ef8a5f9200144861eb._comment
@@ -0,0 +1,37 @@
+[[!comment format=mdwn
+ username="olaf"
+ avatar="http://cdn.libravatar.org/avatar/4ae498d3d6ee558d6b65caa658f72572"
+ subject="comment 4"
+ date="2017-06-22T04:50:13Z"
+ content="""
+I can confirm this is still a problem.
+
+## OS:
+macOS Sierra, version 10.12.5 (16F73)
+
+## git-annex versions:
+
+### OSX DMG install
+```
+Version: 6.20170611-gb493ac8
+Build flags: Assistant Webapp Pairing Testsuite S3(multipartupload)(storageclasses) WebDAV FsEvents ConcurrentOutput TorrentParser MagicMime Feeds Quvi Dependency versions: aws-0.16 bloomfilter-2.0.1.0 cryptonite-0.21 DAV-1.3.1 feed-0.3.12.0 ghc-8.0.2 http-client-0.5.6.1 persistent-sqlite-2.6.2 torrent-10000.1.1 uuid-1.3.13 yesod-1.4.5
+```
+
+### Homebrew install
+```
+git-annex version: 6.20170521
+build flags: Assistant Webapp Pairing Testsuite S3(multipartupload)(storageclasses) WebDAV FsEvents ConcurrentOutput TorrentParser MagicMime Feeds Quvi
+dependency versions: aws-0.16 bloomfilter-2.0.1.0 cryptonite-0.23 DAV-1.3.1 feed-0.3.12.0 ghc-8.0.2 http-client-0.5.7.0 persistent-sqlite-2.6.2 torrent-10000.1.1 uuid-1.3.13 yesod-1.4.5
+key/value backends: SHA256E SHA256 SHA512E SHA512 SHA224E SHA224 SHA384E SHA384 SHA3_256E SHA3_256 SHA3_512E SHA3_512 SHA3_224E SHA3_224 SHA3_384E SHA3_384 SKEIN256E SKEIN256 SKEIN512E SKEIN512 SHA1E SHA1 MD5E MD5 WORM URL
+remote types: git gcrypt p2p S3 bup directory rsync web bittorrent webdav tahoe glacier ddar hook external
+local repository version: 5
+supported repository versions: 3 5 6
+upgrade supported from repository versions: 0 1 2 3 4 5
+operating system: darwin x86_64
+```
+
+
+## Question
+
+@nanotech - can you elaborate on your recompile with Kqueue comment above?
+"""]]

diff --git a/doc/forum/git-repair_doesn__39__t_build_with_GHC_8.0.2.mdwn b/doc/forum/git-repair_doesn__39__t_build_with_GHC_8.0.2.mdwn
new file mode 100644
index 0000000..bd6ae58
--- /dev/null
+++ b/doc/forum/git-repair_doesn__39__t_build_with_GHC_8.0.2.mdwn
@@ -0,0 +1,10 @@
+This is probably related to directory 1.3.
+
+    Common.hs:3:16: error:
+        Conflicting exports for ‘getFileSize’:
+           ‘module X’ exports ‘X.getFileSize’
+             imported from ‘Utility.FileSize’ at Common.hs:32:1-28
+             (and originally defined at Utility/FileSize.hs:26:1-11)
+           ‘module X’ exports ‘X.getFileSize’
+             imported from ‘Utility.Directory’ at Common.hs:33:1-29
+             (and originally defined in ‘System.Directory’)

removed
diff --git a/doc/forum/__34__git_add__34___vs___34__git_annex_add__34___in_v6/comment_2_5431413cb3332b56f769e235daa9c5c4._comment b/doc/forum/__34__git_add__34___vs___34__git_annex_add__34___in_v6/comment_2_5431413cb3332b56f769e235daa9c5c4._comment
deleted file mode 100644
index bda6895..0000000
--- a/doc/forum/__34__git_add__34___vs___34__git_annex_add__34___in_v6/comment_2_5431413cb3332b56f769e235daa9c5c4._comment
+++ /dev/null
@@ -1,20 +0,0 @@
-[[!comment format=mdwn
- username="unqueued"
- avatar="http://cdn.libravatar.org/avatar/3bcbe0c9e9825637ad7efa70f458640d"
- subject="comment 2"
- date="2017-06-18T14:45:12Z"
- content="""
-I'm honestly concerned about this. It sounds like compatibility with people who don't have git annex installed will go from being on by default, to off by default.
-
-I really do not want my text files to be annexed by default. It would be great to at least have a much easier way to restore the old behavior.
-
-Also, it sounds like if I did the default behavior and annexed all of my files, I would not have to put a lot more effort into thinking about where all of my files are. Right now, my smaller files are going to be cloned by default, and if I need larger files, I can get them.
-
-But now, I will have to select all of the files I want after each clone, it appears? Even if I specify what files I want with --get, it is still more work and effort than just having the old behavior.
-
-Basically, files tracked with git were \"pinned\", and files in the annex were optional. I understand that there are ways to get most of this functionality back, but, it would appear to still take more effort than the existing functionality.
-
-Finally, I really hope you reconsider putting git in the back seat. It is invaluable that others can interact with my repos without having to have git annex installed, and without my having to take any extra steps. I only started using git annex because it so painlessly integrated with git. I love being able to share my repos that contain annexed files with others who haven't installed git annex.
-
-Thanks for your work joey. I understand the technical reasons you did this. I hope you'll reconsider this.
-"""]]

Added a comment
diff --git a/doc/forum/__34__git_add__34___vs___34__git_annex_add__34___in_v6/comment_2_5431413cb3332b56f769e235daa9c5c4._comment b/doc/forum/__34__git_add__34___vs___34__git_annex_add__34___in_v6/comment_2_5431413cb3332b56f769e235daa9c5c4._comment
new file mode 100644
index 0000000..bda6895
--- /dev/null
+++ b/doc/forum/__34__git_add__34___vs___34__git_annex_add__34___in_v6/comment_2_5431413cb3332b56f769e235daa9c5c4._comment
@@ -0,0 +1,20 @@
+[[!comment format=mdwn
+ username="unqueued"
+ avatar="http://cdn.libravatar.org/avatar/3bcbe0c9e9825637ad7efa70f458640d"
+ subject="comment 2"
+ date="2017-06-18T14:45:12Z"
+ content="""
+I'm honestly concerned about this. It sounds like compatibility with people who don't have git annex installed will go from being on by default, to off by default.
+
+I really do not want my text files to be annexed by default. It would be great to at least have a much easier way to restore the old behavior.
+
+Also, it sounds like if I did the default behavior and annexed all of my files, I would not have to put a lot more effort into thinking about where all of my files are. Right now, my smaller files are going to be cloned by default, and if I need larger files, I can get them.
+
+But now, I will have to select all of the files I want after each clone, it appears? Even if I specify what files I want with --get, it is still more work and effort than just having the old behavior.
+
+Basically, files tracked with git were \"pinned\", and files in the annex were optional. I understand that there are ways to get most of this functionality back, but, it would appear to still take more effort than the existing functionality.
+
+Finally, I really hope you reconsider putting git in the back seat. It is invaluable that others can interact with my repos without having to have git annex installed, and without my having to take any extra steps. I only started using git annex because it so painlessly integrated with git. I love being able to share my repos that contain annexed files with others who haven't installed git annex.
+
+Thanks for your work joey. I understand the technical reasons you did this. I hope you'll reconsider this.
+"""]]

remove reference to some old docker image of mine
diff --git a/doc/install/Docker.mdwn b/doc/install/Docker.mdwn
index dff4632..9f3f3bd 100644
--- a/doc/install/Docker.mdwn
+++ b/doc/install/Docker.mdwn
@@ -3,4 +3,4 @@ easy to add it to an image.
 
 For example:
 
-	docker run -i -t joeyh/debian-unstable apt-get install git-annex
+	docker run -i -t debian apt-get install git-annex

update, removing notes about by now quite old ubuntu releases
diff --git a/doc/install/Ubuntu.mdwn b/doc/install/Ubuntu.mdwn
index 1e44226..6c9d860 100644
--- a/doc/install/Ubuntu.mdwn
+++ b/doc/install/Ubuntu.mdwn
@@ -1,48 +1,7 @@
-## Saucy
-
-	sudo apt-get install git-annex
-
-Warning: The version of git-annex shipped in Ubuntu Saucy had
-[a bug that can cause problems when creating repositories using the webapp](http://git-annex.branchable.com/bugs/Freshly_initialized_repo_has_staged_change___34__deleted:_uuid.log__34__/).
-
-## Raring
+Ubuntu has a git-annex package, so to install simply run:
 	
 	sudo apt-get install git-annex
 
-Note: This version is too old to include the [[assistant]] or its WebApp,
-but is otherwise usable.
-
-## Precise
-
-	sudo apt-get install git-annex
-
-Note: This version is too old to include the [[assistant]] or its WebApp,
-but is otherwise usable.
-
-## Precise PPA
-
-<https://launchpad.net/~fmarier/+archive/git-annex>
-
-A newer version of git-annex, including the [[assistant]] and WebApp.
-(Maintained by François Marier)
-
-	sudo add-apt-repository ppa:fmarier/git-annex
-	sudo apt-get update
-	sudo apt-get install git-annex
-
-If you don't have add-apt-repository installed run this command first:
-
-	sudo apt-get install software-properties-common python-software-properties
-
-
-## Oneiric
-
-	sudo apt-get install git-annex
-
-Warning: The version of git-annex shipped in Ubuntu Oneiric
-had [a bug that prevents upgrades from v1 git-annex repositories](https://bugs.launchpad.net/ubuntu/+source/git-annex/+bug/875958).
-If you need to upgrade such a repository, get a newer version of git-annex.
-
 ## backport
 
 If the version shipped with Ubuntu is too old, 

update for debian release
diff --git a/doc/install/Debian.mdwn b/doc/install/Debian.mdwn
index 736ac7d..2c3a486 100644
--- a/doc/install/Debian.mdwn
+++ b/doc/install/Debian.mdwn
@@ -1,6 +1,13 @@
 ## Debian testing or unstable
 
-Debian unstable and testing are usually fairly up to date, so this should be enough:
+Debian unstable and testing are usually fairly up to date,
+so this should be enough:
+
+	sudo apt install git-annex
+
+## Debian 9.0 "stretch"
+
+Debian's stable release contains git-annex version 6.20170101.
 
 	sudo apt install git-annex
 
@@ -11,30 +18,3 @@ the [NeuroDebian team](http://neuro.debian.net/) provides a
 [standalone build package](http://neuro.debian.net/pkgs/git-annex-standalone.html)
 that is regularly updated and that should work across all releases of
 Debian.
-
-## Debian 8.0 "jessie"
-
-	sudo apt install git-annex
-
-There is also a backport for jessie, even though it is [often out of date](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=760787).
-
-Follow the instructions to [enable backports](http://backports.debian.org/Instructions/).
-
-	sudo apt -t jessie-backports install git-annex
-
-## Debian 7.0 "wheezy":
-
-	sudo apt-get install git-annex
-
-Note: This version does not include support for the [[assistant]].
-A backport is available with the assistant and other new features.
-
-Follow the instructions to [enable backports](http://backports.debian.org/Instructions/).
-
-	sudo apt-get -t wheezy-backports install git-annex
-
-## Debian 6.0 "squeeze"
-
-Follow the instructions to [enable backports](http://backports.debian.org/Instructions/).
-
-	sudo apt-get -t squeeze-backports install git-annex

close
diff --git a/doc/bugs/Build_failure__58___Utility__47__QuickCheck.hs__58__38__58__10__58___error__58___Duplicate_instance_declarations.mdwn b/doc/bugs/Build_failure__58___Utility__47__QuickCheck.hs__58__38__58__10__58___error__58___Duplicate_instance_declarations.mdwn
index 5d97f92..400b7ae 100644
--- a/doc/bugs/Build_failure__58___Utility__47__QuickCheck.hs__58__38__58__10__58___error__58___Duplicate_instance_declarations.mdwn
+++ b/doc/bugs/Build_failure__58___Utility__47__QuickCheck.hs__58__38__58__10__58___error__58___Duplicate_instance_declarations.mdwn
@@ -37,3 +37,5 @@ ExitFailure 1
 
 Yes, thanks!
 
+> Fixed in [[!commit 75cecbbe3fdafdb6652e95ac17cd755c28e67f20]] [[done]]
+> --[[Joey]]

diff --git a/doc/bugs/Build_failure__58___Utility__47__QuickCheck.hs__58__38__58__10__58___error__58___Duplicate_instance_declarations.mdwn b/doc/bugs/Build_failure__58___Utility__47__QuickCheck.hs__58__38__58__10__58___error__58___Duplicate_instance_declarations.mdwn
new file mode 100644
index 0000000..5d97f92
--- /dev/null
+++ b/doc/bugs/Build_failure__58___Utility__47__QuickCheck.hs__58__38__58__10__58___error__58___Duplicate_instance_declarations.mdwn
@@ -0,0 +1,39 @@
+### Please describe the problem.
+git-annex-6.20170520 no longer builds successfully. My last successful build of git-annex-6.20170520 was Jun 12, 2017, but something has probably changed in at least one of the dependencies on Hackage since then.
+
+The build fails on all three Homebrew CI nodes (macOS 10.10, 10.11, and 10.12).
+
+A full log is here: <https://jenkins.brew.sh/job/Homebrew%20Core%20Pull%20Requests/3175/version=sierra/consoleText>
+
+A duplicate copy, since that log will eventually be deleted: <https://gist.github.com/ilovezfs/e93d135243d03567444eb10be8902a95>
+
+### What steps will reproduce the problem?
+Attempt to build git-annex in a cabal sandbox using cabal install.
+
+
+### What version of git-annex are you using? On what operating system?
+git-annex-6.20170520
+
+### Please provide any additional information below.
+
+The build error is
+
+[[!format sh """
+[  8 of 559] Compiling Utility.QuickCheck ( Utility/QuickCheck.hs, dist/dist-sandbox-758c2984/build/git-annex/git-annex-tmp/Utility/QuickCheck.o )
+
+Utility/QuickCheck.hs:38:10: error:
+    Duplicate instance declarations:
+      instance Arbitrary EpochTime
+        -- Defined at Utility/QuickCheck.hs:38:10
+      instance [safe] Arbitrary Foreign.C.Types.CTime
+        -- Defined in ‘Test.QuickCheck.Arbitrary’
+cabal: Leaving directory '.'
+cabal: Error: some packages failed to install:
+git-annex-6.20170520 failed during the building phase. The exception was:
+ExitFailure 1
+"""]]
+
+### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
+
+Yes, thanks!
+

Added a comment
diff --git a/doc/forum/Problem_with_corrupt_SQLite_DB/comment_3_079359513b6b9fcf0030f49df0eaf64d._comment b/doc/forum/Problem_with_corrupt_SQLite_DB/comment_3_079359513b6b9fcf0030f49df0eaf64d._comment
new file mode 100644
index 0000000..1bc48c8
--- /dev/null
+++ b/doc/forum/Problem_with_corrupt_SQLite_DB/comment_3_079359513b6b9fcf0030f49df0eaf64d._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="avar"
+ avatar="http://cdn.libravatar.org/avatar/57332d67a86eb51e06bf78d2baf42c3c"
+ subject="comment 3"
+ date="2017-06-16T20:37:47Z"
+ content="""
+What about if you are in v6 mode? I just read the wiki about that version upgrade, sounds exciting, but I got this with <v6 and I'm definitely not switching over if some sqlite error can hose my entire checkout. How do you actually recover from this when sqlite is a hard dependency?
+"""]]

diff --git a/doc/forum/Best_way_to_remove_old_version_of_specific_file.mdwn b/doc/forum/Best_way_to_remove_old_version_of_specific_file.mdwn
new file mode 100644
index 0000000..4727c75
--- /dev/null
+++ b/doc/forum/Best_way_to_remove_old_version_of_specific_file.mdwn
@@ -0,0 +1,24 @@
+Hi,
+
+I store lots of books under annex and every once in a while one of them gets updated and I don't really want to keep the old version. I know about the `unused` subcommand, but it makes it a bit difficult to identify what I would be dropping. I mean, at the point in time when I'm replacing a file I already know I don't want to keep the old version in the annex, so maybe there is a method that refers directly to that file instead of globally garbage collecting obscurely named files. Now, this is what comes to my mind:
+
+Alternative a
+
+1. git-annex drop f
+2. git rm f
+3. mv new-f f
+4. git-annex add f
+
+Alternative b
+
+1. git-annex unlock f
+2. mv new-f f
+3. git-annex add f
+4. git commit -am blah
+5. git checkout HEAD^
+6. git-annex drop f
+7. git checkout master
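+
+Alternative c (an untested sketch; I'm assuming the plumbing commands
+`lookupkey` and `dropkey` behave the way their man pages suggest):
+
+    # remember the key of the version being replaced
+    oldkey=$(git annex lookupkey f)
+    mv new-f f
+    git annex add f
+    git commit -m 'replace f'
+    # drop the old version's content from the local annex by key
+    git annex dropkey --force "$oldkey"
+
+Note that this only removes the local copy of the old version; any remotes
+that already received it would still need cleaning up separately.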
+
+As you can see, none of these options is that pretty. Is there any way to say 'drop f in revision r' or something similar?
+
+Thank you!

Added a comment
diff --git a/doc/bugs/add_fails_with_v6_repo_when_four_levels_deep/comment_11_499cdb675327aaed59877226f77b7077._comment b/doc/bugs/add_fails_with_v6_repo_when_four_levels_deep/comment_11_499cdb675327aaed59877226f77b7077._comment
new file mode 100644
index 0000000..a92d882
--- /dev/null
+++ b/doc/bugs/add_fails_with_v6_repo_when_four_levels_deep/comment_11_499cdb675327aaed59877226f77b7077._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="t.z.mates"
+ avatar="http://cdn.libravatar.org/avatar/90f15fad216078fd08d62cc676487925"
+ subject="comment 11"
+ date="2017-06-15T23:07:48Z"
+ content="""
+More specifically, I noticed it first on version 6.20170519-gc6079c3ce8 (it's possible it was one or two commits before that, but I pull updates daily, so the change occurred at most a day before).
+"""]]

Added a comment: Error messages changed
diff --git a/doc/bugs/add_fails_with_v6_repo_when_four_levels_deep/comment_10_43aa4935ee42abc90546d166042d379b._comment b/doc/bugs/add_fails_with_v6_repo_when_four_levels_deep/comment_10_43aa4935ee42abc90546d166042d379b._comment
new file mode 100644
index 0000000..3bf8896
--- /dev/null
+++ b/doc/bugs/add_fails_with_v6_repo_when_four_levels_deep/comment_10_43aa4935ee42abc90546d166042d379b._comment
@@ -0,0 +1,16 @@
+[[!comment format=mdwn
+ username="t.z.mates"
+ avatar="http://cdn.libravatar.org/avatar/90f15fad216078fd08d62cc676487925"
+ subject="Error messages changed"
+ date="2017-06-15T22:59:42Z"
+ content="""
+Just a quick follow-up; it seems that one of the recent updates (I believe the version published on 11-June-2017) modified the error messages shown. Running the same commands as above, I no longer receive a string of `fatal: '../1/2/3/4/foo' is outside repository` followed by `git-annex: git check-attr EOF: user error`. Instead, it shows `git-annex: 1/2/3/4/foo: getFileStatus: does not exist (No such file or directory)`, followed by the rest of the output I originally posted. That is, the output is now:
+
+    git-annex: 1/2/3/4/foo: getFileStatus: does not exist (No such file or directory)
+    git-annex: smudge: 1 failed
+    error: external filter 'git-annex smudge --clean %f' failed 1
+    error: external filter 'git-annex smudge --clean %f' failed
+    add foo ok
+
+I don't know what this represents, but I find it interesting that something was changed recently.
+"""]]

A wishlist item I would like
diff --git a/doc/todo/tracking_changes_to_metadata.mdwn b/doc/todo/tracking_changes_to_metadata.mdwn
new file mode 100644
index 0000000..ed961c0
--- /dev/null
+++ b/doc/todo/tracking_changes_to_metadata.mdwn
@@ -0,0 +1,12 @@
+I use git-annex to store my music collection, which includes many albums I've ripped but don't like, as well as many albums I do like to listen to regularly but can't put on all my devices because of storage space limits. Previously I've used a `Now Playing` directory to track this, with copies of all the symlinks, but when I edit files (to correct file tags or whatever), things can get out of sync, so now I'm trying to use metadata instead. I was thinking of adding a `NowPlaying=machinename` scheme so that on each machine I can run `git annex get --metadata NowPlaying=machinename`.
+
+While migrating to this scheme, I've discovered a few places where a file wasn't in `Now Playing` when I expected it to be. A good example is that some songs exist both on an artist's album and on a compilation album, and I've `drop`ped the track from the compilation album because I already had it on the artist's album.
+
+This has made me think that it would be nice to track changes to metadata, the same way we do with git commits on the files themselves. Right now this is almost possible using the `git-annex` branch. However:
+
+1. Every change to metadata just has the commit message "update". It would be nice if I could pass a commit message using something like `git annex metadata -s 'NowPlaying-=machinename' -m "This already exists on Artist - Album"`.
+2. Looking at the metadata history for a given file requires dereferencing the git-annex symlink and knowing some git-annex internals (a rough sketch of what that currently involves is just below). It would be nice if there were a `git annex metadata log <filename>` command.
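+
+The closest I've found so far is digging into the `git-annex` branch by hand,
+roughly like this (a sketch, assuming the metadata for a key ends up in a
+`<key>.log.met` file on that branch; the file name is just an example):
+
+    key=$(git annex lookupkey somefile.flac)
+    git log -p git-annex -- "*$key.log.met"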
+
+From reading the documentation about [[tips/metadata_driven_views]], it seems almost like #1 would be possible using views and doing a `git commit` myself, but that still just generates an "update" message on the `git-annex` branch. Even if it did work, using `git annex view 'NowPlaying=*'` excludes all files that aren't tagged with any machine, which makes adding new files harder.
+
+What do you think? Is this abuse of the `git-annex` branch? Would this interfere with [[design/caching_database]]?

diff --git a/doc/bugs/Delete_data__47__update_location_log_when_a_special_remote_fails_to_fsck.mdwn b/doc/bugs/Delete_data__47__update_location_log_when_a_special_remote_fails_to_fsck.mdwn
new file mode 100644
index 0000000..a7c2c77
--- /dev/null
+++ b/doc/bugs/Delete_data__47__update_location_log_when_a_special_remote_fails_to_fsck.mdwn
@@ -0,0 +1,72 @@
+### Please describe the problem.
+
+After all the work I did [here](https://git-annex.branchable.com/bugs/Hybrid_encryption_can__39__t_generate_the_right_key_after_moving_files/), I discovered that I have a Git Annex-encrypted special remote that has some corrupted data on it. I can run `git annex fsck --from=remotename` and have Git Annex complain about all the files that have failed to fsck. But Git Annex is not updating the location log. It still thinks the data is in the special remote, after fscking it and not getting the right data out.
+
+This makes sense if the failure was from a download corrupted in transit, but I think the files are actually corrupted in the remote. How do I make `git annex fsck` update the location log to say that files aren't there when they fail to fsck? Alternatively, how do I get a nice list of all the filenames that failed to fsck, so I can have a script drop them from the offending remote?
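+
+One workaround I'm considering (untested sketch; it assumes fsck's `--json`
+output includes a per-file `success` field, and that `jq` and GNU `xargs` are
+available):
+
+    git annex fsck --from=amazon --json \
+        | jq -r 'select(.success == false) | .file' \
+        | xargs -r -d '\n' git annex drop --from=amazon
+
+The drop may also need `--force` if numcopies would otherwise block it.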
+
+### What steps will reproduce the problem?
+
+1. Make a Git Annex encrypted special remote.
+
+2. Put some data in it.
+
+3. Modify the encrypted data, to corrupt it.
+
+4. Use `git annex fsck` on the file you broke.
+
+5. See that `git annex whereis` still thinks the file is in the special remote.
+
+### What version of git-annex are you using? On what operating system?
+
+This is Git Annex 6.20170408+gitg804f06baa-1~ndall+1 on Ubuntu 14.04.
+
+### Please provide any additional information below.
+
+[[!format sh """
+# If you can, paste a complete transcript of the problem occurring here.
+# If the problem is with the git-annex assistant, paste in .git/annex/daemon.log
+
+
+$ git annex whereis info.txt
+whereis info.txt (6 copies) 
+  	...
+        ...
+   	21a0c4ba-7255-4a9e-9baa-c638f7df68e5 -- gdrive [amazon]
+   	...
+   	...
+   	...
+ok
+$ git annex fsck --from=amazon info.txt
+...
+gpg: decryption failed: bad key
+2017/06/13 20:14:54 Local file system at /tmp/tmp.Cu2Dsk0jY3: Waiting for checks to finish
+2017/06/13 20:14:54 Local file system at /tmp/tmp.Cu2Dsk0jY3: Waiting for transfers to finish
+2017/06/13 20:14:54 Attempt 1/3 failed with 0 errors and: directory not found
+2017/06/13 20:14:54 Local file system at /tmp/tmp.Cu2Dsk0jY3: Waiting for checks to finish
+2017/06/13 20:14:54 Local file system at /tmp/tmp.Cu2Dsk0jY3: Waiting for transfers to finish
+2017/06/13 20:14:54 Attempt 2/3 failed with 0 errors and: directory not found
+2017/06/13 20:14:55 Local file system at /tmp/tmp.Cu2Dsk0jY3: Waiting for checks to finish
+2017/06/13 20:14:55 Local file system at /tmp/tmp.Cu2Dsk0jY3: Waiting for transfers to finish
+  failed to download file from remote
+2017/06/13 20:14:55 Attempt 3/3 failed with 0 errors and: directory not found
+2017/06/13 20:14:55 Failed to copy: directory not found
+git-annex: fsck: 1 failed
+$ git annex whereis info.txt
+whereis info.txt (6 copies) 
+  	...
+   	...
+   	21a0c4ba-7255-4a9e-9baa-c638f7df68e5 -- gdrive [amazon]
+   	...
+   	...
+   	...
+ok
+
+
+# End of transcript or log.
+"""]]
+
+### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
+
+Yeah, it works great! If not for it I would not have noticed this data corruption until it was too late.
+
+

add news item for git-annex 6.20170520
diff --git a/doc/news/version_6.20170301.mdwn b/doc/news/version_6.20170301.mdwn
deleted file mode 100644
index 0679e25..0000000
--- a/doc/news/version_6.20170301.mdwn
+++ /dev/null
@@ -1,4 +0,0 @@
-git-annex 6.20170301 released with [[!toggle text="these changes"]]
-[[!toggleable text="""
-   * No changes from 6.20170228; a new version number was needed due
-     to a problem with Hackage."""]]
\ No newline at end of file
diff --git a/doc/news/version_6.20170520.mdwn b/doc/news/version_6.20170520.mdwn
new file mode 100644
index 0000000..2eca934
--- /dev/null
+++ b/doc/news/version_6.20170520.mdwn
@@ -0,0 +1,32 @@
+git-annex 6.20170520 released with [[!toggle text="these changes"]]
+[[!toggleable text="""
+   * move --to=here moves from all reachable remotes to the local repository.
+   * initremote, enableremote: Support gpg subkeys suffixed with an
+     exclamation mark, which forces gpg to use a specific subkey.
+   * Improve progress display when watching file size, in cases where
+     a transfer does not resume.
+   * Fix transfer log file locking problem when running concurrent
+     transfers.
+   * Avoid concurrent git-config setting problem when running concurrent
+     threads.
+   * metadata: When setting metadata of a file that did not exist,
+     no error message was displayed, unlike getting metadata and most other
+     git-annex commands. Fixed this oversight.
+   * Added annex.resolvemerge configuration, which can be set to false to
+     disable the usual automatic merge conflict resolution done by git-annex
+     sync and the assistant.
+   * sync: Added --no-resolvemerge option.
+   * Avoid error about git-annex-shell not being found when
+     syncing with -J with a git remote where git-annex-shell is not
+     installed.
+   * Fix bug that prevented transfer locks from working when
+     run on SMB or other filesystem that does not support fcntl locks
+     and hard links.
+   * assistant: Merge changes from refs/remotes/foo/master into master.
+     Previously, only sync branches were merged. This makes regular git push
+     into a repository watched by the assistant auto-merge.
+   * Makefile: Install completions for the fish and zsh shells
+     when git-annex is built with optparse-applicative-0.14.
+   * assistant: Don't trust OSX FSEvents's eventFlagItemModified to be called
+     when the last writer of a file closes it; apparently that sometimes
+     does not happen, which prevented files from being quickly added."""]]
\ No newline at end of file

Added a comment: remote.log
diff --git a/doc/bugs/Strange_case_of_data_loss__44___possibly_linked_to_git-annex_with_encrypted_rsync_remote/comment_1_4038261a4d49694374c37ff029d3540f._comment b/doc/bugs/Strange_case_of_data_loss__44___possibly_linked_to_git-annex_with_encrypted_rsync_remote/comment_1_4038261a4d49694374c37ff029d3540f._comment
new file mode 100644
index 0000000..7c448d6
--- /dev/null
+++ b/doc/bugs/Strange_case_of_data_loss__44___possibly_linked_to_git-annex_with_encrypted_rsync_remote/comment_1_4038261a4d49694374c37ff029d3540f._comment
@@ -0,0 +1,20 @@
+[[!comment format=mdwn
+ username="user4"
+ avatar="http://cdn.libravatar.org/avatar/cd0825fd95c541c0a48c7138b59c240a"
+ subject="remote.log"
+ date="2017-06-11T19:53:27Z"
+ content="""
+Since the most likely culprit (after aliens) is the encrypted rsync remote, here is the remote.log. I've replaced
+sensitive information with variables in double curly braces:
+
+    {{uuid}} = cipher={{cipher}} cipherkeys={{keyid}} {{url}}= name={{name}} rsyncurl={{url}} type=rsync timestamp=1345706291.523459s
+    {{uuid}} = cipher={{cipher}} cipherkeys={{keyid}} {{url}}= name={{name}} rsyncurl={{url}} type=rsync timestamp=1482178826.391400069s
+    {{uuid}} = cipher={{cipher}} cipherkeys={{keyid}} {{url}}= name={{name}} rsyncurl={{url}} type=rsync timestamp=1482742412.058733547s
+    {{uuid}} = cipher={{cipher}} cipherkeys={{keyid}} {{url}}= name={{name}} rsyncurl={{url}} type=rsync timestamp=1482742686.603751605s
+    {{uuid}} cipher={{cipher}} cipherkeys={{keyid}} name={{name}} rsyncurl={{url}} type=rsync timestamp=1343199142.018003s
+    {{uuid}} cipher={{cipher}} cipherkeys={{keyid}} name={{name}} rsyncurl={{url}} type=rsync timestamp=1343199310.53916s
+    {{uuid}} cipher={{cipher}} cipherkeys={{keyid}} name={{name}} rsyncurl={{url}} type=rsync timestamp=1343486259.34504s
+
+Here {{uuid}} is the uuid of the encrypted rsync remote. It's odd that there are several lines for the same remote, and not all in the same format.
+Could this be a PEBKAC involving a misconfigured rsync remote?
+"""]]

diff --git a/doc/bugs/Strange_case_of_data_loss__44___possibly_linked_to_git-annex_with_encrypted_rsync_remote.mdwn b/doc/bugs/Strange_case_of_data_loss__44___possibly_linked_to_git-annex_with_encrypted_rsync_remote.mdwn
new file mode 100644
index 0000000..be0060c
--- /dev/null
+++ b/doc/bugs/Strange_case_of_data_loss__44___possibly_linked_to_git-annex_with_encrypted_rsync_remote.mdwn
@@ -0,0 +1,55 @@
+This is not really a proper bug report, but I thought I should post this here
+in case someone can find any sane, non-supernatural reason for a strange case
+of data loss I have experienced with git-annex.
+
+Some time ago I cloned a bunch of git-annex repos from an external drive (let's
+call it disk1) to a new computer (computer3). On one of my repos git-annex
+marked a bunch of files corrupt and moved them to .git/annex/bad. Oops, I
+thought, I must have a failing disk. Luckily I had offsite backups -- no less
+than two other external hard disks (disk2-3), each having a full copy of the
+repo in question. However, **both of these** had the same, corrupt files. The
+files have the correct size, but are filled with zeroes. Other files in the
+repo are fine, and so are other repos.
+
+I have been trying to wrap my head around this but I can't think of any way
+this could occur. Whatever caused the files to get corrupted in the first
+place, the corruption should have been picked up when copying the content to
+the external drives disk2 and disk3, right? I have to rule out NSA/MIB/aliens
+from messing with me because these files are not that valuable or sensitive.
+
+The files in question were added to git-annex back in 2012, so the trail is
+cold on this one. Naturally, I have no idea on how to reproduce this, nor can I
+reliably say that git-annex is to blame. I can gather some hints though. The
+files were all added on the same commit in 2012, but not all files from that
+commit are corrupted. The corrupted files have consecutive file names. The
+files were never modified since (except for the corruption), and the content
+*may* have been copied via an encrypted rsync transfer repository. I have
+always used git-annex on Arch Linux and in indirect mode. The files used the
+SHA-1 backend.
+
+All these files have a similar tracking log that looks something like this
+(uuids replaced with symbolic names):
+
+    1356690700.542152s 1 computer1			<- first added
+    1356691074.253815s 1 disk1				<- copied to disk1
+    1356719321.145126s 1 rsync				<- copied to rsync repo
+    1358070999.435676s 1 rsync				<- copied to rsync repo (again?)
+    1362166895.310332s 1 disk2				<- copied to disk2
+    1362906850.555869s 1 computer2 (dead)	<- copied to another computer
+    1364926664.362195s 0 computer1			<- dropped from computer1 as enough copies in disks
+    1374412057.409496s 0 computer2 (dead)	<- dropped from computer2, now dead
+    1445691595.764108s 1 disk3				<- copied to disk3
+    1445770764.165792s 0 rsync				<- dropped from rsync repo to save space
+    1482077052.217353646s 0 disk1			<- first noticed as corrupted on disk1
+    1482741278.318274404s 0 disk3			<- WTF, also corrupted on disk3
+    1482926246.268440532s 0 disk2			<- double-WTF, also corrupted on disk2
+
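+For what it's worth, I pulled the raw log lines above out of the `git-annex`
+branch with something along these lines (a sketch, assuming each key's
+location log lives in a `<key>.log` file on that branch):
+
+    key=$(git annex lookupkey the-file)
+    git log -p git-annex -- "*$key.log"
+
+`git annex log the-file` gives a friendlier view of the same history.
+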
+The only thing that strikes me as odd is the double entry for the rsync
+remote. The non-corrupted files from the same commit do not seem to have such a
+double entry.
+
+So my main question is, has there ever been a bug in git-annex that could have
+caused this behavior? Or is there any other realistic explanation for this? In
+case this is an existing bug, is there any other evidence I can gather?
+Needless to say, the lesson here is to run `git annex fsck` regularly even if
+you have offsite backups...

devblog
diff --git a/doc/devblog/day_461__shell_completions.mdwn b/doc/devblog/day_461__shell_completions.mdwn
new file mode 100644
index 0000000..e2bd652
--- /dev/null
+++ b/doc/devblog/day_461__shell_completions.mdwn
@@ -0,0 +1,8 @@
+A new version of optparse-applicative supports zsh and fish shell
+completions. Got that integrated into git-annex, although it will be a
+while until most builds are updated to that version of the library.
+Also, re-submitted my old patch from 2015 to make "git annex<tab>"
+always tab complete in bash.
+
+Enough other small fixes and improvements have accumulated that a release
+is due soon..

disable closingTracked on OSX
Don't trust OSX FSEvents's eventFlagItemModified to be called when the last
writer of a file closes it; apparently that sometimes does not happen,
which prevented files from being quickly added.
This commit was sponsored by John Peloquin on Patreon.
diff --git a/CHANGELOG b/CHANGELOG
index d84b3cb..305970f 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -27,6 +27,9 @@ git-annex (6.20170520) UNRELEASED; urgency=medium
     into a repository watched by the assistant auto-merge.
   * Makefile: Install completions for the fish and zsh shells
     when git-annex is built with optparse-applicative-0.14.
+  * assistant: Don't trust OSX FSEvents's eventFlagItemModified to be called
+    when the last writer of a file closes it; apparently that sometimes
+    does not happen, which prevented files from being quickly added.
 
  -- Joey Hess <id@joeyh.name>  Wed, 24 May 2017 14:03:40 -0400
 
diff --git a/Utility/DirWatcher.hs b/Utility/DirWatcher.hs
index bde7106..892841f 100644
--- a/Utility/DirWatcher.hs
+++ b/Utility/DirWatcher.hs
@@ -64,18 +64,18 @@ eventsCoalesce = error "eventsCoalesce not defined"
 {- With inotify, file closing is tracked to some extent, so an add event
  - will always be received for a file once its writer closes it, and
  - (typically) not before. This may mean multiple add events for the same file.
- - 
- - fsevents behaves similarly, although different event types are used for
- - creating and modification of the file.
  -
  - OTOH, with kqueue, add events will often be received while a file is
  - still being written to, and then no add event will be received once the
- - writer closes it. -}
+ - writer closes it.
+ - 
+ - fsevents sometimes behaves similarly, but has sometimes been 
+ - seen to behave like kqueue. -}
 closingTracked :: Bool
-#if (WITH_INOTIFY || WITH_FSEVENTS || WITH_WIN32NOTIFY)
+#if (WITH_INOTIFY || WITH_WIN32NOTIFY)
 closingTracked = True
 #else
-#if WITH_KQUEUE
+#if (WITH_KQUEUE || WITH_FSEVENTS)
 closingTracked = False
 #else
 closingTracked = error "closingTracked not defined"
diff --git a/doc/bugs/Git-annex_and_Microsoft_Office_files_on_OS_X/comment_1_0cf11096ceeb6cf93db5609a42a70641._comment b/doc/bugs/Git-annex_and_Microsoft_Office_files_on_OS_X/comment_1_0cf11096ceeb6cf93db5609a42a70641._comment
new file mode 100644
index 0000000..ae27704
--- /dev/null
+++ b/doc/bugs/Git-annex_and_Microsoft_Office_files_on_OS_X/comment_1_0cf11096ceeb6cf93db5609a42a70641._comment
@@ -0,0 +1,41 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 1"""
+ date="2017-06-09T17:52:00Z"
+ content="""
+This must have to do with the fsevents interface used on OSX.
+
+In Assistant.Threads.Committer.safeToAdd, when lsof detects
+a file is still open for write by some process, it cancels
+the add. This relies on events being received when files
+get closed (closingTracked).
+
+In Utility.DirWatcher.FSEvents.watchDir, when an event has
+eventFlagItemModified set, it treats that as a file add event.
+The intent is to emulate inotify's handling of file add events when
+files are closed.
+
+So, two theories:
+
+1. Perhaps eventFlagItemModified only gets set if the file
+   is actually modified. Ie, if MS office writes the file
+   and while it's being written another process opens it to read
+   it (perhaps to index the content), then if the other process
+   doesn't modify it, eventFlagItemModified is not set.
+
+2. Perhaps the way the assistant hard links/moves the file around
+   confuses the FSEvents handling. Perhaps there is an event with
+   eventFlagItemModified, but it's for the locked down file, or
+   something like that, so git-annex ignores it.
+
+In any case, I'm leaning toward thinking that closingTracked should
+not be True for FSEvents. This bug report seems to show, conclusively,
+that FSEvents does not have that property. If closingTracked was False,
+as it is for KQueue, the assistant would postpone adding the file,
+and keep retrying, around once per second, until it no longer had
+any writers, and then add it.
+
+So, I've made that change. I suspect it fixes the bug, but it would
+be pretty hard for me to test it. Could you please download tomorrow's
+daily build of git-annex for OSX, and see if it fixes the problem?
+"""]]
diff --git a/doc/forum/Git-annex_and_Microsoft_Office_files_on_OS_X/comment_1_71fdd5e7061d6deb357290057804cc27._comment b/doc/forum/Git-annex_and_Microsoft_Office_files_on_OS_X/comment_1_71fdd5e7061d6deb357290057804cc27._comment
new file mode 100644
index 0000000..3cd1c59
--- /dev/null
+++ b/doc/forum/Git-annex_and_Microsoft_Office_files_on_OS_X/comment_1_71fdd5e7061d6deb357290057804cc27._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 1"""
+ date="2017-06-09T17:51:35Z"
+ content="""
+A bug was opened about this:
+[[bugs/Git-annex_and_Microsoft_Office_files_on_OS_X]]
+"""]]

comment
diff --git a/doc/forum/Shared_repository_without_git-annex_on_central_server/comment_2_d1f803b68c3789854e483da1318d043d._comment b/doc/forum/Shared_repository_without_git-annex_on_central_server/comment_2_d1f803b68c3789854e483da1318d043d._comment
new file mode 100644
index 0000000..b8200fc
--- /dev/null
+++ b/doc/forum/Shared_repository_without_git-annex_on_central_server/comment_2_d1f803b68c3789854e483da1318d043d._comment
@@ -0,0 +1,17 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 2"""
+ date="2017-06-09T17:47:45Z"
+ content="""
+The gcrypt special remote will work without git-annex being installed
+on the server. The server only needs git and rsync. I think the
+tip is fairly clear about that when it says
+
+"While this will work without git-annex being installed on the server, it
+is recommended to have it installed."
+
+Without git-annex being available on the remote server, some things
+like `git annex notifychanges` and content locking during dropping won't
+work with that remote. The basics should work well enough to use it that
+way.
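+
+For reference, setting up such a remote from the client side looks roughly
+like this (the url and key id are placeholders; the tip has the full
+walkthrough):
+
+    git annex initremote encryptedrepo type=gcrypt \
+        gitrepo=ssh://my.server/path/to/encryptedrepo keyid=$mykey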
+"""]]

close as dup
diff --git a/doc/bugs/Out_of_memory_error_when_adding_large_files_to_v6_repository.mdwn b/doc/bugs/Out_of_memory_error_when_adding_large_files_to_v6_repository.mdwn
index ae5c0e6..568b2bf 100644
--- a/doc/bugs/Out_of_memory_error_when_adding_large_files_to_v6_repository.mdwn
+++ b/doc/bugs/Out_of_memory_error_when_adding_large_files_to_v6_repository.mdwn
@@ -53,3 +53,5 @@ I have no idea why it needs to do that, though.
 ### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
 
 I've been using git-annex v5 repositories without any issues, and with smaller files, repository v6 works great!
+
+> dup; [[done]] --[[Joey]]
diff --git a/doc/bugs/Out_of_memory_error_when_adding_large_files_to_v6_repository/comment_1_b9154c38406fca40c4c0edb716707c3a._comment b/doc/bugs/Out_of_memory_error_when_adding_large_files_to_v6_repository/comment_1_b9154c38406fca40c4c0edb716707c3a._comment
new file mode 100644
index 0000000..9ef59ed
--- /dev/null
+++ b/doc/bugs/Out_of_memory_error_when_adding_large_files_to_v6_repository/comment_1_b9154c38406fca40c4c0edb716707c3a._comment
@@ -0,0 +1,19 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 1"""
+ date="2017-06-09T17:34:40Z"
+ content="""
+Unfortunately, git add tries to load the whole file content
+into memory (or at least allocates a buffer for it all),
+even if it's only going to stream it through the clean filter
+used in v6 mode, and even though the git-annex smudge
+filter reads the file content from disk itself.
+
+I submitted a patch to git over a year ago, that IIRC fixes this kind of
+problem, but it was not accepted. Getting an updated version of that patch
+accepted into git is the main blocker for v6 repositories to not be
+experimental.
+
+[[todo/smudge]] documents this problem. I'm going to close this bug
+as it's a duplicate of stuff discussed there.
+"""]]
diff --git a/doc/todo/smudge.mdwn b/doc/todo/smudge.mdwn
index 6117224..266a5d5 100644
--- a/doc/todo/smudge.mdwn
+++ b/doc/todo/smudge.mdwn
@@ -71,6 +71,12 @@ git-annex should use smudge/clean filters.
   avoid the problem for git checkout, since it would use the new interface
   and not the smudge filter.)
 
+* When `git add` is run with a large file, it allocates memory for
+  the whole file content, even though it's only going
+  to stream it to the clean filter. My proposed smudge/clean
+  interface patch also fixed this problem, since it made git not read
+  the file at all.
+
 * Eventually (but not yet), make v6 the default for new repositories.
   Note that the assistant forces repos into direct mode; that will need to
   be changed then, and it should enable annex.thin instead.

followup
diff --git a/doc/bugs/sync_claims_data_loss_but_seems_to_just_lose_tracking/comment_3_fd518d0601587dc5c718944d0b6f51fc._comment b/doc/bugs/sync_claims_data_loss_but_seems_to_just_lose_tracking/comment_3_fd518d0601587dc5c718944d0b6f51fc._comment
new file mode 100644
index 0000000..79e388a
--- /dev/null
+++ b/doc/bugs/sync_claims_data_loss_but_seems_to_just_lose_tracking/comment_3_fd518d0601587dc5c718944d0b6f51fc._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 3"""
+ date="2017-06-09T17:33:17Z"
+ content="""
+I tried the script with `git annex direct` added at initialization,
+and didn't see the problem as described, but the assistant
+did fail to get the content into the transfer repo at all
+when using direct mode. `git annex sync --content` had no difficulty.
+"""]]

followup
diff --git a/doc/bugs/sync_claims_data_loss_but_seems_to_just_lose_tracking/comment_2_f686273c6be899d2a8bbaabd03a47fbf._comment b/doc/bugs/sync_claims_data_loss_but_seems_to_just_lose_tracking/comment_2_f686273c6be899d2a8bbaabd03a47fbf._comment
new file mode 100644
index 0000000..34ed9f1
--- /dev/null
+++ b/doc/bugs/sync_claims_data_loss_but_seems_to_just_lose_tracking/comment_2_f686273c6be899d2a8bbaabd03a47fbf._comment
@@ -0,0 +1,14 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 2"""
+ date="2017-06-09T17:25:23Z"
+ content="""
+I have not had any luck with reproducing the problem, using
+your script, on Debian unstable. The transfer repo gets
+the file contents, and both it and the source repo know where they are.
+
+Do you see the same behavior if, rather than running the assistant
+for 30 seconds, you run `git annex sync --content` ?
+
+Are you using git-annex in direct mode and/or on an unusual filesystem?
+"""]]

followup
diff --git a/doc/bugs/Hybrid_encryption_can__39__t_generate_the_right_key_after_moving_files/comment_5_ce0342e38a87017ad58c9a79b17d759a._comment b/doc/bugs/Hybrid_encryption_can__39__t_generate_the_right_key_after_moving_files/comment_5_ce0342e38a87017ad58c9a79b17d759a._comment
new file mode 100644
index 0000000..d20a6e6
--- /dev/null
+++ b/doc/bugs/Hybrid_encryption_can__39__t_generate_the_right_key_after_moving_files/comment_5_ce0342e38a87017ad58c9a79b17d759a._comment
@@ -0,0 +1,26 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 5"""
+ date="2017-06-09T17:04:09Z"
+ content="""
+"I think what is happening is that my small file I was testing with somehow
+became corrupted or was modified while on Amazon's servers."
+
+That was also kind of my guess. It's hard to imagine how
+the way that a file is downloaded from The Cloud changes
+how git-annex decrypts it. As long as the content is the same,
+the decryption step should behave identically no matter where
+the file is downloaded from.
+
+But, multiple small files getting corrupted seems like it must
+have a cause other than a bit flip. Perhaps something about how
+they were transferred between the two clouds corrupted them..
+
+I suppose there could also be a bug in git-annex or rclone that somehow
+corrupts uploads of small files. Perhaps something to do with chunking..
+What does `git annex info theremote --fast` say about its configuration?
+
+What is the range of sizes of small files that you've found to be
+corrupted? Is there a cut-off point after which all larger files are
+not corrupted? Are any small files not corrupted?
+"""]]