Recent changes to this wiki:

add news item for git-annex 10.20251114
diff --git a/doc/news/version_10.20250721.mdwn b/doc/news/version_10.20250721.mdwn
deleted file mode 100644
index 09ca1b73f0..0000000000
--- a/doc/news/version_10.20250721.mdwn
+++ /dev/null
@@ -1,17 +0,0 @@
-git-annex 10.20250721 released with [[!toggle text="these changes"]]
-[[!toggleable text="""  * Improved workaround for git 2.50 bug, avoiding an occasional test suite
-    failure, as well as some situations where an unlocked file did not get
-    populated when adding another file to the repository with the same
-    content.
-  * Add --url option and url= preferred content expression, to match
-    content that is recorded as present in an url.
-  * p2phttp: Scan multilevel directories with --directory.
-  * p2phttp: Added --socket option.
-  * Fix bug in handling of linked worktrees on filesystems not supporting
-    symlinks, that caused annexed file content to be stored in the wrong
-    location inside the git directory, and also caused pointer files to not
-    get populated.
-  * fsck: Fix location of annexed files when run in linked worktrees
-    that have experienced the above bug.
-  * Fix symlinks generated to annexed content when in adjusted unlocked
-    branch in a linked worktree on a filesystem not supporting symlinks."""]]
\ No newline at end of file
diff --git a/doc/news/version_10.20251114.mdwn b/doc/news/version_10.20251114.mdwn
new file mode 100644
index 0000000000..63255f2897
--- /dev/null
+++ b/doc/news/version_10.20251114.mdwn
@@ -0,0 +1,10 @@
+git-annex 10.20251114 released with [[!toggle text="these changes"]]
+[[!toggleable text="""  * p2p --pair: Fix to work with external P2P networks.
+  * p2phttp: Significant robustness fixes for bugs that caused the
+    server to stall.
+  * p2phttp: Fix a file descriptor leak.
+  * p2phttp: Added the --lockedfiles option.
+  * dropunused: Run the annex.secure-erase-command
+    (or .git/hooks/secure-erase-annex) when deleting
+    temp and bad object files.
+  * remotedaemon: Avoid crashing when run with --debug."""]]
\ No newline at end of file

comment
diff --git a/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_9_024a91c7b0eabc888cd717208e2a7d14._comment b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_9_024a91c7b0eabc888cd717208e2a7d14._comment
new file mode 100644
index 0000000000..9db1df3036
--- /dev/null
+++ b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_9_024a91c7b0eabc888cd717208e2a7d14._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 9"""
+ date="2025-11-13T19:31:57Z"
+ content="""
+After fixing the other bug, I have successfully run the test for several
+hours without any problems.
+"""]]

p2phttp: fix stalling git-annex get
A race condition caused a small fraction of requests to hang with the
object mostly transferred, or, in some cases, caused STM deadlock
messages to be displayed without a hang.
See comments for analysis. I don't entirely understand what is going on
with all the filling of endv, but this clearly fixes the race.
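The one-line fix below swaps `putTMVar` for `tryPutTMVar`. As a minimal
standalone sketch of the difference (variable name borrowed from the code,
otherwise hypothetical): `putTMVar` retries when the TMVar is already full,
so a second fill from another code path blocks forever, while `tryPutTMVar`
just returns False:

    import Control.Concurrent.STM

    main :: IO ()
    main = do
        endv <- newEmptyTMVarIO :: IO (TMVar ())
        atomically $ putTMVar endv ()     -- first fill succeeds
        -- atomically $ putTMVar endv ()  -- a second fill would retry forever
        ok <- atomically $ tryPutTMVar endv ()
        print ok                          -- False: already full, no blocking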
diff --git a/CHANGELOG b/CHANGELOG
index 3c056e1588..542fdd4658 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -2,10 +2,10 @@ git-annex (10.20251103) UNRELEASED; urgency=medium
 
   * p2p --pair: Fix to work with external P2P networks.
   * remotedaemon: Avoid crashing when run with --debug.
-  * p2phttp: Fix server stall when there are too many concurrent clients.
-  * p2phttp: Fix a file descriptor leak caused by a race condition.
+  * p2phttp: Significant robustness fixes to bugs that caused the 
+    server to stall.
+  * p2phttp: Fix a file descriptor leak.
   * p2phttp: Added the --lockedfiles option.
-  * p2phttp: Fix server stall when a git-annex drop is interrupted.
   * dropunused: Run the annex.secure-erase-command 
     (or .git/hooks/secure-erase-annex) when deleting
     temp and bad object files.
diff --git a/P2P/Http/Server.hs b/P2P/Http/Server.hs
index d56f0e49ec..bcc11c9207 100644
--- a/P2P/Http/Server.hs
+++ b/P2P/Http/Server.hs
@@ -189,7 +189,7 @@ serveGet mst su apiver (B64Key k) cu bypass baf startat sec auth = do
 				validity <- atomically $ takeTMVar validityv
 				sz <- takeMVar szv
 				atomically $ putTMVar finalv ()
-				atomically $ putTMVar endv ()
+				void $ atomically $ tryPutTMVar endv ()
 				return $ case validity of
 					Nothing -> True
 					Just Valid -> True
diff --git a/doc/bugs/get_from_p2phttp_sometimes_stalls.mdwn b/doc/bugs/get_from_p2phttp_sometimes_stalls.mdwn
index 29dd2cf56e..1e20a74a6e 100644
--- a/doc/bugs/get_from_p2phttp_sometimes_stalls.mdwn
+++ b/doc/bugs/get_from_p2phttp_sometimes_stalls.mdwn
@@ -45,3 +45,5 @@ and running it again does not block waiting for the server.
 > 
 > Using curl as the client and seeing if
 > it always receives the whole object would be a good next step. --[[Joey]] 
+
+>> [[fixed|done]] --[[Joey]]
diff --git a/doc/bugs/get_from_p2phttp_sometimes_stalls/comment_1_d3adc9528152fbb041e27482d674a59b._comment b/doc/bugs/get_from_p2phttp_sometimes_stalls/comment_1_d3adc9528152fbb041e27482d674a59b._comment
new file mode 100644
index 0000000000..712706857d
--- /dev/null
+++ b/doc/bugs/get_from_p2phttp_sometimes_stalls/comment_1_d3adc9528152fbb041e27482d674a59b._comment
@@ -0,0 +1,47 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 1"""
+ date="2025-11-13T17:22:33Z"
+ content="""
+I saw this bug with git-annex built using haskell packages from current
+debian unstable.
+
+On a hunch, I tried a `stack build`, and it does not stall. However, I am
+seeing this from the http server at about the same frequency as the stall,
+and occurring during the `git-annex get`:
+
+	thread blocked indefinitely in an STM transaction
+
+And at the same time, this is reported on the client side:
+
+	get 27 (from origin...)
+	  HttpExceptionRequest Request {
+	    host                 = "localhost"
+	    port                 = 9417
+	    secure               = False
+	    requestHeaders       = [("Accept","application/octet-stream")]
+	    path                 = "/git-annex/a697daef-f8c3-4e64-a3e0-65927e36d06b/v4/k
+	    queryString          = "?clientuuid=9bc0478c-a0ff-4159-89ab-14c13343beb9&ass
+	    method               = "GET"
+	    proxy                = Nothing
+	    rawBody              = False
+	    redirectCount        = 10
+	    responseTimeout      = ResponseTimeoutDefault
+	    requestVersion       = HTTP/1.1
+	    proxySecureMode      = ProxySecureWithConnect
+	  }
+	   IncompleteHeaders
+	ok
+
+(I assume that it succeeded because it did an automatic retry when the
+first download was incomplete.)
+
+I also tried using the
+stack build for the server, and the cabal build for the client, with the same
+result. With the cabal build for the server and stack build for the client, it 
+stalls as before.
+
+So it's a bug on the server side, and whatever it is causes one of the threads to
+get killed in a way that causes another STM transaction to deadlock. 
+And the runtime happens to detect the deadlock and resolve it when built with stack.
+"""]]
diff --git a/doc/bugs/get_from_p2phttp_sometimes_stalls/comment_2_a5296c101fed02a9ba3f16f94461b7d9._comment b/doc/bugs/get_from_p2phttp_sometimes_stalls/comment_2_a5296c101fed02a9ba3f16f94461b7d9._comment
new file mode 100644
index 0000000000..5dedcf362d
--- /dev/null
+++ b/doc/bugs/get_from_p2phttp_sometimes_stalls/comment_2_a5296c101fed02a9ba3f16f94461b7d9._comment
@@ -0,0 +1,15 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 2"""
+ date="2025-11-13T17:53:48Z"
+ content="""
+Using DebugLocks, found that the deadlock is in checkvalidity,
+the second time it calls `putTMVar endv ()`.
+
+That was added in [[!commit 7bd616e169827568c4ca6bc6e4f8ae5bf796d2d8]] 
+"a bugfix to serveGet, it hung at the end".
+
+Looks like a race between checkvalidity and waitfinal,
+which both fill endv. waitfinal does not deadlock when endv is already
+full, but checkvalidity does.
+"""]]

Adding ON tag
diff --git a/doc/bugs/S3_remote_should_expose_x-amz-tagging_header.mdwn b/doc/bugs/S3_remote_should_expose_x-amz-tagging_header.mdwn
index 5af44ef3de..f8f800558b 100644
--- a/doc/bugs/S3_remote_should_expose_x-amz-tagging_header.mdwn
+++ b/doc/bugs/S3_remote_should_expose_x-amz-tagging_header.mdwn
@@ -10,3 +10,4 @@ An example use case is publishing a private dataset where a bucket policy is use
 ### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
 
 
+[[!tag projects/openneuro]]

Feature request for OpenNeuro
diff --git a/doc/bugs/S3_remote_should_expose_x-amz-tagging_header.mdwn b/doc/bugs/S3_remote_should_expose_x-amz-tagging_header.mdwn
new file mode 100644
index 0000000000..5af44ef3de
--- /dev/null
+++ b/doc/bugs/S3_remote_should_expose_x-amz-tagging_header.mdwn
@@ -0,0 +1,12 @@
+### Please describe the problem.
+Similar to the x-amz-meta-* S3 remote configuration, it would be useful to be able to configure an S3 remote with the x-amz-tagging header passed to putObject. Unlike x-amz-meta values, tags can be updated without copying objects to a new version.
+
+An example use case is publishing a private dataset where a bucket policy is used to limit access by default (tagged private on the initial export) and objects are progressively made public after an embargo period.
+
+### What version of git-annex are you using? On what operating system?
+
+10.20250929 on Fedora 43.
+
+### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
+
+

close one bug and open a new one I found while testing it
diff --git a/doc/bugs/get_from_p2phttp_sometimes_stalls.mdwn b/doc/bugs/get_from_p2phttp_sometimes_stalls.mdwn
new file mode 100644
index 0000000000..29dd2cf56e
--- /dev/null
+++ b/doc/bugs/get_from_p2phttp_sometimes_stalls.mdwn
@@ -0,0 +1,47 @@
+`git-annex get` from a p2phttp remote sometimes stalls out.
+
+This has been observed when using loopback. Eg, run in one repo,
+which contains about 1000 annexed files of size 1 mb each:
+
+    git-annex p2phttp -J2 --bind 127.0.0.1 --wideopen
+
+Then in a clone:
+
+    git config remote.origin.annexUrl annex+http://localhost/git-annex/
+    while true; do git-annex get --from origin -J20; git-annex drop; done
+
+The concurrency is probably not strictly needed to reproduce this.
+But it makes it more likely to occur sooner, at least.
+
+The total stall looks like this:
+
+    1%    7.82 KiB          6 MiB/s 0s
+
+Here is another one:
+
+	1%    7.82 KiB          6 MiB/s 0s
+
+The progress display never updates. Every time
+I've seen the total stall, it's been at 7.82 KiB,
+which seems odd.
+
+Looking at the object in `.git/annex/tmp`, it has the correct
+content, but is 4368 bytes short of the full 1048576 byte size.
+I've verified this is the case every time. So it looks like
+the client didn't get the final chunk of the file in the response.
+
+Note that, despite p2phttp being run with -J2, 
+so only supporting 2 concurrent get operations,
+interrupting the `git-annex get` that stalled out
+and running it again does not block waiting for the server.
+ So p2phttp seems to have finished processing the request.
+ Or possibly failed in a way that returns a worker to the pool.
+--[[Joey]]
+
+> Initial investigation in serveGet seems to show it successfully
+> sending the whole object. At least up to fromActionStep,
+> I've not verified servant always does the right thing with that
+> or doesn't close the connection early sometimes.
+> 
+> Using curl as the client and seeing if
+> it always receives the whole object would be a good next step. --[[Joey]] 
diff --git a/doc/bugs/p2phttp_deadlocks_with_concurrent_clients.mdwn b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients.mdwn
index 78a552343d..021bbd3481 100644
--- a/doc/bugs/p2phttp_deadlocks_with_concurrent_clients.mdwn
+++ b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients.mdwn
@@ -39,3 +39,5 @@ local repository version: 10
 ### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
 
 [[!tag projects/ICE4]]
+
+> [[fixed|done]] --[[Joey]]
diff --git a/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_8_f5018216098c02b4770cced15c94a275._comment b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_8_f5018216098c02b4770cced15c94a275._comment
new file mode 100644
index 0000000000..36dcf454f4
--- /dev/null
+++ b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_8_f5018216098c02b4770cced15c94a275._comment
@@ -0,0 +1,12 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 8"""
+ date="2025-11-12T19:53:41Z"
+ content="""
+Fixed the problem with interrupted `git-annex drop`.
+
+Opened a new bug report about `git-annex get` sometimes
+stalling: [[get_from_p2phttp_sometimes_stalls]]
+
+I think I've fully addressed this bug report now, so will close it.
+"""]]

fix
diff --git a/doc/videos/distribits2025.mdwn b/doc/videos/distribits2025.mdwn
index a48d57d6f2..663149204e 100644
--- a/doc/videos/distribits2025.mdwn
+++ b/doc/videos/distribits2025.mdwn
@@ -17,14 +17,14 @@ Matthias Riße's talk
 covered this increasingly important integration between git-annex and
 forgejo, and how it is developed and maintained.
 
+The above three speakers were in
+a [panel discussion](https://www.distribits.live/talks/2025/discussion-hess-vogel-szczepanik-risse/) as well.
+
 Michał Szczepanik's talk
 ["Compute on demand"](https://www.distribits.live/talks/2025/szczepanik-compute-on-demand/)
 compared and contrasted the git-annex compute special remote with 
 the datalad-remake special remote.
 
-The above three speakers were in
-a [panel discussion](https://www.distribits.live/talks/2025/discussion-hess-vogel-szczepanik-risse/) as well.
-
 Timothy Sanders's talk
 ["Using Git-annex to enhance the MediaWiki file repository system"](https://www.distribits.live/talks/2025/sanders-using-git-annex-to-enhance-the/)
 presented a git-annex mediawiki backend.

add
diff --git a/doc/videos/distribits2025.mdwn b/doc/videos/distribits2025.mdwn
new file mode 100644
index 0000000000..a48d57d6f2
--- /dev/null
+++ b/doc/videos/distribits2025.mdwn
@@ -0,0 +1,40 @@
+At [Distribits 2025](https://distribits.live/), there were several talks on
+git-annex and closely related subjects.
+
+Joey Hess's talk 
+["git-annex for computer scientists"](https://www.distribits.live/talks/2025/hess-git-annex-for-computer-scientists/)
+explained the core data structures that make up git-annex, and then
+used that as a basis to understand several recent git-annex features.
+([mirror](https://downloads.kitenet.net/talks/distribits_2025_git-annex_for_computer_scientists.webm))
+
+Steffen Vogel's talk
+["Managing Tape Archives with git-annex: A Special Remote for Sequential Media"](https://www.distribits.live/talks/2025/vogel-managing-tape-archives-with-git-annex/)
+presented a soon to be released special remote for LTO tape, breaking new
+ground in what git-annex can use.
+
+Matthias Riße's talk
+["Forgejo-aneksajo: a git-annex/DataLad forge"](https://www.distribits.live/talks/2025/risse-forgejo-aneksajo-a-git-annex-datalad-forge/)
+covered this increasingly important integration between git-annex and
+forgejo, and how it is developed and maintained.
+
+Michał Szczepanik's talk
+["Compute on demand"](https://www.distribits.live/talks/2025/szczepanik-compute-on-demand/)
+compared and contrasted the git-annex compute special remote with 
+the datalad-remake special remote.
+
+The above three speakers were in
+a [panel discussion](https://www.distribits.live/talks/2025/discussion-hess-vogel-szczepanik-risse/) as well.
+
+Timothy Sanders's talk
+["Using Git-annex to enhance the MediaWiki file repository system"](https://www.distribits.live/talks/2025/sanders-using-git-annex-to-enhance-the/)
+presented a git-annex mediawiki backend.
+
+Christopher Markiewicz's talk
+["Maintaining large datasets at scale"](https://www.distribits.live/talks/2025/markiewicz-maintaining-large-datasets-at-scale/)
+covered finding and fixing defects that arise in automatically managed
+git-annex repositories.
+
+Many of the other talks at Distribits also involved git-annex.
+[Playlist](https://www.youtube.com/playlist?list=PLEQHbPfpVqU6_bZ4gUQn_9OX-LvDmKoby)
+
+[[!meta title="git-annex presentations at Distribits 2025"]]

Added a comment
diff --git a/doc/todo/Delayed_drop_from_remote/comment_3_f6914ae82921124e26c31ae89175d6de._comment b/doc/todo/Delayed_drop_from_remote/comment_3_f6914ae82921124e26c31ae89175d6de._comment
new file mode 100644
index 0000000000..f2ffbea4ae
--- /dev/null
+++ b/doc/todo/Delayed_drop_from_remote/comment_3_f6914ae82921124e26c31ae89175d6de._comment
@@ -0,0 +1,30 @@
+[[!comment format=mdwn
+ username="matrss"
+ avatar="http://cdn.libravatar.org/avatar/cd1c0b3be1af288012e49197918395f0"
+ subject="comment 3"
+ date="2025-11-10T21:14:20Z"
+ content="""
+> The deletion could be handled by a cron job that the user is responsible for setting up, which avoids needing to configure a time limit in git-annex, and also avoids the question of what git-annex command(s) would handle the clean up.
+
+Agreed, that makes sense.
+
+> An alternative way to handle this would be to use the \"appendonly\" config of git-annex p2phttp (and git-annex-shell has something similar). Then the repository would refuse to drop. And instead you could have a cron job that uses git-annex unused to drop old objects.
+
+While realistically most force drops probably would be unused files, those two things aren't necessarily the same.
+
+> I think there are some benefits to that path, it makes explicit to the user that the data they wanted to drop is not immediately going away from the server.
+
+I think I would deliberately want this to be invisible to the user, since I wouldn't want anyone to actively start relying on it.
+
+> Which might be important for legal reasons (although the prospect of backups of annexed files makes it hard to be sure if a server has really deleted something anyway).
+
+That's a tradeoff for sure, but the expectation should already be that a hosted service like a Forgejo-aneksajo instance will retain backups at least for disaster recovery purposes. But that's on the admin(s) to communicate, and within a personal setting it doesn't matter at all.
+
+> And if the repository had a disk quota, this would make explicit to the user why dropping content from it didn't free up quota.
+
+Actually for that reason I would not count this soft-deleted data towards quotas for my own purposes.
+
+> A third approach would be to have a config setting that makes dropped objects be instead moved to a remote. So the drop would succeed, but whereis would indicate that the object was being retained there. Then a cron job on the remote could finish the deletions.
+
+I like this! Considering that such a \"trash bin\" (special) remote could be initialized with `--private` (right?) it would be possible to make it fully invisible to the user too, while indeed being much more flexible. I suppose the cron job would then be something like `git annex drop --from trash-bin --all --not --accessedwithin=30d`, assuming that moving it there counts as \"accessing\" and no background job on the server accesses it afterwards (maybe an additional matching option for mtime or ctime instead of atime would be useful here?). This feels very much git-annex'y 🙂
+"""]]

comment
diff --git a/doc/todo/Delayed_drop_from_remote/comment_2_b217080d4b983fc9aac4cfe0cd5da0fc._comment b/doc/todo/Delayed_drop_from_remote/comment_2_b217080d4b983fc9aac4cfe0cd5da0fc._comment
new file mode 100644
index 0000000000..ddfe8cb305
--- /dev/null
+++ b/doc/todo/Delayed_drop_from_remote/comment_2_b217080d4b983fc9aac4cfe0cd5da0fc._comment
@@ -0,0 +1,21 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 2"""
+ date="2025-11-10T15:34:06Z"
+ content="""
+A third approach would be to have a config setting that makes dropped
+objects be instead moved to a remote. So the drop would succeed, but
+whereis would indicate that the object was being retained there. Then
+a cron job on the remote could finish the deletions.
+
+This would not be significantly more heavyweight than just moving to a
+directory, if you used eg a directory special remote. And it's also a lot
+more flexible.
+
+Of course, this would make dropping take longer than usual, depending on
+how fast the object could be moved to the remote. If it were slow, there
+would be no way to convey progress back to the user without a lot more
+complication than this feature warrants.
+
+Open to your thoughts on these alternatives..
+"""]]

dropunused: Run the annex.secure-erase-command
(or .git/hooks/secure-erase-annex) when deleting
temp and bad object files.
As was already done when deleting unlocked files.
diff --git a/CHANGELOG b/CHANGELOG
index b888e00163..568f553a2a 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -5,6 +5,9 @@ git-annex (10.20251103) UNRELEASED; urgency=medium
   * p2phttp: Fix server stall when there are too many concurrent clients.
   * p2phttp: Fix a file descriptor leak caused by a race condition.
   * p2phttp: Added the --lockedfiles option.
+  * dropunused: Run the annex.secure-erase-command 
+    (or .git/hooks/secure-erase-annex) when deleting
+    temp and bad object files.
 
  -- Joey Hess <id@joeyh.name>  Mon, 03 Nov 2025 14:02:46 -0400
 
diff --git a/Command/DropUnused.hs b/Command/DropUnused.hs
index 6733b42235..e8717f8185 100644
--- a/Command/DropUnused.hs
+++ b/Command/DropUnused.hs
@@ -1,6 +1,6 @@
 {- git-annex command
  -
- - Copyright 2010,2012,2018 Joey Hess <id@joeyh.name>
+ - Copyright 2010-2025 Joey Hess <id@joeyh.name>
  -
  - Licensed under the GNU AGPL version 3 or higher.
  -}
@@ -17,6 +17,7 @@ import qualified Git
 import Command.Unused (withUnusedMaps, UnusedMaps(..), startUnused)
 import Annex.NumCopies
 import Annex.Content
+import Annex.Content.LowLevel
 
 cmd :: Command
 cmd = withAnnexOptions [jobsOption, jsonOptions] $
@@ -79,5 +80,7 @@ perform from numcopies mincopies key = case from of
 performOther :: (Key -> Git.Repo -> OsPath) -> Key -> CommandPerform
 performOther filespec key = do
 	f <- fromRepo $ filespec key
-	pruneTmpWorkDirBefore f (liftIO . removeWhenExistsWith removeFile)
+	pruneTmpWorkDirBefore f $ \f' -> do
+		secureErase f'
+		liftIO $ removeWhenExistsWith removeFile f'
 	next $ return True
diff --git a/doc/todo/dropunused_of_tmp_and_bad_files_should_honor_annex.secure-erase-command_config.mdwn b/doc/todo/dropunused_of_tmp_and_bad_files_should_honor_annex.secure-erase-command_config.mdwn
index eb53b9d715..1429ef1f69 100644
--- a/doc/todo/dropunused_of_tmp_and_bad_files_should_honor_annex.secure-erase-command_config.mdwn
+++ b/doc/todo/dropunused_of_tmp_and_bad_files_should_honor_annex.secure-erase-command_config.mdwn
@@ -1,3 +1,5 @@
 Currently, when `annex.secure-erase-command` is configured, 
 `git-annex dropunused` does not use it for deleting tmp and bad files.
 Since those can contain the content of objects, it should. --[[Joey]]
+
+> [[done]]

comment and related todo
diff --git a/doc/todo/Delayed_drop_from_remote/comment_1_e670391c20cdec5d40b55a06305bdfca._comment b/doc/todo/Delayed_drop_from_remote/comment_1_e670391c20cdec5d40b55a06305bdfca._comment
new file mode 100644
index 0000000000..3d9bc5e5de
--- /dev/null
+++ b/doc/todo/Delayed_drop_from_remote/comment_1_e670391c20cdec5d40b55a06305bdfca._comment
@@ -0,0 +1,29 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 1"""
+ date="2025-11-10T15:06:35Z"
+ content="""
+The deletion could be handled by a cron job that the user is
+responsible for setting up, which avoids needing to configure a time limit
+in git-annex, and also avoids the question of what git-annex command(s)
+would handle the clean up.
+
+An alternative way to handle this would be to use the "appendonly" config
+of `git-annex p2phttp` (and `git-annex-shell` has something similar). Then
+the repository would refuse to drop. And instead you could have a cron job
+that uses `git-annex unused` to drop old objects. This would need some way
+to only drop unused objects after some period of time.
+
+I think there are some benefits to that path, it makes explicit to the user
+that the data they wanted to drop is not immediately going away from the
+server. Which might be important for legal reasons (although the prospect
+of backups of annexed files makes it hard to be sure if a server has really
+deleted something anyway). And if the repository had a disk quota, this
+would make explicit to the user why dropping content from it didn't free up
+quota.
+
+(I think it would also be possible to (ab)use the `annex.secure-erase-command`
+to instead move objects to the directory. Probably not a good idea,
+especially because there's no guarantee that command is only run on
+complete annex objects.)
+"""]]
diff --git a/doc/todo/dropunused_of_tmp_and_bad_files_should_honor_annex.secure-erase-command_config.mdwn b/doc/todo/dropunused_of_tmp_and_bad_files_should_honor_annex.secure-erase-command_config.mdwn
new file mode 100644
index 0000000000..eb53b9d715
--- /dev/null
+++ b/doc/todo/dropunused_of_tmp_and_bad_files_should_honor_annex.secure-erase-command_config.mdwn
@@ -0,0 +1,3 @@
+Currently, when `annex.secure-erase-command` is configured, 
+`git-annex dropunused` does not use it for deleting tmp and bad files.
+Since those can contain the content of objects, it should. --[[Joey]]

diff --git a/doc/todo/Delayed_drop_from_remote.mdwn b/doc/todo/Delayed_drop_from_remote.mdwn
new file mode 100644
index 0000000000..dd2d26bd4e
--- /dev/null
+++ b/doc/todo/Delayed_drop_from_remote.mdwn
@@ -0,0 +1,11 @@
+In the name of protecting people from themselves I'd like to have an option to configure repositories on a Forgejo-aneksajo instance (or rather in general) to _not_ immediately obey a `git annex drop --from ... --force`.
+
+I am thinking of having an `annex.delayeddrop` config option (names subject to bike-shedding of course) to set in each repo's git config. With it set to e.g. "30d", `git annex drop` on that repository would, from the point of view of the user, do everything as always, including recording that the repo no longer has the data, but instead of deleting the files immediately, move them into e.g. .git/annex/deleted-objects. This directory would then be cleaned of files that have been there for more than 30 days at some point in the future, e.g. when an fsck is done, or maybe on other operations too.
+
+I don't think any tooling around ".git/annex/deleted-objects" would be necessary; rather, with the information that the data for some key was lost, one could then manually dive into that directory, retrieve the data out of it, and reinject it into the repository.
+
+The point is to have a fast path to recovery from over-eager dropping that might otherwise lead to data loss, even though `--force` should be totally clear to everyone.
+
+Or maybe something like this exists already...
+
+[[!tag projects/ICE4]]

status update
diff --git a/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_7_3008ba0d43374a4dbf87335aaf0a9477._comment b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_7_3008ba0d43374a4dbf87335aaf0a9477._comment
new file mode 100644
index 0000000000..9f0e3122a8
--- /dev/null
+++ b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_7_3008ba0d43374a4dbf87335aaf0a9477._comment
@@ -0,0 +1,27 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 7"""
+ date="2025-11-07T20:37:44Z"
+ content="""
+I've landed a complete fix for this. The server no longer locks up
+when I run the test case for a long time.
+
+Additionally, there was a bug that caused p2phttp to leak lock file
+descriptors, which gets triggered by the same test case. I've fixed that.
+
+There are two problems I noticed in testing though.
+
+`git-annex get` sometimes slows down to just bytes per second, 
+or entirely stalls. This is most apparent with `-J10`, but I've seen it
+happen even when there is no concurrency or other clients.
+This should probably be treated as a separate bug, but it does
+cause the test case to eventually hang, unless git-annex is configured
+to do stall detection. The server keeps responding to
+other requests though.
+
+Running `git-annex drop` and interrupting it at the wrong moment
+while it's locking content on the server seems to cause a P2P protocol
+worker to not get returned to the worker pool. When it happens enough
+times, this can cause the server to stop responding to new requests.
+Which seems closely related to this bug.
+"""]]

p2phttp: Added the --lockedfiles option
This prevents serveLockContent from starting an unbounded number of
threads.
Note that, when it goes over this limit, git-annex is still able to drop
from the local repository in most situations; it just falls back to
checking content presence and is still able to prove the drop is safe.
But of course there are some cases where an active lock is needed in order to
drop.
The ugly getTimestamp hack works around a bug in the server. I suspect that
bug is also responsible for what happens if git-annex drop is interrupted
at the wrong time when checking the lock on the server -- as well as
leaving the lock fd open, the annex worker is not released to the pool,
so later connections to the server stall out. This needs to be
investigated, and the hack removed.
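For illustration, with the reproduction setup from the stall bug above, the
server could be started with the new option like this (the limit value here
is arbitrary; per the patch below it defaults to 100 when unset):

    git-annex p2phttp -J2 --bind 127.0.0.1 --wideopen --lockedfiles=200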
diff --git a/CHANGELOG b/CHANGELOG
index ebdfd81d04..b888e00163 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -4,6 +4,7 @@ git-annex (10.20251103) UNRELEASED; urgency=medium
   * remotedaemon: Avoid crashing when run with --debug.
   * p2phttp: Fix server stall when there are too many concurrent clients.
   * p2phttp: Fix a file descriptor leak caused by a race condition.
+  * p2phttp: Added the --lockedfiles option.
 
  -- Joey Hess <id@joeyh.name>  Mon, 03 Nov 2025 14:02:46 -0400
 
diff --git a/Command/P2PHttp.hs b/Command/P2PHttp.hs
index 140b2d4c94..f03509b333 100644
--- a/Command/P2PHttp.hs
+++ b/Command/P2PHttp.hs
@@ -57,6 +57,7 @@ data Options = Options
 	, proxyConnectionsOption :: Maybe Integer
 	, jobsOption :: Maybe Concurrency
 	, clusterJobsOption :: Maybe Int
+	, lockedFilesOption :: Maybe Integer
 	, directoryOption :: [FilePath]
 	}
 
@@ -119,6 +120,10 @@ optParser _ = Options
 		( long "clusterjobs" <> metavar paramNumber
 		<> help "number of concurrent node accesses per connection"
 		))
+	<*> optional (option auto
+		( long "lockedfiles" <> metavar paramNumber
+		<> help "number of content files that can be locked"
+		))
 	<*> many (strOption
 		( long "directory" <> metavar paramPath
 		<> help "serve repositories in subdirectories of a directory"
@@ -128,8 +133,10 @@ startAnnex :: Options -> Annex ()
 startAnnex o
 	| null (directoryOption o) = ifM ((/=) NoUUID <$> getUUID)
 		( do
+			lockedfilesqsem <- liftIO $ 
+				mkLockedFilesQSem (lockedFilesOption o)
 			authenv <- liftIO getAuthEnv
-			st <- mkServerState o authenv
+			st <- mkServerState o authenv lockedfilesqsem
 			liftIO $ runServer o st
 		-- Run in a git repository that is not a git-annex repository.
 		, liftIO $ startIO o 
@@ -146,20 +153,21 @@ startIO o
 		runServer o st
   where
 	mkst authenv oldst = do
+		lockedfilesqsem <- mkLockedFilesQSem (lockedFilesOption o)
 		repos <- findRepos o
 		sts <- forM repos $ \r -> do
 			strd <- Annex.new r
-			Annex.eval strd (mkstannex authenv oldst)
+			Annex.eval strd (mkstannex authenv oldst lockedfilesqsem)
 		return (mconcat sts)
 			{ updateRepos = updaterepos authenv
 			}
 	
-	mkstannex authenv oldst = do
+	mkstannex authenv oldst lockedfilesqsem = do
 		u <- getUUID
 		if u == NoUUID
 			then return mempty
 			else case M.lookup u (servedRepos oldst) of
-				Nothing -> mkServerState o authenv
+				Nothing -> mkServerState o authenv lockedfilesqsem
 				Just old -> return $ P2PHttpServerState
 					{ servedRepos = M.singleton u old
 					, serverShutdownCleanup = mempty
@@ -213,14 +221,15 @@ runServer o mst = go `finally` serverShutdownCleanup mst
 		Socket.listen sock Socket.maxListenQueue
 		return sock
 
-mkServerState :: Options -> M.Map Auth P2P.ServerMode -> Annex P2PHttpServerState
-mkServerState o authenv = 
+mkServerState :: Options -> M.Map Auth P2P.ServerMode -> LockedFilesQSem -> Annex P2PHttpServerState
+mkServerState o authenv lockedfilesqsem = 
 	withAnnexWorkerPool (jobsOption o) $
 		mkP2PHttpServerState
 			(mkGetServerMode authenv o)
 			return
 			(fromMaybe 1 $ proxyConnectionsOption o)
 			(fromMaybe 1 $ clusterJobsOption o)
+			lockedfilesqsem
 
 mkGetServerMode :: M.Map Auth P2P.ServerMode -> Options -> GetServerMode
 mkGetServerMode _ o _ Nothing
diff --git a/P2P/Http/Server.hs b/P2P/Http/Server.hs
index b94b2486ab..3f1917398f 100644
--- a/P2P/Http/Server.hs
+++ b/P2P/Http/Server.hs
@@ -474,7 +474,7 @@ serveLockContent
 	-> Handler LockResult
 serveLockContent mst su apiver (B64Key k) cu bypass sec auth = do
 	(conn, st) <- getP2PConnection apiver mst cu su bypass sec auth LockAction id
-	let lock = do
+	let lock = checklocklimit conn st $ do
 		lockresv <- newEmptyTMVarIO
 		unlockv <- newEmptyTMVarIO
 		-- A thread takes the lock, and keeps running
@@ -490,6 +490,7 @@ serveLockContent mst su apiver (B64Key k) cu bypass sec auth = do
 					void $ runFullProto (clientRunState conn) (clientP2PConnection conn) $ do
 						net $ sendMessage UNLOCKCONTENT
 				_ -> return ()
+			liftIO $ releaseLockedFilesQSem st
 		atomically (takeTMVar lockresv) >>= \case
 			Right True -> return (Just (annexworker, unlockv))
 			_ -> return Nothing
@@ -501,7 +502,19 @@ serveLockContent mst su apiver (B64Key k) cu bypass sec auth = do
 		Just (locker, lockid) -> do
 			liftIO $ storeLock lockid locker st
 			return $ LockResult True (Just lockid)
-		Nothing -> return $ LockResult False Nothing
+		Nothing -> do
+			releaseP2PConnection conn
+			return $ LockResult False Nothing
+  where
+	checklocklimit conn st a = 
+		ifM (consumeLockedFilesQSem st)
+			( a
+			, do
+				-- This works around a problem when nothing
+				-- is sent to the P2P connection.
+				_ <- liftIO $ proxyClientNetProto conn getTimestamp
+				return Nothing
+			)
 
 serveKeepLocked
 	:: APIVersion v
diff --git a/P2P/Http/State.hs b/P2P/Http/State.hs
index 47057dc779..6105a00cc8 100644
--- a/P2P/Http/State.hs
+++ b/P2P/Http/State.hs
@@ -79,6 +79,7 @@ data PerRepoServerState = PerRepoServerState
 	, annexRead :: Annex.AnnexRead
 	, getServerMode :: GetServerMode
 	, openLocks :: TMVar (M.Map LockID Locker)
+	, lockedFilesQSem :: LockedFilesQSem
 	}
 
 type AnnexWorkerPool = TMVar (WorkerPool (Annex.AnnexState, Annex.AnnexRead))
@@ -93,14 +94,15 @@ data ServerMode
 		}
 	| CannotServeRequests
 
-mkPerRepoServerState :: AcquireP2PConnection -> AnnexWorkerPool -> Annex.AnnexState -> Annex.AnnexRead -> GetServerMode -> IO PerRepoServerState
-mkPerRepoServerState acquireconn annexworkerpool annexstate annexread getservermode = PerRepoServerState
+mkPerRepoServerState :: AcquireP2PConnection -> AnnexWorkerPool -> Annex.AnnexState -> Annex.AnnexRead -> GetServerMode -> LockedFilesQSem -> IO PerRepoServerState
+mkPerRepoServerState acquireconn annexworkerpool annexstate annexread getservermode lockedfilesqsem = PerRepoServerState
 	<$> pure acquireconn
 	<*> pure annexworkerpool
 	<*> newTMVarIO annexstate
 	<*> pure annexread
 	<*> pure getservermode
 	<*> newTMVarIO mempty
+	<*> pure lockedfilesqsem
 
 data ActionClass = ReadAction | WriteAction | RemoveAction | LockAction
 	deriving (Eq)
@@ -258,14 +260,36 @@ type AcquireP2PConnection
 	= ConnectionParams
 	-> IO (Either ConnectionProblem P2PConnectionPair)
 
+type LockedFilesQSem = TMVar Integer
+
+mkLockedFilesQSem :: Maybe Integer -> IO LockedFilesQSem
+mkLockedFilesQSem = newTMVarIO . fromMaybe 100
+
+consumeLockedFilesQSem :: PerRepoServerState -> IO Bool
+consumeLockedFilesQSem st = atomically $ do
+	n <- takeTMVar (lockedFilesQSem st)
+	if n < 1
+		then do
+			putTMVar (lockedFilesQSem st) n
+			return False
+		else do
+			putTMVar (lockedFilesQSem st) (pred n)
+			return True
+
+releaseLockedFilesQSem :: PerRepoServerState -> IO ()
+releaseLockedFilesQSem st = atomically $ do
+	n <- takeTMVar (lockedFilesQSem st)
+	putTMVar (lockedFilesQSem st) (succ n)
+
 mkP2PHttpServerState
 	:: GetServerMode
 	-> UpdateRepos
 	-> ProxyConnectionPoolSize
 	-> ClusterConcurrency
+	-> LockedFilesQSem
 	-> AnnexWorkerPool
 	-> Annex P2PHttpServerState
-mkP2PHttpServerState getservermode updaterepos proxyconnectionpoolsize clusterconcurrency workerpool = do
+mkP2PHttpServerState getservermode updaterepos proxyconnectionpoolsize clusterconcurrency lockedfilesqsem workerpool = do
 	enableInteractiveBranchAccess
 	myuuid <- getUUID
 	myproxies <- M.lookup myuuid <$> getProxies

(Diff truncated)
Added a comment
diff --git a/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_6_eaf326343b6c358de25ee9a0448613bc._comment b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_6_eaf326343b6c358de25ee9a0448613bc._comment
new file mode 100644
index 0000000000..9c97dd9e73
--- /dev/null
+++ b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_6_eaf326343b6c358de25ee9a0448613bc._comment
@@ -0,0 +1,14 @@
+[[!comment format=mdwn
+ username="matrss"
+ avatar="http://cdn.libravatar.org/avatar/cd1c0b3be1af288012e49197918395f0"
+ subject="comment 6"
+ date="2025-11-07T09:12:13Z"
+ content="""
+> I think that --debug output from the p2phttp server would be helpful in narrowing down if there is a particular operation that causes this hang.
+
+I should have been a bit more clear: I also saw the deadlock sometimes with concurrent gets, sometimes with drops, and sometimes with a mix of both, so there wasn't one particular operation that seemed to be the issue.
+
+> -J2 also seems quite low though.
+
+This is for Forgejo-aneksajo, where there is still one p2phttp process being started per repository. Since there could potentially be 1000's of concurrent processes at any given time I thought it might be wise to start with the bare minimum by default. Due to how p2phttp and proxying is supposed to interact I've also realized that the current integration is not working as it should (<https://codeberg.org/forgejo-aneksajo/forgejo-aneksajo/issues/96>) and that I probably won't be able to make use of the single p2phttp process for all repositories (because of ambiguity with authorization when there are multiple different repositories with differing permissions that proxy for the same remote).
+"""]]

ask about S3 DEEP_ARCHIVE and the glacier special remote
diff --git a/doc/forum/Does_DEEP__95__ARCHIVE_replace_glacier_special_remote__63__.mdwn b/doc/forum/Does_DEEP__95__ARCHIVE_replace_glacier_special_remote__63__.mdwn
new file mode 100644
index 0000000000..7b48d8b977
--- /dev/null
+++ b/doc/forum/Does_DEEP__95__ARCHIVE_replace_glacier_special_remote__63__.mdwn
@@ -0,0 +1,10 @@
+In the git-annex docs for [S3](https://git-annex.branchable.com/special_remotes/S3/), under `storageclass`, it says
+
+> Amazon S3's DEEP_ARCHIVE is similar to Amazon Glacier. For that, use the glacier special remote, rather than this one.
+
+However, Amazon has [deprecated the standalone Glacier API](https://www.lastweekinaws.com/blog/aws-deprecates-two-dozen-services-most-of-which-youve-never-heard-of/), in favor of the S3 Glacier storage classes like [S3 Glacier Deep Archive](https://aws.amazon.com/blogs/aws/new-amazon-s3-storage-class-glacier-deep-archive/). As I understand it, new AWS accounts cannot sign up for Glacier at all, and existing accounts can only use it if they already had been using it. Instead, Amazon wants you to use the S3 classes, which are the [same price](https://aws.amazon.com/s3/pricing/) but use the S3 API instead of the Glacier API.
+
+
+For new repositories, should we use S3 with `storageclass=DEEP_ARCHIVE`? 
+
+It's not clear to me if this will work, i.e. if the git-annex S3 implementation is built to handle the S3 Glacier storage classes correctly. If not, what should we do, since we can't use the standalone Glacier anymore?
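For reference, the configuration being asked about would look something like
this (remote name, bucket, and encryption choice hypothetical; `storageclass=`
is a documented option of the S3 special remote):

    git annex initremote deeparchive type=S3 encryption=shared bucket=mybucket storageclass=DEEP_ARCHIVE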

comment
diff --git a/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_5_216561fb495aeb73683305a20a3b66e7._comment b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_5_216561fb495aeb73683305a20a3b66e7._comment
new file mode 100644
index 0000000000..f97ff5134f
--- /dev/null
+++ b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_5_216561fb495aeb73683305a20a3b66e7._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 5"""
+ date="2025-11-06T20:27:42Z"
+ content="""
+Pushed a preliminary fix in the `p2phttp_deadlock` branch.
+
+That has some known problems, documented in the commit. But it does
+avoid p2phttp locking up like this.
+"""]]

p2phttp: Fix server stall when there are too many concurrent clients
A deadlock eventually occurred when there were more concurent clients
than the size of the annex worker pool.
A test case for the deadlock is multiple clients all running
git-annex get; git-annex drop in a loop. With more clients than the
server's -J, this tended to lock up the server fairly quickly.
The problem was that inAnnexWorker is run twice per request, once for
the P2P protocol handling thread, and once for the P2P protocol
generating thread. Those two threads were started concurrently. Which,
when the worker pool is close to full, is equivalent to two locks being
taken, in potentially two different orders, and so could deadlock.
Fixed by making P2P.Http.Server use handleRequestAnnex instead of
inAnnexWorker. That forks off a new Annex state, runs the action in it,
and merges it back in.
Also, made getP2PConnection wait until the inAnnexWorker action has
started to return. When there are more incoming requests than the size
of the worker pool, this prevents request handlers from starting
handleRequestAnnex until after getP2PConnection has started, so avoiding
running more annex actions than the -J level.
While before the server needed 2 jobs per request, so would handle
concurrent requests up to 1/2 of the -J level maximum, now it matches
the -J level. Updated docs accordingly.
Note that serveLockContent starts a thread which keeps running after the
request finishes. Before, that still consumed a worker. Which was also
probably a way for the worker pool to get full. Now, it does not.
So, lots of calls to serveLockContent can result in lots of threads,
which are lightweight though since they only keep a lock held.
Considering this as a new DOS attack, the server would run out of FDs
before it runs out of memory. I'll address this in the next commit.
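A minimal standalone sketch of the deadlock shape described above (names
hypothetical; the stm package's TSem stands in for the annex worker pool):
with a pool of two, two concurrent requests that each take two workers one
at a time both get a first worker, and neither can ever get a second. When
GHC's runtime detects such a state it reports "thread blocked indefinitely
in an STM transaction", as seen elsewhere in this feed:

    import Control.Concurrent
    import Control.Concurrent.STM
    import Control.Concurrent.STM.TSem
    import Control.Monad (replicateM_)
    import System.Timeout (timeout)

    main :: IO ()
    main = do
        pool <- atomically $ newTSem 2       -- worker pool, as with -J2
        done <- newEmptyMVar
        let request = do
                atomically $ waitTSem pool   -- P2P protocol handling thread
                threadDelay 100000           -- both requests get this far...
                atomically $ waitTSem pool   -- ...then block on a second worker
                atomically $ signalTSem pool
                atomically $ signalTSem pool
                putMVar done ()
        replicateM_ 2 (forkIO request)
        r <- timeout 1000000 (takeMVar done) -- neither request ever finishes
        print r                              -- Nothing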
diff --git a/CHANGELOG b/CHANGELOG
index 41e4572f46..ebdfd81d04 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -2,6 +2,7 @@ git-annex (10.20251103) UNRELEASED; urgency=medium
 
   * p2p --pair: Fix to work with external P2P networks.
   * remotedaemon: Avoid crashing when run with --debug.
+  * p2phttp: Fix server stall when there are too many concurrent clients.
   * p2phttp: Fix a file descriptor leak caused by a race condition.
 
  -- Joey Hess <id@joeyh.name>  Mon, 03 Nov 2025 14:02:46 -0400
diff --git a/P2P/Http/Server.hs b/P2P/Http/Server.hs
index 6a21c6e13c..b94b2486ab 100644
--- a/P2P/Http/Server.hs
+++ b/P2P/Http/Server.hs
@@ -126,7 +126,7 @@ serveGet mst su apiver (B64Key k) cu bypass baf startat sec auth = do
 	endv <- liftIO newEmptyTMVarIO
 	validityv <- liftIO newEmptyTMVarIO
 	finalv <- liftIO newEmptyTMVarIO
-	annexworker <- liftIO $ async $ inAnnexWorker st $ do
+	annexworker <- liftIO $ async $ handleRequestAnnex st $ do
 		let storer _offset len = sendContentWith $ \bs -> liftIO $ do
 			atomically $ putTMVar bsv (len, bs)
 			atomically $ takeTMVar endv
@@ -401,7 +401,7 @@ servePutAction
 	-> Maybe B64FilePath
 	-> (P2P.Protocol.Offset -> Proto (Maybe [UUID]))
 	-> IO (Either SomeException (Either ProtoFailure (Maybe [UUID])))
-servePutAction (conn, st) (B64Key k) baf a = inAnnexWorker st $
+servePutAction (conn, st) (B64Key k) baf a = handleRequestAnnex st $
 	enteringStage (TransferStage Download) $
 		runFullProto (clientRunState conn) (clientP2PConnection conn) $
 			put' k af a
@@ -477,9 +477,9 @@ serveLockContent mst su apiver (B64Key k) cu bypass sec auth = do
 	let lock = do
 		lockresv <- newEmptyTMVarIO
 		unlockv <- newEmptyTMVarIO
-		-- A single worker thread takes the lock, and keeps running
+		-- A thread takes the lock, and keeps running
 		-- until unlock in order to keep the lock held.
-		annexworker <- async $ inAnnexWorker st $ do
+		annexworker <- async $ handleRequestAnnex st $ do
 			lockres <- runFullProto (clientRunState conn) (clientP2PConnection conn) $ do
 				net $ sendMessage (LOCKCONTENT k)
 				checkSuccess
diff --git a/P2P/Http/State.hs b/P2P/Http/State.hs
index 44a2588b57..47057dc779 100644
--- a/P2P/Http/State.hs
+++ b/P2P/Http/State.hs
@@ -75,6 +75,8 @@ instance Semigroup P2PHttpServerState where
 data PerRepoServerState = PerRepoServerState
 	{ acquireP2PConnection :: AcquireP2PConnection
 	, annexWorkerPool :: AnnexWorkerPool
+	, annexState :: TMVar Annex.AnnexState
+	, annexRead :: Annex.AnnexRead
 	, getServerMode :: GetServerMode
 	, openLocks :: TMVar (M.Map LockID Locker)
 	}
@@ -91,10 +93,12 @@ data ServerMode
 		}
 	| CannotServeRequests
 
-mkPerRepoServerState :: AcquireP2PConnection -> AnnexWorkerPool -> GetServerMode -> IO PerRepoServerState
-mkPerRepoServerState acquireconn annexworkerpool getservermode = PerRepoServerState
+mkPerRepoServerState :: AcquireP2PConnection -> AnnexWorkerPool -> Annex.AnnexState -> Annex.AnnexRead -> GetServerMode -> IO PerRepoServerState
+mkPerRepoServerState acquireconn annexworkerpool annexstate annexread getservermode = PerRepoServerState
 	<$> pure acquireconn
 	<*> pure annexworkerpool
+	<*> newTMVarIO annexstate
+	<*> pure annexread
 	<*> pure getservermode
 	<*> newTMVarIO mempty
 
@@ -275,7 +279,9 @@ mkP2PHttpServerState getservermode updaterepos proxyconnectionpoolsize clusterco
 		liftIO $ atomically $ putTMVar endv ()
 		liftIO $ wait asyncservicer
 	let servinguuids = myuuid : map proxyRemoteUUID (maybe [] S.toList myproxies)
-	st <- liftIO $ mkPerRepoServerState (acquireconn reqv) workerpool getservermode
+	annexstate <- dupState
+	annexread <- Annex.getRead id
+	st <- liftIO $ mkPerRepoServerState (acquireconn reqv) workerpool annexstate annexread getservermode
 	return $ P2PHttpServerState
 		{ servedRepos = M.fromList $ zip servinguuids (repeat st)
 		, serverShutdownCleanup = endit
@@ -283,8 +289,10 @@ mkP2PHttpServerState getservermode updaterepos proxyconnectionpoolsize clusterco
 		}
   where
 	acquireconn reqv connparams = do
+		ready <- newEmptyTMVarIO
 		respvar <- newEmptyTMVarIO
-		atomically $ putTMVar reqv (connparams, respvar)
+		atomically $ putTMVar reqv (connparams, ready, respvar)
+		() <- atomically $ takeTMVar ready
 		atomically $ takeTMVar respvar
 
 	servicer myuuid myproxies proxypool reqv relv endv = do
@@ -296,8 +304,8 @@ mkP2PHttpServerState getservermode updaterepos proxyconnectionpoolsize clusterco
 					`orElse` 
 				(Left . Left <$> takeTMVar endv)
 		case reqrel of
-			Right (connparams, respvar) -> do
-				servicereq myuuid myproxies proxypool relv connparams
+			Right (connparams, ready, respvar) -> do
+				servicereq myuuid myproxies proxypool relv connparams ready
 					>>= atomically . putTMVar respvar
 				servicer myuuid myproxies proxypool reqv relv endv
 			Left (Right releaseconn) -> do
@@ -305,16 +313,16 @@ mkP2PHttpServerState getservermode updaterepos proxyconnectionpoolsize clusterco
 				servicer myuuid myproxies proxypool reqv relv endv
 			Left (Left ()) -> return ()
 	
-	servicereq myuuid myproxies proxypool relv connparams
+	servicereq myuuid myproxies proxypool relv connparams ready
 		| connectionServerUUID connparams == myuuid =
-			localConnection relv connparams workerpool
+			localConnection relv connparams workerpool ready
 		| otherwise =
 			atomically (getProxyConnectionPool proxypool connparams) >>= \case
-				Just conn -> proxyConnection proxyconnectionpoolsize relv connparams workerpool proxypool conn
-				Nothing -> checkcanproxy myproxies proxypool relv connparams
+				Just conn -> proxyConnection proxyconnectionpoolsize relv connparams workerpool proxypool conn ready
+				Nothing -> checkcanproxy myproxies proxypool relv connparams ready
 
-	checkcanproxy myproxies proxypool relv connparams = 
-		inAnnexWorker' workerpool
+	checkcanproxy myproxies proxypool relv connparams ready = do
+		inAnnexWorker workerpool
 			(checkCanProxy' myproxies (connectionServerUUID connparams))
 		>>= \case
 			Right (Left reason) -> return $ Left $
@@ -334,7 +342,7 @@ mkP2PHttpServerState getservermode updaterepos proxyconnectionpoolsize clusterco
 		bypass = P2P.Bypass $ S.fromList $ connectionBypass connparams
 		proxyconnection openconn = openconn >>= \case
 			Right conn -> proxyConnection proxyconnectionpoolsize
-				relv connparams workerpool proxypool conn
+				relv connparams workerpool proxypool conn ready
 			Left ex -> return $ Left $
 				ConnectionFailed $ show ex
 
@@ -354,10 +362,12 @@ localConnection
 	:: TMVar (IO ())
 	-> ConnectionParams
 	-> AnnexWorkerPool
+	-> TMVar ()
 	-> IO (Either ConnectionProblem P2PConnectionPair)
-localConnection relv connparams workerpool = 
+localConnection relv connparams workerpool ready = 
 	localP2PConnectionPair connparams relv $ \serverrunst serverconn ->
-		inAnnexWorker' workerpool $
+		inAnnexWorker workerpool $ do
+			liftIO $ atomically $ putTMVar ready ()
 			void $ runFullProto serverrunst serverconn $
 				P2P.serveOneCommandAuthed
 					(connectionServerMode connparams)
@@ -431,14 +441,16 @@ proxyConnection
 	-> AnnexWorkerPool
 	-> TMVar ProxyConnectionPool
 	-> ProxyConnection
+	-> TMVar ()
 	-> IO (Either ConnectionProblem P2PConnectionPair)
-proxyConnection proxyconnectionpoolsize relv connparams workerpool proxypool proxyconn = do
+proxyConnection proxyconnectionpoolsize relv connparams workerpool proxypool proxyconn ready = do
 	(clientconn, proxyfromclientconn) <- 
 		mkP2PConnectionPair connparams ("http client", "proxy")
 	clientrunst <- mkClientRunState connparams
 	proxyfromclientrunst <- mkClientRunState connparams
 	asyncworker <- async $
-		inAnnexWorker' workerpool $ do
+		inAnnexWorker workerpool $ do
+			liftIO $ atomically $ putTMVar ready ()
 			proxystate <- liftIO Proxy.mkProxyState
 			let proxyparams = Proxy.ProxyParams
 				{ Proxy.proxyMethods = mkProxyMethods
@@ -495,8 +507,8 @@ proxyConnection proxyconnectionpoolsize relv connparams workerpool proxypool pro
 	
 	requestcomplete () = return ()
 	
-	closeproxyconnection = 
-		void . inAnnexWorker' workerpool . proxyConnectionCloser
+	closeproxyconnection =
+		void . inAnnexWorker workerpool . proxyConnectionCloser
 
 data Locker = Locker
 	{ lockerThread :: Async ()
@@ -585,11 +597,8 @@ withAnnexWorkerPool mc a = do
 			Nothing -> giveup "Use -Jn or set annex.jobs to configure the number of worker threads."
 			Just wp -> a wp
 
-inAnnexWorker :: PerRepoServerState -> Annex a -> IO (Either SomeException a)
-inAnnexWorker st = inAnnexWorker' (annexWorkerPool st)
-
-inAnnexWorker' :: AnnexWorkerPool -> Annex a -> IO (Either SomeException a)
-inAnnexWorker' poolv annexaction = do
+inAnnexWorker :: AnnexWorkerPool -> Annex a -> IO (Either SomeException a)
+inAnnexWorker poolv annexaction = do
 	(workerstrd, workerstage) <- atomically $ waitStartWorkerSlot poolv
 	resv <- newEmptyTMVarIO
 	aid <- async $ do
@@ -611,6 +620,20 @@ inAnnexWorker' poolv annexaction = do

(Diff truncated)
correct -J documentation
diff --git a/doc/git-annex-p2phttp.mdwn b/doc/git-annex-p2phttp.mdwn
index a7ecd581d8..453015e1f3 100644
--- a/doc/git-annex-p2phttp.mdwn
+++ b/doc/git-annex-p2phttp.mdwn
@@ -60,9 +60,9 @@ convenient way to download the content of any key, by using the path
 
   This or annex.jobs must be set to configure the number of worker
   threads, per repository served, that serve connections to the webserver.
-  
-  Since the webserver itself also uses one of these threads, 
-  this needs to be set to 2 or more.
+
+  This must be set to 2 or more, because each request served by the
+  webserver needs 2 worker threads.
 
   A good choice is often one worker per CPU core: `--jobs=cpus`
 

comment
diff --git a/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_4_44e30c4f2dbebd4c453433d354db8f14._comment b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_4_44e30c4f2dbebd4c453433d354db8f14._comment
new file mode 100644
index 0000000000..1be5ae2d05
--- /dev/null
+++ b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_4_44e30c4f2dbebd4c453433d354db8f14._comment
@@ -0,0 +1,25 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 4"""
+ date="2025-11-05T18:40:52Z"
+ content="""
+Tested a modified p2phttp that uses 2 worker pools, one for the P2P client side
+and one for server side. This means that -J2 actually runs up to 4 threads,
+although with only 2 capabilities, so the change won't affect CPU load. 
+So I tried with -J2 and 4 clients running the loop.
+
+It still stalls much as before, maybe after a bit longer.
+
+It still seems likely that the two workers used per http request
+is the root of the problem. When there are more than annex.jobs concurrent 
+requests, each http response handler calls inAnnexWorker, and so one will
+block. If the corresponding localConnection successfully gets a worker,
+that means one of the other localConnections is blocked. Resulting in a
+running http response handler whose corresponding localConnection is blocked.
+The inverse also seems possible.
+
+If 2 worker pools is not the solution, it seems it would need to instead be
+solved by rearchitecting the http server to not have that separation. Or to
+ensure that getP2PConnection doesn't return until the localConnection has
+allocated its worker. I'll try that next.
+"""]]

analysis
diff --git a/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_1_7c052ef5f57516192cc9e1e03362d719._comment b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_1_7c052ef5f57516192cc9e1e03362d719._comment
new file mode 100644
index 0000000000..264c816e41
--- /dev/null
+++ b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_1_7c052ef5f57516192cc9e1e03362d719._comment
@@ -0,0 +1,20 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 1"""
+ date="2025-11-05T16:52:12Z"
+ content="""
+I think that --debug output from the p2phttp server would be helpful in
+narrowing down if there is a particular operation that causes this hang.
+
+p2phttp has a pool of worker threads, so if a thread stalls out, or
+potentially crashes in some way that is not handled, it can result in all
+subsequent operations hanging. 
+[[!commit 91dbcf0b56ba540a33ea5a79ed52f33e82f4f61b]] is one recent example
+of that; I remember there were some similar problems when initially
+developing it.
+
+-J2 also seems quite low though. With the http server itself using one of
+those threads, all requests get serialized through the second thread. If there is
+any situation where request A needs request B to be made and finish before
+it can succeed, that would deadlock.
+"""]]
diff --git a/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_2_378ce078e1dbd049977910630cfd47ef._comment b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_2_378ce078e1dbd049977910630cfd47ef._comment
new file mode 100644
index 0000000000..e5bbecd679
--- /dev/null
+++ b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_2_378ce078e1dbd049977910630cfd47ef._comment
@@ -0,0 +1,71 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 2"""
+ date="2025-11-05T17:14:29Z"
+ content="""
+I was able to reproduce this fairly quickly with 2 clones, each running
+the loop on the same 5 files, each of which I made 1 mb in size.
+
+Both hung on get, of different files. The tail of the --debug:
+
+	[2025-11-05 13:14:06.255833094] (P2P.IO) [http server] [ThreadId 914] P2P > DATA 1048576
+	[2025-11-05 13:14:06.255872078] (P2P.IO) [http client] [ThreadId 912] P2P < DATA 1048576
+	[2025-11-05 13:14:06.262783513] (P2P.IO) [http server] [ThreadId 914] P2P > VALID
+	[2025-11-05 13:14:06.262897622] (P2P.IO) [http client] [ThreadId 912] P2P < VALID
+	[2025-11-05 13:14:06.262956555] (P2P.IO) [http client] [ThreadId 912] P2P > SUCCESS
+	[2025-11-05 13:14:06.263008765] (P2P.IO) [http server] [ThreadId 914] P2P < SUCCESS
+	[2025-11-05 13:14:06.264030615] (P2P.IO) [http client] [ThreadId 883] P2P > CHECKPRESENT SHA256E-s1048576--06477b9c41f04aaa5c09af0adbd093506435193c868ef56a5510eff0d3c9fc2b
+	[2025-11-05 13:14:06.264088566] (P2P.IO) [http server] [ThreadId 916] P2P < CHECKPRESENT SHA256E-s1048576--06477b9c41f04aaa5c09af0adbd093506435193c868ef56a5510eff0d3c9fc2b
+	[2025-11-05 13:14:06.264183098] (P2P.IO) [http server] [ThreadId 916] P2P > SUCCESS
+	[2025-11-05 13:14:06.264219447] (P2P.IO) [http client] [ThreadId 883] P2P < SUCCESS
+	[2025-11-05 13:14:06.265125295] (P2P.IO) [http client] [ThreadId 920] P2P > GET 0 3 SHA256E-s1048576--06477b9c41f04aaa5c09af0adbd093506435193c868ef56a5510eff0d3c9fc2b
+	[2025-11-05 13:14:06.265177174] (P2P.IO) [http server] [ThreadId 921] P2P < GET 0 3 SHA256E-s1048576--06477b9c41f04aaa5c09af0adbd093506435193c868ef56a5510eff0d3c9fc2b
+	[2025-11-05 13:14:06.265598603] (P2P.IO) [http server] [ThreadId 921] P2P > DATA 1048576
+	[2025-11-05 13:14:06.265639962] (P2P.IO) [http client] [ThreadId 920] P2P < DATA 1048576
+	[2025-11-05 13:14:06.274452543] (P2P.IO) [http server] [ThreadId 921] P2P > VALID
+	[2025-11-05 13:14:06.274505514] (P2P.IO) [http client] [ThreadId 920] P2P < VALID
+	[2025-11-05 13:14:06.274551963] (P2P.IO) [http client] [ThreadId 920] P2P > SUCCESS
+	[2025-11-05 13:14:06.274594385] (P2P.IO) [http server] [ThreadId 921] P2P < SUCCESS
+	[2025-11-05 13:14:06.276689062] (P2P.IO) [http client] [ThreadId 883] P2P > CHECKPRESENT SHA256E-s1048576--81386bfd2b7880ed397001ea5325ee25cfa69cf46d097b7a69b0a31b5e990f8d
+	[2025-11-05 13:14:06.276783864] (P2P.IO) [http server] [ThreadId 924] P2P < CHECKPRESENT SHA256E-s1048576--81386bfd2b7880ed397001ea5325ee25cfa69cf46d097b7a69b0a31b5e990f8d
+	[2025-11-05 13:14:06.276799023] (P2P.IO) [http client] [ThreadId 892] P2P > CHECKPRESENT SHA256E-s1048576--06477b9c41f04aaa5c09af0adbd093506435193c868ef56a5510eff0d3c9fc2b
+	[2025-11-05 13:14:06.276912961] (P2P.IO) [http server] [ThreadId 924] P2P > SUCCESS
+	[2025-11-05 13:14:06.276939743] (P2P.IO) [http client] [ThreadId 883] P2P < SUCCESS
+	[2025-11-05 13:14:06.276944802] (P2P.IO) [http server] [ThreadId 926] P2P < CHECKPRESENT SHA256E-s1048576--06477b9c41f04aaa5c09af0adbd093506435193c868ef56a5510eff0d3c9fc2b
+	[2025-11-05 13:14:06.277069411] (P2P.IO) [http server] [ThreadId 926] P2P > SUCCESS
+	[2025-11-05 13:14:06.277111522] (P2P.IO) [http client] [ThreadId 892] P2P < SUCCESS
+
+A second hang happened with the loops each running on the same 2 files. This time,
+one clone was doing "get 1" and the other clone "drop 1 (locking origin...)" when they hung.
+
+	[2025-11-05 13:28:03.931334099] (P2P.IO) [http server] [ThreadId 8421] P2P > SUCCESS
+	[2025-11-05 13:28:03.931380284] (P2P.IO) [http client] [ThreadId 8424] P2P < SUCCESS
+	[2025-11-05 13:28:03.932204439] (P2P.IO) [http client] [ThreadId 8424] P2P > UNLOCKCONTENT
+	[2025-11-05 13:28:03.932251987] (P2P.IO) [http server] [ThreadId 8421] P2P < UNLOCKCONTENT
+	[2025-11-05 13:28:04.252596865] (P2P.IO) [http client] [ThreadId 8427] P2P > CHECKPRESENT SHA256E-s1048576--4ad843113f3ee799f2ff834a80bb2aaff35d5babd68395339406671c50e99f6a
+	[2025-11-05 13:28:04.252748136] (P2P.IO) [http server] [ThreadId 8429] P2P < CHECKPRESENT SHA256E-s1048576--4ad843113f3ee799f2ff834a80bb2aaff35d5babd68395339406671c50e99f6a
+	[2025-11-05 13:28:04.252918516] (P2P.IO) [http server] [ThreadId 8429] P2P > SUCCESS
+	[2025-11-05 13:28:04.253026869] (P2P.IO) [http client] [ThreadId 8427] P2P < SUCCESS
+
+A third hang, again with 2 files, and both hung on "drop 1 (locking
+origin...)"
+
+	[2025-11-05 13:34:34.413288012] (P2P.IO) [http client] [ThreadId 16147] P2P > CHECKPRESENT SHA256E-s1048576--c644050a65e9e93a43f5b21e1188e4e7a406057d84102c78fce0007ceb875c69
+	[2025-11-05 13:34:34.413341843] (P2P.IO) [http server] [ThreadId 16172] P2P < CHECKPRESENT SHA256E-s1048576--c644050a65e9e93a43f5b21e1188e4e7a406057d84102c78fce0007ceb875c69
+	[2025-11-05 13:34:34.413415351] (P2P.IO) [http server] [ThreadId 16172] P2P > SUCCESS
+	[2025-11-05 13:34:34.413442692] (P2P.IO) [http client] [ThreadId 16147] P2P < SUCCESS
+	[2025-11-05 13:34:34.414251817] (P2P.IO) [http client] [ThreadId 16176] P2P > GET 0 2 SHA256E-s1048576--c644050a65e9e93a43f5b21e1188e4e7a406057d84102c78fce0007ceb875c69
+	[2025-11-05 13:34:34.4142963] (P2P.IO) [http server] [ThreadId 16177] P2P < GET 0 2 SHA256E-s1048576--c644050a65e9e93a43f5b21e1188e4e7a406057d84102c78fce0007ceb875c69
+	[2025-11-05 13:34:34.414731756] (P2P.IO) [http server] [ThreadId 16177] P2P > DATA 1048576
+	[2025-11-05 13:34:34.414777692] (P2P.IO) [http client] [ThreadId 16176] P2P < DATA 1048576
+	[2025-11-05 13:34:34.421258237] (P2P.IO) [http server] [ThreadId 16177] P2P > VALID
+	[2025-11-05 13:34:34.421322858] (P2P.IO) [http client] [ThreadId 16176] P2P < VALID
+	[2025-11-05 13:34:34.421358204] (P2P.IO) [http client] [ThreadId 16176] P2P > SUCCESS
+	[2025-11-05 13:34:34.421390053] (P2P.IO) [http server] [ThreadId 16177] P2P < SUCCESS
+	[2025-11-05 13:34:34.764709623] (P2P.IO) [http client] [ThreadId 16188] P2P > LOCKCONTENT SHA256E-s1048576--4ad843113f3ee799f2ff834a80bb2aaff35d5babd68395339406671c50e99f6a
+
+Here the P2P protocol client inside the http server got a worker thread, but then apparently
+the http response handler stalled out. That's different from the other 2 debug logs where
+the protocol client was able to send a response. I think in the other 2 debug logs,
+the P2P protocol client then stalls getting a worker thread.
+"""]]
diff --git a/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_3_8cfb1ac7cceb3a3d6345810fd0741d7f._comment b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_3_8cfb1ac7cceb3a3d6345810fd0741d7f._comment
new file mode 100644
index 0000000000..a552835ab9
--- /dev/null
+++ b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients/comment_3_8cfb1ac7cceb3a3d6345810fd0741d7f._comment
@@ -0,0 +1,33 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 3"""
+ date="2025-11-05T17:48:02Z"
+ content="""
+Aha! We have two things both calling inAnnexWorker:
+
+1. localConnection, handling the P2P protocol client
+   side of things inside the http server. (Or in the case of a proxy,
+   other functions that do a similar thing.)
+2. http response handlers, which run the P2P protocol server side.
+
+For each http request, both of these run asynchronously.
+
+So, with -J2, if two http requests happen at the same time, and
+localConnection wins both races, the two worker threads are both stalled
+waiting for a response from the P2P server side, which is itself blocked
+waiting for a worker thread. Or perhaps both of the http response
+handlers win, with a similar deadlock.
+
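+To spell out the -J2 case as a hypothetical interleaving (my notation,
+not actual --debug output):
+
+	request 1: localConnection       -> takes worker thread 1
+	request 2: localConnection       -> takes worker thread 2
+	request 1: http response handler -> blocked waiting for a worker
+	request 2: http response handler -> blocked waiting for a worker
+
+Both workers are now waiting for responses that only the blocked
+handlers can produce.
+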
+Maybe it could even happen that the localConnection for one request wins,
+as well as the response handler for the other request?
+
+(And higher -J numbers would still have the same problem
+when there are more clients. The docs for -J are also a bit wrong;
+they say that the http server uses 1 thread itself, but it can really
+use any number of threads, since localConnection does run
+inAnnexWorker in an async action.)
+
+Anyway, if this analysis is correct, the fix is surely to have 2 worker
+thread pools, one for the P2P protocol client side, and one for the P2P
+protocol server side.
+"""]]

Add openneuro tag
diff --git a/doc/bugs/S3_remote_fails_for_GCP_with_multiple_prefixes.mdwn b/doc/bugs/S3_remote_fails_for_GCP_with_multiple_prefixes.mdwn
index da4e6c9bce..923209bb04 100644
--- a/doc/bugs/S3_remote_fails_for_GCP_with_multiple_prefixes.mdwn
+++ b/doc/bugs/S3_remote_fails_for_GCP_with_multiple_prefixes.mdwn
@@ -93,3 +93,5 @@ initremote: 1 failed
 ### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
 
 Thanks for all your great work, Joey!
+
+[[!tag projects/openneuro]]

fix address example
diff --git a/doc/tips/peer_to_peer_network_with_iroh.mdwn b/doc/tips/peer_to_peer_network_with_iroh.mdwn
index e20d9a6241..2259ba7af1 100644
--- a/doc/tips/peer_to_peer_network_with_iroh.mdwn
+++ b/doc/tips/peer_to_peer_network_with_iroh.mdwn
@@ -80,7 +80,7 @@ Here's how it all looks:
 	remote: Compressing objects: 100% (7/7), done.
 	remote: Total 8 (delta 0), reused 0 (delta 0)
 	Unpacking objects: 100% (8/8), done.
-	From tor-annex::wa3i6wgttmworwli.onion:5162
+	From p2p-annex::iroh:endpointadroxtad5dj5vaweczqnmkhk2sb7dmysazljjul6zeug7bexymejaaa
 	   452db22..a894c60  git-annex  -> peer1/git-annex
 	   c0ac431..44ca7f6  master     -> peer1/master
 	

remove now-obsolete warnings
diff --git a/doc/tips/peer_to_peer_network_with_iroh.mdwn b/doc/tips/peer_to_peer_network_with_iroh.mdwn
index 6dfdbf325c..e20d9a6241 100644
--- a/doc/tips/peer_to_peer_network_with_iroh.mdwn
+++ b/doc/tips/peer_to_peer_network_with_iroh.mdwn
@@ -13,13 +13,6 @@ To use this, you need a few things:
   executable.
 * You also need to install [Magic Wormhole](https://github.com/warner/magic-wormhole) -
   here are [the installation instructions](https://magic-wormhole.readthedocs.io/en/latest/welcome.html#installation).
-
-*Important:*
-
-* The installation process must make a `wormhole` executable available
-  somewhere on your `$PATH`.  Some distributions may only install executables
-  which reference the Python version, e.g. `wormhole-2.7`, in which case you
-  will need to manually create a symlink (and maybe file a bug with your distribution).
 * You need git-annex version 10.20251103 or newer. Older versions of git-annex
   unfortunately had a bug that prevents this process from working correctly.
 
diff --git a/doc/tips/peer_to_peer_network_with_tor.mdwn b/doc/tips/peer_to_peer_network_with_tor.mdwn
index 90f000c197..9d2d9995ba 100644
--- a/doc/tips/peer_to_peer_network_with_tor.mdwn
+++ b/doc/tips/peer_to_peer_network_with_tor.mdwn
@@ -16,16 +16,6 @@ To use this, you need to get Tor installed and running. See
 You also need to install [Magic Wormhole](https://github.com/warner/magic-wormhole) -
 here are [the installation instructions](https://magic-wormhole.readthedocs.io/en/latest/welcome.html#installation).
 
-*Important:*
-
-* The installation process must make a `wormhole` executable available
-  somewhere on your `$PATH`.  Some distributions may only install executables
-  which reference the Python version, e.g. `wormhole-2.7`, in which case you
-  will need to manually create a symlink (and maybe file a bug with your distribution).
-
-* You need git-annex version 6.20180705 or newer. Older versions of git-annex
-  unfortunately had a bug that prevents this process from working correctly.
-
 ## pairing two repositories
 
 You have two git-annex repositories on different computers, and want to
diff --git a/doc/tips/peer_to_peer_network_with_tor/comment_6_5237c2b408dc1841ca01a51084702b90._comment b/doc/tips/peer_to_peer_network_with_tor/comment_6_5237c2b408dc1841ca01a51084702b90._comment
new file mode 100644
index 0000000000..d556240ce9
--- /dev/null
+++ b/doc/tips/peer_to_peer_network_with_tor/comment_6_5237c2b408dc1841ca01a51084702b90._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""Re: Issue on openSUSE with Tor's requirement for Python 2.7 """
+ date="2025-11-03T19:44:09Z"
+ content="""
+Thanks for that. Since that issue got fixed in 2020, it seems unnecessary to
+complicate this tip with the warning about it, so I've removed your
+addition now.
+"""]]

git-annex version for p2p --pair fix for iroh
diff --git a/CHANGELOG b/CHANGELOG
index 527c5d46dd..a580a28dae 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -1,4 +1,4 @@
-git-annex (10.20251030) UNRELEASED; urgency=medium
+git-annex (10.20251103) UNRELEASED; urgency=medium
 
   * p2p --pair: Fix to work with external P2P networks.
   * remotedaemon: Avoid crashing when run with --debug.
diff --git a/doc/tips/peer_to_peer_network_with_iroh.mdwn b/doc/tips/peer_to_peer_network_with_iroh.mdwn
index ce162ffcf5..6dfdbf325c 100644
--- a/doc/tips/peer_to_peer_network_with_iroh.mdwn
+++ b/doc/tips/peer_to_peer_network_with_iroh.mdwn
@@ -20,6 +20,8 @@ To use this, you need a few things:
   somewhere on your `$PATH`.  Some distributions may only install executables
   which reference the Python version, e.g. `wormhole-2.7`, in which case you
   will need to manually create a symlink (and maybe file a bug with your distribution).
+* You need git-annex version 10.20251103 or newer. Older versions of git-annex
+  unfortunately had a bug that prevents this process from working correctly.
 
 ## pairing two repositories
 
diff --git a/git-annex.cabal b/git-annex.cabal
index 701c4c2530..087182bcbf 100644
--- a/git-annex.cabal
+++ b/git-annex.cabal
@@ -1,5 +1,5 @@
 Name: git-annex
-Version: 10.20251029
+Version: 10.20251103
 Cabal-Version: 1.12
 License: AGPL-3
 Maintainer: Joey Hess <id@joeyh.name>

dumbpipe versioning
diff --git a/doc/special_remotes/p2p/git-annex-p2p-iroh b/doc/special_remotes/p2p/git-annex-p2p-iroh
index b1969c7380..83be015c6f 100755
--- a/doc/special_remotes/p2p/git-annex-p2p-iroh
+++ b/doc/special_remotes/p2p/git-annex-p2p-iroh
@@ -1,13 +1,10 @@
 #!/bin/sh
 # Allows git-annex to use iroh for P2P connections.
 #
-# This uses a modified version of iroh's dumbpipe program, adding the
-# generate-ticket command. This pull request has the necessary changes:
+# This uses iroh's dumbpipe program. It needs a version with the
+# generate-ticket command, which was added in this pull request:
 # https://github.com/n0-computer/dumbpipe/pull/86
 #
-# Quality: experimental. Has worked at least twice, but there are known and
-# unknown bugs.
-#
 # Copyright 2025 Joey Hess; licenced under the GNU GPL version 3 or higher.
 
 set -e
diff --git a/doc/tips/peer_to_peer_network_with_iroh.mdwn b/doc/tips/peer_to_peer_network_with_iroh.mdwn
index d743d8f46e..ce162ffcf5 100644
--- a/doc/tips/peer_to_peer_network_with_iroh.mdwn
+++ b/doc/tips/peer_to_peer_network_with_iroh.mdwn
@@ -8,7 +8,7 @@ It can be used with git-annex, to connect together two repositories.
 To use this, you need a few things:
 
 * Install [dumbpipe](https://www.dumbpipe.dev/). This will be used to talk
-  over Iroh.
+  over Iroh. Note that this needs version 0.33 or newer of dumbpipe.
 * Download [[special_remotes/p2p/git-annex-p2p-iroh]] and make the script
   executable.
 * You also need to install [Magic Wormhole](https://github.com/warner/magic-wormhole) -

add iroh tip
Adapted from the tor tip.
Also, removed some out of date stuff from the tor tip.
diff --git a/doc/tips/peer_to_peer_network_with_iroh.mdwn b/doc/tips/peer_to_peer_network_with_iroh.mdwn
new file mode 100644
index 0000000000..d743d8f46e
--- /dev/null
+++ b/doc/tips/peer_to_peer_network_with_iroh.mdwn
@@ -0,0 +1,139 @@
+[Iroh](https://www.iroh.computer/) is a peer to peer protocol that can
+connect any two devices on the planet -- fast! 
+
+It can be used with git-annex, to connect together two repositories.
+
+## dependencies
+
+To use this, you need a few things:
+
+* Install [dumbpipe](https://www.dumbpipe.dev/). This will be used to talk
+  over Iroh.
+* Download [[special_remotes/p2p/git-annex-p2p-iroh]] and make the script
+  executable.
+* You also need to install [Magic Wormhole](https://github.com/warner/magic-wormhole) -
+  here are [the installation instructions](https://magic-wormhole.readthedocs.io/en/latest/welcome.html#installation).
+
+*Important:*
+
+* The installation process must make a `wormhole` executable available
+  somewhere on your `$PATH`.  Some distributions may only install executables
+  which reference the Python version, e.g. `wormhole-2.7`, in which case you
+  will need to manually create a symlink (and maybe file a bug with your distribution).
+
+## pairing two repositories
+
+You have two git-annex repositories on different computers, and want to
+connect them together over Iroh so they share their contents. Or, you and a
+friend want to connect your repositories together. Pairing is an easy way
+to accomplish this.
+
+In each git-annex repository, run these commands:
+
+	git annex p2p --enable iroh
+	git annex remotedaemon
+
+Now git-annex is listening for connections on Iroh, but
+it will only talk to peers after pairing with them.
+
+In both repositories, run this command:
+
+	git annex p2p --pair
+
+This will print out a pairing code, like "11-incredible-tumeric",
+and prompt for you to enter the other repository's pairing code.
+
+So you have to get in contact with your friend to exchange codes.
+See the section below "how to exchange pairing codes" for tips on
+how to do that securely.
+
+Once the pairing codes are exchanged, the two repositories will be
+connected to one-another via Iroh. Each will have a git remote, with a name
+like "peer1", which connects to the other repository. 
+
+Then, you can run commands like `git annex sync peer1 --content` to sync
+with the paired repository.
+
+Pairing connects just two repositories, but you can repeat the process to
+pair with as many other repositories as you like, in order to build up
+larger networks of repositories.
+
+## example session
+
+Here's how it all looks:
+
+	$ git annex p2p --enable iroh
+	p2p enable iroh ok
+	$ git annex remotedaemon
+	$ git annex p2p --pair
+	p2p pair peer1 (using Magic Wormhole) 
+	
+	This repository's pairing code is: 11-incredible-tumeric
+	
+	Enter the other repository's pairing code: 1-revenue-icecream
+	Exchanging pairing data...
+	Successfully exchanged pairing data. Connecting to peer1...
+	ok
+	$ git annex sync peer1 --content
+		commit 
+	On branch master
+	nothing to commit, working tree clean
+	ok
+	pull peer1 
+	remote: Enumerating objects: 10, done.
+	remote: Counting objects: 100% (10/10), done.
+	remote: Compressing objects: 100% (7/7), done.
+	remote: Total 8 (delta 0), reused 0 (delta 0)
+	Unpacking objects: 100% (8/8), done.
+	From tor-annex::wa3i6wgttmworwli.onion:5162
+	   452db22..a894c60  git-annex  -> peer1/git-annex
+	   c0ac431..44ca7f6  master     -> peer1/master
+	
+	Updating c0ac431..44ca7f6
+	Fast-forward
+	 amazing_file | 1 +
+	 1 file changed, 1 insertion(+)
+	 create mode 120000 amazing_file
+	ok
+	(merging peer1/git-annex into git-annex...)
+	get amazing_file (from peer1...) 
+	(checksum...) ok
+
+## how to exchange pairing codes
+
+When pairing with a friend's repository, you have to exchange
+pairing codes. How to do this securely?
+
+The pairing codes can only be used once, so it's ok to exchange them in
+a way that someone else can access later. However, if someone can overhear
+your exchange of codes in real time, they could trick you into pairing
+with them.
+
+Here are some suggestions for how to exchange the codes,
+with the most secure ways first:
+
+* In person.
+* In an encrypted message (gpg signed email, Off The Record (OTR)
+  conversation, etc).
+* By a voice phone call.
+
+## starting git-annex remotedaemon on boot
+
+Notice the `git annex remotedaemon` being run in the above examples.
+That command listens for incoming Iroh connections so that other peers
+can connect to your repository over Iroh.
+
+So, you may want to arrange for the remotedaemon to be started on boot.
+You can do that with a simple cron job:
+
+	@reboot cd ~/myannexrepo && git annex remotedaemon
+
+If you use the git-annex assistant, and have it auto-starting on boot, it
+will take care of starting the remotedaemon for you.
+
+## speed of large transfers
+
+This should be fast! Iroh often gets peers directly connected to
+one-another, handling the necessary punching through firewalls and NAT.
+In some cases, when Iroh is not able to do that, traffic will be sent via a
+relay, which could be slower.
diff --git a/doc/tips/peer_to_peer_network_with_tor.mdwn b/doc/tips/peer_to_peer_network_with_tor.mdwn
index 2a9287a5a8..90f000c197 100644
--- a/doc/tips/peer_to_peer_network_with_tor.mdwn
+++ b/doc/tips/peer_to_peer_network_with_tor.mdwn
@@ -3,6 +3,9 @@ git-annex has recently gotten support for running as a
 and easy to use way to connect repositories in different
 locations. No account on a central server is needed; it's peer-to-peer.
 
+(See also [[peer_to_peer_network_with_iroh]] for something similar but
+faster if you don't need all the layered security of Tor.)
+
 ## dependencies
 
 To use this, you need to get Tor installed and running. See
@@ -15,15 +18,12 @@ here are [the installation instructions](https://magic-wormhole.readthedocs.io/e
 
 *Important:*
 
-* At the time of writing, you need to install Magic Wormhole under Python 2,
-  because [Tor support is only available under python2.7](https://magic-wormhole.readthedocs.io/en/latest/tor.html).
-
 * The installation process must make a `wormhole` executable available
   somewhere on your `$PATH`.  Some distributions may only install executables
   which reference the Python version, e.g. `wormhole-2.7`, in which case you
   will need to manually create a symlink (and maybe file a bug with your distribution).
 
-* You need git-annex version 6.20180705. Older versions of git-annex
+* You need git-annex version 6.20180705 or newer. Older versions of git-annex
   unfortunately had a bug that prevents this process from working correctly.
 
 ## pairing two repositories

p2p --pair: Fix to work with external P2P networks
When storing a P2P authtoken, it needs to have our local address, not the
address of the peer.
diff --git a/CHANGELOG b/CHANGELOG
index a6490580ce..b87b05e4cd 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -1,3 +1,9 @@
+git-annex (10.20251030) UNRELEASED; urgency=medium
+
+  * p2p --pair: Fix to work with external P2P networks.
+
+ -- Joey Hess <id@joeyh.name>  Mon, 03 Nov 2025 14:02:46 -0400
+
 git-annex (10.20251029) upstream; urgency=medium
 
   * Support ssh remotes with '#' and '?' in the path to the repository,
diff --git a/Command/P2P.hs b/Command/P2P.hs
index 491355507c..2aa3f674cf 100644
--- a/Command/P2P.hs
+++ b/Command/P2P.hs
@@ -263,7 +263,7 @@ wormholePairing remotename ouraddrs ui = do
 						Left _e -> return ReceiveFailed
 						Right ls -> maybe 
 							(return ReceiveFailed)
-							(finishPairing 100 remotename ourhalf)
+							(finishPairing 100 remotename ourhalf ouraddrs)
 							(deserializePairData ls)
 
 -- | Allow the peer we're pairing with to authenticate to us,
@@ -276,8 +276,8 @@ wormholePairing remotename ouraddrs ui = do
 -- Since we're racing the peer as they do the same, the first try is likely
 -- to fail to authenticate. Can retry any number of times, to avoid the
 -- users needing to redo the whole process.
-finishPairing :: Int -> RemoteName -> HalfAuthToken -> PairData -> Annex PairingResult
-finishPairing retries remotename (HalfAuthToken ourhalf) (PairData (HalfAuthToken theirhalf) theiraddrs) = do
+finishPairing :: Int -> RemoteName -> HalfAuthToken -> [P2PAddress] -> PairData -> Annex PairingResult
+finishPairing retries remotename (HalfAuthToken ourhalf) ouraddrs (PairData (HalfAuthToken theirhalf) theiraddrs) = do
 	case (toAuthToken (ourhalf <> theirhalf), toAuthToken (theirhalf <> ourhalf)) of
 		(Just ourauthtoken, Just theirauthtoken) -> do
 			liftIO $ putStrLn $ "Successfully exchanged pairing data. Connecting to " ++ remotename ++  "..."
@@ -289,9 +289,9 @@ finishPairing retries remotename (HalfAuthToken ourhalf) (PairData (HalfAuthToke
 		liftIO $ threadDelaySeconds (Seconds 2)
 		liftIO $ putStrLn $ "Unable to connect to " ++ remotename ++ ". Retrying..."
 		go (n-1) theiraddrs theirauthtoken ourauthtoken
-	go n (addr:rest) theirauthtoken ourauthtoken = do
-		storeP2PAuthToken addr ourauthtoken
-		r <- setupLink remotename (P2PAddressAuth addr theirauthtoken)
+	go n (theiraddr:rest) theirauthtoken ourauthtoken = do
+		forM_ ouraddrs $ \ouraddr -> storeP2PAuthToken ouraddr ourauthtoken
+		r <- setupLink remotename (P2PAddressAuth theiraddr theirauthtoken)
 		case r of
 			LinkSuccess -> return PairSuccess
 			_ -> go n rest theirauthtoken ourauthtoken
diff --git a/doc/bugs/p2p_--pair_seems_broken_for_iroh.mdwn b/doc/bugs/p2p_--pair_seems_broken_for_iroh.mdwn
index 7c4de04a2f..3a2946881c 100644
--- a/doc/bugs/p2p_--pair_seems_broken_for_iroh.mdwn
+++ b/doc/bugs/p2p_--pair_seems_broken_for_iroh.mdwn
@@ -3,3 +3,9 @@ magic wormhole step.
 
 `git-annex p2p --link` does work with the iroh script, so this is probably
 a bug in git-annex. --[[Joey]]
+
+> --debug shows the problem is `AUTH-FAILURE`. And it appears that the
+> remotedaemon's loadP2PAuthTokens is not loading any auth tokens after
+> pairing writes one to `.git/annex/creds/p2pauth`. The written auth token
+> incorrectly has the address of the peer, rather than the local repository.
+> [[fixed|done]] --[[Joey]]

diff --git a/doc/bugs/S3_remote_fails_for_GCP_with_multiple_prefixes.mdwn b/doc/bugs/S3_remote_fails_for_GCP_with_multiple_prefixes.mdwn
new file mode 100644
index 0000000000..da4e6c9bce
--- /dev/null
+++ b/doc/bugs/S3_remote_fails_for_GCP_with_multiple_prefixes.mdwn
@@ -0,0 +1,95 @@
+### Please describe the problem.
+initremote of an S3 special remote with a GCP object storage bucket and a fileprefix fails if another repo with a different fileprefix has already been configured in the same bucket.
+
+### What steps will reproduce the problem?
+With two git-annex repos and an initially empty bucket configured without versioning or hierarchical namespaces:
+
+For the first repo:
+
+	git-annex --debug initremote s3-BACKUP type=S3 partsize=1GiB fileprefix=ds001263/ encryption=none public=no bucket=openneuro-nell-test host=storage.googleapis.com storageclass=ARCHIVE cost=400
+
+For a second repo:
+
+	git-annex --debug initremote s3-BACKUP type=S3 partsize=1GiB fileprefix=ds001264/ encryption=none public=no bucket=openneuro-nell-test host=storage.googleapis.com storageclass=ARCHIVE cost=400
+
+The first initremote will succeed and configure the remote. The second attempts to create a bucket and fails because it already exists. Manually populating remote.log and annex-uuid in the bucket allows this remote to function after enableremote.
+
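+Roughly, the manual workaround looks like this (a sketch; exact
+commands may vary, and the matching remote.log entry in the git-annex
+branch is not shown):
+
+	uuid=$(uuidgen)   # or reuse the uuid recorded for the first repo
+	printf '%s' "$uuid" | gsutil cp - gs://openneuro-nell-test/ds001264/annex-uuid
+	git annex enableremote s3-BACKUP
+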
+### What version of git-annex are you using? On what operating system?
+
+10.20250929 on Fedora 43.
+
+### Please provide any additional information below.
+
+[[!format sh """
+# If you can, paste a complete transcript of the problem occurring here.
+# If the problem is with the git-annex assistant, paste in .git/annex/daemon.log
+
+initremote s3-BACKUP [2025-11-02 15:07:58.441842173] (Utility.Process) process [3914104] read: git ["--git-dir=.git","--work-tree=.","--literal-pathspecs","-c","annex.debug=true","show-ref","git-annex"]
+[2025-11-02 15:07:58.442407721] (Utility.Process) process [3914104] done ExitSuccess
+[2025-11-02 15:07:58.442547945] (Utility.Process) process [3914105] read: git ["--git-dir=.git","--work-tree=.","--literal-pathspecs","-c","annex.debug=true","show-ref","--hash","refs/heads/git-annex"]
+[2025-11-02 15:07:58.443007509] (Utility.Process) process [3914105] done ExitSuccess
+[2025-11-02 15:07:58.443341839] (Utility.Process) process [3914106] chat: git ["--git-dir=.git","--work-tree=.","--literal-pathspecs","-c","annex.debug=true","cat-file","--batch"]
+[2025-11-02 15:07:58.445120803] (Remote.S3) String to sign: "GET\n\n\nSun, 02 Nov 2025 23:07:58 GMT\n/openneuro-nell-test/?location"
+[2025-11-02 15:07:58.445148464] (Remote.S3) Host: "openneuro-nell-test.storage.googleapis.com"
+[2025-11-02 15:07:58.445161875] (Remote.S3) Path: "/"
+[2025-11-02 15:07:58.445173305] (Remote.S3) Query string: "location"
+[2025-11-02 15:07:58.445188565] (Remote.S3) Header: [("Date","Sun, 02 Nov 2025 23:07:58 GMT"),("User-Agent","git-annex/10.20250929")]
+[2025-11-02 15:07:58.635355111] (Remote.S3) Response status: Status {statusCode = 403, statusMessage = "Forbidden"}
+[2025-11-02 15:07:58.635400652] (Remote.S3) Response header 'Content-Type': 'application/xml; charset=UTF-8'
+[2025-11-02 15:07:58.635424923] (Remote.S3) Response header 'X-GUploader-UploadID': 'AOCedOEofSsg_ed3IPSuAQerc3FtHvXPALQhf2W1S26R_51sPNFu-0-ZozTZuBqhr5pV-3fK'
+[2025-11-02 15:07:58.635441664] (Remote.S3) Response header 'Content-Length': '298'
+[2025-11-02 15:07:58.635455574] (Remote.S3) Response header 'Date': 'Sun, 02 Nov 2025 23:07:58 GMT'
+[2025-11-02 15:07:58.635469194] (Remote.S3) Response header 'Expires': 'Sun, 02 Nov 2025 23:07:58 GMT'
+[2025-11-02 15:07:58.635481435] (Remote.S3) Response header 'Cache-Control': 'private, max-age=0'
+[2025-11-02 15:07:58.635495745] (Remote.S3) Response header 'Server': 'UploadServer'
+(checking bucket...) [2025-11-02 15:07:58.635780314] (Remote.S3) String to sign: "GET\n\n\nSun, 02 Nov 2025 23:07:58 GMT\n/openneuro-nell-test/ds001264/annex-uuid"
+[2025-11-02 15:07:58.635796454] (Remote.S3) Host: "openneuro-nell-test.storage.googleapis.com"
+[2025-11-02 15:07:58.635818565] (Remote.S3) Path: "/ds001264/annex-uuid"
+[2025-11-02 15:07:58.635828655] (Remote.S3) Query string: ""
+[2025-11-02 15:07:58.635840346] (Remote.S3) Header: [("Date","Sun, 02 Nov 2025 23:07:58 GMT"),("Authorization","..."),("User-Agent","git-annex/10.20250929")]
+[2025-11-02 15:07:58.685220703] (Remote.S3) Response status: Status {statusCode = 404, statusMessage = "Not Found"}
+[2025-11-02 15:07:58.685251934] (Remote.S3) Response header 'Content-Type': 'application/xml; charset=UTF-8'
+[2025-11-02 15:07:58.685268695] (Remote.S3) Response header 'X-GUploader-UploadID': 'AOCedOHoPd6zdBzYMMr-ON5aWjlDBbGd7ZIaf_Iit8Gt74l3aRT-Ty4Fayk9Tx9tlBMYuMKH'
+[2025-11-02 15:07:58.685280535] (Remote.S3) Response header 'Content-Length': '201'
+[2025-11-02 15:07:58.685290386] (Remote.S3) Response header 'Date': 'Sun, 02 Nov 2025 23:07:58 GMT'
+[2025-11-02 15:07:58.685299996] (Remote.S3) Response header 'Expires': 'Sun, 02 Nov 2025 23:07:58 GMT'
+[2025-11-02 15:07:58.685310096] (Remote.S3) Response header 'Cache-Control': 'private, max-age=0'
+[2025-11-02 15:07:58.685319476] (Remote.S3) Response header 'Server': 'UploadServer'
+[2025-11-02 15:07:58.685365338] (Remote.S3) String to sign: "GET\n\n\nSun, 02 Nov 2025 23:07:58 GMT\n/openneuro-nell-test/"
+[2025-11-02 15:07:58.685376888] (Remote.S3) Host: "openneuro-nell-test.storage.googleapis.com"
+[2025-11-02 15:07:58.685386298] (Remote.S3) Path: "/"
+[2025-11-02 15:07:58.685394309] (Remote.S3) Query string: ""
+[2025-11-02 15:07:58.685403479] (Remote.S3) Header: [("Date","Sun, 02 Nov 2025 23:07:58 GMT"),("Authorization","..."),("User-Agent","git-annex/10.20250929")]
+[2025-11-02 15:07:58.725819533] (Remote.S3) Response status: Status {statusCode = 200, statusMessage = "OK"}
+[2025-11-02 15:07:58.725847874] (Remote.S3) Response header 'Content-Type': 'application/xml; charset=UTF-8'
+[2025-11-02 15:07:58.725861764] (Remote.S3) Response header 'X-GUploader-UploadID': 'AOCedOGjVuiFnd4UNsb069xhhamfE7ttizD8j1W9S7fGeUBqVoPxKff00jMdZyvUGFo90z_N'
+[2025-11-02 15:07:58.725873324] (Remote.S3) Response header 'x-goog-metageneration': '3'
+[2025-11-02 15:07:58.725883625] (Remote.S3) Response header 'Content-Length': '784'
+[2025-11-02 15:07:58.725893065] (Remote.S3) Response header 'Date': 'Sun, 02 Nov 2025 23:07:58 GMT'
+[2025-11-02 15:07:58.725907215] (Remote.S3) Response header 'Expires': 'Sun, 02 Nov 2025 23:07:58 GMT'
+[2025-11-02 15:07:58.725983778] (Remote.S3) Response header 'Cache-Control': 'private, max-age=0'
+[2025-11-02 15:07:58.72604907] (Remote.S3) Response header 'Server': 'UploadServer'
+(creating bucket in US...) [2025-11-02 15:07:58.726309948] (Remote.S3) String to sign: "PUT\n\n\nSun, 02 Nov 2025 23:07:58 GMT\n/openneuro-nell-test/"
+[2025-11-02 15:07:58.726329498] (Remote.S3) Host: "openneuro-nell-test.storage.googleapis.com"
+[2025-11-02 15:07:58.726341689] (Remote.S3) Path: "/"
+[2025-11-02 15:07:58.726350049] (Remote.S3) Query string: ""
+[2025-11-02 15:07:58.726366349] (Remote.S3) Header: [("Date","Sun, 02 Nov 2025 23:07:58 GMT"),("Authorization","..."),("User-Agent","git-annex/10.20250929")]
+[2025-11-02 15:07:58.75553637] (Remote.S3) Response status: Status {statusCode = 409, statusMessage = "Conflict"}
+[2025-11-02 15:07:58.755576871] (Remote.S3) Response header 'Content-Type': 'application/xml; charset=UTF-8'
+[2025-11-02 15:07:58.755590572] (Remote.S3) Response header 'X-GUploader-UploadID': 'AOCedOFn-ViFqzgcWiIW6Pun3lCz6lMnBFrRxyRpyC9LIdnv9j20Yz2Cd7MnuXIcNxZ-j6_J'
+[2025-11-02 15:07:58.755603132] (Remote.S3) Response header 'Content-Length': '421'
+[2025-11-02 15:07:58.755610962] (Remote.S3) Response header 'Vary': 'Origin'
+[2025-11-02 15:07:58.755618952] (Remote.S3) Response header 'Date': 'Sun, 02 Nov 2025 23:07:58 GMT'
+[2025-11-02 15:07:58.755628153] (Remote.S3) Response header 'Server': 'UploadServer'
+
+git-annex: S3Error {s3StatusCode = Status {statusCode = 409, statusMessage = "Conflict"}, s3ErrorCode = "BucketNameUnavailable", s3ErrorMessage = "The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.", s3ErrorResource = Nothing, s3ErrorHostId = Nothing, s3ErrorAccessKeyId = Nothing, s3ErrorStringToSign = Nothing, s3ErrorBucket = Nothing, s3ErrorEndpointRaw = Nothing, s3ErrorEndpoint = Nothing}
+failed
+[2025-11-02 15:07:58.755843459] (Utility.Process) process [3914106] done ExitSuccess
+initremote: 1 failed
+
+# End of transcript or log.
+"""]]
+
+### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
+
+Thanks for all your great work, Joey!

removed
diff --git a/doc/special_remotes/directory/comment_25_11ce2a9f48ab9a043cc90d125e796685._comment b/doc/special_remotes/directory/comment_25_11ce2a9f48ab9a043cc90d125e796685._comment
deleted file mode 100644
index 35fa9399e1..0000000000
--- a/doc/special_remotes/directory/comment_25_11ce2a9f48ab9a043cc90d125e796685._comment
+++ /dev/null
@@ -1,14 +0,0 @@
-[[!comment format=mdwn
- username="hatzka"
- avatar="http://cdn.libravatar.org/avatar/446138196d9d09c19f57e739e9786a99"
- subject="a potentially bad idea"
- date="2025-10-31T00:20:54Z"
- content="""
-I have some git-annex repositories that are large enough that the objects don't fit on my SSD. I want to keep the repositories themselves on my SSD, because they also contain small versioned files that benefit from fast access. And I want git-annex to know which files are on which physical drives, so that I don't have to fsck if a drive fails (even with `--fast` it takes a while, and if one drive already failed I would rather avoid using the rest unnecessarily).
-
-I think it should be possible to meet all of these requirements by mounting an overlayfs over the `.git/annex/objects` folder. The writable `upperdir` would be on the same device as the rest of the repository; the read-only lower layers would be the hard drives, which I would also make accessible to git-annex as directory special remotes. This way, I could add objects to the repository normally, then move them to the hard drives without making them inaccessible.
-
-Obviously for this to be safe I would need to untrust the repository itself, as otherwise git-annex would see two real copies where in fact there was only one. (I'm fine with not being able to permanently store anything only on the SSD.) The other obstacle I've run into is that directory remotes don't have the same layout as an objects folder.
-
-Is this a terrible idea? Is there a better way? And, assuming the answers are \"not too terrible\" and \"not really\", how can I set up a directory special remote so that this will work?
-"""]]

Added a comment: a potentially bad idea
diff --git a/doc/special_remotes/directory/comment_25_11ce2a9f48ab9a043cc90d125e796685._comment b/doc/special_remotes/directory/comment_25_11ce2a9f48ab9a043cc90d125e796685._comment
new file mode 100644
index 0000000000..35fa9399e1
--- /dev/null
+++ b/doc/special_remotes/directory/comment_25_11ce2a9f48ab9a043cc90d125e796685._comment
@@ -0,0 +1,14 @@
+[[!comment format=mdwn
+ username="hatzka"
+ avatar="http://cdn.libravatar.org/avatar/446138196d9d09c19f57e739e9786a99"
+ subject="a potentially bad idea"
+ date="2025-10-31T00:20:54Z"
+ content="""
+I have some git-annex repositories that are large enough that the objects don't fit on my SSD. I want to keep the repositories themselves on my SSD, because they also contain small versioned files that benefit from fast access. And I want git-annex to know which files are on which physical drives, so that I don't have to fsck if a drive fails (even with `--fast` it takes a while, and if one drive already failed I would rather avoid using the rest unnecessarily).
+
+I think it should be possible to meet all of these requirements by mounting an overlayfs over the `.git/annex/objects` folder. The writable `upperdir` would be on the same device as the rest of the repository; the read-only lower layers would be the hard drives, which I would also make accessible to git-annex as directory special remotes. This way, I could add objects to the repository normally, then move them to the hard drives without making them inaccessible.
+
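+Something like this is what I have in mind (paths illustrative, details
+unverified):
+
+	# lower layers on the hard drives; upper layer and workdir on the SSD
+	mount -t overlay overlay \
+		-o lowerdir=/mnt/hdd1/objects:/mnt/hdd2/objects,upperdir=/ssd/repo/.git/annex/objects-upper,workdir=/ssd/repo/.git/annex/objects-work \
+		/ssd/repo/.git/annex/objects
+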
+Obviously for this to be safe I would need to untrust the repository itself, as otherwise git-annex would see two real copies where in fact there was only one. (I'm fine with not being able to permanently store anything only on the SSD.) The other obstacle I've run into is that directory remotes don't have the same layout as an objects folder.
+
+Is this a terrible idea? Is there a better way? And, assuming the answers are \"not too terrible\" and \"not really\", how can I set up a directory special remote so that this will work?
+"""]]

diff --git a/doc/bugs/p2phttp_deadlocks_with_concurrent_clients.mdwn b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients.mdwn
new file mode 100644
index 0000000000..78a552343d
--- /dev/null
+++ b/doc/bugs/p2phttp_deadlocks_with_concurrent_clients.mdwn
@@ -0,0 +1,41 @@
+### Please describe the problem.
+
+P2phttp can deadlock with multiple concurrent clients talking to it.
+
+
+### What steps will reproduce the problem?
+
+1. Create a git-annex repository with a bunch of annexed files served via p2phttp like so: `git-annex --debug p2phttp -J2 --bind 127.0.0.1 --wideopen`
+2. Create multiple different clones of that repository connected via p2phttp all doing `while true; do git annex drop .; git annex get --in origin; done`
+3. Observe a deadlock after an indeterminate amount of time
+
+This deadlock seems to occur faster the more repos you use. I've tried increasing -J to 3 and had it deadlock with two client repos once, but that seems to happen much less often.
+
+### What version of git-annex are you using? On what operating system?
+
+```
+$ git annex version
+git-annex version: 10.20250929-g33ab579243742b0b18ffec2ea4ce1e3a827720b4
+build flags: Assistant Webapp Pairing Inotify DBus DesktopNotify TorrentParser MagicMime Benchmark Feeds Testsuite S3 WebDAV Servant OsPath
+dependency versions: aws-0.24.4 bloomfilter-2.0.1.2 crypton-1.0.4 DAV-1.3.4 feed-1.3.2.1 ghc-9.10.2 http-client-0.7.19 persistent-sqlite-2.13.3.1 torrent-10000.1.3 uuid-1.3.16 yesod-1.6.2.1
+key/value backends: SHA256E SHA256 SHA512E SHA512 SHA224E SHA224 SHA384E SHA384 SHA3_256E SHA3_256 SHA3_512E SHA3_512 SHA3_224E SHA3_224 SHA3_384E SHA3_384 SKEIN256E SKEIN256 SKEIN512E SKEIN512 BLAKE2B256E BLAKE2B256 BLAKE2B512E BLAKE2B512 BLAKE2B160E BLAKE2B160 BLAKE2B224E BLAKE2B224 BLAKE2B384E BLAKE2B384 BLAKE2BP512E BLAKE2BP512 BLAKE2S256E BLAKE2S256 BLAKE2S160E BLAKE2S160 BLAKE2S224E BLAKE2S224 BLAKE2SP256E BLAKE2SP256 BLAKE2SP224E BLAKE2SP224 SHA1E SHA1 MD5E MD5 WORM URL GITBUNDLE GITMANIFEST VURL X*
+remote types: git gcrypt p2p S3 bup directory rsync web bittorrent webdav adb tahoe glacier ddar git-lfs httpalso borg rclone hook external compute mask
+operating system: linux x86_64
+supported repository versions: 8 9 10
+upgrade supported from repository versions: 0 1 2 3 4 5 6 7 8 9 10
+local repository version: 10
+```
+
+### Please provide any additional information below.
+
+[[!format sh """
+# If you can, paste a complete transcript of the problem occurring here.
+# If the problem is with the git-annex assistant, paste in .git/annex/daemon.log
+
+
+# End of transcript or log.
+"""]]
+
+### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
+
+[[!tag projects/ICE4]]

diff --git a/doc/forum/Git-annex_in___34__AGit-Flow__34__.mdwn b/doc/forum/Git-annex_in___34__AGit-Flow__34__.mdwn
index 2a52839629..af9c36998c 100644
--- a/doc/forum/Git-annex_in___34__AGit-Flow__34__.mdwn
+++ b/doc/forum/Git-annex_in___34__AGit-Flow__34__.mdwn
@@ -16,4 +16,4 @@ Worth it to note that AGit-Flow already works for contributors with write access
 
 Do you have any other ideas on how git-annex could be used in this workflow?
 
-[[!tag projects/INM7]]
+[[!tag projects/ICE4]]

typo
diff --git a/doc/projects/FJZ.mdwn b/doc/projects/FZJ.mdwn
similarity index 94%
rename from doc/projects/FJZ.mdwn
rename to doc/projects/FZJ.mdwn
index 1e2a2635c8..c2953d4600 100644
--- a/doc/projects/FJZ.mdwn
+++ b/doc/projects/FZJ.mdwn
@@ -1,4 +1,4 @@
-At FJZ, the INM-7 and ICE-4 data hosting infrastructures use git-annex. 
+At FZJ, the INM-7 and ICE-4 data hosting infrastructures use git-annex. 
 This is a tracking page for issues relating to those projects.
 It includes issues relating to 
 [forgejo-aneksajo](https://codeberg.org/matrss/forgejo-aneksajo).
diff --git a/doc/projects/ICE4.mdwn b/doc/projects/ICE4.mdwn
index 4e9588d414..1ff01d36d4 100644
--- a/doc/projects/ICE4.mdwn
+++ b/doc/projects/ICE4.mdwn
@@ -1 +1 @@
-[[!meta redir=FJZ]]
+[[!meta redir=FZJ]]
diff --git a/doc/projects/INM7.mdwn b/doc/projects/INM7.mdwn
index 4e9588d414..1ff01d36d4 100644
--- a/doc/projects/INM7.mdwn
+++ b/doc/projects/INM7.mdwn
@@ -1 +1 @@
-[[!meta redir=FJZ]]
+[[!meta redir=FZJ]]

Added a comment
diff --git a/doc/todo/p2phttp__58___regularly_re-check_for_annex.url_config/comment_2_a2d5b4bda70398422636652b8bf6e9f2._comment b/doc/todo/p2phttp__58___regularly_re-check_for_annex.url_config/comment_2_a2d5b4bda70398422636652b8bf6e9f2._comment
new file mode 100644
index 0000000000..20e97fc099
--- /dev/null
+++ b/doc/todo/p2phttp__58___regularly_re-check_for_annex.url_config/comment_2_a2d5b4bda70398422636652b8bf6e9f2._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="matrss"
+ avatar="http://cdn.libravatar.org/avatar/cd1c0b3be1af288012e49197918395f0"
+ subject="comment 2"
+ date="2025-10-29T17:38:12Z"
+ content="""
+Due to your explanation of how p2phttp is supposed to work with proxying, I realized that I have made a mistake in how I have integrated it into Forgejo-aneksajo. Right now there is a single p2phttp endpoint serving all repositories, and the provided UUID is used to determine which repository to act on. But this breaks with proxying, since the UUID then doesn't necessarily correspond to a repository on the instance. I will therefore have to move the p2phttp endpoints under the `<owner>/<repo>` routing namespace. Having git-annex retrieve updated config data from remotes would make this change propagate to clones automatically, which I think would be nice.
+"""]]

diff --git a/doc/todo/p2phttp__58___regularly_re-check_for_annex.url_config.mdwn b/doc/todo/p2phttp__58___regularly_re-check_for_annex.url_config.mdwn
index 95f3f2f612..e483b00c76 100644
--- a/doc/todo/p2phttp__58___regularly_re-check_for_annex.url_config.mdwn
+++ b/doc/todo/p2phttp__58___regularly_re-check_for_annex.url_config.mdwn
@@ -4,3 +4,5 @@ From my experimentation it seems to be that git-annex does not discover the `ann
 2. Likewise, if the server-side initially didn't support p2phttp and didn't set `annex.url` when the repository was cloned, but is later updated to support it, git-annex doesn't automatically pick up this change.
 
 This automatic discovery would be nice for p2phttp support in forgejo-aneksajo, as existing clones could automatically start making use of it as soon as the instance is updated to support it on the server-side and the git-annex version is updated to be recent enough on the client-side.
+
+[[!tag projects/ICE4]]

diff --git a/doc/todo/More_fine-grained_testremote_command.mdwn b/doc/todo/More_fine-grained_testremote_command.mdwn
index bb4eb63f26..25182b28fc 100644
--- a/doc/todo/More_fine-grained_testremote_command.mdwn
+++ b/doc/todo/More_fine-grained_testremote_command.mdwn
@@ -9,4 +9,4 @@ If that's not possible for some reason it would also be an improvement with rega
 
 What do you think?
 
-[[!tag projects/INM7]]
+[[!tag projects/ICE4]]

diff --git a/doc/bugs/__96__git_annex_push__96___does_not_use_git-credential-oauth.mdwn b/doc/bugs/__96__git_annex_push__96___does_not_use_git-credential-oauth.mdwn
index da0edd22da..06a2cf4cd0 100644
--- a/doc/bugs/__96__git_annex_push__96___does_not_use_git-credential-oauth.mdwn
+++ b/doc/bugs/__96__git_annex_push__96___does_not_use_git-credential-oauth.mdwn
@@ -83,4 +83,4 @@ $
 ### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
 
 
-[[!tag projects/INM7]]
+[[!tag projects/ICE4]]

fix
diff --git a/doc/projects/FJZ.mdwn b/doc/projects/FJZ.mdwn
index 849de58bb6..1e2a2635c8 100644
--- a/doc/projects/FJZ.mdwn
+++ b/doc/projects/FJZ.mdwn
@@ -27,7 +27,7 @@ Bugs
 <details>
 <summary>Fixed</summary>
 
-[[!inline pages="bugs/* and !bugs/done and link(bugs/done)) and
+[[!inline pages="bugs/* and !bugs/done and link(bugs/done) and
 (tagged(projects/INM7) or tagged(projects/ICE4))" feeds=no actions=yes archive=yes show=0  template=buglist]]
 
 </details>

fix
diff --git a/doc/projects/FJZ.mdwn b/doc/projects/FJZ.mdwn
index b06738f104..849de58bb6 100644
--- a/doc/projects/FJZ.mdwn
+++ b/doc/projects/FJZ.mdwn
@@ -27,7 +27,7 @@ Bugs
 <details>
 <summary>Fixed</summary>
 
-[[!inline pages="(bugs/* and !bugs/done and link(bugs/done)) and
-(tagged(projects/INM7 or tagged(projects/ICE4))" feeds=no actions=yes archive=yes show=0  template=buglist]]
+[[!inline pages="bugs/* and !bugs/done and link(bugs/done)) and
+(tagged(projects/INM7) or tagged(projects/ICE4))" feeds=no actions=yes archive=yes show=0  template=buglist]]
 
 </details>

fix
diff --git a/doc/projects/FJZ.mdwn b/doc/projects/FJZ.mdwn
index 830a29f8f5..b06738f104 100644
--- a/doc/projects/FJZ.mdwn
+++ b/doc/projects/FJZ.mdwn
@@ -28,6 +28,6 @@ Bugs
 <summary>Fixed</summary>
 
 [[!inline pages="(bugs/* and !bugs/done and link(bugs/done)) and
-(tagged(projects/INM7 or tagged(projects/ICE4)" feeds=no actions=yes archive=yes show=0  template=buglist]]
+(tagged(projects/INM7 or tagged(projects/ICE4))" feeds=no actions=yes archive=yes show=0  template=buglist]]
 
 </details>

add ICE4 page, as a redirect
diff --git a/doc/projects.mdwn b/doc/projects.mdwn
index 9a750943dd..aacb8fe897 100644
--- a/doc/projects.mdwn
+++ b/doc/projects.mdwn
@@ -1,5 +1,5 @@
 Projects that rely on git-annex can put pages here to do things like track
 bugs that affect them, etc. (See also: [[related_software]])
 
-[[!inline pages="projects/* and !projects/*/* and !*/Discussion and !projects/INM7" 
+[[!inline pages="projects/* and !projects/*/* and !*/Discussion and !projects/INM7 and !projects/ICE4" 
 feeds=no archive=yes sort=title rootpage="projects" postformtext="Add your project:"]]
diff --git a/doc/projects/ICE4.mdwn b/doc/projects/ICE4.mdwn
new file mode 100644
index 0000000000..4e9588d414
--- /dev/null
+++ b/doc/projects/ICE4.mdwn
@@ -0,0 +1 @@
+[[!meta redir=FJZ]]

rename project page (left a redirect)
diff --git a/doc/projects.mdwn b/doc/projects.mdwn
index bbf2b2d2b9..9a750943dd 100644
--- a/doc/projects.mdwn
+++ b/doc/projects.mdwn
@@ -1,5 +1,5 @@
 Projects that rely on git-annex can put pages here to do things like track
 bugs that affect them, etc. (See also: [[related_software]])
 
-[[!inline pages="projects/* and !projects/*/* and !*/Discussion" 
+[[!inline pages="projects/* and !projects/*/* and !*/Discussion and !projects/INM7" 
 feeds=no archive=yes sort=title rootpage="projects" postformtext="Add your project:"]]
diff --git a/doc/projects/FJZ.mdwn b/doc/projects/FJZ.mdwn
new file mode 100644
index 0000000000..830a29f8f5
--- /dev/null
+++ b/doc/projects/FJZ.mdwn
@@ -0,0 +1,33 @@
+At FJZ, the INM-7 and ICE-4 data hosting infrastructures use git-annex. 
+This is a tracking page for issues relating to those projects.
+It includes issues relating to 
+[forgejo-aneksajo](https://codeberg.org/matrss/forgejo-aneksajo).
+
+TODOs
+=====
+
+[[!inline pages="todo/* and !todo/done and !link(todo/done) and
+(tagged(projects/INM7) or tagged(projects/ICE4))" sort=mtime feeds=no actions=yes archive=yes show=0 template=buglist]]
+
+
+<details>
+<summary>Done</summary>
+
+[[!inline pages="todo/* and !todo/done and link(todo/done) and
+(tagged(projects/INM7) or tagged(projects/ICE4))" feeds=no actions=yes archive=yes show=0 template=buglist]]
+
+</details>
+
+Bugs
+====
+
+[[!inline pages="bugs/* and !bugs/done and !link(bugs/done) and
+(tagged(projects/INM7) or tagged(projects/ICE4))" sort=mtime feeds=no actions=yes archive=yes show=0  template=buglist template=buglist]]
+
+<details>
+<summary>Fixed</summary>
+
+[[!inline pages="(bugs/* and !bugs/done and link(bugs/done)) and
+(tagged(projects/INM7 or tagged(projects/ICE4)" feeds=no actions=yes archive=yes show=0  template=buglist]]
+
+</details>
diff --git a/doc/projects/INM7.mdwn b/doc/projects/INM7.mdwn
index 917d1428cf..4e9588d414 100644
--- a/doc/projects/INM7.mdwn
+++ b/doc/projects/INM7.mdwn
@@ -1,32 +1 @@
-The INM7 data hosting infrastructure uses git-annex. This is a tracking
-page for issues relating to that project. It includes issues relating to
-[forgejo-aneksajo](https://codeberg.org/matrss/forgejo-aneksajo).
-
-TODOs
-=====
-
-[[!inline pages="todo/* and !todo/done and !link(todo/done) and
-tagged(projects/INM7)" sort=mtime feeds=no actions=yes archive=yes show=0 template=buglist]]
-
-
-<details>
-<summary>Done</summary>
-
-[[!inline pages="todo/* and !todo/done and link(todo/done) and
-tagged(projects/INM7)" feeds=no actions=yes archive=yes show=0 template=buglist]]
-
-</details>
-
-Bugs
-====
-
-[[!inline pages="bugs/* and !bugs/done and !link(bugs/done) and
-tagged(projects/INM7)" sort=mtime feeds=no actions=yes archive=yes show=0  template=buglist template=buglist]]
-
-<details>
-<summary>Fixed</summary>
-
-[[!inline pages="(bugs/* and !bugs/done and link(bugs/done)) and
-tagged(projects/INM7)" feeds=no actions=yes archive=yes show=0  template=buglist]]
-
-</details>
+[[!meta redir=FJZ]]

add news item for git-annex 10.20251029
diff --git a/doc/news/version_10.20250630.mdwn b/doc/news/version_10.20250630.mdwn
deleted file mode 100644
index 7761138fea..0000000000
--- a/doc/news/version_10.20250630.mdwn
+++ /dev/null
@@ -1,9 +0,0 @@
-git-annex 10.20250630 released with [[!toggle text="these changes"]]
-[[!toggleable text="""  * Work around git 2.50 bug that caused it to crash when there is a merge
-    conflict with an unlocked annexed file.
-  * Skip and warn when a tree import includes empty filenames,
-    which can happen with eg a S3 bucket.
-  * Avoid a problem with temp file names ending in whitespace on
-    filesystems like VFAT that don't support such filenames.
-  * webapp: Rename "Upgrade Repository" to "Convert Repository"
-    to avoid confusion with git-annex upgrade."""]]
\ No newline at end of file
diff --git a/doc/news/version_10.20251029.mdwn b/doc/news/version_10.20251029.mdwn
new file mode 100644
index 0000000000..b98b28583c
--- /dev/null
+++ b/doc/news/version_10.20251029.mdwn
@@ -0,0 +1,5 @@
+git-annex 10.20251029 released with [[!toggle text="these changes"]]
+[[!toggleable text="""  * Support ssh remotes with '#' and '?' in the path to the repository,
+    the same way git does.
+  * assistant: Fix reversion that caused files to be added locked by
+    default."""]]
\ No newline at end of file

Added a comment
diff --git a/doc/bugs/install_on_android_boox__58___xargs_Permission_denied/comment_4_2e7fc84160c9bd75f7dc1a6a44ab132a._comment b/doc/bugs/install_on_android_boox__58___xargs_Permission_denied/comment_4_2e7fc84160c9bd75f7dc1a6a44ab132a._comment
new file mode 100644
index 0000000000..0564fa74db
--- /dev/null
+++ b/doc/bugs/install_on_android_boox__58___xargs_Permission_denied/comment_4_2e7fc84160c9bd75f7dc1a6a44ab132a._comment
@@ -0,0 +1,18 @@
+[[!comment format=mdwn
+ username="matrss"
+ avatar="http://cdn.libravatar.org/avatar/cd1c0b3be1af288012e49197918395f0"
+ subject="comment 4"
+ date="2025-10-26T11:55:55Z"
+ content="""
+I am seeing the same issue on a Pixel 9a with GrapheneOS and Termux installed from the Play Store, but _not_ with Termux installed from F-Droid.
+
+Digging a bit I found that:
+
+- Termux does not advertise the Google Play Store as a means of installation on their website: <https://termux.dev/en/>
+- The Termux wiki states that everyone who can use F-Droid should get it from there: <https://wiki.termux.com/wiki/Termux_Google_Play>
+- The F-Droid and Play Store builds are created from different codebases, at least temporarily: <https://github.com/termux-play-store#:~:text=As%20the%20F%2DDroid%20build%20of,and%20details%20are%20worked%20out.>
+
+I've also noticed that some executables just don't exist in Termux-from-Play-Store, e.g. `termux-change-repo`.
+
+Considering that, you might just want to try the F-Droid build, if possible.
+"""]]

bug
diff --git a/doc/bugs/p2p_--pair_seems_broken_for_iroh.mdwn b/doc/bugs/p2p_--pair_seems_broken_for_iroh.mdwn
new file mode 100644
index 0000000000..7c4de04a2f
--- /dev/null
+++ b/doc/bugs/p2p_--pair_seems_broken_for_iroh.mdwn
@@ -0,0 +1,5 @@
+When using git-annex-p2p-iroh, `git-annex p2p --pair` times out after the
+magic wormhole step.
+
+`git-annex p2p --link` does work with the iroh script, so this is probably
+a bug in git-annex. --[[Joey]]

Propose ephemeral special remotes
diff --git a/doc/todo/Ephemeral_special_remotes.mdwn b/doc/todo/Ephemeral_special_remotes.mdwn
new file mode 100644
index 0000000000..7082e52512
--- /dev/null
+++ b/doc/todo/Ephemeral_special_remotes.mdwn
@@ -0,0 +1,14 @@
+Connecting to a discussion we had at distribits....
+
+It would be useful to extend the external special remote protocol with the ability to create ephemeral special remotes. Ephemeral in the sense that they are created by and during the runtime of a special remote, and only exist until that special remote process is terminated by git-annex.
+
+There could be a new protocol command that takes the same parameters as `initremote` as arguments. Its response would be the UUID of the created special remote.
+
+The second part of the protocol extension would be a third response value for `CHECKPRESENT`, `TRANSFER*`, and `REMOVE`. The addition to `SUCCESS` and `FAILURE` would be `REDIRECT-REMOTE <UUID>`, which would instruct git-annex to perform the same request against the special remote given by `UUID` instead.
+
+The corresponding change in key availability would be recorded for the original special remote.
+
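+A hypothetical exchange under this proposal (the message syntax is
+illustrative, not specified here):
+
+	CHECKPRESENT SHA256E-s1048576--abc123
+	REDIRECT-REMOTE 3f5c0a42-0000-0000-0000-000000000000
+
+git-annex would then repeat the CHECKPRESENT against the special remote
+with that UUID, and record the result as if the original remote had
+answered.
+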
+A use case would be an "orchestration" special remote that represents a particular infrastructure. It would dynamically deploy appropriate transfer setups, without committing them to a repository. This can be useful for setups with short-lived tokens/urls. This is
+in some way also an alternative to the `sameas` approach, where the alternatives are hidden in the implementation of a special remote, rather than in *each* repository.
+
+[[!tag projects/INM7]]

diff --git a/doc/todo/Special_remote_redirect_to_URL.mdwn b/doc/todo/Special_remote_redirect_to_URL.mdwn
index 1407702620..b3b51e18a3 100644
--- a/doc/todo/Special_remote_redirect_to_URL.mdwn
+++ b/doc/todo/Special_remote_redirect_to_URL.mdwn
@@ -3,7 +3,7 @@ The [external special remote protocol](/design/external_special_remote_protocol/
 * `TRANSFER-SUCCESS RETRIEVE {key}`
 * `TRANSFER-FAILURE RETRIEVE {key} {message}`
 
-I propose a third response: `TRANSFER-REDIRECT RETRIEVE {key} {url}`
+I propose a third response: `TRANSFER-REDIRECT-URL RETRIEVE {key} {url}`
 
 This will permit the following use cases:
 

Initiate request for request redirection
diff --git a/doc/todo/Special_remote_redirect_to_URL.mdwn b/doc/todo/Special_remote_redirect_to_URL.mdwn
new file mode 100644
index 0000000000..1407702620
--- /dev/null
+++ b/doc/todo/Special_remote_redirect_to_URL.mdwn
@@ -0,0 +1,15 @@
+The [external special remote protocol](/design/external_special_remote_protocol/) allows the following responses to `TRANSFER RETRIEVE {key} {file}`:
+
+* `TRANSFER-SUCCESS RETRIEVE {key}`
+* `TRANSFER-FAILURE RETRIEVE {key} {message}`
+
+I propose a third response: `TRANSFER-REDIRECT RETRIEVE {key} {url}`
+
+This will permit the following use cases:
+
+1) Make a request against an authentication server that provides a short-lived access token to the same or a different server. The authentication server does not need to relay the data.
+2) Deterministically calculate a remote URL (or local path) without reimplementing HTTP fetch logic, taking advantage of the testing and security hardening of the git-annex implementation.
+
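+A hypothetical exchange (key and url are illustrative):
+
+	TRANSFER RETRIEVE SHA256E-s1048576--abc123 tmpfile
+	TRANSFER-REDIRECT RETRIEVE SHA256E-s1048576--abc123 https://cdn.example.com/signed?token=xyz
+
+git-annex would then fetch the url into the file itself, presumably
+verifying the key as for any other retrieval.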
+
+[[!meta author=cjmarkie]]
+[[!tag projects/openneuro]]

remove redundant comment
diff --git a/doc/special_remotes/p2p/git-annex-p2p-iroh b/doc/special_remotes/p2p/git-annex-p2p-iroh
index 545bdc4166..b1969c7380 100755
--- a/doc/special_remotes/p2p/git-annex-p2p-iroh
+++ b/doc/special_remotes/p2p/git-annex-p2p-iroh
@@ -33,7 +33,6 @@ if [ "$1" = address ]; then
 else
 	socketfile="$2"
 	if [ -z "$socketfile" ]; then
-		# Connect to the peer's address and relay stdin and stdout.
 		peeraddress="$1"
 		dumbpipe connect "$peeraddress"
 	else

add
diff --git a/doc/special_remotes/p2p.mdwn b/doc/special_remotes/p2p.mdwn
index 37e606dd4d..6a959f4c49 100644
--- a/doc/special_remotes/p2p.mdwn
+++ b/doc/special_remotes/p2p.mdwn
@@ -10,6 +10,8 @@ For other P2P networks, a fairly simple program is used to connect
 git-annex up with the network. Install one of these programs to use the P2P
 network of your choice:
 
+* [[git-annex-p2p-iroh]]  
+  Uses [Iroh](https://www.iroh.computer/) for fast P2P with hole punching.
 * [[git-annex-p2p-unix-sockets]]  
   This is only a demo, using unix sockets in `/tmp` rather than a real
   P2P network. Not for real world use.

add
diff --git a/doc/special_remotes/p2p/git-annex-p2p-iroh b/doc/special_remotes/p2p/git-annex-p2p-iroh
new file mode 100755
index 0000000000..545bdc4166
--- /dev/null
+++ b/doc/special_remotes/p2p/git-annex-p2p-iroh
@@ -0,0 +1,43 @@
+#!/bin/sh
+# Allows git-annex to use iroh for P2P connections.
+#
+# This uses a modified version of iroh's dumbpipe program, adding the
+# generate-ticket command. This pull request has the necessary changes:
+# https://github.com/n0-computer/dumbpipe/pull/86
+#
+# Quality: experimental. Has worked at least twice, but there are known and
+# unknown bugs.
+#
+# Copyright 2025 Joey Hess; licenced under the GNU GPL version 3 or higher.
+
+set -e
+
+git_dir=$(git rev-parse --git-dir)
+creds_dir="$git_dir/annex/creds"
+iroh_secret_file="$creds_dir/iroh-secret"
+
+get_iroh_secret () {
+	IROH_SECRET=$(cat "$iroh_secret_file")
+	export IROH_SECRET
+}
+
+if [ "$1" = address ]; then
+	if [ ! -e "$iroh_secret_file" ]; then
+		mkdir -p "$creds_dir"
+		umask 077
+		gpg --gen-random 16 32 > $iroh_secret_file
+	fi
+	get_iroh_secret
+	# avoid display of the iroh secret to stderr
+	dumbpipe generate-ticket 2>/dev/null
+else
+	socketfile="$2"
+	if [ -z "$socketfile" ]; then
+		# Connect to the peer's address and relay stdin and stdout.
+		peeraddress="$1"
+		dumbpipe connect "$peeraddress"
+	else
+		get_iroh_secret
+		dumbpipe listen-unix --socket-path="$socketfile"
+	fi
+fi
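+
+# Illustrative invocations, matching the argument handling above (the
+# ticket is a placeholder; git-annex normally runs this program itself):
+#
+#    git-annex-p2p-iroh address                   # print this repo's dumbpipe ticket
+#    git-annex-p2p-iroh <ticket>                  # connect to that peer, relaying stdin/stdout
+#    git-annex-p2p-iroh <ticket> /path/to/socket  # serve connections on a unix socket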

typo
diff --git a/doc/special_remotes/p2p/git-annex-p2p-unix-sockets b/doc/special_remotes/p2p/git-annex-p2p-unix-sockets
index 100dee5291..2fe1568efb 100755
--- a/doc/special_remotes/p2p/git-annex-p2p-unix-sockets
+++ b/doc/special_remotes/p2p/git-annex-p2p-unix-sockets
@@ -4,7 +4,7 @@
 # This simulates a multi-node P2P network using unix 
 # socket files in /tmp.
 #
-# Copyright 2025 Joey Hess; icenced under the GNU GPL version 3 or higher.
+# Copyright 2025 Joey Hess; licenced under the GNU GPL version 3 or higher.
 
 set -e
 

Added a comment
diff --git a/doc/bugs/some_conflict_resolution_tests_fail_some_time/comment_3_951e1bd5ea9828205988d5d61a4acd54._comment b/doc/bugs/some_conflict_resolution_tests_fail_some_time/comment_3_951e1bd5ea9828205988d5d61a4acd54._comment
new file mode 100644
index 0000000000..13c374630e
--- /dev/null
+++ b/doc/bugs/some_conflict_resolution_tests_fail_some_time/comment_3_951e1bd5ea9828205988d5d61a4acd54._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ username="yarikoptic"
+ avatar="http://cdn.libravatar.org/avatar/f11e9c84cb18d26a1748c33b48c924b4"
+ subject="comment 3"
+ date="2025-10-24T14:26:07Z"
+ content="""
+ha -- I ran into this issue while looking for a demo of `datalad foreach-subdatset`. FWIW, maybe it could be considered \"done\"/closed, since I do not see it consistently re-manifesting in 2025; the last failures of this kind that `git grep` matches are from early 2024.
+
+"""]]

Support ssh remotes with '#' and '?' in the path to the repository
The same way git does.
Affected repository types are regular git ssh remotes, and also gcrypt
remotes, and potentially also bup remotes.
repoPath is used for such repositories accessed over ssh. uriPath is used
in some other places, eg the bittorrent special remote, where it would not
be appropriate to mimic git's behavior. The distinction seems to hold up
well from what I can see.
The ordering of uriFragment after uriQuery is to correctly handle cases
where both appear in an url. "ssh://localhost/tmp/foo?baz#bar" has an
uriFragment of "#bar" and an uriQuery of "?baz". On the other hand,
"ssh://localhost/tmp/foo#baz?bar" has an uriFragment of "#baz?bar" and no
uriQuery.
Sponsored-by: Dartmouth College's DANDI project
diff --git a/CHANGELOG b/CHANGELOG
index c9eabe919c..21888c75f1 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -1,3 +1,10 @@
+git-annex (10.20250930) UNRELEASED; urgency=medium
+
+  * Support ssh remotes with '#' and '?' in the path to the repository,
+    the same way git does.
+
+ -- Joey Hess <id@joeyh.name>  Mon, 20 Oct 2025 15:22:30 -0400
+
 git-annex (10.20250929) upstream; urgency=medium
 
   * enableremote: Allow type= to be provided when it does not change the
diff --git a/Git.hs b/Git.hs
index 3eafcd674b..30930c2c17 100644
--- a/Git.hs
+++ b/Git.hs
@@ -38,7 +38,7 @@ module Git (
 	relPath,
 ) where
 
-import Network.URI (uriPath, uriScheme, unEscapeString)
+import Network.URI (uriPath, uriScheme, uriQuery, uriFragment, unEscapeString)
 #ifndef mingw32_HOST_OS
 import System.Posix.Files
 #endif
@@ -73,7 +73,10 @@ repoLocation Repo { location = Unknown } = giveup "unknown repoLocation"
  - it's the gitdir, and for URL repositories, is the path on the remote
  - host. -}
 repoPath :: Repo -> OsPath
-repoPath Repo { location = Url u } = toOsPath $ unEscapeString $ uriPath u
+repoPath Repo { location = Url u } = toOsPath $ unEscapeString $
+	-- git allows the path of a ssh url to include both '?' and '#',
+	-- and treats them as part of the path
+	uriPath u ++ uriQuery u ++ uriFragment u
 repoPath Repo { location = Local { worktree = Just d } } = d
 repoPath Repo { location = Local { gitdir = d } } = d
 repoPath Repo { location = LocalUnknown dir } = dir
diff --git a/doc/bugs/fails_to_discover_uuid_over_ssh_with___35___in_path_.mdwn b/doc/bugs/fails_to_discover_uuid_over_ssh_with___35___in_path_.mdwn
index bd5130b64f..1b3ebda7eb 100644
--- a/doc/bugs/fails_to_discover_uuid_over_ssh_with___35___in_path_.mdwn
+++ b/doc/bugs/fails_to_discover_uuid_over_ssh_with___35___in_path_.mdwn
@@ -72,3 +72,5 @@ FTR: I was trying to backup some old behavioral videos (octopus) from the laptop
 
 [[!meta author=yoh]]
 [[!tag projects/dandi]]
+
+> [[fixed|done]] --[[Joey]]
diff --git a/doc/bugs/fails_to_discover_uuid_over_ssh_with___35___in_path_/comment_1_00c1062abe02a42cea491f6bb8e6e5dc._comment b/doc/bugs/fails_to_discover_uuid_over_ssh_with___35___in_path_/comment_1_00c1062abe02a42cea491f6bb8e6e5dc._comment
new file mode 100644
index 0000000000..f035f5644d
--- /dev/null
+++ b/doc/bugs/fails_to_discover_uuid_over_ssh_with___35___in_path_/comment_1_00c1062abe02a42cea491f6bb8e6e5dc._comment
@@ -0,0 +1,11 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 1"""
+ date="2025-10-20T19:16:15Z"
+ content="""
+Also affected is '?' in the path. It's somewhat surprising to me that git
+treats these parts of an url as path components, but
+not too surprising, as git's definition of "url" is pretty loose.
+
+Fixed git-annex to follow suit.
+"""]]

Added a comment: Re: support for bulk write/read/test remote - ps
diff --git a/doc/design/external_special_remote_protocol/comment_58_d2ba09a90544cdfa245e69b951107702._comment b/doc/design/external_special_remote_protocol/comment_58_d2ba09a90544cdfa245e69b951107702._comment
new file mode 100644
index 0000000000..6df734ae3a
--- /dev/null
+++ b/doc/design/external_special_remote_protocol/comment_58_d2ba09a90544cdfa245e69b951107702._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="psxvoid"
+ avatar="http://cdn.libravatar.org/avatar/fde068fbdeabeea31e3be7aa9c55d84b"
+ subject="Re: support for bulk write/read/test remote - ps"
+ date="2025-10-19T05:00:34Z"
+ content="""
+P.S.: And to make it clearer why I mentioned dar first and then writing to BDXLs: when I mentioned dar, that was the stage when I was experimenting with dar as an intermediary for writing to BDXLs. But then I started to experiment with plain files, because that could be better for a long-term archival solution.
+"""]]

removed
diff --git a/doc/design/external_special_remote_protocol/comment_58_2bd7eb40046423b1424eaa2aae78ba95._comment b/doc/design/external_special_remote_protocol/comment_58_2bd7eb40046423b1424eaa2aae78ba95._comment
deleted file mode 100644
index c48cd63c2b..0000000000
--- a/doc/design/external_special_remote_protocol/comment_58_2bd7eb40046423b1424eaa2aae78ba95._comment
+++ /dev/null
@@ -1,8 +0,0 @@
-[[!comment format=mdwn
- username="psxvoid"
- avatar="http://cdn.libravatar.org/avatar/fde068fbdeabeea31e3be7aa9c55d84b"
- subject="Rr: support for bulk write/read/test remote (PS)"
- date="2025-10-19T04:58:54Z"
- content="""
-P.S.: And to make it more clear why I told about dar first and then about writing BDXLs - when I mentioned dar - it was the stage when I experimented with dar to use it as an intermediary to write to BDXLs. But then I started to experiment with plain files because it could be better for long-term archival solution.
-"""]]

Added a comment: Rr: support for bulk write/read/test remote (PS)
diff --git a/doc/design/external_special_remote_protocol/comment_58_2bd7eb40046423b1424eaa2aae78ba95._comment b/doc/design/external_special_remote_protocol/comment_58_2bd7eb40046423b1424eaa2aae78ba95._comment
new file mode 100644
index 0000000000..c48cd63c2b
--- /dev/null
+++ b/doc/design/external_special_remote_protocol/comment_58_2bd7eb40046423b1424eaa2aae78ba95._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="psxvoid"
+ avatar="http://cdn.libravatar.org/avatar/fde068fbdeabeea31e3be7aa9c55d84b"
+ subject="Rr: support for bulk write/read/test remote (PS)"
+ date="2025-10-19T04:58:54Z"
+ content="""
+P.S.: And to make it more clear why I told about dar first and then about writing BDXLs - when I mentioned dar - it was the stage when I experimented with dar to use it as an intermediary to write to BDXLs. But then I started to experiment with plain files because it could be better for long-term archival solution.
+"""]]

Added a comment: Re: support for bulk write/read/test remote - joey
diff --git a/doc/design/external_special_remote_protocol/comment_57_20103729c97cb4392715987dca5408ae._comment b/doc/design/external_special_remote_protocol/comment_57_20103729c97cb4392715987dca5408ae._comment
new file mode 100644
index 0000000000..f74740ca6f
--- /dev/null
+++ b/doc/design/external_special_remote_protocol/comment_57_20103729c97cb4392715987dca5408ae._comment
@@ -0,0 +1,34 @@
+[[!comment format=mdwn
+ username="psxvoid"
+ avatar="http://cdn.libravatar.org/avatar/fde068fbdeabeea31e3be7aa9c55d84b"
+ subject="Re: support for bulk write/read/test remote - joey"
+ date="2025-10-19T04:36:37Z"
+ content="""
+Hi Joey,
+
+Sorry for the late response, and thanks for the feedback.
+
+> \"that's fundamentally different than how git-annex works\"
+
+Hence the previous comment :)
+
+> \"And I think you could put it in your special remote.\"
+
+That's exactly what I was doing around a year ago. I was implementing a special remote to support writing data on BDXL disks.
+
+> \"So that when git-annex sends a file to your remote, the file is actually stored in the remote, rather than in a temporary location.\"
+
+Yep, roughly that's how I was implementing it - storing intermediate data in an sqlite database.
+
+I'd put the project on hold because I started to ask myself the following questions:
+
+1. OK, I can store transactions in the special remote. That means storing what is where on which disk. Isn't that what git-annex is supposed to do?
+2. If a BDXL disk gets corrupted or lost, how do I reflect that in the git-annex repo and the special remote? I can mark it as \"lost\" in the remote, then run fsck in the git-annex remote.
+3. Because I have to track location data separately in the special remote, what if it (the sqlite database) gets corrupted?
+4. What if I buy 50GB BDXL disks instead of the 100GB ones I'm using? Does that mean the special remote also has to track free space on each disk?
+5. Burning a disk - what if it isn't successful? git-annex will think that it was, because it doesn't support bulk operations, and numcopies rules will be violated.
+
+There were many more questions like this.
+
+And at some point the design started to look more like a full-blown, feature-rich archival application/solution. The main point here is that it's definitely possible. I can limit the scope, but there are many, many issues, and nobody except me will be interested in it. Plus, many responsibilities would overlap with git-annex.
+"""]]

diff --git a/doc/bugs/__96__git_annex_push__96___does_not_use_git-credential-oauth.mdwn b/doc/bugs/__96__git_annex_push__96___does_not_use_git-credential-oauth.mdwn
new file mode 100644
index 0000000000..da0edd22da
--- /dev/null
+++ b/doc/bugs/__96__git_annex_push__96___does_not_use_git-credential-oauth.mdwn
@@ -0,0 +1,86 @@
+### Please describe the problem.
+
+I have git-credential-oauth configured to ease http authentication against Forgejo instances:
+
+```
+[credential]
+	helper = cache --timeout 21600
+	helper = oauth
+```
+
+When I am using `git annex push` to push to a non-existing repository on a Forgejo-aneksajo instance, it doesn't use that credential helper, and instead asks for username and password (see log below). The same also happens for `git annex sync`. Once oauth authorization has happened and an access token is cached (i.e. after the `git push` in the log), git-annex does use it properly.
+
+
+### What steps will reproduce the problem?
+
+See log below, combined with the git-credential-oauth configuration from above.
+
+
+### What version of git-annex are you using? On what operating system?
+
+```
+git-annex version: 10.20250828-gfe7ecf505146342fe8df2430a0bcaf5f02d89a80
+build flags: Assistant Webapp Pairing Inotify DBus DesktopNotify TorrentParser MagicMime Servant Benchmark Feeds Testsuite S3 WebDAV
+dependency versions: aws-0.24.1 bloomfilter-2.0.1.2 crypton-0.34 DAV-1.3.4 feed-1.3.2.1 ghc-9.6.6 http-client-0.7.17 persistent-sqlite-2.13.3.0 torrent-10000.1.3 uuid-1.3.15 yesod-1.6.2.1
+key/value backends: SHA256E SHA256 SHA512E SHA512 SHA224E SHA224 SHA384E SHA384 SHA3_256E SHA3_256 SHA3_512E SHA3_512 SHA3_224E SHA3_224 SHA3_384E SHA3_384 SKEIN256E SKEIN256 SKEIN512E SKEIN512 BLAKE2B256E BLAKE2B256 BLAKE2B512E BLAKE2B512 BLAKE2B160E BLAKE2B160 BLAKE2B224E BLAKE2B224 BLAKE2B384E BLAKE2B384 BLAKE2BP512E BLAKE2BP512 BLAKE2S256E BLAKE2S256 BLAKE2S160E BLAKE2S160 BLAKE2S224E BLAKE2S224 BLAKE2SP256E BLAKE2SP256 BLAKE2SP224E BLAKE2SP224 SHA1E SHA1 MD5E MD5 WORM URL GITBUNDLE GITMANIFEST VURL X*
+remote types: git gcrypt p2p S3 bup directory rsync web bittorrent webdav adb tahoe glacier ddar git-lfs httpalso borg rclone hook external compute mask
+operating system: linux x86_64
+supported repository versions: 8 9 10
+upgrade supported from repository versions: 0 1 2 3 4 5 6 7 8 9 10
+```
+
+
+### Please provide any additional information below.
+
+[[!format sh """
+# If you can, paste a complete transcript of the problem occurring here.
+# If the problem is with the git-annex assistant, paste in .git/annex/daemon.log
+
+$ datalad create test1
+create(ok): /home/icg149/Playground/test1 (dataset)
+$ cd test1
+$ git remote add origin https://atris.fz-juelich.de/m.risse/test1.git
+$ git annex push origin
+Username for 'https://atris.fz-juelich.de': ^C
+[ble: exit 130]
+$ git push origin main
+Please complete authentication in your browser...
+https://atris.fz-juelich.de/login/oauth/authorize?client_id=a4792ccc-144e-407e-86c9-5e7d8d9c3269&code_challenge=uEYAd0rzQY4JG0yOkDNMUNEBHqIQInrvdMqOIFL3AWI&code_challenge_method=S256&redirect_uri=http%3A%2F%2F127.0.0.1%3A42305&response_type=code&state=9UFx41eIbP3Qn9PWh_5eGaE3UDWMNdWBKE6_nwIo_DM
+Enumerating objects: 6, done.
+Counting objects: 100% (6/6), done.
+Delta compression using up to 8 threads
+Compressing objects: 100% (5/5), done.
+Writing objects: 100% (6/6), 521 bytes | 521.00 KiB/s, done.
+Total 6 (delta 0), reused 0 (delta 0), pack-reused 0 (from 0)
+To https://atris.fz-juelich.de/m.risse/test1.git
+ * [new branch]      main -> main
+$ git annex push origin
+push origin 
+Everything up-to-date
+Enumerating objects: 5, done.
+Counting objects: 100% (5/5), done.
+Delta compression using up to 8 threads
+Compressing objects: 100% (3/3), done.
+Writing objects: 100% (5/5), 426 bytes | 426.00 KiB/s, done.
+Total 5 (delta 1), reused 0 (delta 0), pack-reused 0 (from 0)
+remote: 
+remote: Create a new pull request for 'synced/main':
+remote:   https://atris.fz-juelich.de/m.risse/test1/compare/main...synced/main
+remote: 
+remote: 
+remote: Create a new pull request for 'synced/git-annex':
+remote:   https://atris.fz-juelich.de/m.risse/test1/compare/main...synced/git-annex
+remote: 
+To https://atris.fz-juelich.de/m.risse/test1.git
+ * [new branch]      main -> synced/main
+ * [new branch]      git-annex -> synced/git-annex
+ok
+$ 
+
+# End of transcript or log.
+"""]]
+
+### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
+
+
+[[!tag projects/INM7]]

Added a comment
diff --git a/doc/todo/More_fine-grained_testremote_command/comment_3_88a943ac262a279a46d0b761d1e2a24e._comment b/doc/todo/More_fine-grained_testremote_command/comment_3_88a943ac262a279a46d0b761d1e2a24e._comment
new file mode 100644
index 0000000000..8094c11623
--- /dev/null
+++ b/doc/todo/More_fine-grained_testremote_command/comment_3_88a943ac262a279a46d0b761d1e2a24e._comment
@@ -0,0 +1,18 @@
+[[!comment format=mdwn
+ username="matrss"
+ avatar="http://cdn.libravatar.org/avatar/cd1c0b3be1af288012e49197918395f0"
+ subject="comment 3"
+ date="2025-10-14T16:52:24Z"
+ content="""
+> It's not as simple as just plumbing that up though, because testremote has implicit dependencies in its test ordering. It has to do the storeKey test before it can do the present test, for example.
+
+I already thought that this might be the case, so running the tests independently isn't really feasible.
+
+To address my second point, I might be able to just parse the output of testremote into \"sub-tests\" on the Forgejo-aneksajo side. Tasty doesn't seem to have a nice streaming output format for that though, right? There is a TAP formatter, but that looks unmaintained...
+
+---
+
+> There are actually only two write operations, storeKey and removeKey. Since removeKey is supposed to succeed when a key is not present, if storeKey fails, then removeKey will succeed. But removeKey should fail to remove a key that is stored on the remote. To test that, the --test-readonly=file option would need to be used to provide a file that is already stored on the remote.
+
+Now that you mention it, is a new option even necessary? --test-readonly already takes a filename that is expected to be present on the remote, so instead of adding a new option, --test-readonly could ensure that this key can't be removed, and that a different key can't be stored (and that removeKey succeeds on this not-present key).
+"""]]

comment
diff --git a/doc/todo/More_fine-grained_testremote_command/comment_1_d0d1406f9b1619b57908e62ac3200f69._comment b/doc/todo/More_fine-grained_testremote_command/comment_1_d0d1406f9b1619b57908e62ac3200f69._comment
new file mode 100644
index 0000000000..8d61f17ac7
--- /dev/null
+++ b/doc/todo/More_fine-grained_testremote_command/comment_1_d0d1406f9b1619b57908e62ac3200f69._comment
@@ -0,0 +1,26 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 1"""
+ date="2025-10-14T14:54:53Z"
+ content="""
+It would be possible to make `git-annex testremote` support the
+command-line options of the underlying test framework (tasty).
+`git-annex test` already does that, so has --list-test and --pattern.
+
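+For example, one can already run (option names as just mentioned; the
+pattern syntax is tasty's):
+
+    git-annex test --list-test
+    git-annex test --pattern add
+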
+It's not as simple as just plumbing that up though, because testremote has
+implicit dependencies in its test ordering. It has to do the `storeKey`
+test before it can do the `present` test, for example. Those dependencies
+would need to be made explicit, rather than implicit.
+
+Explicit dependencies, though, would also make it not really possible to run
+most of the tests separately. Running testremote 5 times to run the listed
+tests, if each run does the necessary `storeKey`, would add a lot of overhead.
+
+Not declaring dependencies and leaving it up to the user to run testremote
+repeatedly to run a sequence of tests in the necessary order would also
+run into problems with testremote using random test keys which change every
+time it's run, as well as it having an end cleanup stage where it removes
+any lingering test keys from the local repository and the remote.
+
+This seems to be a bit of an impasse... :-/
+"""]]
diff --git a/doc/todo/More_fine-grained_testremote_command/comment_2_de940a1c4ca0194582cd0ad449eefe28._comment b/doc/todo/More_fine-grained_testremote_command/comment_2_de940a1c4ca0194582cd0ad449eefe28._comment
new file mode 100644
index 0000000000..94958c870e
--- /dev/null
+++ b/doc/todo/More_fine-grained_testremote_command/comment_2_de940a1c4ca0194582cd0ad449eefe28._comment
@@ -0,0 +1,34 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 2"""
+ date="2025-10-14T15:21:02Z"
+ content="""
+I don't know about the "--write-only" name, but I see the value in having a
+way for testremote to check that a remote which is expected to only allow
+read access does not allow any writes, while otherwise behaving
+correctly.
+
+There are actually only two write operations, `storeKey` and `removeKey`.
+Since `removeKey` is supposed to succeed when a key is not present, if
+`storeKey` fails, then `removeKey` will succeed. But `removeKey` should
+fail to remove a key that is stored on the remote. To test that,
+the --test-readonly=file option would need to be used to provide a file
+that is already stored on the remote.
+
+I think it would make sense to require that option be present
+in order to use this new "--write-only" (or whatever name) option.
+
+---
+
+Also, git-annex does know internally that some remotes are readonly. For
+example, a regular http git remote that does not use p2phttp.
+Or any remote that has `remote.<name>.annex-readonly` set. Currently
+`testremote` only skips all the write tests for those, rather than
+confirming that writes fail. It would make sense for testremote of a known
+readonly remote to behave as if this new option were provided.
+
+(But, setting `remote.<name>.annex-readonly` rather than using
+the "--write-only" option would not work for you, because that config
+causes git-annex to refuse to try to write to the remote. Which doesn't
+tell you if your server is configured to correctly reject writes.)
+"""]]

diff --git a/doc/todo/More_fine-grained_testremote_command.mdwn b/doc/todo/More_fine-grained_testremote_command.mdwn
new file mode 100644
index 0000000000..bb4eb63f26
--- /dev/null
+++ b/doc/todo/More_fine-grained_testremote_command.mdwn
@@ -0,0 +1,12 @@
+I am using `git annex testremote` as a baseline test bench for Forgejo-aneksajo as a git-annex remote, and it is awesome to have that. I have some pain points with it though:
+
+- I would like to use these tests to confirm that I don't accidentally give write access to read-only users. This means I would need a way to ensure that all tests which require write access fail against the remote.
+- I am spawning a `git annex testremote` subprocess within the integration tests of Forgejo-aneksajo, which are written in Go. Sometimes this "large blackbox test" gets stuck in CI and I haven't figured out why yet. It would be nice to have a more transparent integration into the Forgejo-aneksajo test suite.
+
+Both of those points could be addressed if `git annex testremote` provided a way to run each test individually, and to get a list of all the available tests, categorized by whether they are read-only or read-write. I could then integrate each as an individual sub-test into Forgejo-aneksajo's test suite and properly assert on the outcome of the test given the respective test setup.
+
+If that's not possible for some reason, it would also be an improvement with regard to the first point if there were something like a `git annex testremote --write-only`, with the option to only report success if all of those tests have failed.
+
+What do you think?
+
+[[!tag projects/INM7]]

diff --git a/doc/forum/Git-annex_in___34__AGit-Flow__34__.mdwn b/doc/forum/Git-annex_in___34__AGit-Flow__34__.mdwn
index f1662dc3a0..2a52839629 100644
--- a/doc/forum/Git-annex_in___34__AGit-Flow__34__.mdwn
+++ b/doc/forum/Git-annex_in___34__AGit-Flow__34__.mdwn
@@ -7,11 +7,12 @@ I am wondering how git-annex could best fit into this flow. I would like to be a
 The fundamental issue seems to be that annexed objects always belong to the entire repository, and are not scoped to any branch.
 
 I've thought of these options so far:
+
 - Provide a "per PR special remote" that the creator of the PR could push annexed files to. This would require the user to configure an additional remote, which the AGit-Flow tries to avoid for plain-git contributions.
 - A per-user special remote that is assumed to contain the annexed files for all of the user's AGit-PRs. If git recognizes remote configs in the users' global git config then it could be possible to get away with configuring things once, but I am not sure of the behavior of git in that case.
 - Allow read-only users to have append-only access to the annex. This must at least be limited to secure hashes though, and there are implications of DoS by malicious users filling disk space / quotas.
 
-Worth it to note that AGit-Flow already works for Contributors with write access, since they can write to the annex freely anyway.
+Worth noting that AGit-Flow already works for contributors with write access, since they can write to the annex freely anyway.
 
 Do you have any other ideas on how git-annex could be used in this workflow?
 

diff --git a/doc/forum/Git-annex_in___34__AGit-Flow__34__.mdwn b/doc/forum/Git-annex_in___34__AGit-Flow__34__.mdwn
new file mode 100644
index 0000000000..f1662dc3a0
--- /dev/null
+++ b/doc/forum/Git-annex_in___34__AGit-Flow__34__.mdwn
@@ -0,0 +1,18 @@
+Forgejo supports ["AGit-Flow"](https://forgejo.org/docs/latest/user/agit-support/) to make pull requests without requiring a user to fork a repository first. This is achieved by having a sort of branch namespace `refs/for/<target-branch>/<topic>` which can be pushed to by users that only have read access to the repository. This will open a PR from this branch to the named target branch.
+
+There are efforts in upstream Forgejo to make this a more prominent alternative to forking for contributions: <https://codeberg.org/forgejo/discussions/issues/131>.
+
+I am wondering how git-annex could best fit into this flow. I would like to be able to create PRs containing annexed files on Forgejo-aneksajo in this way (tracking issue on the Forgejo-aneksajo side: <https://codeberg.org/forgejo-aneksajo/forgejo-aneksajo/issues/32>). Obviously annexed objects copied to the Forgejo-aneksajo instance via this path should only be available in the context of that PR in some way.
+
+The fundamental issue seems to be that annexed objects always belong to the entire repository, and are not scoped to any branch.
+
+I've thought of these options so far:
+- Provide a "per PR special remote" that the creator of the PR could push annexed files to. This would require the user to configure an additional remote, which the AGit-Flow tries to avoid for plain-git contributions.
+- A per-user special remote that is assumed to contain the annexed files for all of the users AGit-PRs. If git recognizes remote configs in the users' global git config then it could be possible to get away with configuring things once, but I am not sure of the behavior of git in that case.
+- Allow read-only users to have append-only access to the annex. This must at least be limited to secure hashes though, and there are implications of DoS by malicious users filling disk space / quotas.
+
+Worth it to note that AGit-Flow already works for Contributors with write access, since they can write to the annex freely anyway.
+
+Do you have any other ideas on how git-annex could be used in this workflow?
+
+[[!tag projects/INM7]]

diff --git a/doc/bugs/S3_fails_with_v4_signing.mdwn b/doc/bugs/S3_fails_with_v4_signing.mdwn
new file mode 100644
index 0000000000..4713d454fe
--- /dev/null
+++ b/doc/bugs/S3_fails_with_v4_signing.mdwn
@@ -0,0 +1,45 @@
+### Please describe the problem.
+
+As mentioned in various places on this wiki, git-annex fails with S3 backends that require v4 signatures, e.g. the London and Frankfurt regions.
+
+Support for v4 has apparently been merged [upstream](https://github.com/aristidb/aws/pull/241).
+
+Would it be possible to migrate to v4 signing? I'd do the PR myself, but my Haskell is currently non-existent, sadly.
+
+
+### What steps will reproduce the problem?
+
+Interact with a git-annex S3 remote in a region that requires v4 signing.
+
+### What version of git-annex are you using? On what operating system?
+
+```shell
+git-annex version
+git-annex version: 10.20250630
+build flags: Assistant Webapp Pairing Inotify DBus DesktopNotify TorrentParser MagicMime Servant Feeds Testsuite S3 WebDAV
+dependency versions: aws-0.24.4 bloomfilter-2.0.1.2 crypton-1.0.4 DAV-1.3.4 feed-1.3.2.1 ghc-9.8.4 http-client-0.7.19 persistent-sqlite-2.13.3.0 torrent-10000.1.3 uuid-1.3.16 yesod-1.6.2.1
+key/value backends: SHA256E SHA256 SHA512E SHA512 SHA224E SHA224 SHA384E SHA384 SHA3_256E SHA3_256 SHA3_512E SHA3_512 SHA3_224E SHA3_224 SHA3_384E SHA3_384 SKEIN256E SKEIN256 SKEIN512E SKEIN512 BLAKE2B256E BLAKE2B256 BLAKE2B512E BLAKE2B512 BLAKE2B160E BLAKE2B160 BLAKE2B224E BLAKE2B224 BLAKE2B384E BLAKE2B384 BLAKE2BP512E BLAKE2BP512 BLAKE2S256E BLAKE2S256 BLAKE2S160E BLAKE2S160 BLAKE2S224E BLAKE2S224 BLAKE2SP256E BLAKE2SP256 BLAKE2SP224E BLAKE2SP224 SHA1E SHA1 MD5E MD5 WORM URL GITBUNDLE GITMANIFEST VURL X*
+remote types: git gcrypt p2p S3 bup directory rsync web bittorrent webdav adb tahoe glacier ddar git-lfs httpalso borg rclone hook external compute mask
+operating system: linux x86_64
+supported repository versions: 8 9 10
+upgrade supported from repository versions: 0 1 2 3 4 5 6 7 8 9 10
+local repository version: 10
+```
+
+On NixOS, although I *think* the same will happen anywhere.
+
+
+### Please provide any additional information below.
+
+[[!format sh """
+ git annex initremote s3 type=S3 bucket=thema-assembly-line region=EU datacenter=eu-west-2 encryption=none
+initremote s3 (checking bucket...) (creating bucket in eu-west-2...)
+git-annex: S3Error {s3StatusCode = Status {statusCode = 400, statusMessage = "Bad Request"}, s3ErrorCode = "InvalidRequest", s3ErrorMessage = "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.", s3ErrorResource = Nothing, s3ErrorHostId = Just "Hi0geDlta/PbTTIhfzHvtGxcoWq14VWxp/y5RugFCDEext1aOw0wFBhRP8+jVkHHDqTvoWqCgcY=", s3ErrorAccessKeyId = Nothing, s3ErrorStringToSign = Nothing, s3ErrorBucket = Nothing, s3ErrorEndpointRaw = Nothing, s3ErrorEndpoint = Nothing}
+failed
+initremote: 1 failed
+
+"""]]
+
+### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
+
+Yep. It's a great little tool; up till now always local network + rsync.

diff --git a/doc/bugs/Walrus_storage_backend.mdwn b/doc/bugs/Walrus_storage_backend.mdwn
new file mode 100644
index 0000000000..c5b1168c15
--- /dev/null
+++ b/doc/bugs/Walrus_storage_backend.mdwn
@@ -0,0 +1,20 @@
+### Please describe the feature.
+
+Walrus is a new type of decentralized storage, which allows programmable ownership of blob data.
+This fits git-annex perfectly and allows storing huge amounts of blob data with 100% uptime and good price economics.
+
+https://www.walrus.xyz/
+
+The coordination layer is SUI, a decentralized, global, programmable object database.
+Unfortunately, nobody has implemented git storage on SUI yet, so currently this can be seen as normal blob storage. Long term, hosting git on SUI/walrus would create truly decentralized git repos.
+
+As a nice bonus, when using walrus/sui, those who use the default git-annex package could pay a small fee to support the project. This would provide a steady long-term income for the project.
+
+Walrus is a storage backend that only guarantees objects are available for the epochs that the underlying Storage object is reserved for (you buy a contingent of blob storage for a duration of time).
+This Storage object needs to be extended / managed.
+The whole infrastructure would allow building a decentralized annex cloud storage, where the user actually owns their data, and storage payment, notifications, etc. are automated.
+
+
+### Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
+
+I love git-annex, it works like a charm. I have been using it for 5+ years.

Revert "update"
This reverts commit 48090579d91d92318d8b394c17b26b4a6af6ee69.
diff --git a/doc/thanks/list b/doc/thanks/list
index dd52e76d21..dfeda7a813 100644
--- a/doc/thanks/list
+++ b/doc/thanks/list
@@ -126,4 +126,3 @@ Lilia.Nanne,
 Dusty Mabe, 
 mpol, 
 Andrew Poelstra, 
-~1877056, 

update
diff --git a/doc/thanks/list b/doc/thanks/list
index dfeda7a813..dd52e76d21 100644
--- a/doc/thanks/list
+++ b/doc/thanks/list
@@ -126,3 +126,4 @@ Lilia.Nanne,
 Dusty Mabe, 
 mpol, 
 Andrew Poelstra, 
+~1877056, 

update
diff --git a/doc/thanks/list b/doc/thanks/list
index 1f590c5572..dfeda7a813 100644
--- a/doc/thanks/list
+++ b/doc/thanks/list
@@ -125,3 +125,4 @@ oz,
 Lilia.Nanne, 
 Dusty Mabe, 
 mpol, 
+Andrew Poelstra, 

much better
diff --git a/doc/tips/Friends_-_Connecting_Projects_to_Share_Files.mdwn b/doc/tips/Friends_-_Connecting_Projects_to_Share_Files.mdwn
index 00048cb64e..5e06039132 100644
--- a/doc/tips/Friends_-_Connecting_Projects_to_Share_Files.mdwn
+++ b/doc/tips/Friends_-_Connecting_Projects_to_Share_Files.mdwn
@@ -1,72 +1,93 @@
-# Acquaintances: Sharing Files through Connected Projects
+[[!meta author="Spencer"]]
+
+# Friends: Sharing Files through Connected Projects
 
 I often connect repos together during my scientific work, in which I like to use the [YODA (Datalad)](https://handbook.datalad.org/en/latest/_images/dataset_modules.svg) standard of connecting related projects via submodules. However, I've recently found that sometimes I have to connect an entire repo to, say, a paper just to use one resource. For the sake of provenance, this connection is essential, but it feels extremely inefficient and unscalable to have one repo filled with submodules just for individual files.
 
-For these specific instances, I'm devising an alternative solution: acquaintance repos.
+For these specific instances, I'm devising an alternative solution: friend repos.
 
-## Acquaintances are Unrelated Repos
+## Friends are Unrelated Repos
 
-In general, an acquaintance is a repo whose *history* (branches, worktree, commits) is not relevant to the current repo, but is the origin for some files that the current repo uses. This is unlike *clones* (where everything is related), *parents/children* (where the entire child is derived or related to the parent, e.g. like superproject team repos and their children), or other [groups](https://git-annex.branchable.com/preferred_content/standard_groups/) defined by git-annex (archives, sources, etc.)
+In general, a friend is a repo whose *history* (branches, worktree, commits) is not relevant to the current repo, but is the origin for some files that the current repo uses. This is unlike *clones* (where everything is related), *parents/children* (where the entire child is derived or related to the parent, e.g. like superproject team repos and their children), or other [groups](https://git-annex.branchable.com/preferred_content/standard_groups/) defined by git-annex (archives, sources, etc.)
 
 This definition requires upholding some technical details:
 
-1. Acquaintances should **never sync**. This precludes defining them as normal git remotes unless you are very dilligent about undefining `remote.<name>.fetch` and setting `remote.<name>.sync=false`
-1. Acquaintances don't need to know about *all* files in the acquaintance repo (neither in a git sense or annex sense), just the files used. Therefore `git annex filter-branch` is a bit overkill, but could be done manually via selecting exactly the keys needed.
+1. Friends should **never sync**. This precludes defining them as normal git remotes unless you are very diligent about undefining `remote.<name>.fetch` and setting `remote.<name>.sync=false`.
+1. Friends don't need to know about *all* files in the friend repo (neither their history (git) nor key logs (annex)), just the files they use. Therefore, while `git annex filter-branch` could be used to filter for just the files needed, it is a bit of overkill.
 
 ## Solution - A Special Remote with Custom Groups
 
 (`gx` is short for `git annex`)
 
-Define a special repo that points to the primary storage location for the acquaintance repo.
-I like to define it with a name like `acq.X` so it's obvious by inspection that it's an acquaintance.
-Other metadata also tells you this (`gx group acq.X` will list `acquaintance`, or something could be added to the description),
+Define a special repo that points to the primary storage location for the friend repo.
+I like to define it with a name like `fri.X` so it's obvious by inspection that it's a friend.
+Other metadata also tells you this (`gx group fri.X` will list `friend`, or something could be added to the description),
 but being in the name makes it clear especially for e.g. `gx list`.
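+
+For example, as a sketch (a directory special remote standing in for
+wherever the friend actually keeps its annex objects):
+
+```bash
+gx initremote fri.X type=directory directory=/mnt/friendX/annex encryption=none
+gx group fri.X friend
+gx wanted fri.X groupwanted
+```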
 
 ### Depot: Primary Storage
 
 The depot is where a repo stores its *own* stuff.
 This prevents others' stuff from being duplicated into the referencing repo.
-For those familiar with the `client` group, `depot`s are just clients with acquaintances replacing archives.
+For those familiar with the `client` group, `depot`s are just clients with friends replacing archives.
+
+```bash
+gx groupwanted depot "(include=* and (not (copies=friend:1))) or approxlackingcopies=1"
+```
+
+#### Client Replacement Version
+
+If you want to be able to use the assistant or archives, here's a version that can stand in for `client`:
 
-`gx groupwanted depot "(include=* and (not (copies=acquaintance:1))) or approxlackingcopies=1"`
+```bash
+gx groupwanted depot "(include=* and ((exclude=*/archive/* and exclude=archive/*) or (not (copies=archive:1 or copies=smallarchive:1 or copies=friend:1)))) or approxlackingcopies=1"
+```
 
-### Acquaintance
+### Friend: Related Repos
 
-The acquaintance is the source for stuff the current repo references.
+The friend is the source for stuff the current repo references.
 Therefore, it doesn't need to be stored by the repo (i.e. in its depot)
 
-`gx groupwanted acquaintance present`
+```bash
+gx groupwanted friend present
+```
 
 ### Finishing Up
 
-To actually register where acquaintance files are, the ideal way is `gx fsck`.
+To actually register where friend files are, the ideal way is `gx fsck`.
 This is better than e.g. `gx filter-branch` mentioned above because it's automatic.
 The default behavior of `fsck`, like other annex commands, is to check against files *in the current worktree*,
 so it will only populate the metadata for a special remote about the files the current repo is trained to care about.
 
-`gx fsck -f acq.X -J 10`
+```bash
+gx fsck -f fri.X --fast -J 10
+```
 
-This may be a bit slow initially because it has to check each file in the worktree by seeking the remote, downloading known files, and verifying their hashes before they're registered as present in the new acquaintance.
+Without `--fast`, the process will be slower as it verifies hashes by downloading files.
 
 In short the process involves:
 
-1. For every external file desired by a repo:
-  1. Copy the file (or a symlink) to the current repo and track it with annex
-  1. Define a new special remote `acq.X` pointing to the depot/storage location for the file from the acquaintance repo.
-  1. Assign the special remote with group `acquaintance`
-  1. Assign any storage locations for the current remote with group `depot`
-  1. Run `gx fsck -f acq.X` to populate the new special remote's contents relative to the current repo's worktree/branch
-  1. Run `gx sync` if desired. The result should be files present in the current repo (if desired), and only in the acquaintance but not the depot(s).
-  1. Now, the acquaintance acts as a link back to the origin for referenced files without duplication or having to add the entire acquaintance as a submodule!
+1. For every repo that wants a friend:
+    1. Define the group `friend` with its `groupwanted` rule (above for easy copying)
+    1. Define the group `depot` with its `groupwanted` rule (above for easy copying)
+    1. Set existing depots to use the `depot` group and have `groupwanted` as their `wanted` rule
+1. For every friend:
+    1. Define a new special remote `fri.X` pointing to the depot/storage location for the friend repo.
+    1. Assign the special remote with group `friend` and ensure it has `groupwanted` as its `wanted` rule
+1. For every batch of files added from a friend:
+    1. Copy the files (or symlinks) and track them with annex
+    1. Run the `gx fsck` above to update the friend with the new files
+    1. Run `gx sync` if desired.
+    1. The result should be files present in the friend (and maybe the current repo), but not the depot(s).
+    1. Now, the friend tells us where a file came from without having to add the entire friend as a submodule!
 
 ## FAQ/Open Questions
 
-1. Is there a way to define the custom groups globally, or will I have to re-define special groups in every repo that uses acquaitances/depots?
-  1. Not sure yet. I wonder where custom groups could be defined globally? Maybe in the user `.gitconfig`.
+1. Is there a way to define the custom groups globally, or will I have to re-define special groups in every repo that uses friends/depots?
+    1. Not sure yet. I wonder where custom groups could be defined globally? Maybe in the user `.gitconfig`.
 1. Is there a way to get CLI autocomplete to suggest custom groups?
-  1. Not sure yet.
-1. Will this play well with standard groups and the assistant, especially if `client`s and `archive`s are used?
-  1. Probably not, I don't use the assistant, but I suspect if one wanted to they'd have to define depots as clients with the acquantaince logic added instead of substituted for archives.
+    1. I don't think there's support for this yet: only the standard groups are suggested in my zsh/omz setup.
+1. Is this a replacement for Datalad datasets?
+    1. I think of this as a tool to use alongside datasets. Datalad datasets are great when one project depends on the entirety of another (like a technical paper on an analysis), while this technique is better for collecting files from many projects under one umbrella (like a thesis, which, coincidentally, is what I'm developing this for).
+    1. This also helps separate the ideas of storage (where files live) and referencing (how files are used). When I originally started using datasets, I had one special repo for each repo since I figured each repo has to have its own unique remote for git in whatever Github/Organization/Team the project belongs to anyway. Now, this is motivating me to consider how to rationally store contents for projects that share some commonality (a collaboration, an experimental phase, a taskforce, a super-repo as a parent). In this way, I can maintain a provenance record while minimizing the number of clones and remotes I need to maintain.
 
-<!-- Work in progress! Feel free to leave comments like this if you have questions about the final idea once I finish it. -->
 <!-- Learning in Public: I've only just begun to use this for myself and am eliciting feedback and fleshing it out by describing it here (Feynman Technique style) -->

rename tips/Acquaintances_-_Connecting_Projects_to_Share_Files.mdwn to tips/Friends_-_Connecting_Projects_to_Share_Files.mdwn
diff --git a/doc/tips/Acquaintances_-_Connecting_Projects_to_Share_Files.mdwn b/doc/tips/Friends_-_Connecting_Projects_to_Share_Files.mdwn
similarity index 100%
rename from doc/tips/Acquaintances_-_Connecting_Projects_to_Share_Files.mdwn
rename to doc/tips/Friends_-_Connecting_Projects_to_Share_Files.mdwn
diff --git a/doc/tips/Acquaintances_-_Connecting_Projects_to_Share_Files/comment_1_8abe6074c55f81ee3643b508e742c6cd._comment b/doc/tips/Friends_-_Connecting_Projects_to_Share_Files/comment_1_8abe6074c55f81ee3643b508e742c6cd._comment
similarity index 100%
rename from doc/tips/Acquaintances_-_Connecting_Projects_to_Share_Files/comment_1_8abe6074c55f81ee3643b508e742c6cd._comment
rename to doc/tips/Friends_-_Connecting_Projects_to_Share_Files/comment_1_8abe6074c55f81ee3643b508e742c6cd._comment

comment
diff --git a/doc/tips/Acquaintances_-_Connecting_Projects_to_Share_Files/comment_1_8abe6074c55f81ee3643b508e742c6cd._comment b/doc/tips/Acquaintances_-_Connecting_Projects_to_Share_Files/comment_1_8abe6074c55f81ee3643b508e742c6cd._comment
new file mode 100644
index 0000000000..f58b964892
--- /dev/null
+++ b/doc/tips/Acquaintances_-_Connecting_Projects_to_Share_Files/comment_1_8abe6074c55f81ee3643b508e742c6cd._comment
@@ -0,0 +1,7 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 1"""
+ date="2025-10-06T13:31:33Z"
+ content="""
+Passing --fast to fsck will prevent it needing to download the files.
+"""]]

new idea, work in progress
diff --git a/doc/tips/Acquaintances_-_Connecting_Projects_to_Share_Files.mdwn b/doc/tips/Acquaintances_-_Connecting_Projects_to_Share_Files.mdwn
new file mode 100644
index 0000000000..00048cb64e
--- /dev/null
+++ b/doc/tips/Acquaintances_-_Connecting_Projects_to_Share_Files.mdwn
@@ -0,0 +1,72 @@
+# Acquaintances: Sharing Files through Connected Projects
+
+I often connect repos together during my scientific work, in which I like to use the [YODA (Datalad)](https://handbook.datalad.org/en/latest/_images/dataset_modules.svg) standard of connecting related projects via submodules. However, I've recently found that sometimes I have to connect an entire repo to, say, a paper just to use one resource. For the sake of provenance, this connection is essential, but it feels extremely inefficient and unscalable to have one repo filled with submodules just for individual files.
+
+For these specific instances, I'm devising an alternative solution: acquaintance repos.
+
+## Acquaintances are Unrelated Repos
+
+In general, an acquaintance is a repo whose *history* (branches, worktree, commits) is not relevant to the current repo, but is the origin for some files that the current repo uses. This is unlike *clones* (where everything is related), *parents/children* (where the entire child is derived or related to the parent, e.g. like superproject team repos and their children), or other [groups](https://git-annex.branchable.com/preferred_content/standard_groups/) defined by git-annex (archives, sources, etc.)
+
+This definition requires upholding some technical details:
+
+1. Acquaintances should **never sync**. This precludes defining them as normal git remotes unless you are very diligent about undefining `remote.<name>.fetch` and setting `remote.<name>.sync=false`.
+1. Acquaintances don't need to know about *all* files in the acquaintance repo (neither in a git sense nor an annex sense), just the files used. Therefore `git annex filter-branch` is a bit overkill, but it could be done manually by selecting exactly the keys needed.
+
+## Solution - A Special Remote with Custom Groups
+
+(`gx` is short for `git annex`)
+
+Define a special repo that points to the primary storage location for the acquaintance repo.
+I like to define it with a name like `acq.X` so it's obvious by inspection that it's an acquaintance.
+Other metadata also tells you this (`gx group acq.X` will list `acquaintance`, or something could be added to the description),
+but being in the name makes it clear especially for e.g. `gx list`.
+
+### Depot: Primary Storage
+
+The depot is where a repo stores its *own* stuff.
+This prevents others' stuff from being duplicated into the referencing repo.
+For those familiar with the `client` group, `depot`s are just clients with acquaintances replacing archives.
+
+`gx groupwanted depot "(include=* and (not (copies=acquaintance:1))) or approxlackingcopies=1"`
+
+### Acquaintance
+
+The acquaintance is the source for stuff the current repo references.
+Therefore, it doesn't need to be stored by the repo (i.e. in its depot)
+
+`gx groupwanted acquaintance present`
+
+### Finishing Up
+
+To actually register where acquaintance files are, the ideal way is `gx fsck`.
+This is better than e.g. `gx filter-branch` mentioned above because it's automatic.
+The default behavior of `fsck`, like other annex commands, is to check against files *in the current worktree*,
+so it will only populate the metadata for a special remote about the files the current repo is trained to care about.
+
+`gx fsck -f acq.X -J 10`
+
+This may be a bit slow initially because it has to check each file in the worktree by seeking the remote, downloading known files, and verifying their hashes before they're registered as present in the new acquaintance.
+
+In short the process involves:
+
+1. For every external file desired by a repo:
+  1. Copy the file (or a symlink) to the current repo and track it with annex
+  1. Define a new special remote `acq.X` pointing to the depot/storage location for the file from the acquaintance repo.
+  1. Assign the special remote with group `acquaintance`
+  1. Assign any storage locations for the current remote with group `depot`
+  1. Run `gx fsck -f acq.X` to populate the new special remote's contents relative to the current repo's worktree/branch
+  1. Run `gx sync` if desired. The result should be files present in the current repo (if desired), and only in the acquaintance but not the depot(s).
+  1. Now, the acquaintance acts as a link back to the origin for referenced files without duplication or having to add the entire acquaintance as a submodule!
+
+## FAQ/Open Questions
+
+1. Is there a way to define the custom groups globally, or will I have to re-define special groups in every repo that uses acquaintances/depots?
+  1. Not sure yet. I wonder where custom groups could be defined globally? Maybe in the user `.gitconfig`.
+1. Is there a way to get CLI autocomplete to suggest custom groups?
+  1. Not sure yet.
+1. Will this play well with standard groups and the assistant, especially if `client`s and `archive`s are used?
+  1. Probably not. I don't use the assistant, but I suspect if one wanted to, they'd have to define depots as clients with the acquaintance logic added instead of substituted for archives.
+
+<!-- Work in progress! Feel free to leave comments like this if you have questions about the final idea once I finish it. -->
+<!-- Learning in Public: I've only just begun to use this for myself and am eliciting feedback and fleshing it out by describing it here (Feynman Technique style) -->

Added a comment: My config works now
diff --git a/doc/forum/annex.largefiles_doesn__39__t_work_for_git_add/comment_1_132d155d5445745e5ee086370be48aad._comment b/doc/forum/annex.largefiles_doesn__39__t_work_for_git_add/comment_1_132d155d5445745e5ee086370be48aad._comment
new file mode 100644
index 0000000000..2055ee769c
--- /dev/null
+++ b/doc/forum/annex.largefiles_doesn__39__t_work_for_git_add/comment_1_132d155d5445745e5ee086370be48aad._comment
@@ -0,0 +1,30 @@
+[[!comment format=mdwn
+ username="incogshift"
+ avatar="http://cdn.libravatar.org/avatar/fe527f5047693f6657cd03a6893da975"
+ subject="My config works now"
+ date="2025-10-04T08:05:00Z"
+ content="""
+I have `.gitattributes`:
+
+```
+* annex.largefiles=nothing filter=annex
+*.pdf annex.largefiles=anything filter=annex
+```
+
+and git config:
+
+```
+[annex]
+	gitaddtoannex = true
+```
+
+Using `git add` now adds it to annex. This can be confirmed with
+
+```
+git annex info file.pdf
+```
+
+The output should show `present = true` at the end. If it wasn't added to annex, the output would show `fatal: Not a valid object name file.pdf`.
+
+And it seems that, by default, the files are stored in the working tree in their unlocked state. So `git add` doesn't replace the file with a symlink, unlike `git annex add`.
+"""]]

diff --git a/doc/forum/annex.largefiles_doesn__39__t_work_for_git_add.mdwn b/doc/forum/annex.largefiles_doesn__39__t_work_for_git_add.mdwn
index 27c94248c1..128341f4ca 100644
--- a/doc/forum/annex.largefiles_doesn__39__t_work_for_git_add.mdwn
+++ b/doc/forum/annex.largefiles_doesn__39__t_work_for_git_add.mdwn
@@ -14,3 +14,18 @@ My config is the one below:
 *.pptx annex.largefiles=anything
 *.docx annex.largefiles=anything
 ```
+
+I'm using NixOS. My git annex version info is below:
+
+```
+git annex version
+git-annex version: 10.20250630
+build flags: Assistant Webapp Pairing Inotify DBus DesktopNotify TorrentParser MagicMime Servant Feeds Testsuite S3 WebDAV
+dependency versions: aws-0.24.4 bloomfilter-2.0.1.2 crypton-1.0.4 DAV-1.3.4 feed-1.3.2.1 ghc-9.8.4 http-client-0.7.19 persistent-sqlite-2.13.3.0 torrent-10000.1.3 uuid-1.3.16 yesod-1.6.2.1
+key/value backends: SHA256E SHA256 SHA512E SHA512 SHA224E SHA224 SHA384E SHA384 SHA3_256E SHA3_256 SHA3_512E SHA3_512 SHA3_224E SHA3_224 SHA3_384E SHA3_384 SKEIN256E SKEIN256 SKEIN512E SKEIN512 BLAKE2B256E BLAKE2B256 BLAKE2B512E BLAKE2B512 BLAKE2B160E BLAKE2B160 BLAKE2B224E BLAKE2B224 BLAKE2B384E BLAKE2B384 BLAKE2BP512E BLAKE2BP512 BLAKE2S256E BLAKE2S256 BLAKE2S160E BLAKE2S160 BLAKE2S224E BLAKE2S224 BLAKE2SP256E BLAKE2SP256 BLAKE2SP224E BLAKE2SP224 SHA1E SHA1 MD5E MD5 WORM URL GITBUNDLE GITMANIFEST VURL X*
+remote types: git gcrypt p2p S3 bup directory rsync web bittorrent webdav adb tahoe glacier ddar git-lfs httpalso borg rclone hook external compute mask
+operating system: linux x86_64
+supported repository versions: 8 9 10
+upgrade supported from repository versions: 0 1 2 3 4 5 6 7 8 9 10
+local repository version: 10
+```

diff --git a/doc/forum/annex.largefiles_doesn__39__t_work_for_git_add.mdwn b/doc/forum/annex.largefiles_doesn__39__t_work_for_git_add.mdwn
index c43d41edda..27c94248c1 100644
--- a/doc/forum/annex.largefiles_doesn__39__t_work_for_git_add.mdwn
+++ b/doc/forum/annex.largefiles_doesn__39__t_work_for_git_add.mdwn
@@ -1,4 +1,4 @@
-I set up `annex.largefiles` in my global `.gitattributes` config. But git add doesn't add the defined large files to annex. But git annex works with large files and small as intended.
+I set up `annex.largefiles` in my global `.gitattributes` config. But git add doesn't add the defined large files to annex. But git annex works with large and small files as intended.
 
 My config is the one below:
 

diff --git a/doc/forum/annex.largefiles_doesn__39__t_work_for_git_add.mdwn b/doc/forum/annex.largefiles_doesn__39__t_work_for_git_add.mdwn
index b357fcf5a7..c43d41edda 100644
--- a/doc/forum/annex.largefiles_doesn__39__t_work_for_git_add.mdwn
+++ b/doc/forum/annex.largefiles_doesn__39__t_work_for_git_add.mdwn
@@ -1,7 +1,8 @@
-I set up annex.largefiles in my global .gitattributes config. But git add doesn't add the defined large files to annex. But git annex works with large files and small as intended.
+I set up `annex.largefiles` in my global `.gitattributes` config. But git add doesn't add the defined large files to annex. But git annex works with large files and small as intended.
 
 My config is the one below:
 
+```
 * annex.largefiles=nothing
 *.pdf annex.largefiles=anything
 *.mp4 annex.largefiles=anything
@@ -12,3 +13,4 @@ My config is the one below:
 *.DOC annex.largefiles=anything
 *.pptx annex.largefiles=anything
 *.docx annex.largefiles=anything
+```

diff --git a/doc/forum/annex.largefiles_doesn__39__t_work_for_git_add.mdwn b/doc/forum/annex.largefiles_doesn__39__t_work_for_git_add.mdwn
new file mode 100644
index 0000000000..b357fcf5a7
--- /dev/null
+++ b/doc/forum/annex.largefiles_doesn__39__t_work_for_git_add.mdwn
@@ -0,0 +1,14 @@
+I set up annex.largefiles in my global .gitattributes config. But git add doesn't add the defined large files to annex. But git annex works with large files and small as intended.
+
+My config is the one below:
+
+* annex.largefiles=nothing
+*.pdf annex.largefiles=anything
+*.mp4 annex.largefiles=anything
+*.mp3 annex.largefiles=anything
+*.mkv annex.largefiles=anything
+*.odt annex.largefiles=anything
+*.wav annex.largefiles=anything
+*.DOC annex.largefiles=anything
+*.pptx annex.largefiles=anything
+*.docx annex.largefiles=anything

comment
diff --git a/doc/todo/very_confusing_name_annex.assistant.allowunlocked/comment_1_b7ad0090e29776c61babbc7bf0ccd684._comment b/doc/todo/very_confusing_name_annex.assistant.allowunlocked/comment_1_b7ad0090e29776c61babbc7bf0ccd684._comment
new file mode 100644
index 0000000000..73e3afccec
--- /dev/null
+++ b/doc/todo/very_confusing_name_annex.assistant.allowunlocked/comment_1_b7ad0090e29776c61babbc7bf0ccd684._comment
@@ -0,0 +1,23 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 1"""
+ date="2025-10-02T17:32:52Z"
+ content="""
+I think that "annex.assistant.allowlocked" would be as confusing, like you
+say the user would then have to RTFM to realize that they need to use
+annex.addunlocked to configure it, and that it doesn't cause files to be
+locked by default.
+
+To me, "treataddunlocked" is vague. Treat it as what?
+"allowaddunlocked" would be less vague since it does get the (full)
+name of the other config in there, so says it's allowing use of
+the other config.
+
+I agree this is a confusing name, and I wouldn't mind changing it, but I
+don't think it warrants an entire release to do that. So there would be
+perhaps a month for people to start using the current name. If this had
+come up in the 2 weeks between implementation and release I would have
+changed it, but at this point it starts to need a backwards compatibility
+transition to change it, and I don't know if the minor improvement of
+"allowaddunlocked" is worth that.
+"""]]

Added a comment
diff --git a/doc/bugs/Compiling_20250925__44___variable_not_in_scope_error/comment_2_33575c4a6477e3384a16533ff8b258ee._comment b/doc/bugs/Compiling_20250925__44___variable_not_in_scope_error/comment_2_33575c4a6477e3384a16533ff8b258ee._comment
new file mode 100644
index 0000000000..54f8af3378
--- /dev/null
+++ b/doc/bugs/Compiling_20250925__44___variable_not_in_scope_error/comment_2_33575c4a6477e3384a16533ff8b258ee._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ username="caleb@2b0d6f0eabf955cc8fd04c634b09f0ca4aad9233"
+ nickname="caleb"
+ avatar="http://cdn.libravatar.org/avatar/1d84382865c6c3378c04a35348fdfa07"
+ subject="comment 2"
+ date="2025-10-01T22:15:14Z"
+ content="""
+Thank you for the fix, that built just fine and I've successfully bumped the Arch Linux package to 20250929.
+"""]]

Added a comment
diff --git a/doc/todo/import_tree_from_rsync_special_remote/comment_8_b545d29519e57fbc2d563ce6d9aafdb7._comment b/doc/todo/import_tree_from_rsync_special_remote/comment_8_b545d29519e57fbc2d563ce6d9aafdb7._comment
new file mode 100644
index 0000000000..268f8cab57
--- /dev/null
+++ b/doc/todo/import_tree_from_rsync_special_remote/comment_8_b545d29519e57fbc2d563ce6d9aafdb7._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="yarikoptic"
+ avatar="http://cdn.libravatar.org/avatar/f11e9c84cb18d26a1748c33b48c924b4"
+ subject="comment 8"
+ date="2025-10-01T20:18:43Z"
+ content="""
+FTR: apparently sshfs is based on sftp, and that provides no means to access the original inode. Not yet sure what that says about the stability of inodes across remounts, or whether it is sensible to rely on them. Useful ref with pointers: [sshfs/issues/109#issuecomment-2755824670](https://github.com/libfuse/sshfs/issues/109#issuecomment-2755824670)
+"""]]

complaining about choice of variable
diff --git a/doc/todo/very_confusing_name_annex.assistant.allowunlocked.mdwn b/doc/todo/very_confusing_name_annex.assistant.allowunlocked.mdwn
new file mode 100644
index 0000000000..e58d392424
--- /dev/null
+++ b/doc/todo/very_confusing_name_annex.assistant.allowunlocked.mdwn
@@ -0,0 +1,8 @@
+Thank you for addressing that [todo](https://git-annex.branchable.com/todo/allow_configuring_assistant_to_add_files_locked/)!  
+
+But I must say that the choice of `annex.assistant.allowunlocked` is very confusing! Without careful RTFM it suggests that by default the assistant **does not** `allowunlocked`, thus using `locked`, and thus the **opposite** of the default behavior.
+
+Since it really instructs the assistant to honor `addunlocked`, I would have named it something like `treataddunlocked`.
+Or the smallest change to make it semantically sensible would have been to remove the `un` and make it `annex.assistant.allowlocked`, thus allowing `locked` files in general, which in reality (after RTFM) would mean using the `addunlocked` config.
+
+Just wanted to check whether you will stick with the current choice before I start making use of it!
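For anyone landing here, a minimal sketch of how the two settings interact, per the documentation (the match expression is only an example):

```
# opt the assistant in to honoring annex.addunlocked
git config annex.assistant.allowunlocked true

# annex.addunlocked then decides which files get added unlocked;
# it can be a boolean or an expression like this one
git config annex.addunlocked "include=*.mp4"
```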

comment
diff --git a/doc/todo/import_tree_from_rsync_special_remote/comment_7_9716fc56ccfb622c964a64b37c1c5fdc._comment b/doc/todo/import_tree_from_rsync_special_remote/comment_7_9716fc56ccfb622c964a64b37c1c5fdc._comment
new file mode 100644
index 0000000000..1036bbd920
--- /dev/null
+++ b/doc/todo/import_tree_from_rsync_special_remote/comment_7_9716fc56ccfb622c964a64b37c1c5fdc._comment
@@ -0,0 +1,24 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 7"""
+ date="2025-10-01T15:35:53Z"
+ content="""
+I wonder how sshfs manages stable inodes that differ from the actual ones?
+But if it's really reliably stable, it would be ok to use it with the
+directory special remote.
+
+Extending the external special remote interface to support
+[import](https://git-annex.branchable.com/design/external_special_remote_protocol/export_and_import_appendix/#index1h2)
+would let you roll your own special remote, that could use ssh with
+rsync or whatever. 
+
+The current design for that tries to support both import and export, but
+no one has yet stepped up to the plate to try to implement a special remote
+that supports both safely. So I am leaning toward thinking that it would be
+a good idea to make the external special remote interface support *only*
+import (or export) for a given external special remote, but not both.
+
+Then it would become pretty easy to make your own special remote that
+implements import only, using whatever ssh commands make sense for the
+server.
+"""]]
diff --git a/doc/todo/importtree_only_remotes.mdwn b/doc/todo/importtree_only_remotes.mdwn
index 8f140c9450..2f9174b670 100644
--- a/doc/todo/importtree_only_remotes.mdwn
+++ b/doc/todo/importtree_only_remotes.mdwn
@@ -32,7 +32,9 @@ the wrong content. (So the remote should have retrievalSecurityPolicy =
 RetrievalVerifiableKeysSecure to make downloads be verified well enough.)
 
 I said this would not use a ContentIdentifier, but it seems it needs some
-simple form of ContentIdentifier, which could be just an mtime.
+simple form of ContentIdentifier, which could be just an mtime
+(but mtime or mtime+size is not able to detect swaps of 2 files that share
+both; using inode or something like that is better).
 Without any ContentIdentifier, it seems that each time 
 `git annex import --from remote` is run, it would need to re-download
 all files from the remote, because it would have no way of knowing
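A small demonstration of the swap problem (file names arbitrary):

```
printf aaaa > a
printf bbbb > b
touch -r a b                  # give b the same mtime as a; sizes already match

mv a tmp; mv b a; mv tmp b    # swap the two files

# each path still has the same (mtime, size) it had before the swap,
# so an importer keyed on those sees nothing to do; the inode numbers
# shown by ls -i do reveal the swap
ls -li a b
```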

followup
diff --git a/doc/forum/Is_there_a_way_to_have_assistant_add_files_locked__63__/comment_11_48d03d7cc1a5e007d3d06d9753d467ff._comment b/doc/forum/Is_there_a_way_to_have_assistant_add_files_locked__63__/comment_11_48d03d7cc1a5e007d3d06d9753d467ff._comment
new file mode 100644
index 0000000000..5e8d14000d
--- /dev/null
+++ b/doc/forum/Is_there_a_way_to_have_assistant_add_files_locked__63__/comment_11_48d03d7cc1a5e007d3d06d9753d467ff._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 11"""
+ date="2025-10-01T15:32:45Z"
+ content="""
+This did get implemented: `git config annex.assistant.allowunlocked true`
+will make it use your `annex.addunlocked` setting.
+"""]]

Added a comment
diff --git a/doc/todo/import_tree_from_rsync_special_remote/comment_6_abc34860aed11d274a91d3134b6a7040._comment b/doc/todo/import_tree_from_rsync_special_remote/comment_6_abc34860aed11d274a91d3134b6a7040._comment
new file mode 100644
index 0000000000..da3609a6fd
--- /dev/null
+++ b/doc/todo/import_tree_from_rsync_special_remote/comment_6_abc34860aed11d274a91d3134b6a7040._comment
@@ -0,0 +1,34 @@
+[[!comment format=mdwn
+ username="yarikoptic"
+ avatar="http://cdn.libravatar.org/avatar/f11e9c84cb18d26a1748c33b48c924b4"
+ subject="comment 6"
+ date="2025-10-01T13:09:00Z"
+ content="""
+quick check -- according to `ls`, original inodes are not mapped, but some are assigned and persist across remounts:
+
+```
+❯ ls -li /tmp/glances-root.log ~/.emacs ~/20250807-15forzabava.pdf
+   132280 lrwxrwxrwx 1 yoh  yoh      17 Nov 11  2014 /home/yoh/.emacs -> .etc/emacs/.emacs
+152278557 -rw-rw-r-- 1 yoh  yoh  207101 Aug  7 10:30 /home/yoh/20250807-15forzabava.pdf
+       34 -rw-r--r-- 1 root root   1165 Oct  1 08:43 /tmp/glances-root.log
+
+❯ sshfs localhost:/ /tmp/localhost
+
+❯ ls -li /tmp/localhost{/tmp/glances-root.log,/home/yoh/{.emacs,20250807-15forzabava.pdf}}
+ 6 lrwxrwxrwx 1 yoh  yoh      17 Nov 11  2014 /tmp/localhost/home/yoh/.emacs -> .etc/emacs/.emacs
+10 -rw-rw-r-- 1 yoh  yoh  207101 Aug  7 10:30 /tmp/localhost/home/yoh/20250807-15forzabava.pdf
+ 3 -rw-r--r-- 1 root root   1165 Oct  1 08:43 /tmp/localhost/tmp/glances-root.log
+
+❯ fusermount -u /tmp/localhost
+
+❯ sshfs localhost:/ /tmp/localhost
+
+❯ ls -li /tmp/localhost{/tmp/glances-root.log,/home/yoh/{.emacs,20250807-15forzabava.pdf}}
+ 6 lrwxrwxrwx 1 yoh  yoh      17 Nov 11  2014 /tmp/localhost/home/yoh/.emacs -> .etc/emacs/.emacs
+10 -rw-rw-r-- 1 yoh  yoh  207101 Aug  7 10:30 /tmp/localhost/home/yoh/20250807-15forzabava.pdf
+ 3 -rw-r--r-- 1 root root   1165 Oct  1 08:43 /tmp/localhost/tmp/glances-root.log
+
+```
+
+ok, if not `sshfs` and not `rsync` -- do you see any other way? e.g. could it easily be set up for some `git` with an ssh URL type \"special\" remote? ;-)
+"""]]

comments
diff --git a/doc/todo/Recent_remote_activities/comment_4_766ce3ab6c4ff368ec8e06e6c6f6aa8e._comment b/doc/todo/Recent_remote_activities/comment_4_766ce3ab6c4ff368ec8e06e6c6f6aa8e._comment
new file mode 100644
index 0000000000..7ca4788d07
--- /dev/null
+++ b/doc/todo/Recent_remote_activities/comment_4_766ce3ab6c4ff368ec8e06e6c6f6aa8e._comment
@@ -0,0 +1,23 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""git-annex activity"""
+ date="2025-09-30T14:29:54Z"
+ content="""
+Copying a related idea from @nobodyinperson on [[todo/remove_webapp]]:
+
+Furthermore, a command like `git annex activity` that goes arbitrarily far back in time and statically (non-live) lists recent activities like:
+
+- yesterday 23:32: remote1 downloaded 5 files (45MB)
+- today 10:45: you modified file `document.txt` (10MB)
+- today 10:46: you uploaded file `document.txt` (from today 10:45) to remote1, remote2 and remote3
+- today 12:35: Fred McGitFace modified file `document.txt` (12MB) and uploaded to remote2
+- ...
+
+Basically a human-readable (or as JSON), chronological log of things that happened in the repo. This is a superpower of git-annex: all this information is available as far back as one wants, we just don't have a way to access it nicely. `git log` and `git annex log` exist, but they are too specific, too broad or a bit hard to parse on their own. For example:
+
+- `git annex activity --since=\"2 weeks ago\" --include='*.doc'` would list things (who committed, which remote received it, etc.) that happened in the last two weeks to *.doc files
+- `git annex activity --only-annex --in=remote2` would list recent annex operations (in the `git-annex` branch only) of remote2
+- `git annex activity --only-changes --largerthan=10MB` would list recent file changes (additions, modifications, deletions, etc., in `git log` only)
+
+Such a `git annex assistant-log` or `git annex activity` would be a very nice feature to showcase git-annex's power (which other file syncing tool can do this? 🤔) and also solve [[todo/Recent_remote_activities]].
+"""]]
diff --git a/doc/todo/Recent_remote_activities/comment_5_1f4f43b32af276ef3b3db54fc2cb33f7._comment b/doc/todo/Recent_remote_activities/comment_5_1f4f43b32af276ef3b3db54fc2cb33f7._comment
new file mode 100644
index 0000000000..ca7d2061b6
--- /dev/null
+++ b/doc/todo/Recent_remote_activities/comment_5_1f4f43b32af276ef3b3db54fc2cb33f7._comment
@@ -0,0 +1,11 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 5"""
+ date="2025-09-30T14:31:59Z"
+ content="""
+A `git-annex activity` (or `git-annex log`) could also optionally stream live
+activity as it is happening. Eg, when a transfer is started it could display
+the start, and then later the end. That would be easy to build with what's
+in git-annex already. The assistant already uses the transfer logs that way,
+using inotify to notice changes.
+"""]]
diff --git a/doc/todo/Recent_remote_activities/comment_6_9e686c20ccd2c81f72f479441ca57698._comment b/doc/todo/Recent_remote_activities/comment_6_9e686c20ccd2c81f72f479441ca57698._comment
new file mode 100644
index 0000000000..7f06dd5337
--- /dev/null
+++ b/doc/todo/Recent_remote_activities/comment_6_9e686c20ccd2c81f72f479441ca57698._comment
@@ -0,0 +1,24 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""Re: git-annex activity"""
+ date="2025-09-30T14:34:50Z"
+ content="""
+> `git annex activity --since="2 weeks ago" --include='*.doc'
+
+This is essentially the same as `git-annex log` with a path. It also
+supports --since and --json. The difference I guess is the idea to also
+include information about git commits of the files, not only git-annex
+location changes. That would complicate the output, and apparently
+`git-annex log`'s output is too hard to parse already. So a design for a
+better output would be needed.
+
+> `git annex activity --only-annex --in=remote2`
+
+This is the same as `git-annex log --all` with the output filtered to only
+list a given remote. (`--in` does not influence `--all` currently).
+
+> `git annex activity --only-changes --largerthan=10MB`
+
+Can probably be accomplished with `git log` with some
+-S regexp.
+"""]]
diff --git a/doc/todo/remove_webapp/comment_4_d80ec1b3534ffa514df926925a0105f7._comment b/doc/todo/remove_webapp/comment_4_d80ec1b3534ffa514df926925a0105f7._comment
new file mode 100644
index 0000000000..ec4a8b0ae1
--- /dev/null
+++ b/doc/todo/remove_webapp/comment_4_d80ec1b3534ffa514df926925a0105f7._comment
@@ -0,0 +1,10 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 4"""
+ date="2025-09-30T14:22:09Z"
+ content="""
+git-annex does support desktop notifications of file uploads/downloads,
+via --notify-start and --notify-finish. (When built with dbus support.)
+That can be used with the assistant w/o webapp to keep a desktop user
+informed about what is going on.
+"""]]
diff --git a/doc/todo/remove_webapp/comment_5_75c22d9f3a84c259084468c03f5735bb._comment b/doc/todo/remove_webapp/comment_5_75c22d9f3a84c259084468c03f5735bb._comment
new file mode 100644
index 0000000000..f1225bb40d
--- /dev/null
+++ b/doc/todo/remove_webapp/comment_5_75c22d9f3a84c259084468c03f5735bb._comment
@@ -0,0 +1,12 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 5"""
+ date="2025-09-30T14:55:25Z"
+ content="""
+I've copied the `git-annex activity` idea over to
+[[todo/Recent_remote_activities]] so it doesn't get lost.
+
+I don't think it makes sense to make that a blocker for removing the webapp
+though. That would only let an advanced user build some kind of activity
+display; it doesn't address the needs of most users of the webapp.
+"""]]

Added a comment: Fixed in 20250929
diff --git a/doc/bugs/importfeed__58___Enum.toEnum__123__Word8__125____58___tag___40__8217__41___is_outs/comment_7_4d6559666e8b53957ed93ffa5928cb00._comment b/doc/bugs/importfeed__58___Enum.toEnum__123__Word8__125____58___tag___40__8217__41___is_outs/comment_7_4d6559666e8b53957ed93ffa5928cb00._comment
new file mode 100644
index 0000000000..00264d5eba
--- /dev/null
+++ b/doc/bugs/importfeed__58___Enum.toEnum__123__Word8__125____58___tag___40__8217__41___is_outs/comment_7_4d6559666e8b53957ed93ffa5928cb00._comment
@@ -0,0 +1,26 @@
+[[!comment format=mdwn
+ username="ewen"
+ avatar="http://cdn.libravatar.org/avatar/605b2981cb52b4af268455dee7a4f64e"
+ subject="Fixed in 20050929"
+ date="2025-09-29T21:54:14Z"
+ content="""
+Thanks for the very quick turnaround on a new release!
+
+Conveniently Homebrew also turned around building the new release quickly (I suspect it might be one of the packages in their CI for auto upgrade now), so I've been able to test the Homebrew build of 20250929.
+
+20250929 seems to be working correctly: it downloads podcast feeds, parses them, and downloads the media attachments as before.
+
+Ewen
+
+PS: Test example below. It also worked for my regular podcast downloads, which were failing with 20250926.
+
+```
+ewen@basadi:/tmp/retest$ TEMPLATE='archive/${feedtitle}/${itemtitle}${extension}'
+ewen@basadi:/tmp/retest$ git annex importfeed --relaxed --template=\"${TEMPLATE}\" \"https://risky.biz/feeds/risky-business\"
+importfeed gathering known urls ok
+importfeed https://risky.biz/feeds/risky-business (\"Risky Business\") ok
+addurl https://dts.podtrac.com/redirect.mp3/media3.risky.biz/RB808.mp3 (to archive/Risky_Business/Risky_Business__808_--_Insane_megabug_in_Entra_left_all_tenants_exposed.mp3) ok
+addurl https://dts.podtrac.com/redirect.mp3/media3.risky.biz/RB807.mp3 (to archive/Risky_Business/Risky_Business__807_--_Shai-Hulud_npm_worm_wreaks_old-school_havoc.mp3) ok
+...
+```
+"""]]

Revert "webapp: Remove support for local pairing"
This reverts commit 8ea6d7acc548cb35b4905c9c663e8a7de66ac752.
Temporarily, until builds finish for today's release.
diff --git a/Assistant.hs b/Assistant.hs
index cd81895861..911ebd33d3 100644
--- a/Assistant.hs
+++ b/Assistant.hs
@@ -40,6 +40,9 @@ import Assistant.Threads.Glacier
 #ifdef WITH_WEBAPP
 import Assistant.WebApp
 import Assistant.Threads.WebApp
+#ifdef WITH_PAIRING
+import Assistant.Threads.PairListener
+#endif
 #else
 import Assistant.Types.UrlRenderer
 #endif
@@ -152,6 +155,11 @@ startDaemon assistant foreground startdelay cannotrun listenhost listenport star
 			then webappthread
 			else webappthread ++
 				[ watch commitThread
+#ifdef WITH_WEBAPP
+#ifdef WITH_PAIRING
+				, assist $ pairListenerThread urlrenderer
+#endif
+#endif
 				, assist pushThread
 				, assist pushRetryThread
 				, assist exportThread
diff --git a/Assistant/Pairing/MakeRemote.hs b/Assistant/Pairing/MakeRemote.hs
new file mode 100644
index 0000000000..f4468bc07c
--- /dev/null
+++ b/Assistant/Pairing/MakeRemote.hs
@@ -0,0 +1,98 @@
+{- git-annex assistant pairing remote creation
+ -
+ - Copyright 2012 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module Assistant.Pairing.MakeRemote where
+
+import Assistant.Common
+import Assistant.Ssh
+import Assistant.Pairing
+import Assistant.Pairing.Network
+import Assistant.MakeRemote
+import Assistant.Sync
+import Config.Cost
+import Config
+import qualified Types.Remote as Remote
+
+import Network.Socket
+import qualified Data.Text as T
+
+{- Authorized keys are set up before pairing is complete, so that the other
+ - side can immediately begin syncing. -}
+setupAuthorizedKeys :: PairMsg -> OsPath -> IO ()
+setupAuthorizedKeys msg repodir = case validateSshPubKey $ remoteSshPubKey $ pairMsgData msg of
+	Left err -> giveup err
+	Right pubkey -> do
+		absdir <- absPath repodir
+		unlessM (liftIO $ addAuthorizedKeys True absdir pubkey) $
+			giveup "failed setting up ssh authorized keys"
+
+{- When local pairing is complete, this is used to set up the remote for
+ - the host we paired with. -}
+finishedLocalPairing :: PairMsg -> SshKeyPair -> Assistant ()
+finishedLocalPairing msg keypair = do
+	sshdata <- liftIO $ installSshKeyPair keypair =<< pairMsgToSshData msg
+	{- Ensure that we know the ssh host key for the host we paired with.
+	 - If we don't, ssh over to get it. -}
+	liftIO $ unlessM (knownHost $ sshHostName sshdata) $
+		void $ sshTranscript
+			[ sshOpt "StrictHostKeyChecking" "no"
+			, sshOpt "NumberOfPasswordPrompts" "0"
+			, "-n"
+			]
+			(genSshHost (sshHostName sshdata) (sshUserName sshdata))
+			("git-annex-shell -c configlist " ++ T.unpack (sshDirectory sshdata))
+			Nothing
+	r <- liftAnnex $ addRemote $ makeSshRemote sshdata
+	repo <- liftAnnex $ Remote.getRepo r
+	liftAnnex $ setRemoteCost repo semiExpensiveRemoteCost
+	syncRemote r
+
+{- Mostly a straightforward conversion.  Except:
+ -  * Determine the best hostname to use to contact the host.
+ -  * Strip leading ~/ from the directory name.
+ -}
+pairMsgToSshData :: PairMsg -> IO SshData
+pairMsgToSshData msg = do
+	let d = pairMsgData msg
+	hostname <- liftIO $ bestHostName msg
+	let dir = case remoteDirectory d of
+		('~':'/':v) -> v
+		v -> v
+	return SshData
+		{ sshHostName = T.pack hostname
+		, sshUserName = Just (T.pack $ remoteUserName d)
+		, sshDirectory = T.pack dir
+		, sshRepoName = genSshRepoName hostname (toOsPath dir)
+		, sshPort = 22
+		, needsPubKey = True
+		, sshCapabilities = [GitAnnexShellCapable, GitCapable, RsyncCapable]
+		, sshRepoUrl = Nothing
+		}
+
+{- Finds the best hostname to use for the host that sent the PairMsg.
+ -
+ - If remoteHostName is set, tries to use a .local address based on it.
+ - That's the most robust, if this system supports .local.
+ - Otherwise, looks up the hostname in the DNS for the remoteAddress,
+ - if any. May fall back to remoteAddress if there's no DNS. Ugh. -}
+bestHostName :: PairMsg -> IO HostName
+bestHostName msg = case remoteHostName $ pairMsgData msg of
+	Just h -> do
+		let localname = h ++ ".local"
+		addrs <- catchDefaultIO [] $
+			getAddrInfo Nothing (Just localname) Nothing
+		maybe fallback (const $ return localname) (headMaybe addrs)
+	Nothing -> fallback
+  where
+	fallback = do
+		let a = pairMsgAddr msg
+		let sockaddr = case a of
+			IPv4Addr addr -> SockAddrInet (fromInteger 0) addr
+			IPv6Addr addr -> SockAddrInet6 (fromInteger 0) 0 addr 0
+		fromMaybe (showAddr a)
+			<$> catchDefaultIO Nothing
+				(fst <$> getNameInfo [] True False sockaddr)
diff --git a/Assistant/Pairing/Network.hs b/Assistant/Pairing/Network.hs
new file mode 100644
index 0000000000..62a4ea02e8
--- /dev/null
+++ b/Assistant/Pairing/Network.hs
@@ -0,0 +1,132 @@
+{- git-annex assistant pairing network code
+ -
+ - All network traffic is sent over multicast UDP. For reliability,
+ - each message is repeated until acknowledged. This is done using a
+ - thread, that gets stopped before the next message is sent.
+ -
+ - Copyright 2012 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module Assistant.Pairing.Network where
+
+import Assistant.Common
+import Assistant.Pairing
+import Assistant.DaemonStatus
+import Utility.ThreadScheduler
+import Utility.Verifiable
+
+import Network.Multicast
+import Network.Info
+import Network.Socket
+import qualified Network.Socket.ByteString as B
+import qualified Data.ByteString.UTF8 as BU8
+import qualified Data.Map as M
+import Control.Concurrent
+
+{- This is an arbitrary port in the dynamic port range, that could
+ - conceivably be used for some other broadcast messages.
+ - If so, hope they ignore the garbage from us; we'll certainly
+ - ignore garbage from them. Wild wild west. -}
+pairingPort :: PortNumber
+pairingPort = 55556
+
+{- Goal: Reach all hosts on the same network segment.
+ - Method: Use same address that avahi uses. Other broadcast addresses seem
+ - to not be let through some routers. -}
+multicastAddress :: AddrClass -> HostName
+multicastAddress IPv4AddrClass = "224.0.0.251"
+multicastAddress IPv6AddrClass = "ff02::fb"
+
+{- Multicasts a message repeatedly on all interfaces, with a 2 second
+ - delay between each transmission. The message is repeated forever
+ - unless a number of repeats is specified.
+ -
+ - The remoteHostAddress is set to the interface's IP address.
+ -
+ - Note that new sockets are opened each time. This is hardly efficient,
+ - but it allows new network interfaces to be used as they come up.
+ - On the other hand, the expensive DNS lookups are cached.
+ -}
+multicastPairMsg :: Maybe Int -> Secret -> PairData -> PairStage -> IO ()
+multicastPairMsg repeats secret pairdata stage = go M.empty repeats
+  where
+	go _ (Just 0) = noop
+	go cache n = do
+		addrs <- activeNetworkAddresses
+		let cache' = updatecache cache addrs
+		mapM_ (sendinterface cache') addrs
+		threadDelaySeconds (Seconds 2)
+		go cache' $ pred <$> n
+	{- The multicast library currently chokes on ipv6 addresses. -}
+	sendinterface _ (IPv6Addr _) = noop
+	sendinterface cache i = void $ tryIO $

(Diff truncated)
webapp: Remove support for local pairing
As a feature only supported by the webapp, and not by git-annex at the
command line, this is by now a very obscure corner of git-annex, and not
one I want to keep maintaining.
It's worth removing it to avoid the security exposure alone. People using
the assistant w/o the webapp probably don't expect it to be listening on
a UDP port for a handrolled protocol, but it was.
The webapp has supported pairing via magic-wormhole since 2016, which
can make a link between local computers too, albeit with the overhead
of tor. That roughly covers the same use case. Of course advanced users
can easily enough add an ssh remote to their repository themselves, using
a hostname on the local network.
Sponsored-by: unqueued
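For the record, what local pairing automated can be approximated by hand; the host and path are placeholders:

```
# add the other machine as an ordinary ssh remote, then sync content with it
git remote add laptop ssh://user@laptop.local/home/user/annex
git annex sync --content laptop
```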
diff --git a/Assistant.hs b/Assistant.hs
index 911ebd33d3..cd81895861 100644
--- a/Assistant.hs
+++ b/Assistant.hs
@@ -40,9 +40,6 @@ import Assistant.Threads.Glacier
 #ifdef WITH_WEBAPP
 import Assistant.WebApp
 import Assistant.Threads.WebApp
-#ifdef WITH_PAIRING
-import Assistant.Threads.PairListener
-#endif
 #else
 import Assistant.Types.UrlRenderer
 #endif
@@ -155,11 +152,6 @@ startDaemon assistant foreground startdelay cannotrun listenhost listenport star
 			then webappthread
 			else webappthread ++
 				[ watch commitThread
-#ifdef WITH_WEBAPP
-#ifdef WITH_PAIRING
-				, assist $ pairListenerThread urlrenderer
-#endif
-#endif
 				, assist pushThread
 				, assist pushRetryThread
 				, assist exportThread
diff --git a/Assistant/Pairing/MakeRemote.hs b/Assistant/Pairing/MakeRemote.hs
deleted file mode 100644
index f4468bc07c..0000000000
--- a/Assistant/Pairing/MakeRemote.hs
+++ /dev/null
@@ -1,98 +0,0 @@
-{- git-annex assistant pairing remote creation
- -
- - Copyright 2012 Joey Hess <id@joeyh.name>
- -
- - Licensed under the GNU AGPL version 3 or higher.
- -}
-
-module Assistant.Pairing.MakeRemote where
-
-import Assistant.Common
-import Assistant.Ssh
-import Assistant.Pairing
-import Assistant.Pairing.Network
-import Assistant.MakeRemote
-import Assistant.Sync
-import Config.Cost
-import Config
-import qualified Types.Remote as Remote
-
-import Network.Socket
-import qualified Data.Text as T
-
-{- Authorized keys are set up before pairing is complete, so that the other
- - side can immediately begin syncing. -}
-setupAuthorizedKeys :: PairMsg -> OsPath -> IO ()
-setupAuthorizedKeys msg repodir = case validateSshPubKey $ remoteSshPubKey $ pairMsgData msg of
-	Left err -> giveup err
-	Right pubkey -> do
-		absdir <- absPath repodir
-		unlessM (liftIO $ addAuthorizedKeys True absdir pubkey) $
-			giveup "failed setting up ssh authorized keys"
-
-{- When local pairing is complete, this is used to set up the remote for
- - the host we paired with. -}
-finishedLocalPairing :: PairMsg -> SshKeyPair -> Assistant ()
-finishedLocalPairing msg keypair = do
-	sshdata <- liftIO $ installSshKeyPair keypair =<< pairMsgToSshData msg
-	{- Ensure that we know the ssh host key for the host we paired with.
-	 - If we don't, ssh over to get it. -}
-	liftIO $ unlessM (knownHost $ sshHostName sshdata) $
-		void $ sshTranscript
-			[ sshOpt "StrictHostKeyChecking" "no"
-			, sshOpt "NumberOfPasswordPrompts" "0"
-			, "-n"
-			]
-			(genSshHost (sshHostName sshdata) (sshUserName sshdata))
-			("git-annex-shell -c configlist " ++ T.unpack (sshDirectory sshdata))
-			Nothing
-	r <- liftAnnex $ addRemote $ makeSshRemote sshdata
-	repo <- liftAnnex $ Remote.getRepo r
-	liftAnnex $ setRemoteCost repo semiExpensiveRemoteCost
-	syncRemote r
-
-{- Mostly a straightforward conversion.  Except:
- -  * Determine the best hostname to use to contact the host.
- -  * Strip leading ~/ from the directory name.
- -}
-pairMsgToSshData :: PairMsg -> IO SshData
-pairMsgToSshData msg = do
-	let d = pairMsgData msg
-	hostname <- liftIO $ bestHostName msg
-	let dir = case remoteDirectory d of
-		('~':'/':v) -> v
-		v -> v
-	return SshData
-		{ sshHostName = T.pack hostname
-		, sshUserName = Just (T.pack $ remoteUserName d)
-		, sshDirectory = T.pack dir
-		, sshRepoName = genSshRepoName hostname (toOsPath dir)
-		, sshPort = 22
-		, needsPubKey = True
-		, sshCapabilities = [GitAnnexShellCapable, GitCapable, RsyncCapable]
-		, sshRepoUrl = Nothing
-		}
-
-{- Finds the best hostname to use for the host that sent the PairMsg.
- -
- - If remoteHostName is set, tries to use a .local address based on it.
- - That's the most robust, if this system supports .local.
- - Otherwise, looks up the hostname in the DNS for the remoteAddress,
- - if any. May fall back to remoteAddress if there's no DNS. Ugh. -}
-bestHostName :: PairMsg -> IO HostName
-bestHostName msg = case remoteHostName $ pairMsgData msg of
-	Just h -> do
-		let localname = h ++ ".local"
-		addrs <- catchDefaultIO [] $
-			getAddrInfo Nothing (Just localname) Nothing
-		maybe fallback (const $ return localname) (headMaybe addrs)
-	Nothing -> fallback
-  where
-	fallback = do
-		let a = pairMsgAddr msg
-		let sockaddr = case a of
-			IPv4Addr addr -> SockAddrInet (fromInteger 0) addr
-			IPv6Addr addr -> SockAddrInet6 (fromInteger 0) 0 addr 0
-		fromMaybe (showAddr a)
-			<$> catchDefaultIO Nothing
-				(fst <$> getNameInfo [] True False sockaddr)
diff --git a/Assistant/Pairing/Network.hs b/Assistant/Pairing/Network.hs
deleted file mode 100644
index 62a4ea02e8..0000000000
--- a/Assistant/Pairing/Network.hs
+++ /dev/null
@@ -1,132 +0,0 @@
-{- git-annex assistant pairing network code
- -
- - All network traffic is sent over multicast UDP. For reliability,
- - each message is repeated until acknowledged. This is done using a
- - thread, that gets stopped before the next message is sent.
- -
- - Copyright 2012 Joey Hess <id@joeyh.name>
- -
- - Licensed under the GNU AGPL version 3 or higher.
- -}
-
-module Assistant.Pairing.Network where
-
-import Assistant.Common
-import Assistant.Pairing
-import Assistant.DaemonStatus
-import Utility.ThreadScheduler
-import Utility.Verifiable
-
-import Network.Multicast
-import Network.Info
-import Network.Socket
-import qualified Network.Socket.ByteString as B
-import qualified Data.ByteString.UTF8 as BU8
-import qualified Data.Map as M
-import Control.Concurrent
-
-{- This is an arbitrary port in the dynamic port range, that could
- - conceivably be used for some other broadcast messages.
- - If so, hope they ignore the garbage from us; we'll certainly
- - ignore garbage from them. Wild wild west. -}
-pairingPort :: PortNumber
-pairingPort = 55556
-
-{- Goal: Reach all hosts on the same network segment.
- - Method: Use same address that avahi uses. Other broadcast addresses seem
- - to not be let through some routers. -}
-multicastAddress :: AddrClass -> HostName
-multicastAddress IPv4AddrClass = "224.0.0.251"
-multicastAddress IPv6AddrClass = "ff02::fb"
-
-{- Multicasts a message repeatedly on all interfaces, with a 2 second
- - delay between each transmission. The message is repeated forever
- - unless a number of repeats is specified.
- -
- - The remoteHostAddress is set to the interface's IP address.
- -
- - Note that new sockets are opened each time. This is hardly efficient,
- - but it allows new network interfaces to be used as they come up.
- - On the other hand, the expensive DNS lookups are cached.
- -}
-multicastPairMsg :: Maybe Int -> Secret -> PairData -> PairStage -> IO ()
-multicastPairMsg repeats secret pairdata stage = go M.empty repeats
-  where
-	go _ (Just 0) = noop
-	go cache n = do
-		addrs <- activeNetworkAddresses
-		let cache' = updatecache cache addrs
-		mapM_ (sendinterface cache') addrs
-		threadDelaySeconds (Seconds 2)
-		go cache' $ pred <$> n
-	{- The multicast library currently chokes on ipv6 addresses. -}
-	sendinterface _ (IPv6Addr _) = noop
-	sendinterface cache i = void $ tryIO $

(Diff truncated)
remove old assistant release notes
diff --git a/doc/assistant/release_notes.mdwn b/doc/assistant/release_notes.mdwn
deleted file mode 100644
index 6c7c432de4..0000000000
--- a/doc/assistant/release_notes.mdwn
+++ /dev/null
@@ -1,422 +0,0 @@
-## version 6.20170101
-
-XMPP support has been removed from the assistant in this release.
-
-If your repositories used XMPP to keep in sync, that will no longer
-work, and you should enable some other remote to keep them in sync.
-A ssh server is one way, or use the new Tor pairing feature.
-
-## version 5.20140421
-
-This release begins to deprecate XMPP support. In particular, if you use
-the assistant with a ssh remote that has this version of git-annex
-installed, you don't need XMPP any longer to get immediate syncing of
-changes.
-
-## version 5.20140411
-
-This release fixes a bug that could cause the assistant to use a *lot* of
-CPU, when monthly fscking was set up.
-
-Automatic upgrading was broken on OSX for previous versions. This has been
-fixed, but you'll need to manually upgrade to this version to get it going
-again. Workaround: Remove the wget bundled inside the git-annex dmg.
-
-## version 5.20140221
-
-The Windows port of the assistant and webapp is now considered to be beta
-quality. There are important missing features (notably Jabber), documented
-on [[todo/windows_support]], but the webapp is broadly usable on Windows
-now.
-
-## version 5.20131221
-
-There is now an arm [[install/linux_standalone]] build of git-annex,
-including the assistant and webapp,
-which can be installed on a variety of systems including Raspberry Pi,
-Synology NAS, and Google Chromebooks. Details in
-[[this forum thread|forum/new_linux_arm_tarball_build]].
-
-## version 5.20131213
-
-The assistant can now be used on Windows! However, it has known problems,
-described in [[todo/windows_support]], and should be considered an
-alpha-level preview.
-
-## version 5.20131127
-
-Starting with this version, when git-annex is installed from a build on
-this website, it will detect when new versions are available, and allow
-easily upgrading. Automatic upgrades can also be configured if desired,
-or automatic upgrade checking can be disabled in the preferences page.
-
-git-annex builds from distributions, like Debian will not automatically
-upgrade; use the distribution's package manager for that. However, the
-git-annex webapp will also detect when a distribution has upgraded
-git-annex and offer to restart the assistant.
-
-## version 4.20131024
-
-This version fixes several different bugs that could cause the webapp to
-refuse to create a repository. Several other bugs are also fixed, including
-a bug that caused it to not add files on Android.
-
-New in this release is the ability to use the webapp to set up scheduled
-consistency checks of your repositories. Many problems with repositories 
-are now automatically corrected, and it can even repair damaged git
-repositories.
-
-This is a recommended upgrade.
-
-## version 4.20131002
-
-Now you can use the webapp to set up an encrypted git repository on a
-remote ssh server, or on rsync.net, and use it as a live cloud backup. Or,
-use the webapp to make an encrypted git repository on a removable drive,
-and store it offsite as a secure backup.
-
-## version 4.20130920
-
-This release is the first to support fully encrypted git repositories
-stored on removable drives. This can be set up easily using the webapp.
-
-## version 4.20130909
-
-This release fixes a crash that could occur when using XMPP with the
-assistant. It has only been seen on OS X so far. The bug is not believed to
-be exploitable, but upgrading is still recommended.
-
-## version 4.20130802
-
-This release fixes several bugs, including a reversion introduced in the last
-version that broke direct mode on Windows, Android, and other crippled
-filesystems. It contains a workaround for a bug in recent git pre-releases
-that broke handling of filenames containing spaces.
-It is a highly recommended upgrade.
-
-The webapp can now detect repositories that did not finish getting properly set
-up, and can recover from one common bug that broke local pairing and remote
-ssh server setups on systems using `ssh-agent`.
-
-## version 4.20130723
-
-This release fixes some bugs. Notably it fixes a bug that could result in data
-loss when adding a tarball of a git-annex repository to your git-annex
-repository.
-
-Rsync.net have committed to support git-annex and offer a special
-discounted rate for git-annex users.
-<http://www.rsync.net/products/git-annex-pricing.html>
-
-## version 4.20130709
-
-This release is mostly bug fixes.
-
-One of the bugs involved setting up rsync remotes on servers other than
-rsync.net. The wrong `.ssh/authorized_keys` line was deployed to the
-remote server. If you set up a rsync remote with a past release, and it does
-not work, you will need to manually edit the `.ssh/authorized_keys` file,
-and remove the `command=` forced command.
-
-## version 4.20130621, 4.20130627
-
-These releases mostly consist of bug fixes.
-
-## version 4.20130601
-
-This is a bugfix release, featuring significant XMPP improvements and
-more robustness thanks to automated fuzz testing. Recommended upgrade.
-
-This version changes its XMPP protocol, so it will fail to sync with older
-git-annex versions over XMPP.
-
-## version 4.20130521
-
-This is a bugfix release. Recommended upgrade.
-
-## version 4.20130516
-
-This version contains numerous bug fixes, and improvements.
-
-This is the first release with a fully usable Android app. No command-line
-typing needed to set up syncing to your Android phone or tablet!
-A few of the more advanced features may not work (or not work reliably)
-on Android. The Android app is still beta quality.
-
-This is also the first release with a Windows port! The Windows port
-is in an alpha quality state, and is missing many features.
-It does not yet include the assistant.
-
-## version 4.20130501
-
-This version contains numerous bug fixes, and improvements.
-
-## version 4.20130417
-
-This version contains numerous bug fixes, and improvements.
-
-One bug that was fixed can affect users of gnome-keyring who
-have set up remote repositories on ssh servers using the webapp.
-The gnome-keyring may load the restricted key that is set up
-for that, and make it be used for regular logins to the server;
-with the result that you'll get an error message about "git-annex-shell"
-when sshing to the server. 
-
-If you experience this problem you can fix it by
-moving `.ssh/key.git-annex*` to `.ssh/git-annex/` (creating
-that directory first), and edit `.ssh/config` to reflect the new
-location of the key. You will also need to restart gnome-keyring.
-
-Another change relates to files in `archive/` directories. Client repositories
-now sync these files between themselves like any other files, until
-the files reach an archive repository. Only then are they removed from
-the client repositories. So you need to ensure you have at least one
-archive repository if you want to use the `archive/` directory feature.
-
-## version 4.20130323, 4.20130405
-
-These versions continue fixing bugs and adding features.
-
-## version 4.20130314
-
-This version makes a great many improvements and bugfixes, and is
-a recommended upgrade.
-
-If you have already used the webapp to locally pair two computers,
-a bug caused the paired repository to not be given an appropriate cost.
-To fix this, go into the Repositories page in the webapp, and drag the
-repository for the locally paired computer to come before any repositories
-that it's more expensive to transfer data to.
-
-## version 4.20130227
-
-This release fixes a bug with globbing that broke preferred content expressions.
-So, it is a recommended upgrade from the previous release, which introduced

(Diff truncated)