There is annex.jobs to configure the job count globally, but how much parallelisation is appropriate varies greatly for me.
For example, sync should run its git commands against as many remotes in parallel as it can, to minimise sync runtime, but copy, move, fsck, etc. on a slow local hard drive should only process one or two files at a time.
Could there be a more precise way to set how much parallelisation you want by default? Maybe per sub-command, or per resource-access pattern.
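For what it's worth, the per-invocation --jobs/-J option already lets the concurrency vary per command, just not as a saved per-command or per-remote default (assuming a reasonably recent git-annex; the remote name below is made up):

    # high concurrency for network-bound syncing
    git annex sync --content -J8

    # keep a slow hard drive to one or two files at a time
    git annex copy --to slowdrive -J2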
It could be helpful to have some kind of git config, similar to remote.name.annex-cost, that categorises remotes by how much concurrency is desirable when accessing them.
But the kind of access can also matter, e.g. git pull from an ssh remote might as well run concurrently with pulls from all other ssh remotes, but not so when downloading annex objects from ssh remotes.
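To sketch what that might look like in .git/config (annex-cost is a real setting; the annex-jobs keys are purely hypothetical names for illustration):

    [remote "slowdrive"]
        # existing setting: relative expense of using this remote
        annex-cost = 200
        # hypothetical: ceiling on concurrent annex object transfers
        annex-jobs = 2
        # hypothetical: git protocol operations (pull/push) could
        # still run concurrently with other remotes
        annex-git-jobs = 8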
There is also the problem that what action will be taken on a particular file is up to the command, but the amount of concurrency to use has to be determined before running that command on that file. E.g., git-annex get might use a slow hard drive, or a fast ssd that benefits from concurrency. We don't know until that code runs, but we have to decide how many threads to spawn beforehand.
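One way those two decisions could be decoupled, sketched very roughly here (the resource names and limits are invented for illustration, this is not how git-annex is implemented): size the thread pool generously up front, but gate each transfer on a per-device semaphore, so a slow drive never sees more than its configured concurrency even though many worker threads exist.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-resource concurrency limits.
LIMITS = {"ssd": 8, "hdd": 1}
semaphores = {name: threading.Semaphore(n) for n, name in
              ((v, k) for k, v in LIMITS.items())}

def transfer(key, resource):
    # Each worker takes a token for the device it actually ends up
    # touching, so the hdd is limited to 1 in-flight transfer while
    # ssd transfers still run 8-wide.
    with semaphores[resource]:
        return (key, resource)

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(transfer, range(6),
                            ["hdd", "ssd", "hdd", "ssd", "ssd", "hdd"]))
```

The pool size only bounds total threads; the effective per-device concurrency is decided lazily, once the target device is known.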