The current external special remote protocol works on one file at a time. With some remotes, a batch operation can be more efficient, e.g. querying the status of many URLs in one API call. It would be good if special remotes could optionally implement batch versions of their operations, and have those versions be used by batch-mode git-annex commands. Or maybe keep the current set of commands, but let the remote read multiple requests and then send multiple replies? A sketch of that second variant follows.
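
To make the second idea concrete, here is a minimal sketch, in Python, of the "read multiple requests, then send multiple replies" variant. The CHECKPRESENT / CHECKPRESENT-SUCCESS / CHECKPRESENT-FAILURE messages and the initial VERSION reply are from the existing protocol; the rest is assumption: the pipelining behaviour is exactly what is being proposed here and is not in the current protocol, and query_status_batch is a hypothetical stand-in for whatever remote-specific API could answer many queries in one call.

    import os
    import select
    import sys

    def query_status_batch(keys):
        # Hypothetical remote-specific batch API; a real remote would make
        # one HTTP/API call here covering all of the keys at once.
        return {key: False for key in keys}

    def read_batch(fd, timeout=0.0):
        """Read all complete request lines currently pipelined on stdin.

        Blocks for the first read, then keeps draining as long as more input
        is immediately available. Works on the raw fd so Python's stdin
        buffering cannot hide pending data from select(). Unix-only sketch;
        a real remote would also carry over a trailing partial line.
        """
        buf = os.read(fd, 65536)
        if not buf:
            return []
        while select.select([fd], [], [], timeout)[0]:
            chunk = os.read(fd, 65536)
            if not chunk:
                break
            buf += chunk
        return buf.decode().splitlines()

    def main():
        # The remote announces the protocol version first, as in the
        # existing protocol.
        print("VERSION 1", flush=True)
        fd = sys.stdin.fileno()
        while True:
            lines = read_batch(fd)
            if not lines:
                return
            # Answer a run of pipelined CHECKPRESENT requests with a single
            # backend call, then emit all the replies in order.
            keys = [l.split(" ", 1)[1] for l in lines if l.startswith("CHECKPRESENT ")]
            if keys:
                present = query_status_batch(keys)
                for key in keys:
                    status = "SUCCESS" if present.get(key) else "FAILURE"
                    print(f"CHECKPRESENT-{status} {key}", flush=True)
            # Other messages (INITREMOTE, PREPARE, TRANSFER, ...) would
            # still be handled one at a time; omitted here.

    if __name__ == "__main__":
        main()

Note that the protocol framing stays line-based in this sketch; the efficiency win, if any, has to come from whatever query_status_batch does with the whole list of keys.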
What is being optimised here? A single external special remote process is already used for multiple operations, and the only overhead is the necessary context switches between processes and the pipe IO.
Note that git-annex -J will start up several external special remote processes and distribute jobs between them. The way git-annex processes files, it never builds up a big list of queries to make later; each query or other operation needs to complete before it knows what the next operation for a given key will be. So it's hard to see how git-annex could make use of something like this even if it were added to the protocol.
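
(For comparison, the existing parallelism is at the process level, e.g.:

    git annex get -J4

which runs four jobs, each talking to its own copy of the external special remote program.)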