NAME
git-annex setpresentkey - change records of where key is present
SYNOPSIS
git annex setpresentkey key uuid [1|0]
DESCRIPTION
This plumbing-level command changes git-annex's records about whether the specified key's content is present in a remote with the specified uuid.
Use 1 to indicate the key is present, or 0 to indicate the key is not present.
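For example, a minimal sketch of recording that a remote holds the content of a key; the key, remote name, and uuid below are placeholders, and a remote's uuid can be read from the output of git annex info:

    git annex info myremote          # the "uuid:" line identifies the remote
    git annex setpresentkey SHA256E-s1048576--<sha256hash> <remote-uuid> 1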
OPTIONS
--batch
Enables batch mode, in which lines are read from stdin. The line format is "key uuid [1|0]".
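A minimal sketch of batch use, with placeholder keys and uuid:

    printf '%s\n' \
        'SHA256E-s1024--<hash1> <remote-uuid> 1' \
        'SHA256E-s2048--<hash2> <remote-uuid> 0' \
        | git annex setpresentkey --batch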
--json
Enable JSON output. This is intended to be parsed by programs that use git-annex. Each line of output is a JSON object.
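An illustrative (not authoritative) line of such output might look like the following; the exact fields depend on the git-annex version:

    {"command":"setpresentkey","key":"SHA256E-s1024--<hash>","success":true}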
--json-error-messages
Messages that would normally be output to standard error are included in the JSON instead.
The common options described in git-annex-common-options(1) can also be used.
SEE ALSO
git-annex(1)
AUTHOR
Joey Hess <id@joeyh.name>
That's not the same information that this command deals with. There is the per-remote metadata log, which some export remotes (currently S3) can use to keep track of whatever information is needed to access a given file that was exported to them.
@Ilya_Shlyakhter the way export tree remotes work is that git-annex keeps track of the tree object that corresponds to the state of the remote, as well as the usual presence tracking information. It uses the presence tracking to know which files in the tree have reached the remote, and the tree to work out the path to a file on the remote.
So the only way to manipulate its tracking for those is to update the tree that it has recorded as exported there, as well as the presence information this command is about. The internals documentation has the details of the export.log format.
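For instance, a hedged way to inspect that recorded state, assuming the standard layout of the git-annex branch described in the internals documentation, is:

    git cat-file -p git-annex:export.log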
@joey thanks. But, besides export.log, the S3 remote also keeps some (undocumented?) internal state, and there's no way to update that state to record the fact that git-annex can GET a given key by downloading s3://mybucket/myobject? Also, I feel uneasy directly manipulating git-annex internal files. Can you think of any plumbing commands that could be added to support this use case?

The use case is: I submit a batch job that takes some s3:// objects as input, writes its outputs to other s3:// objects, and returns pointers to these new s3:// objects. I want to register these new objects in git-annex, initially without downloading them, yet still be able to git-annex-get them, drop them from the S3 remote, and later put them back under their original s3:// URIs. The latter ability is needed because (1) many workflows expect filenames to be in a particular form, e.g. mysamplename.pN.bam to represent mysample processed with parameter p=N; and (2) some workflow engines can reuse past results if a step is re-run with the same inputs, but they need the results to be at the same s3:// URI as when the step was first run.
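A hedged sketch of the registration half of this, using only existing plumbing; the key, URL, and uuid are placeholders, it assumes the object is reachable at a public https URL, and it updates only the URL and presence records, not the exported tree:

    # claim the content of the key can be downloaded from this URL
    git annex registerurl SHA256E-s1024--<hash> https://mybucket.s3.amazonaws.com/myobject
    # record that the S3 special remote (identified by its uuid) has the content
    git annex setpresentkey SHA256E-s1024--<hash> <s3-remote-uuid> 1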