Please describe the problem.
I tried to sync a folder between two laptops, using the webapp and GPG encryption. I started by setting up the repository on laptop1, and then, while all the files were uploading, I went over to laptop2 and set things up there as well.
At first, everything looked fine: laptop1 was uploading and laptop2 was downloading. Then laptop2 reported "Failed to sync with VPS", and the log file showed a gcrypt error: "Packfile <long hash> does not match digest!"
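(For reference, the command-line equivalent of what I set up through the webapp would be roughly the following; the repository path, host name and key id are placeholders, not my real values.)

    # On the VPS: a bare repository to hold the encrypted data (path is a placeholder)
    git init --bare ~/annex.git

    # On laptop1/laptop2: add the VPS as a gcrypt-encrypted remote and sync to it
    git annex initremote vps type=gcrypt gitrepo=ssh://vps.example.com/~/annex.git keyid=$MYKEY
    git annex sync --content vps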
What version of git-annex are you using? On what operating system?
Laptop1 is running wheezy with git annex version 6.20160307+gitgb095561-1~ndall+1 from neurodebian.
VPS is running wheezy
Laptop2 is running jessie with git annex version 6.20160307+gitgb095561-1~ndall+1 from neurodebian.
Please provide any additional information below.
http://denisa.hobbs.cz/laptop1.daemon.log http://denisa.hobbs.cz/laptop2.daemon.log
Have you had any luck using git-annex before?
If I could get this syncing to work, then that would be great! I don't want to use unison, because that wouldn't be encrypted... So this would be wonderful.
That error message is certainly not coming from git-annex. It sounds like a git error message, but I don't find it in the current git source code.
What it sounds like is some form of corruption of the git repository, probably to a pack file. Since git-annex doesn't have anything to do with writing such files, it's hard to see how this could be a bug in git-annex.
This kind of corruption problem tends to happen when a disk loses data, perhaps having to do with an unclean shutdown. Or perhaps it received bad git repository data from the VPS.
Suggest you run `git fsck` and, if it reports problems, you may be able to fix them by running `git annex repair`.
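In the repository that is failing to sync, that would look roughly like this (a sketch; the path is a placeholder):

    cd /path/to/annex        # the local clone that reported the error
    git fsck                 # check the local git objects for corruption
    git annex repair         # only needed if fsck reported problems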
That error message is produced by the `get_verify_decrypt_pack()` function in git-remote-gcrypt. Ok, so definitely not a bug in git-annex then.

`get_verify_decrypt_pack` downloads an encrypted pack file, and then uses gpg to hash the pack file and compares this to the hash encoded in the name of the pack file. So, this could happen if the pack files in the gcrypt remote have gotten the wrong data into them. Or it could be a bug of some kind in git-remote-gcrypt.
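To illustrate, the check amounts to roughly the following (only a sketch of the idea, not the actual git-remote-gcrypt code; the pack name and hash algorithm here are assumptions):

    pack=pack-2f7ac91b...                 # name as stored on the gcrypt remote (made up)
    expected=${pack#pack-}                # digest encoded in the file name
    actual=$(gpg --print-md SHA256 < "$pack" | tr -d ' \n' | tr 'A-F' 'a-f')
    [ "$actual" = "$expected" ] || echo "Packfile $pack does not match digest!" >&2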
Any experience with using transcrypt instead of git crypt?
https://github.com/elasticdog/transcrypt
A developer should be all over this; it seems like a very major issue with the gcrypt backend (and annex's interaction with it) that this outcome is even possible. You can't have an encrypted store that becomes irreparably corrupted, with some of your data sitting outside it, unencrypted!
Any thoughts from more knowledgeable developers on how to fix, investigate further, or do something useful on this front?
I'm the git-remote-gcrypt maintainer, but as it stands it's not really possible to do anything about this bug without steps to reliably reproduce the corrupted/unencrypted remote repository.
I have been using git-remote-gcrypt with git-annex and with plain git repos for more than two years and I have never encountered anything like this issue. So there's no starting point to work on a fix for the issue.
@Don and @pot, did you actually see un-encrypted data end up in the gcrypt repository? Or by "same issue" do you mean you saw the same error message, perhaps due to a very different cause?
@jgoerzen, when you say that the gcrypt repository had an "unencrypted index", are you referring to a regular git index file, or to a pack's .idx file?
It would be very strange if a gcrypt repository had an index file. I've never been able to use git-remote-gcrypt with a non-bare repository, and bare repositories don't normally have index files.
I think I may have partly tracked this down: I ran out of inodes on a removable drive and started getting the same error. I'll try to rectify the underlying inode issue and then see.
(secondary point, how on earth has my docs directory used up all 59 million inodes available to it when put into an annex???)
Not sure how I did it, but I have 2 repos that give me the same error. (Perhaps it happened when killing the assistant in the middle of a sync? No idea…)
However I "solved" the issue for one repo:

- ran `git rm`, committed that, and pushed it (the repo is hosted on GitLab and I can't just remove the branch or do something like that)
- removed the `gcrypt-id` from my `.git/config`
- ran `git annex sync` again

And voilà! It works.
I don't have any content in this remote, only metadata, so it's easy to do. But I still don't know what caused this.
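In case it helps anyone else, roughly what the last two steps look like on the command line (the remote name `gitlab` here is hypothetical; use whatever your gcrypt remote is called):

    git config --unset remote.gitlab.gcrypt-id   # forget the stale gcrypt id
    git annex sync gitlab                        # re-sync; a fresh id is set up on the next push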
Hi folks,
I no longer use git-annex so I can't comment directly on this. However, in some testing with git-remote-gcrypt, I came across an issue where changes seem to be lost. In short, every git push behaves as if --force were given. Details in Debian bug #877464.
I just reproduced this when pushing to a gcrypt remote on rsync.net using the assistant. There is only one client pushing to the gcrypt remote.
It was during the initial sync of a moderately large amount of data (~22G), perhaps this has something to do with it?
I could reproduce the issue by cloning with gcrypt directly (`git clone gcrypt::ssh://...`). I was able to recover by following the steps outlined in Schnouki's comment (#12), but this is obviously quite an unsatisfactory fix.
I am using annex to replicate important personal data, and I find this issue highly concerning.
Foolishly, I did not keep a copy of the bad repo before I force-pushed over it on the remote, so I do not have a copy available to experiment with.
Logs:

- `daemon.log` excerpt: https://ipfs.io/ipfs/QmcoPuTLY2v5FWPABQLVwgyqW5WdsvkBbVS33cJh6zjzi4
- `git clone` output:
- `git annex version`:
Had a repo exhibit this behavior just now:

`XX -> YY`

- `A` @ commit `YY`
- `B` @ commit `XX` (1 behind)
- `hub` and `lab` both @ commit `XX`
- `B` pushes and pulls from both `hub` and `lab`: OK
- `A` pushes to `hub` (updates to commit `YY`): OK
- `B` pulls from `hub`: FAIL with "Packfile does not match digest"
- `B` pulls from `lab`: OK
- `B` pushes to `hub`: FAIL with "Packfile does not match digest"
- `A` pulls from `hub`: OK
- `A` pulls from `lab`: OK

When looking in `.git/config` I noticed that `remote.hub.gcrypt-id` and `remote.lab.gcrypt-id` were identical.

To fix, I:

- removed `remote.hub.gcrypt-id` from `.git/config` on both `A` and `B`
- deleted `hub`
- ran `git push hub` on `B`
- ran `git pull hub master` on `A`

This resulted in a new and unique value for `remote.hub.gcrypt-id`, which is the same on both `A` and `B`.

Have not had time to dig into why, but this is the only thread I can find about this problem, so I figured I would log this somewhere for posterity.
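For anyone hitting the same thing, a quick way to check for the duplicated-id situation described above (remote names `hub` and `lab` as in this comment; the example output is made up):

    # List the gcrypt ids of all remotes; two different remotes should never share a value.
    git config --get-regexp '^remote\..*\.gcrypt-id'
    # remote.hub.gcrypt-id :id:AbCdEf...
    # remote.lab.gcrypt-id :id:AbCdEf...   <- identical to hub's: the broken state above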
This issue is still present when using the assistant with git-remote-gcrypt.
Although one can recover from this issue by resetting the repository, it is a real handicap, as we have to continuously monitor whether pushes have been successful, which contradicts the idea of the assistant.
This may be anecdotal, but the push before the 'Packfile *** does not match digest!' error ran into some network issues, as follows:
Just to add one more data point – I ran into the situation after a kernel oops had caused my rootfs to become read-only. So maybe related to the "out of inodes" issue reported above.
The XXX parts are obviously my redactions. I was not running the assistant, just plain git-annex.
There was no plaintext on the remote and the only way for me to recover was to nuke the remote repo.
I kept copies of both repos, though I unfortunately cannot share the unencrypted one.