As the title suggests, stopping the assistant through the terminal doesn't work as expected, and I had to kill the assistant by hand, which in turn means that git annex has to repair the repo. A workaround seems to be to start the assistant through `git annex webapp` and use the "Shutdown Daemon" button in the webapp. Before this workaround works, you have to run `git annex assistant --autostop` first and restart, so that launchctl won't keep the daemon running.
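For clarity, the workaround sequence I use looks roughly like this (assuming the assistant was set up to autostart via launchctl):

```
# stop the autostarted daemon first, so launchctl won't keep it running
git annex assistant --autostop

# start the assistant through the webapp instead
git annex webapp
# ...then use the "Shutdown Daemon" button in the webapp to stop it
```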
It seems to me that you could just click on "syncing enabled" in the webapp to toggle it to "syncing disabled". This stops all data transfers without killing the program.
The problem you describe with stopping doesn't quite make sense to me.

For one thing, `git annex assistant --autostop` does the exact same thing as changing into the repo's directory and running `git annex assistant --stop`. But you seem to say that one works and the other doesn't.
For another, `git annex assistant --stop` does exactly the same thing as killing the process by hand! That is, it sends it a TERM signal. Perhaps you killed the process by hand with a stronger signal than TERM, but I don't see how even that would cause it to need to repair the repo.
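For comparison, a by-hand kill that matches what `--stop` does might look like this (assuming the daemon's pid file is in its usual place inside the repo):

```
# send the same TERM signal that "git annex assistant --stop" sends;
# the pid file path is an assumption about the default layout
kill -TERM "$(cat .git/annex/daemon.pid)"
```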
Why is launchd restarting the daemon when it exits? `~/Library/LaunchAgents/com.branchable.git-annex.assistant.plist` has RunAtLoad set, which AFAIK is supposed to make it run once when you log in. It does not have KeepAlive set, so unless you've changed the configuration to include KeepAlive, I don't know why launchd would behave that way.
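You can check both of those settings from a terminal with something like:

```
# pretty-print the launchd job definition; look for RunAtLoad and KeepAlive
plutil -p ~/Library/LaunchAgents/com.branchable.git-annex.assistant.plist

# see whether launchd is currently tracking the job
launchctl list | grep -i git-annex
```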
Why do you think that the b2 special remote is causing the problem? (Seems unlikely to me.)
I did assume so, because b2's process seems to be running when the assistant fails to shut down. But a new, maybe related problem might have come to light: I have/had a gitlab remote that grew so big (30 gig) that they shut it down, in the sense that I can't access it anymore (even reading from it seems to have stopped working). I believe that this gitlab remote has some objects that aren't available anymore, and git annex seems to try to repair stuff that is not repairable. As always, this is all said with a big "maybe", since I wouldn't know how to test my repo against that hypothesis. I have now done `git annex untrust gitlab` and will see if that helps. All this seems very mysterious to me, since I can't believe that a failed remote should prevent the repo from reaching an acceptable state.
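In case it helps anyone suggest a better test, this is roughly what I'm planning to run next (assuming the remote really is unreachable for good):

```
# check which objects the repo still expects to find on the gitlab remote
git annex fsck --from gitlab --fast

# if the remote is permanently gone, marking it dead (rather than just
# untrusted) should stop git-annex from trying to use it at all
git annex dead gitlab
```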