Today, added a thread that deals with recovering when there's been a loss of network connectivity. When the network's down, the normal immediate syncing of changes of course doesn't work. So this thread detects when the network comes back up, does a pull+push to network remotes, and triggers a scan for file content that needs to be transferred.
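
The general shape of that thread, as a minimal Haskell sketch; pullPushRemote, queueTransferScan, and the simple polling wait are hypothetical stand-ins for the real assistant machinery, not the actual code:

    import Control.Concurrent (threadDelay)
    import Control.Monad (forever)

    -- Hypothetical stand-in for the assistant's remote type.
    type Remote = String

    -- Placeholder: block until the network looks like it's back up.
    -- The dbus-based detection sketched further down would replace this;
    -- here it just polls on a 30 minute timer.
    waitForNetworkUp :: IO ()
    waitForNetworkUp = threadDelay (30 * 60 * 1000000)

    pullPushRemote :: Remote -> IO ()
    pullPushRemote r = putStrLn ("git pull+push with " ++ r)

    queueTransferScan :: [Remote] -> IO ()
    queueTransferScan _ = putStrLn "queueing scan for content needing transfer"

    -- The recovery thread: each time connectivity returns, sync git state
    -- with the network remotes and kick off a transfer scan.
    netWatcherThread :: [Remote] -> IO ()
    netWatcherThread remotes = forever $ do
        waitForNetworkUp
        mapM_ pullPushRemote remotes
        queueTransferScan remotes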

I used dbus again, to detect events generated by both network-manager and wicd when they've successfully brought an interface up. If neither is available, it falls back to polling every 30 minutes.
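
Roughly what the dbus side can look like with the Haskell dbus package; the interface and signal names shown for network-manager and wicd are assumptions, and a real handler would also inspect the signal body to make sure the new state really is a connected one:

    {-# LANGUAGE OverloadedStrings #-}
    import Control.Concurrent (threadDelay)
    import Control.Monad (forever, void)
    import DBus.Client

    -- Run the given action whenever network-manager or wicd signal that a
    -- connection came up. The interface/member names are assumptions.
    listenForConnectivity :: IO () -> IO ()
    listenForConnectivity onUp = do
        client <- connectSystem
        let nm = matchAny
                { matchInterface = Just "org.freedesktop.NetworkManager"
                , matchMember = Just "StateChanged"
                }
            wicd = matchAny
                { matchInterface = Just "org.wicd.daemon"
                , matchMember = Just "ConnectResultsSent"
                }
        void $ addMatch client nm (const onUp)
        void $ addMatch client wicd (const onUp)

    -- Fallback when dbus is not available: poll every 30 minutes.
    pollForConnectivity :: IO () -> IO ()
    pollForConnectivity onUp = forever $ do
        threadDelay (30 * 60 * 1000000)
        onUp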

When the network comes up, in addition to the git pull+push, it also currently does a full scan of the repo to find files whose contents need to be transferred to get fully back into sync.
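
The scan boils down to something like this sketch, where annexedFiles, locallyPresent, remoteHas, and the queue functions are hypothetical stubs standing in for the real location-log and transfer-queue code:

    import Control.Monad (forM_)

    type Remote = String
    type AnnexedFile = FilePath

    -- Hypothetical stubs; the real code consults the git-annex location log
    -- and the local object directory.
    annexedFiles :: IO [AnnexedFile]
    annexedFiles = return []

    locallyPresent :: AnnexedFile -> IO Bool
    locallyPresent _ = return False

    remoteHas :: Remote -> AnnexedFile -> IO Bool
    remoteHas _ _ = return False

    queueUpload, queueDownload :: Remote -> AnnexedFile -> IO ()
    queueUpload r f = putStrLn ("queue upload of " ++ f ++ " to " ++ r)
    queueDownload r f = putStrLn ("queue download of " ++ f ++ " from " ++ r)

    -- Walk every annexed file and queue whatever transfers are needed to get
    -- this repository and the remote back in sync.
    transferScan :: Remote -> IO ()
    transferScan remote = do
        fs <- annexedFiles
        forM_ fs $ \f -> do
            here  <- locallyPresent f
            there <- remoteHas remote f
            case (here, there) of
                (True, False) -> queueUpload remote f   -- remote lacks the content
                (False, True) -> queueDownload remote f -- we lack the content
                _             -> return ()              -- already in sync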

I think it'll be ok for some git pulls and pushes to happen when moving to a new network, or resuming a laptop (or every 30 minutes when falling back to polling). But the transfer scan is currently too heavy to be appropriate to run every time in those situations. I have an idea for avoiding that scan when the remote's git-annex branch has not changed (a rough sketch follows the list below). But I need to refine it, to handle cases like this:

  1. a new remote is added
  2. file contents start being transferred to (or from) it
  3. the network is taken down
  4. all the queued transfers fail
  5. the network comes back up
  6. the transfer scan needs to know the remote was not fully in sync before step 3, and so should still do a full scan even though the git-annex branch has not changed
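
One way to handle that is to keep, per remote, the git-annex branch sha the last scan covered plus a flag recording whether all of that scan's transfers completed. A rough sketch, with all names hypothetical:

    import Data.IORef
    import qualified Data.Map as M

    type Remote = String
    type Sha = String

    -- Per-remote record of the last scan: the git-annex branch sha it covered,
    -- and whether every transfer it queued actually finished.
    data ScanState = ScanState
        { scannedSha    :: Sha
        , transfersDone :: Bool
        }

    type ScanStates = IORef (M.Map Remote ScanState)

    -- Decide whether a remote needs a full transfer scan when the network
    -- comes back up.
    needsFullScan :: ScanStates -> Remote -> Sha -> IO Bool
    needsFullScan var remote currentSha = do
        m <- readIORef var
        return $ case M.lookup remote m of
            Nothing -> True                          -- never scanned (new remote)
            Just s
                | not (transfersDone s)      -> True  -- transfers failed midway
                | scannedSha s /= currentSha -> True  -- git-annex branch changed
                | otherwise                  -> False -- safe to skip the scan

In the scenario above, the failed transfers at step 4 would leave the flag unset, so the scan still runs even though the branch sha matches.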

Doubled the RAM in my netbook, which I use for all development. Yesod needs rather a lot of RAM to compile and link, and this should make me quite a lot more productive. I was struggling with the OOM killer taking out bits of chromium during my last week of development.