Recent comments posted to this site:

comment 6

Finally ran into this myself, and observed that several podcast hosts still do not support EMS even now.

Implemented a config to solve this:

git config annex.security.allow-insecure-https tls-1.2-no-EMS

I do caution against setting this globally, at least not without understanding the security implications, which I can't say I fully do.

Even setting it in a single repo could affect other connections made by git-annex, eg to API endpoints used for storage.

Personally, I am setting it only when importing feeds from those hosts:

git -c annex.security.allow-insecure-https=tls-1.2-no-EMS annex importfeed
Comment by joey
workaround

Workaround: Make git-annex use curl for url downloads. Eg:

git config annex.security.allowed-ip-addresses all
git config annex.web-options --netrc
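
As I understand it, these two settings together are what make git-annex run curl, and --netrc is a handy option to pass because it lets curl read any needed credentials from ~/.netrc, which would contain something like (hostname and credentials illustrative):

machine example.com
login myuser
password mysecret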

Note that using curl has other security implications, including letting git-annex download from IPs on the LAN.

Comment by joey
comment 4

After a lot of thought and struggling with layering issues between fsck and the S3 remote, here is a design to solve #2:

Add a new method repairCorruptedKey :: Key -> Annex Bool

fsck calls this when it finds a remote does not have a key it expected it to have, or when it downloads corrupted content.

If repairCorruptedKey returns True, it was able to repair a problem, and the Key should still be downloadable from the remote. If it returns False, it was not able to repair the problem.

Most special remotes will make this pure False. For S3 with versioning=yes, it will try to download the object from the bucket using each recorded versionId. Any versionId that does not work will be removed, and it will return True if any download succeeded.
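
A rough sketch of the shape this could take, where recordedVersionIds, tryDownloadVersion, and dropVersionId are hypothetical stand-ins for the real S3 remote internals:

import Control.Monad (forM, unless)

-- Sketch only: recordedVersionIds, tryDownloadVersion, and dropVersionId
-- are hypothetical helpers; Key and Annex are git-annex's own types.
repairCorruptedKey :: Key -> Annex Bool
repairCorruptedKey k = do
    vids <- recordedVersionIds k          -- all versionIds logged for this key
    oks <- forM vids $ \vid -> do
        ok <- tryDownloadVersion k vid    -- download and verify this version
        unless ok $
            dropVersionId k vid           -- forget versionIds that fail
        return ok
    return (or oks)                       -- True if any download succeeded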

In the case where the object's size is right but its content is corrupt, fsck will download the object, and then repairCorruptedKey will download it a second time. If there were 2 files with the same content, it would end up being downloaded 3 times! So this can be pretty expensive, but it's simple and will work.

Comment by joey
comment 1

Rather than altering the exported git tree, it could removeExport and then update the export log to say that the export is incomplete.

That would result in a re-export putting the file back on the remote.

It's not uncommon to want to, eg, git-annex move foo --from remote because the remote is low on space, or to temporarily make a file unavailable, and later send the file back to the remote. Supporting drop from export remotes in this way would allow for such a workflow, although with the difference that git-annex export would be needed to put the file back.
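
Concretely, the workflow this would enable looks something like (remote and branch names illustrative):

git annex move foo --from myremote
# ... later, when the remote should have the file again ...
git annex export main --to myremote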

It might also be possible to make sending a particular file to an export remote succeed when the export to the remote is incomplete and the file is in the exported tree. Then git-annex move foo --to remote would work to put the file back.

Comment by joey
comment 3

If drop from export remote were implemented, that would take care of #1.

The user can themselves export a tree that removes the file. fsck even suggests doing that when it finds a corrupted file on an exporttree remote, since it's unable to drop it in that case.
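
For example (file, branch, and remote names illustrative):

git rm foo
git commit -m 'remove corrupted file'
git annex export main --to myremote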

But notice that the fsck run above does not suggest doing that. Granted, with an S3 bucket with versioning, exporting a tree won't remove the corrupted version of the file from the remote anyway.

It seems that dealing with #2 here is enough to recover the problem dataset, and #1 can be left to that other todo.

Comment by joey
comment 4

> A balance might be that if it fails to connect to the remote.name.annexUrl, it could re-check it then.

Would this include re-checking when remote.name.annexUrl is unset? That would be necessary in situations where either the client didn't understand p2phttp when the repository was cloned, or the server side didn't provide p2phttp yet.

Given that the clone happened in the knowledge that "dumb http" was the only supported http protocol, and read-only at that, I am now questioning whether such an automatic upgrade to p2phttp would really be needed, or even desirable. Dumb http continues to work anyway.

Only re-checking if remote.name.annexUrl is already set would solve the issue of relocating the p2phttp endpoint.
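
Relocating the endpoint would then just be a matter of pointing the existing config at the new server, eg (URL illustrative):

git config remote.origin.annexUrl annex+http://example.com/git-annex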

Comment by matrss
Poor Bunny

Another standout feature is replayability. Each run feels different due to Poor Bunny's random trap patterns, and the desire to beat your previous high score creates a strong “one more try” loop.
Comment by cxararea
Melon playground - Gaming is good

One of the most impressive aspects of Melon Playground is its physics system. Every action feels meaningful because small changes can lead to very different outcomes. Whether you’re connecting objects, applying pressure, or testing explosions, the results often feel unpredictable and entertaining. This makes experimentation highly addictive, as players are constantly curious to see “what happens if” they try something new.

The ragdoll physics of the characters add another layer of fun. Watching how they react to impacts, tools, and environmental hazards can be both humorous and fascinating, especially when combined with creative setups.

Comment by cxararea