I tried a cluster where each node was in the archive group. Sending to the cluster caused the file to end up on multiple nodes, even though the preferred content should have allowed it to be stored on only one.
This is because a cluster checks preferred content for each node and sends to every node that wants the content. That works fine with balanced preferred content expressions, but with archive, all of the nodes want the content until one of them has it.
So to support archive better, after finding a node that wants the content, checking the second node would need to evaluate its preferred content under the assumption that the first node already contains the content, and so on. Preferred content checking does not currently support this, though something similar is done when dropping, with a set of repositories that are assumed to no longer contain the content.
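A minimal sketch of that sequential checking, in Haskell but entirely outside git-annex's real code; `NodeUUID`, `wantsContent`, and the accumulating assumed-present set are hypothetical names, not the actual API:

```haskell
import Data.Set (Set)
import qualified Data.Set as S

-- Hypothetical stand-ins; not git-annex's real types or API.
type NodeUUID = String
type Key = String

-- Assumed helper: evaluates a node's preferred content expression for a key,
-- given a set of repositories assumed to already contain the content.
wantsContent :: NodeUUID -> Key -> Set NodeUUID -> Bool
wantsContent = undefined -- placeholder, hypothetical

-- Check nodes one at a time; once a node is selected, later nodes are checked
-- as if that node already contained the content. An archive-style expression
-- then stops matching after the first selected node, while expressions that
-- ignore other repositories are unaffected.
selectNodes :: [NodeUUID] -> Key -> [NodeUUID]
selectNodes nodes key = go nodes S.empty
  where
    go [] _ = []
    go (n:ns) assumedPresent
        | wantsContent n key assumedPresent =
            n : go ns (S.insert n assumedPresent)
        | otherwise = go ns assumedPresent
```

The point is only that the assumed-present set threads through the traversal, which is the same idea as the set of assumed-absent repositories used when dropping.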
(Oddly, in my case it always seemed to end up on 2 nodes out of 4; I don't know why it didn't also get sent to the other 2.)
(Not considering this a bug, because cluster was designed to be used with balanced preferred content, which will probably work better in many ways. Still, it would be good to support this, especially when existing archive repositories get put into a cluster.)
--Joey
If this were implemented, and one of the nodes was full, and that happened to be the node whose preferred content expression was evaluated first, it would try to store there and fail, and the content would not be stored on any of the other nodes. That seems potentially worse than the current behavior of storing the file on multiple nodes.
Of course, this is the problem that `sizebalanced` preferred content solves. Since `sizebalanced` is otherwise much like the archive group, it would make sense to just switch the cluster nodes to use it.

As for other preferred content expressions, unless they use `copies=groupname:number` or `lackingcopies`, whether one node wants content won't be influenced by what other nodes have it. So evaluating preferred content in parallel is ok for those (see the sketch below).

I think I've talked myself out of making a change!
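To make that distinction concrete, here is a toy classification, again purely hypothetical Haskell rather than git-annex's real expression type; `Expr` and `dependsOnOtherRepos` are made-up names and only a few example constructors are shown:

```haskell
-- Toy miniature of a preferred content expression, just to make the
-- parallel-safety distinction concrete; not git-annex's real AST.
data Expr
    = Include String            -- include=glob, depends only on the file
    | CopiesInGroup String Int  -- copies=groupname:number
    | LackingCopies Int         -- lackingcopies=number
    | And Expr Expr
    | Or Expr Expr
    | Not Expr

-- True when evaluating the expression for one node could be influenced by
-- which other repositories contain the content. Only such expressions would
-- need the sequential assume-already-present checking; the rest can safely
-- be evaluated for all cluster nodes in parallel.
dependsOnOtherRepos :: Expr -> Bool
dependsOnOtherRepos (CopiesInGroup _ _) = True
dependsOnOtherRepos (LackingCopies _)   = True
dependsOnOtherRepos (Include _)         = False
dependsOnOtherRepos (And a b) = dependsOnOtherRepos a || dependsOnOtherRepos b
dependsOnOtherRepos (Or a b)  = dependsOnOtherRepos a || dependsOnOtherRepos b
dependsOnOtherRepos (Not e)   = dependsOnOtherRepos e
```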