When using 7-zip with 7z u -uq0 -ssw -mx9 -stl files.7z files (update/synchronize with ultra compression), I believe I've occasionally seen the same set of files produce an archive with a slightly different size, which makes my backup program classify the archive as new. I also recall reading about an interplay between the number of threads actually used and the number of compression blocks, which can cause the archive size to vary slightly between runs. For now I'm using a single thread (-mmt1) to get a deterministic result, but it's slower, and perhaps it isn't really necessary?
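For reference, this is the single-threaded invocation I'm running at the moment (same switches as above, just with -mmt1 added):

    7z u -uq0 -ssw -mx9 -stl -mmt1 files.7z files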
An archive rebuilt from exactly the same files is still considered "new" if either its modification time (which -stl takes care of) or its size (which -mmt1 should pin down, though hopefully it isn't needed) differs from the cloud copy.
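If it helps, this is roughly how I've been checking it (run1.7z and run2.7z are just illustrative names, and stat -c assumes GNU coreutils; on Windows I'd compare the sizes shown in Explorer instead):

    # build the same file set twice with the default multithreaded settings
    7z u -uq0 -ssw -mx9 -stl run1.7z files
    7z u -uq0 -ssw -mx9 -stl run2.7z files
    # identical size and mtime on both runs = no spurious "new" upload
    stat -c '%s %Y' run1.7z run2.7z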
I could keep experimenting on my own, but I'm hoping there is a definitive answer. Thanks!