Yes, but the current situation is imho better in terms of "not having unusable archives because somehow the whole tar/gz checksum got randomly corrupted", which was the initial point of that change.
This is also related to the fact that the old code was doing both the archiving (tar) and the compression (gz) at the same time, increasing the number of issues that may arise. Doing both steps sequentially (i.e. compressing the backup only once it's done) should be more robust, while also being more flexible in terms of which compression algorithm to use. But it's not that trivial because of other things (e.g. backup_info must be able to access info.json on the fly without uncompressing the entire archive).
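To make the idea concrete, here is a minimal sketch of the sequential approach using Python's standard library. The paths (`backup.tar`, `data/`) are hypothetical, not the actual code, and the last step illustrates why fetching a single member is cheap on a plain tar but not on a compressed one:

```python
import gzip
import json
import shutil
import tarfile

# Hypothetical paths, for illustration only.
ARCHIVE = "backup.tar"

# Step 1: archive first, without compression. A failure here leaves a
# plain tar that is still partially readable, rather than a corrupt .tar.gz.
with tarfile.open(ARCHIVE, "w") as tar:
    tar.add("info.json")
    tar.add("data/")

# Step 2: compress the finished archive in a separate pass. Any
# compressor could be swapped in here (gzip, zstd, xz, ...).
with open(ARCHIVE, "rb") as src, gzip.open(ARCHIVE + ".gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# Reading info.json back without unpacking everything: on a plain
# (uncompressed) tar this only seeks to the member's offset. On a
# .tar.gz, the stream has to be decompressed up to that member, which
# is the "access info.json on the fly" difficulty mentioned above.
with tarfile.open(ARCHIVE, "r") as tar:
    info = json.load(tar.extractfile("info.json"))
```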
Anyway, as pointed out previously, volunteers' time is limited, there are only a thousand topics to deal with in parallel, and this isn't really high priority; the high priority being borg… That doesn't stop anybody from working on this, though.
Last but not least, your example seems highly biased: I'm quite surprised that you're able to compress 6 GB of archive into 200 MB… What are these data? It sounds like it may be 6 GB of non-multimedia (or redundant) files, such that a very high compression ratio is achievable. I don't have any quantified study for this, but I'm guessing that in most cases people either have a bunch of multimedia files that are not compressible, or not-such-a-large-amount-of-data.