Thank you for the introduction to BlobBackup. I am surprised by the focus on speed and backup size, however. Neither is all that important to me so long as the contenders are in the same ballpark. Arq has been bulletproof for me even though it is slower and creates larger archives. Backups run in the middle of the night and space is cheap, so that a tool may take longer and use more space is insignificant compared to backup reliability.

What would get my attention is any of the following:

- The ability to completely remove a file/folder from a backup archive. Sometimes when I delete something I want it gone permanently from every backup ever. Sometimes gone means gone, and I don't want even an encrypted copy staying around.
- The ability to extract all files from an archive that don't exist in the current folder/volume being backed up. After years of backups my archives are 95% just replicas of my current data. Maybe 3% is the modification history of existing files, but that last 2% is stuff I removed, and maybe even stuff I don't recall removing. Is there any way to get just that 2% back from a blob archive? I mean the ability to iterate over the files in the archive and extract only those that do not match anything in the current folder, and to do so without extracting every archived file along the way. If the only way to test each file in the archive against the master folder/volume is to extract it, it becomes prohibitively expensive.

> I am surprised by the focus on speed and backup size

The core focus for us has really been simplicity; the speed/size benefits are merely a nice side effect. The data format especially is simpler than the other tools' and in turn less error prone.

You bring up some interesting feature ideas.

> The ability to completely remove a file/folder from a backup archive

I can see this being very handy for lots of people, especially any company that needs to abide by strict privacy regulations (ones that require all data for something to be wiped). I've encountered this use case in my career a few times, so this idea has come to mind before. The short story is, I want to add this to BlobBackup. So the question will really be about when :) and that I'm not totally sure about at the moment.

> Is there any way to get just that 2% back

This is very interesting! Being able to download just what you don't have could definitely be nice, as would the insight into the content of the archives.
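That "extract only the 2% that is gone" idea boils down to comparing archived content against local content without extracting anything. Below is a minimal sketch of the approach, assuming a hypothetical JSON manifest that maps archived paths to SHA-256 digests; BlobBackup's actual on-disk format may differ.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a local file in 1 MiB blocks so large files stay cheap."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def only_in_archive(manifest_file: Path, current_root: Path) -> list[str]:
    """Return archived paths whose content no longer exists locally.

    Assumes a hypothetical manifest of {"relative/path": "sha256 digest"}
    stored alongside the archive, so nothing is extracted to compare.
    """
    manifest = json.loads(manifest_file.read_text())
    local_digests = {
        sha256_of(p) for p in current_root.rglob("*") if p.is_file()
    }
    # Match on content rather than path so renamed files don't show up
    # as "missing".
    return [path for path, digest in manifest.items()
            if digest not in local_digests]

if __name__ == "__main__":
    for path in only_in_archive(Path("archive-manifest.json"), Path(".")):
        print("only in archive:", path)
```

Only the entries this returns would then need real extraction, keeping the cost proportional to the missing 2% rather than to the whole archive.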
I'm trying to limit the chunk size, as in the maximum size a chunk of the response body can have, for nginx v1.19.7. I found various parameters that do similar things in the nginx docs, among them client_body_buffer_size:

> Sets buffer size for reading client request body. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages. This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms.

That does not sound like it limits the size of response chunks; it rather sets the buffer size for incoming request bodies, whatever their chunk size. Similarly, I can configure chunked_transfer_encoding, which the docs describe as:

> Allows disabling chunked transfer encoding in HTTP/1.1. It may come in handy when using a software failing to support chunked encoding despite the standard's requirement.

This sounds like it disables chunked transfer altogether, and I have not found a configuration option that limits the chunk size of a chunked transfer. I also found the HTTP/2 parameter http2_chunk_size (added in v1.9.5), but I'm unsure whether it's related:

> Sets the maximum size of chunks into which the response body is sliced. A too low value results in higher overhead. A too high value impairs prioritization due to HOL blocking.
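For reference, a minimal nginx.conf sketch collecting the directives mentioned above; the sizes and certificate paths are placeholders, not recommendations:

```nginx
events {}

http {
    server {
        listen 443 ssl http2;            # http2_chunk_size only applies on HTTP/2 connections
        ssl_certificate     cert.pem;    # placeholder
        ssl_certificate_key cert.key;    # placeholder

        # Request side: buffer for reading the client request body; larger
        # bodies spill to a temporary file. Does not cap response chunks.
        client_body_buffer_size 16k;

        # HTTP/1.1 response side: chunked encoding can only be toggled
        # on or off, not sized.
        chunked_transfer_encoding on;

        # HTTP/2 response side: maximum size of the chunks the response
        # body is sliced into.
        http2_chunk_size 8k;
    }
}
```

Note that none of these caps the size of HTTP/1.1 chunks: client_body_buffer_size concerns the request body, chunked_transfer_encoding is an on/off switch, and http2_chunk_size only shapes HTTP/2 DATA frames.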