1 Answer
I took your mention of byte-range fetches to mean multipart downloads rather than fetching just one specific range.
AWS builds a lot of this into its clients. In Python, for example, boto3's TransferConfig controls how a transfer is split into parts and how many download retries are attempted: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/customizations/s3.html#boto3.s3.transfer.TransferConfig.
This lets you set how large you want each byte range to be and how many retries you want to allow.
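For instance, here is a minimal download sketch along those lines – the bucket, key, and filename are placeholders:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Split the download into 8 MB byte ranges, fetch up to 10 ranges in
# parallel, and allow 5 download attempts for streaming errors (socket
# errors / read timeouts); throttling and 5xx retries are handled
# separately by botocore.
config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,  # use ranged GETs above this size
    multipart_chunksize=8 * 1024 * 1024,  # size of each byte range
    max_concurrency=10,                   # parallel range requests
    num_download_attempts=5,              # attempts per failed range
)

# Placeholder bucket/key/filename -- substitute your own.
s3.download_file("my-bucket", "big-file.bin", "/tmp/big-file.bin", Config=config)
```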
Multipart uploads are more complex – if one part fails, the whole job fails and the SDK cleans up after itself. If you want more control, you can create a multipart upload yourself and handle the failure of an individual part (see the sketch below). This might be worth implementing if you have really large files to upload.
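A rough sketch of that manual approach, assuming placeholder bucket/key/file names and a simple three-attempt retry per part; note that S3 requires each part except the last to be at least 5 MB:

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "my-bucket", "big-file.bin"  # placeholders
part_size = 8 * 1024 * 1024                # parts must be >= 5 MB (except the last)

upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
upload_id = upload["UploadId"]
parts = []

try:
    with open("/tmp/big-file.bin", "rb") as f:
        part_number = 1
        while True:
            data = f.read(part_size)
            if not data:
                break
            # Retry this single part up to 3 times instead of
            # failing the whole upload on the first error.
            for attempt in range(3):
                try:
                    resp = s3.upload_part(
                        Bucket=bucket, Key=key, UploadId=upload_id,
                        PartNumber=part_number, Body=data,
                    )
                    break
                except Exception:  # broad for the sketch; narrow in real code
                    if attempt == 2:
                        raise
            parts.append({"ETag": resp["ETag"], "PartNumber": part_number})
            part_number += 1

    s3.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=upload_id,
        MultipartUpload={"Parts": parts},
    )
except Exception:
    # Abandon the upload so S3 does not keep storing orphaned parts.
    s3.abort_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id)
    raise
```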
So with the default settings, the failure of one part means the download of the entire file fails, but if I create a multipart upload myself, I can handle each part individually. Do I understand that right?
Yes – this is a great blog post on how to do a multipart upload from the CLI and what to do if one part fails: https://aws.amazon.com/premiumsupport/knowledge-center/s3-multipart-upload-cli/
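As an SDK-side analogue of what that article covers, list_parts shows which parts of an interrupted upload already made it to S3, so only the missing ones need to be re-sent (the upload ID and names below are placeholders):

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "my-bucket", "big-file.bin"  # placeholders
upload_id = "EXAMPLE-UPLOAD-ID"            # placeholder: ID from create_multipart_upload

# See which parts S3 already has, so only the failed ones need re-uploading.
existing = s3.list_parts(Bucket=bucket, Key=key, UploadId=upload_id)
done = {p["PartNumber"]: p["ETag"] for p in existing.get("Parts", [])}
print(f"{len(done)} parts already uploaded: {sorted(done)}")

# ...re-upload any missing part numbers with upload_part(), then complete
# the upload with the full, ordered part list as in the sketch above.
```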
okay~ thank you~