improve: use chunked transfer encoding to stream large files #714
Before
When uploading a large file, we slice the binary stream into chunks (16MB per chunk by default, configurable via `MAPILLARY_TOOLS_UPLOAD_CHUNK_SIZE_MB`) and upload each chunk to the server in its own HTTP request. This means uploading a 16GB file sends over 1000 HTTP requests. I suspect that many potential issues are caused by initializing these HTTP requests rather than by the actual data transfer that follows, especially in an unreliable and dynamic network environment. To address this, we are switching to chunked transfer encoding, which the upload server supports.
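For reference, a minimal sketch of the previous per-chunk behavior (the URL, offset header name, and helper name are illustrative assumptions, not the actual upload API):

```python
import os
import requests

# Default 16MB, overridable via the env var mentioned above
CHUNK_SIZE = int(os.getenv("MAPILLARY_TOOLS_UPLOAD_CHUNK_SIZE_MB", "16")) * 1024 * 1024

def upload_per_chunk(path: str, url: str, session: requests.Session) -> None:
    # One HTTP request per chunk: a 16GB file triggers 1000+ requests
    with open(path, "rb") as fp:
        offset = 0
        while True:
            chunk = fp.read(CHUNK_SIZE)
            if not chunk:
                break
            session.post(url, data=chunk, headers={"Offset": str(offset)})
            offset += len(chunk)
```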
After
To upload a 16GB file, we still slice it into 16MB chunks, but now stream all of them within a single HTTP request. Chunked transfer encoding supports exactly this, and it can be done by passing a chunk generator to `requests.post(data=chunk_generator)`.
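A minimal sketch of the new approach (the URL and helper names are illustrative assumptions; `requests` switches to `Transfer-Encoding: chunked` automatically when `data` is a generator):

```python
import os
import requests

CHUNK_SIZE = int(os.getenv("MAPILLARY_TOOLS_UPLOAD_CHUNK_SIZE_MB", "16")) * 1024 * 1024

def chunk_generator(path: str):
    # Yield the file in 16MB chunks; requests streams a generator body
    # using chunked transfer encoding
    with open(path, "rb") as fp:
        while True:
            chunk = fp.read(CHUNK_SIZE)
            if not chunk:
                break
            yield chunk

def upload_streamed(path: str, url: str) -> requests.Response:
    # A single POST carries the whole file as a chunked stream
    return requests.post(url, data=chunk_generator(path))
```

The file is still read incrementally, so memory usage stays bounded by the chunk size, but the per-request connection and handshake overhead is paid only once per file.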