@ptpt ptpt commented Mar 5, 2025

Before

When uploading a large file, we slice the binary stream into chunks (16MB per chunk by default; configurable via MAPILLARY_TOOLS_UPLOAD_CHUNK_SIZE_MB), and then upload each chunk to the server in a separate HTTP request. This means uploading a 16GB file sends over 1000 HTTP requests.
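
Roughly, the per-chunk flow looks like the following sketch. The upload URL and the `Offset` header are hypothetical placeholders for illustration, not the actual Mapillary upload API:

```python
import os
import requests

CHUNK_SIZE = int(os.getenv("MAPILLARY_TOOLS_UPLOAD_CHUNK_SIZE_MB", "16")) * 1024 * 1024

def upload_per_chunk(path: str, url: str) -> None:
    # One HTTP request per chunk: a 16GB file at 16MB per chunk
    # means ~1000 separate requests, each paying the full cost of
    # request initialization.
    with open(path, "rb") as fp:
        offset = 0
        while True:
            chunk = fp.read(CHUNK_SIZE)
            if not chunk:
                break
            # "Offset" is a placeholder header standing in for however
            # the server tracks chunk positions.
            resp = requests.post(url, data=chunk, headers={"Offset": str(offset)})
            resp.raise_for_status()
            offset += len(chunk)
```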

I suspect that many of the potential issues are caused by initializing these HTTP requests rather than by the actual data transfer that follows, especially in unreliable and dynamic network environments. To address these issues, we are switching to chunked transfer encoding, which the upload server supports.

After

To upload a 16GB file, we still slice it into 16MB chunks, but now all chunks are sent within a single HTTP request. This is exactly what chunked transfer encoding supports, and it can be done simply by passing a chunk generator to requests.post(data=chunk_generator).
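
For illustration, a minimal sketch of the new approach. When the request body is a generator and no Content-Length is set, requests sends it with Transfer-Encoding: chunked automatically; the URL below is a placeholder, not the actual Mapillary upload endpoint:

```python
import os
import requests

CHUNK_SIZE = int(os.getenv("MAPILLARY_TOOLS_UPLOAD_CHUNK_SIZE_MB", "16")) * 1024 * 1024

def chunk_generator(path: str):
    # Yield the file as fixed-size chunks; each yielded chunk becomes
    # one piece of the chunked-encoded request body.
    with open(path, "rb") as fp:
        while chunk := fp.read(CHUNK_SIZE):
            yield chunk

# All chunks are streamed within a single HTTP request.
resp = requests.post(
    "https://upload.example.com/file",  # placeholder upload URL
    data=chunk_generator("video.mp4"),
)
resp.raise_for_status()
```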

@ptpt ptpt changed the title from "improve: use data generator (chunks) to stream large files" to "improve: use chunked transfer encoding to stream large files" Mar 5, 2025
@ptpt ptpt merged commit a619ea0 into main Mar 6, 2025
19 checks passed
@ptpt ptpt deleted the improve-upload-chunks branch March 6, 2025 04:52