Large files / many files issues when using filesystem #473

@Jayd603

Description

It appears that directory listings are recursive: s3proxy seems to read in every file under the base directory before presenting the client with anything, so even listing the root directory triggers a full traversal before the list is shown. When sub-directories contain many files, listing even a single directory can take a long time, especially if the filesystem base dir points at a network share mount.
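To illustrate the suspected behavior (this is a hypothetical sketch, not s3proxy's actual code): a recursive walk touches every file under every sub-directory, so its cost scales with the total file count, while a shallow listing only touches the immediate children, which is all a delimiter-style LIST of one directory needs. On a network share each extra visit is a round trip, which is why the difference is so noticeable there. `ListingDemo` and both method names are made up for this example.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class ListingDemo {
    // Recursive listing: visits every entry in every sub-directory.
    // Cost grows with the total number of files under root, not with
    // the size of the directory actually being listed.
    static List<Path> listRecursive(Path root) throws IOException {
        try (Stream<Path> walk = Files.walk(root)) {
            List<Path> out = new ArrayList<>();
            walk.forEach(out::add);
            return out;
        }
    }

    // Shallow listing: touches only the immediate children, which is
    // all a single-directory listing needs.
    static List<Path> listShallow(Path root) throws IOException {
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(root)) {
            List<Path> out = new ArrayList<>();
            ds.forEach(out::add);
            return out;
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("demo");
        Path sub = Files.createDirectories(root.resolve("sub"));
        for (int i = 0; i < 1000; i++) {
            Files.createFile(sub.resolve("file" + i));
        }
        // The recursive walk visits root, sub, and all 1000 files;
        // the shallow listing sees only the single child "sub".
        System.out.println("recursive entries: " + listRecursive(root).size());
        System.out.println("shallow entries:   " + listShallow(root).size());
    }
}
```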

It also seems that s3proxy limits the request size, so if xbcloud tries to send 100 MB chunks (set via --read-buffer-size), the requests fail:

```
blocks.ibd.00000000000000000000, size: 52428858
221117 20:05:05 xbcloud: S3 error message: MaxMessageLengthExceededYour request was too big.4442587FB7D0A2F9
221117 20:05:05 xbcloud: error: failed to upload chunk
```
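One possible workaround, until the proxy's request-size cap is understood or made configurable, is to shrink the upload chunk size so each PUT stays under the limit. This is a hedged sketch: the endpoint, bucket name, backup name, and the 10 MB (10485760-byte) figure are all assumptions, not values from this issue.

```shell
# Stream the backup through xbcloud with a smaller read buffer so each
# chunk (and thus each PUT request) stays below the proxy's size cap.
# Endpoint, bucket, backup name, and buffer size are example values.
xtrabackup --backup --stream=xbstream --target-dir=/tmp | \
  xbcloud put \
    --storage=s3 \
    --s3-endpoint=http://127.0.0.1:8080 \
    --s3-bucket=backups \
    --read-buffer-size=10485760 \
    full-backup
```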

Edit: these should probably be two separate issues, my bad. On the directory listings: if I list a directory through s3proxy that has no sub-directories, it's fast. That is why I suspect it gathers all file info recursively before displaying anything. The slowness is amplified when the backing filesystem is a network share; that's where it really becomes noticeable, and the filesystem backend code was probably not written with network shares in mind. It still might be doing things inefficiently regardless.
