
write 5+ GiB (matrices) to S3Store #687

Merged
merged 1 commit into from
May 8, 2019

Commits on May 7, 2019

  1. ensure S3Store does not attempt to write too-large chunks to S3 (5+ GiB)
    
    Underlying library ``s3fs`` automatically writes objects to S3 in "chunks"
    or "parts" -- *i.e.* via multipart upload -- in line with S3's *minimum*
    limit for multipart of 5 MiB.
    
    This should, in general, keep each part upload under S3's *maximum* limit
    of 5 GiB. **However**, ``s3fs`` assumes that no *single* ``write()`` will
    exceed that maximum, and so fails to split a too-large upload request
    produced by a single write of 5 GiB or more.
    
    This can and should be resolved in ``s3fs``. But until then, it is
    resolved here in ``S3Store``.
    
    resolves #530
    jesteria committed May 7, 2019
    be4f431
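A minimal sketch of the approach the commit describes: rather than handing an entire 5+ GiB buffer to a single ``write()`` call, split it into parts that stay under S3's 5 GiB per-part upload limit. The names here (``MAX_PART_SIZE``, ``iter_parts``, ``chunked_write``) are illustrative assumptions, not the actual ``S3Store`` or ``s3fs`` API.

```python
# Stay under S3's 5 GiB maximum size for a single (part) upload.
# This constant and the helpers below are hypothetical, for illustration.
MAX_PART_SIZE = 5 * 1024 ** 3 - 1


def iter_parts(data, max_part_size=MAX_PART_SIZE):
    """Yield successive slices of ``data``, each at most ``max_part_size`` bytes."""
    for start in range(0, len(data), max_part_size):
        yield data[start:start + max_part_size]


def chunked_write(fileobj, data, max_part_size=MAX_PART_SIZE):
    """Write ``data`` to ``fileobj`` as a series of bounded writes.

    Each individual ``write()`` receives a part no larger than
    ``max_part_size``, so the underlying multipart machinery never sees
    a single write that exceeds S3's per-part maximum.
    """
    for part in iter_parts(data, max_part_size):
        fileobj.write(part)
```

With a small ``max_part_size``, a 10-byte payload would be written as parts of 4, 4, and 2 bytes; the reassembled stream is byte-identical to the original.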