
Use compressed CSVs [Resolves #498] #626

Merged: jesteria merged 1 commit into master from compress_csvs on Mar 6, 2019

Conversation

@thcrock (Contributor) commented Mar 6, 2019:

  • Make the CSVMatrixStore use compression and rename files to csv.gz
@codecov-io commented:

Codecov Report

Merging #626 into master will increase coverage by <.01%.
The diff coverage is 100%.


@@            Coverage Diff             @@
##           master     #626      +/-   ##
==========================================
+ Coverage   70.28%   70.29%   +<.01%     
==========================================
  Files          87       87              
  Lines        5775     5776       +1     
==========================================
+ Hits         4059     4060       +1     
  Misses       1716     1716
Impacted Files                              Coverage Δ
src/triage/component/catwalk/storage.py    92.57% <100%> (+0.02%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update e14429e...9de76f7.

@jesteria (Member) left a comment:

Cool.

Do we want (or need) to support existing CSVs that aren't compressed, say for upgrades? I don't know that it's a concern at all, but it seems as though it could be. It would add a little complexity to this (e.g. branching), though perhaps not much.

Say, if we can't rely on Pandas to infer compression, we might pretty easily set the compression argument based on the path, since we know it, e.g.: …(…, compression=self.compression):

@property
def compression(self):
    return 'gzip' if self.matrix_base_store.path.endswith('.gz') else None

…(or something). That's not necessarily sensible/working code as is – and, it might really not make sense to support uncompressed CSVs. But, it seemed like it might be easy enough.
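
On the read side, that property could then just be passed through to Pandas, e.g. (again only a sketch; the _load name and the store's open method are assumptions about the surrounding class):

def _load(self):
    with self.matrix_base_store.open('rb') as fd:
        # read_csv accepts compression='gzip' or compression=None,
        # so plain and gzipped files would load through the same call
        return pandas.read_csv(fd, compression=self.compression)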


Also, while I agree 100% that this resolves the original issue, and in a lovely way, is there any desire to support HDF backed up in S3? If so, that could simply be another Issue – perhaps a lower-priority one, of course.


 def save(self):
-    self.matrix_base_store.write(self.full_matrix_for_saving.to_csv(None).encode("utf-8"))
+    self.matrix_base_store.write(gzip.compress(self.full_matrix_for_saving.to_csv(None).encode("utf-8")))
@jesteria (Member):

It looks like DataFrame.to_csv supports compression.

I suppose we can't rely on it to infer compression, because we hand it file descriptors rather than paths, (and they're S3 paths, which it might not consider "path-like") – or, in this case, we hand it None, which is very un-path-like. (I'm just guessing that that was the issue you came across.)

Regardless, if necessary, it appears that we can invoke it here as:

MATRIX.to_csv(…, compression='gzip')

…But, is the issue that it ignores this when path is None?

[docs]

@jesteria (Member):

(Related, is there a reason we don't just do):

self.full_matrix_for_saving.to_csv(self.matrix_base_store, compression='gzip')

@thcrock (Contributor, Author):

Pandas doesn't support compression when saving to filehandles with to_csv, only filenames. I found this on Stack Overflow, which referred to a comment in the pandas source, and I confirmed it by trying it myself: it 'worked', but the files were the same size.
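
For illustration, the asymmetry looks something like this (a sketch; the exact behavior, including whether a warning is emitted, depends on the pandas version):

import gzip
import io
import pandas

df = pandas.DataFrame({'a': range(10000)})

# Handed a buffer or file handle, to_csv ignores the compression
# argument: the "output" below is plain CSV text.
buf = io.StringIO()
df.to_csv(buf, compression='gzip')

# The approach in this PR: render the CSV to a string, then compress it.
compressed = gzip.compress(df.to_csv(None).encode('utf-8'))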

@jesteria (Member):

I've never felt so lied to in my entire life, Pandas 😿

I hope this doesn't risk more memory issues, holding the string temporarily in RAM 🤷‍♂️

@thcrock (Contributor, Author):

I think given how well these matrices compress, the compressed string also existing in RAM (an extra 10% of the original) is not a terrible problem.

Unless you mean the original, uncompressed string (i.e. what we are doing with the None target), which is much worse. I may try to address that in the 'matrix building memory fix' PR we just talked about (which I'm about to start), since the main concern there is memory usage; it may be worth figuring out what's needed to bypass to_csv and do the saving without the extra copy in memory.

@jesteria (Member):

Exactly 👍

I imagine that even if you can't or don't want to bypass to_csv, you can probably tweak it to be RAM-courteous, (even if that meant something terrible like ohio) 😉

@thcrock (Contributor, Author):

Yeah. I mean, you can iterate through the contents in a loop and just do a plain old CSV write. Pandas advertises all sorts of speedups in its to_csv and read_csv, probably involving C, and I'm guessing there's truth to that, so that wouldn't be my first thought; but maybe it wouldn't be so bad for us (especially given the RAM considerations).

@jesteria (Member):

Sure, I mean, off the top of my head, it could be a choice between:

a) writing the CSV in Python, outside of Pandas, with something like:

with self.matrix_base_store.open('wb') as fd:
    # csv.writer needs a text stream, so wrap the (binary) GzipFile
    writer = csv.writer(io.TextIOWrapper(GzipFile(fileobj=fd, mode='wb'), newline=''))

b) trying to hold onto Pandas's utility and ostensible optimizations:

with self.matrix_base_store.open('wb') as fd, \
        GzipFile(fileobj=fd, mode='wb') as zipped, \
        PipeTextIO(self.full_matrix_for_saving.to_csv) as pipe:
    for line in pipe:
        # GzipFile wants bytes, and the pipe yields text
        zipped.write(line.encode('utf-8'))

…and the two could be compared for speed, resource usage, complexity.

(For example, though A looks short and simple above, and though in B we might lose all of Pandas's optimizations by forcing it through our in-Python pipe … in fact, A could be a bit long/sticky in actual implementation, because we're taking over DataFrame.to_csv from Pandas.)
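
For what it's worth, option A fleshed out a bit further might look like the following (a sketch only; it assumes matrix_base_store and full_matrix_for_saving as above, and writes whatever the index label happens to be into the header):

import csv
import io
from gzip import GzipFile

def save(self):
    with self.matrix_base_store.open('wb') as fd, \
            GzipFile(fileobj=fd, mode='wb') as zipped, \
            io.TextIOWrapper(zipped, encoding='utf-8', newline='') as text:
        # csv.writer wants a text stream, hence the TextIOWrapper
        writer = csv.writer(text)
        df = self.full_matrix_for_saving
        # header row: the index label followed by the column names
        writer.writerow([df.index.name] + list(df.columns))
        # stream rows one at a time instead of rendering one big CSV string
        for row in df.itertuples():
            writer.writerow(row)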

@thcrock (Contributor, Author) commented Mar 6, 2019:

I don't see a need to add complexity to handle upgrading. The behavior if you don't do anything is just that it won't see the old matrices and will thus rebuild them. Not optimal, but not the end of the world. And if you want to optimize your directory on your own, you can do so with a few lines of bash.
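
For the record, that bash could be as little as (hypothetical path; assumes the matrices sit as flat .csv files in one directory):

cd /path/to/matrices
gzip *.csv   # replaces each foo.csv with foo.csv.gz, matching the new naming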

As far as leaving open a low-priority issue for HDF-S3, I don't think this is needed either. I'm of the mind that if somebody in the future wants that, they can create an issue for it, and extraneous issues cluttering up the issues page do nothing but make it slightly harder to look at the current issues.

@jesteria (Member) left a comment:

LGTM

Might want to make clear in closing the Issue how it was resolved, (not with HDF).

Regardless, I think it's a good resolution 👍

@jesteria jesteria merged commit 0229c66 into master Mar 6, 2019
@jesteria jesteria deleted the compress_csvs branch March 6, 2019 22:05