Reduce Memory Usage of Matrix Building [Resolves #372]
Matrix building is still very memory-intensive, for no particularly good reason: we aren't using the matrices at that point, just transferring them from database to disk, with an in-Python join to get around column limits. While we still use pandas to build the matrices themselves, this is hard to avoid: any kind of pandas join uses a multiple of the memory the data itself needs. Bringing memory usage down to what the data actually requires would be better, but better still is making memory usage controllable by never holding the whole matrix in memory. Ohio's PipeTextIO makes this technically feasible, but to make it work we also need to remove HDF support. HDF support was added merely for its compression capabilities, and with the recent changes to compress CSVs it is no longer needed.

MatrixStore changes:
- Remove HDFMatrixStore and HDF support from the experiment and CLI
- Modify MatrixStore.save to take a bytestream instead of assuming a dataframe is available to convert
- Run the null-column check during loading/preprocessing instead of after the matrix is built

MatrixBuilder changes:
- Convert the intermediate-dataframe-generating functions into query-generating functions, since we can no longer use intermediate dataframes. These queries also no longer duplicate the index (entity id, as-of date), so the Python joining code doesn't have to remove it manually.
- Since there is no dataframe anymore, the row count has to come from the database.
- Add more prebuild checks to make sure the joins will work; without a dataframe.join at the end, column mismatches would otherwise no longer produce explicit errors.

Other changes:
- Remove unused utils that mentioned HDF
- Remove the HDF section from the experiment run document
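The core streaming idea can be sketched in pure Python. This is not ohio's actual PipeTextIO API; it is a simplified, hypothetical stand-in that shows how rows can be exposed to a consumer as a readable text stream so the full matrix never sits in memory:

```python
import csv
import io
from typing import Iterable, Iterator


class IterTextIO(io.TextIOBase):
    """Minimal read-only file-like wrapper over an iterator of text chunks.

    A simplified stand-in for the pipe idea: downstream consumers read the
    matrix as a stream of CSV lines instead of from an in-memory dataframe.
    """

    def __init__(self, chunks: Iterable[str]):
        self._chunks: Iterator[str] = iter(chunks)
        self._buffer = ""

    def read(self, size: int = -1) -> str:
        if size < 0:
            # Drain the iterator; only used here for demonstration.
            result, self._buffer = self._buffer + "".join(self._chunks), ""
            return result
        while len(self._buffer) < size:
            try:
                self._buffer += next(self._chunks)
            except StopIteration:
                break
        result, self._buffer = self._buffer[:size], self._buffer[size:]
        return result


def rows_as_csv_lines(rows: Iterable[tuple]) -> Iterator[str]:
    """Render database rows as CSV lines, one row at a time."""
    for row in rows:
        out = io.StringIO()
        csv.writer(out).writerow(row)
        yield out.getvalue()


# Example: stream two rows without ever materializing a dataframe.
rows = [(1, "2016-01-01", 0.5), (2, "2016-01-01", 0.7)]
stream = IterTextIO(rows_as_csv_lines(rows))
written = stream.read()
```

A real implementation would hand a stream like this to MatrixStore.save, which (per this commit) now accepts a bytestream rather than a dataframe.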
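The in-Python join and the new prebuild checks can be illustrated together. This is a hypothetical sketch (function and key names are invented, not the project's actual code): each per-query stream yields rows keyed on (entity_id, as_of_date), and because there is no dataframe.join at the end, mismatches must raise explicitly:

```python
from itertools import zip_longest
from typing import Iterable, Iterator, Sequence, Tuple

Key = Tuple[int, str]  # (entity_id, as_of_date)


def join_row_streams(
    streams: Sequence[Iterable[Tuple[Key, Sequence]]],
) -> Iterator[list]:
    """Join several per-query row streams on a shared index key.

    Assumes each stream yields rows in the same key order. Raises on any
    length or key mismatch, since there is no final dataframe join to
    surface such errors implicitly.
    """
    for rows in zip_longest(*streams):
        if any(r is None for r in rows):
            raise ValueError("row streams have different lengths")
        keys = {r[0] for r in rows}
        if len(keys) != 1:
            raise ValueError(f"key mismatch across streams: {keys}")
        key = rows[0][0]
        joined = list(key)  # emit the index once, not per sub-query
        for _, values in rows:
            joined.extend(values)
        yield joined


# Example: two sub-queries split to get around column limits.
left = [((1, "2016-01-01"), [0.5]), ((2, "2016-01-01"), [0.7])]
right = [((1, "2016-01-01"), [3]), ((2, "2016-01-01"), [4])]
result = list(join_row_streams([left, right]))
```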
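With no dataframe to call len() on, the row count comes from the database. A minimal sketch of that idea, using sqlite3 purely for illustration (the table name and helper are hypothetical):

```python
import sqlite3


def matrix_row_count(connection, table_name: str) -> int:
    """Fetch the matrix row count directly from the database."""
    cursor = connection.execute(f"SELECT COUNT(*) FROM {table_name}")
    return cursor.fetchone()[0]


# Illustrative in-memory database standing in for the real backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE matrix_source (entity_id INTEGER, as_of_date TEXT)")
conn.executemany(
    "INSERT INTO matrix_source VALUES (?, ?)",
    [(1, "2016-01-01"), (2, "2016-01-01")],
)
count = matrix_row_count(conn, "matrix_source")
```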
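Moving the null-column check into loading/preprocessing can be done over the CSV stream itself, before any matrix exists. A hypothetical sketch (the function name is invented for illustration):

```python
import csv
import io


def find_all_null_columns(csv_text: str) -> list:
    """Return the names of columns that are empty in every row of a CSV.

    Runs while the matrix is being streamed in, so no dataframe is needed.
    """
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    maybe_null = set(header)
    for row in reader:
        for name, value in zip(header, row):
            if value != "":
                maybe_null.discard(name)
        if not maybe_null:
            break  # every column has at least one value; stop early
    return [name for name in header if name in maybe_null]


# Example: column "b" is empty in every row.
null_cols = find_all_null_columns("a,b,c\n1,,\n2,,3\n")
```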