
GH-41719: [C++][Parquet] Cannot read encrypted parquet datasets via _metadata file #41821

Open · wants to merge 2 commits into base: main

Conversation

@rok rok (Member) commented May 24, 2024

Rationale for this change

Metadata written into the _metadata file appears not to be encrypted.

What changes are included in this PR?

This adds a code path to encrypt the _metadata file, and a test.

Are these changes tested?

Yes

Are there any user-facing changes?

This adds a user-facing encryption_properties parameter to pyarrow.parquet.write_metadata.
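
A minimal usage sketch (hedged: cf, kms_connection_config, ecfg, schema and metadata_collector are assumed to be set up as in the reproduction script later in this thread):

# Sketch only: cf, kms_connection_config, ecfg, schema and metadata_collector
# are assumed to exist (see the full reproduction script below).
import pyarrow.parquet as pq

encryption_properties = cf.file_encryption_properties(kms_connection_config, ecfg)
pq.write_metadata(
    schema,                                       # common dataset schema
    "dataset/_metadata",                          # summary file location
    metadata_collector,                           # collected by write_to_dataset
    encryption_properties=encryption_properties,  # the new parameter
)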

⚠️ GitHub issue #41719 has been automatically assigned to the PR creator.

@rok rok force-pushed the gh-41719_fix__metadata_file branch from 7be8f5e to 26cfd63 on June 2, 2024 14:12
@rok rok marked this pull request as ready for review June 2, 2024 14:13
@rok rok requested a review from wgtmac as a code owner June 2, 2024 14:13
@rok rok requested review from pitrou and removed request for wgtmac June 2, 2024 14:14
@rok rok (Member Author) commented Jun 2, 2024

@AudriusButkevicius could you check if the read metadata contains expected properties?

@AudriusButkevicius commented Jun 2, 2024

Hey, thanks for working on this; it looks great. However, something seems to be missing.

You can read the metadata file, but no data files are actually read when reading the dataset via it, so the table ends up empty (though with the right schema).

(arrow) root@base ~/code/arrow # python testx.py
/tmp/tmp8qr3xv_t
Writing
pyarrow.Table
col1: int64
col2: int64
year: int64
----
col1: [[1,2,3]]
col2: [[1,2,3]]
year: [[2020,2020,2021]]
write done


Reading
pyarrow.Table
col1: int64
col2: int64
year: int16
----
col1: []
col2: []
year: []

Code that reproduces the issue:

import os
import tempfile

import pyarrow.parquet.encryption as pe
import pyarrow.parquet as pq
import pyarrow.dataset as ds
import pyarrow as pa
import base64
import polars as pl


class KmsClient(pe.KmsClient):
    def unwrap_key(self, wrapped_key, master_key_identifier):
        return base64.b64decode(wrapped_key)

    def wrap_key(self, key_bytes, master_key_identifier):
        return base64.b64encode(key_bytes)


def write(location):
    cf = pe.CryptoFactory(lambda *a, **k: KmsClient())
    df = pl.DataFrame({
        "col1": [1, 2, 3],
        "col2": [1, 2, 3],
        "year": [2020, 2020, 2021]
    })
    ecfg = pe.EncryptionConfiguration(
        footer_key="TEST",
        column_keys={
            "TEST": ["col2"]
        },
        double_wrapping=False,
        plaintext_footer=False,
    )
    table = df.to_arrow()
    print("Writing")
    print(table)

    parquet_encryption_cfg = ds.ParquetEncryptionConfig(
        cf, pe.KmsConnectionConfig(), ecfg
    )

    metadata_collector = []

    pq.write_to_dataset(
        table,
        location,
        partitioning=ds.partitioning(
            schema=pa.schema([
                pa.field("year", pa.int16())
            ]),
            flavor="hive"
        ),
        encryption_config=parquet_encryption_cfg,
        metadata_collector=metadata_collector
    )

    encryption_properties = cf.file_encryption_properties(pe.KmsConnectionConfig(), ecfg)

    pq.write_metadata(
        pa.schema(
            field
            for field in table.schema
            if field.name != "year"
        ),
        os.path.join(location, "_metadata"),
        metadata_collector,
        encryption_properties=encryption_properties,
    )
    print("write done")


def read(location):
    decryption_config = pe.DecryptionConfiguration(cache_lifetime=300)
    kms_connection_config = pe.KmsConnectionConfig()
    cf = pe.CryptoFactory(lambda *a, **k: KmsClient())
    parquet_decryption_cfg = ds.ParquetDecryptionConfig(
        cf, kms_connection_config, decryption_config
    )

    decryption_properties = cf.file_decryption_properties(
        kms_connection_config, decryption_config)
    pq_scan_opts = ds.ParquetFragmentScanOptions(
        decryption_config=parquet_decryption_cfg,
        # If using build from master
        decryption_properties=decryption_properties
    )
    pformat = pa.dataset.ParquetFileFormat(default_fragment_scan_options=pq_scan_opts)

    dataset = ds.parquet_dataset(
        os.path.join(location, "_metadata"),
        format=pformat,
        partitioning=ds.partitioning(
            schema=pa.schema([
                pa.field("year", pa.int16())
            ]),
            flavor="hive"
        )
    )
    print("Reading")
    print(dataset.to_table())

if __name__ == '__main__':
    location = tempfile.mkdtemp(suffix=None, prefix=None, dir=None)
    print(location)
    os.makedirs(location, exist_ok=True)
    write(location)
    print("\n")
    read(location)

@AudriusButkevicius

It seems that dataset.get_fragments() doesn't return anything.

@rok rok (Member Author) commented Jun 2, 2024

It seems that dataset.get_fragments() doesn't return anything.

I think the C++ logic as-is doesn't store row groups; let me take a look and get back.

@rok rok added this to the 17.0.0 milestone Jun 2, 2024
@AudriusButkevicius commented Jun 2, 2024

I think the whole premise of _metadata files (not _common_metadata) is to store row group details as well as paths, so when you perform a read via the _metadata file, it knows exactly which files and row groups to read without having to open every file in the dataset. At least this is what happens when encryption is disabled.
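
For comparison, a minimal sketch of that plaintext flow (paths and schema are illustrative):

# Plaintext _metadata flow (no encryption involved); paths are illustrative.
import pyarrow as pa
import pyarrow.dataset as ds
import pyarrow.parquet as pq

table = pa.table({"col1": [1, 2, 3], "year": [2020, 2020, 2021]})
metadata_collector = []
pq.write_to_dataset(table, "dataset", metadata_collector=metadata_collector)

# _metadata stores row group details and ColumnChunk.file_path for every
# fragment, so a reader can plan the scan without opening each data file.
pq.write_metadata(table.schema, "dataset/_metadata", metadata_collector)

dataset = ds.parquet_dataset("dataset/_metadata")  # plans purely from _metadata
print(dataset.to_table())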

@wgtmac (Member) commented Jun 3, 2024

Although I understand the intention of this issue and the corresponding fix, I don't think the design of Parquet encryption accounted for the _metadata summary file, because it may point to several different encrypted Parquet files. It would be great if @ggershinsky could advise whether there is any defect in this use case.

@wgtmac (Member) commented Jun 3, 2024

BTW, the _metadata summary file is something we strive to deprecate in the new Parquet v3 design. If you have any concrete use case, please let us know so we can see if we can improve things. @AudriusButkevicius

@rok rok force-pushed the gh-41719_fix__metadata_file branch from 26cfd63 to e0c9418 on June 3, 2024 03:19
@rok rok (Member Author) commented Jun 3, 2024

I don't think the design of Parquet encryption accounted for the _metadata summary file, because it may point to several different encrypted Parquet files.

Why would this be an issue? Because these files might be encrypted with different keys?

@wgtmac (Member) commented Jun 3, 2024

Yes, these files may have different master keys, which cannot all be referenced by a single footer IMHO.

@rok rok (Member Author) commented Jun 3, 2024

But the files can have the same key, and decryption would error out if they don't? That sounds OK to me.

@rok rok (Member Author) commented Jun 3, 2024

A Python error is thrown here, but I'm not sure why.

@AudriusButkevicius commented Jun 3, 2024

My intended use of this is to reduce strain on the filesystem when reading large (many-file) datasets from a network-attached filesystem, by reading the metadata file instead of many separate files.

I also have a hard requirement for encryption sadly as the data is sensitive.

It would be amazing if this worked with encrypted datasets assuming the key is the same.

I would also be OK with storing the metadata in plaintext, performing fragment filtering based on row-group stats, and then re-reading and decrypting the footers of the chosen files. Obviously this is fine for my use case but might not be acceptable in general.
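
A rough sketch of that fallback (hedged: assumes a plaintext _metadata next to encrypted data files, with decryption_properties built via the CryptoFactory as in the reproduction script above):

# Hypothetical fallback: plaintext _metadata for planning, decryption only
# for the selected data files. `decryption_properties` is assumed to come
# from CryptoFactory.file_decryption_properties(...).
import pyarrow.dataset as ds
import pyarrow.parquet as pq

dataset = ds.parquet_dataset("dataset/_metadata")  # plaintext planning metadata
fragments = list(dataset.get_fragments(filter=ds.field("year") == 2020))

tables = [
    pq.read_table(frag.path, decryption_properties=decryption_properties)
    for frag in fragments  # only the pruned-in files are opened and decrypted
]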

@AudriusButkevicius

I didn't get any error when reading; it seems it just returns no data.

@wgtmac (Member) commented Jun 3, 2024

For the record: the community is inclined to deprecate ColumnChunk.file_path: https://lists.apache.org/thread/qprmj710yq92ybyt89m5xgtqyz3o3st2 and https://github.com/apache/parquet-format/pull/242/files#r1603234502

I'm not sure if we want to support and maintain this. cc @emkornfield @pitrou @mapleFU

@AudriusButkevicius

I left my 2c there and explained why I would be sad if that happened; I would probably have to re-implement the same feature.

@rok rok (Member Author) commented Jun 3, 2024

@wgtmac I'm not sure I follow. We already have WriteMetaDataFile to produce the _metadata file, and if we just add WriteEncryptedMetadataFile to produce encrypted _metadata files we're not really adding additional complexity (outside of WriteEncryptedMetadataFile), or am I missing something? :)

@ggershinsky (Contributor)

Although I understand the intention of this issue and the corresponding fix, I don't think the design of Parquet encryption accounted for the _metadata summary file, because it may point to several different encrypted Parquet files. It would be great if @ggershinsky could advise whether there is any defect in this use case.

Yep, we haven't worked on supporting this (basically, there was no requirement; it seemed to be heading towards deprecation).
In general, using different encryption keys for different data files is considered to be a good security practice (mainly because there is a limit on the number of crypto operations with one key; also, the key leak scope is smaller) - that's why we generate a fresh key for each parquet file in most of the APIs (Parquet, Arrow, Spark, Iceberg, etc). However, there are obviously some low-level parquet APIs that allow passing the same key to many files - if used carefully (making sure, somehow, not to exceed the limit), this might be OK in some cases. The limit is high (~1 billion pages, something like 10TB-1PB of data), but if it is exceeded, the cipher breaks and the data can be decrypted.
Another option could be to create a separate key for the _metadata summary file, and manage it separately from the data file keys.

@pitrou (Member) commented Jun 3, 2024

mainly because there is a limit on the number of crypto operations with one key

What is the theoretical limit, assuming a 256-bit AES key?

Also, if column key encryption is used, wouldn't the limit basically become irrelevant?

@ggershinsky (Contributor)

The AES-GCM math: when an attacker has 2^32 blocks encrypted with the same key (and random nonces), the key can be derived from this information. Since the vast majority of blocks in a parquet file are the page headers and the pages, the limit is a couple of billion data pages.

Ok, I'm reading through the NIST 800-38D document (section 8), and this is apparently a statistical limit to ensure the nonce reuse probability stays below 2^-32. Am I right?

Yep. This is the official reference.
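
For concreteness, the arithmetic behind that limit (a sketch; random 96-bit nonces, with the 2^-32 target from NIST SP 800-38D, section 8):

\Pr[\text{repeated nonce}] \approx \frac{q(q-1)}{2} \cdot 2^{-96} \le 2^{-32} \quad\Longrightarrow\quad q \lesssim 2^{32.5}

where q is the number of GCM encryption calls made with one key; hence the usual guidance of at most roughly 2^32 operations per key.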

In any case, it would be nice if such limits were spelled out in https://github.com/apache/parquet-format/blob/master/Encryption.md

Agreed. I'll send a patch.

@rok rok (Member Author) commented Jun 4, 2024

Can we agree this proposal makes sense in principle?
Assuming the collector can decrypt all files and encrypt them with a new key into a _metadata file of course.

@ggershinsky (Contributor) commented Jun 5, 2024

Can we agree this proposal makes sense in principle? Assuming the collector can decrypt all files and encrypt them with a new key into a _metadata file of course.

My 2c. If I understand the proposal correctly, it does not require all parquet files to be encrypted with the same data keys. Instead, the collector process can decrypt a footer (and column metadata) of any parquet file, regardless of their data keys, because for example the collector is authorized for the footer and column master keys. Technically, this is done by getting a decryption properties object from the relevant crypto factory. Then, the collector uses the same crypto factory to create a new encryption properties object (that has a footer and column data keys, as required) - and applies this object to all collected footers when writing them to the metadata file. Therefore, the future readers can (or cannot) read those encrypted footers and column metadata/stats according to the reader authorization (checked automatically when calling the crypto factory, as usual).
If what I wrote is accurate, then the proposal sounds very good to me.
A couple of technical points.

  • On the reader side - if a reader is not authorized for a certain column, then the column_metadata/stats for this column should not be extracted from the metadata file. In other words, column metadata decryption should be reactive - done only if the column is projected in the reader query.
  • If the parquet file encryption is configured with the "external key material" mode, then we need to make sure this mode works OK for metadata file writing/reading. Maybe this mode can simply be turned off for the metadata files.
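
In pyarrow terms, the collector flow described above could look roughly like this (hedged sketch: kms_client_factory, data_file_paths, encryption_config and schema are assumed names, and the encryption_properties parameter of write_metadata is the one this PR adds):

# Sketch of the collector flow described above. Assumed names:
# kms_client_factory, data_file_paths, encryption_config, schema.
import pyarrow.parquet as pq
import pyarrow.parquet.encryption as pe

cf = pe.CryptoFactory(kms_client_factory)
kms_cfg = pe.KmsConnectionConfig()

# The collector is authorized for the master keys, so it can decrypt any
# footer regardless of the per-file data keys.
decrypt_props = cf.file_decryption_properties(kms_cfg, pe.DecryptionConfiguration())

merged = None
for path in data_file_paths:
    md = pq.read_metadata(path, decryption_properties=decrypt_props)
    if merged is None:
        merged = md
    else:
        merged.append_row_groups(md)  # mutates `merged` in place

# Fresh footer/column keys (and AAD prefix) used only for the summary file;
# future readers are authorized (or not) via the same crypto factory.
encrypt_props = cf.file_encryption_properties(kms_cfg, encryption_config)
pq.write_metadata(schema, "dataset/_metadata", [merged],
                  encryption_properties=encrypt_props)  # parameter added by this PR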

@rok rok (Member Author) commented Jun 5, 2024

Thanks @ggershinsky, that's useful and encouraging!

@raulcd raulcd removed this from the 17.0.0 milestone Jun 28, 2024
@rok rok force-pushed the gh-41719_fix__metadata_file branch 2 times, most recently from 7678982 to 8b3ea8c Compare July 23, 2024 01:57
@rok rok force-pushed the gh-41719_fix__metadata_file branch from 8b3ea8c to 02e2e21 Compare July 29, 2024 23:39
@rok rok force-pushed the gh-41719_fix__metadata_file branch from cd5bf41 to a4d58e0 Compare August 6, 2024 21:31
Comment on lines 576 to 600
if (file_encryption_properties->encrypted_footer()) {
  PARQUET_THROW_NOT_OK(sink->Write(kParquetEMagic, 4));

  PARQUET_ASSIGN_OR_THROW(int64_t position, sink->Tell());
  auto metadata_start = static_cast<uint64_t>(position);

  auto writer_props = parquet::WriterProperties::Builder()
                          .encryption(file_encryption_properties)
                          ->build();
  auto builder = FileMetaDataBuilder::Make(metadata.schema(), writer_props);

  auto footer_metadata = builder->Finish(metadata.key_value_metadata());
  auto crypto_metadata = builder->GetCryptoMetaData();
  WriteFileCryptoMetaData(*crypto_metadata, sink.get());

  auto footer_encryptor = file_encryptor->GetFooterEncryptor();
  WriteEncryptedFileMetadata(metadata, sink.get(), footer_encryptor, true);
  PARQUET_ASSIGN_OR_THROW(position, sink->Tell());
  auto footer_and_crypto_len = static_cast<uint32_t>(position - metadata_start);
  PARQUET_THROW_NOT_OK(
      sink->Write(reinterpret_cast<uint8_t*>(&footer_and_crypto_len), 4));
  PARQUET_THROW_NOT_OK(sink->Write(kParquetEMagic, 4));
} else {
@rok rok (Member Author)

I'm doing metadata encryption similarly to the approach here:

if (file_encryption_properties->encrypted_footer()) {
  // encrypted footer
  file_metadata_ = metadata_->Finish(key_value_metadata_);
  PARQUET_ASSIGN_OR_THROW(int64_t position, sink_->Tell());
  uint64_t metadata_start = static_cast<uint64_t>(position);
  auto crypto_metadata = metadata_->GetCryptoMetaData();
  WriteFileCryptoMetaData(*crypto_metadata, sink_.get());
  auto footer_encryptor = file_encryptor_->GetFooterEncryptor();
  WriteEncryptedFileMetadata(*file_metadata_, sink_.get(), footer_encryptor, true);
  PARQUET_ASSIGN_OR_THROW(position, sink_->Tell());
  uint32_t footer_and_crypto_len = static_cast<uint32_t>(position - metadata_start);
  PARQUET_THROW_NOT_OK(
      sink_->Write(reinterpret_cast<uint8_t*>(&footer_and_crypto_len), 4));
  PARQUET_THROW_NOT_OK(sink_->Write(kParquetEMagic, 4));
} else {  // Encrypted file with plaintext footer

However, it seems decryption fails (see below) when using the RowGroup metadata (after deserialization and decryption):

ARROW_ASSIGN_OR_RAISE(auto path,
                      FileFromRowGroup(filesystem.get(), base_path, *row_group,
                                       options.validate_column_chunk_paths));

This makes me think I'm either not serializing correctly or there's an issue with the encryption/decryption properties I'm supplying.

@wgtmac @pitrou does anything obviously wrong stand out here?

Member

What error did you get? I suspect it has some issues with the row group ordinal. Please search for "row group ordinal" in https://github.com/apache/parquet-format/blob/master/Encryption.md

@rok rok (Member Author) commented Aug 7, 2024

The actual error is thrown here:

auto path = row_group.ColumnChunk(0)->file_path();

In the debugger I see row_group.ColumnChunk(0) as null, so row_group.ColumnChunk(0)->file_path() fails.

The error message I'm getting is:

unknown file: Failure
C++ exception with description "Failed decryption finalization" thrown in the test body.

@rok rok (Member Author)

For the test FileMetadata object, this reads several files, extracts their metadata, and merges them (metadata->AppendRowGroups(*metadata_vector[1])).

@rok rok (Member Author)

We could solve this by storing the file AAD prefixes in key_value_metadata so they can be used at read time to decode row group metadata. This seems to work (see cpp/src/parquet/metadata.cc), but reading the actual data with scanner->ToTable() is somewhat less straightforward; it seems doable though. Before proceeding, I'd like to ask here (a rough sketch of the idea follows below):

  • does this approach make sense?
  • what would be a good location for injecting AAD prefixes when reading column data?

cc @wgtmac @ggershinsky
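
To make the question concrete, the write side of the idea could look roughly like this (entirely hypothetical: the aad_prefix.<path> key names and the file_aad_prefixes mapping are made up for illustration):

# Hypothetical illustration only: stash each data file's AAD prefix in the
# summary file's key-value metadata, keyed by file path, so a reader could
# recover it when decoding that file's row group metadata.
import base64
import pyarrow as pa

def schema_with_aad_prefixes(schema: pa.Schema, file_aad_prefixes: dict) -> pa.Schema:
    kv = {
        f"aad_prefix.{path}": base64.b64encode(prefix).decode()
        for path, prefix in file_aad_prefixes.items()  # path -> bytes AAD prefix
    }
    return schema.with_metadata({**(schema.metadata or {}), **kv})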

@ggershinsky (Contributor)

The format has a field for AAD prefixes: https://github.com/apache/parquet-format/blob/master/src/main/thrift/parquet.thrift#L1126.

It is file-wide, of course. So if the parquet files had different AAD prefixes, a new one has to be used for the merged metadata.

Per my comment above, I think the only way to make this work is to fully decrypt the footers of the parquet files (using their encryption params, including keys and aad_prefixes), merge them, and then encrypt the result with a new key/aad_prefix.

@github-actions bot added the "awaiting changes" label and removed the "awaiting committer review" label Aug 6, 2024
@rok rok force-pushed the gh-41719_fix__metadata_file branch from a4d58e0 to fa448cf on August 16, 2024 13:58
@github-actions bot added the "awaiting change review" label and removed the "awaiting changes" label Aug 16, 2024
@rok rok force-pushed the gh-41719_fix__metadata_file branch from fa448cf to 4348123 on August 16, 2024 19:04
@rok rok force-pushed the gh-41719_fix__metadata_file branch from 4348123 to f70a009 on August 23, 2024 20:00
@github-actions bot added the "awaiting changes" label and removed the "awaiting change review" label Sep 11, 2024
@rok rok force-pushed the gh-41719_fix__metadata_file branch from d784b1e to d764b57 on September 26, 2024 22:20
@github-actions bot added the "awaiting change review" label and removed the "awaiting changes" label Sep 26, 2024
@rok rok force-pushed the gh-41719_fix__metadata_file branch 3 times, most recently from f0554e6 to 50fee62 on September 28, 2024 00:41
@rok rok force-pushed the gh-41719_fix__metadata_file branch from 50fee62 to d46fea4 on September 28, 2024 20:57
@rok rok force-pushed the gh-41719_fix__metadata_file branch from 7c4b5bb to ec833ef on September 30, 2024 13:59