
Bump to PyArrow 17.0.0 #929

Open · wants to merge 1 commit into main from fd-test-against-pyarrow-17

Conversation

@Fokko (Contributor) commented Jul 15, 2024

No description provided.

@Fokko (Contributor, Author) commented Jul 16, 2024

Vote has been passed: https://lists.apache.org/thread/mnzdpwzhctx6yrjl16zn8hl7pcxxt575

@sungwy (Collaborator) commented Jul 16, 2024

Amazing. Given that this issue (#936) also requires 17.0.0 for the fix, maybe it's right for us to move forward with adopting 17.0.0.

@Fokko (Contributor, Author) commented Jul 16, 2024

@syun64 It has been released, and I've updated the lockfile

@sungwy (Collaborator) commented Jul 16, 2024

> @syun64 It has been released, and I've updated the lockfile

Awesome :) we live in exciting times 🎉

@Fokko (Contributor, Author) commented Jul 16, 2024

@raulcd (Member) commented Jul 17, 2024

The missing wheels and source distribution for pyarrow 17.0.0 have been uploaded to PyPI. Sorry for the inconvenience.

@Fokko (Contributor, Author) commented Jul 17, 2024

@raulcd No problem, thanks for the heads up here 👍

@Fokko force-pushed the fd-test-against-pyarrow-17 branch from e972de5 to a396149 on July 19, 2024 12:07
@Fokko marked this pull request as ready for review on July 19, 2024 12:07
@Fokko (Contributor, Author) commented Jul 19, 2024

@syun64 @HonahX @kevinjqliu This provides a nice cleanup of the types (and probably also a speed-up); the downside is that we have to raise the lower bound to PyArrow 17. PTAL

```diff
 pa.field(
     "address",
     pa.struct([
-        pa.field("street", pa.large_string()),
-        pa.field("city", pa.large_string()),
+        pa.field("street", pa.string()),
```
@raulcd (Member) commented on the diff:

Totally an outsider here, but curious: was there a bug in pyarrow that made those large_string instead of string?

@sungwy (Collaborator) replied:

@raulcd - It wasn't a bug, but actually an intentional change for the time being. If we update to PyArrow 17.0.0 we will be able to revert that change, and let the encoding in the parquet file dictate whether the table should be read as a large or small type for the Table API.
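
For readers outside the project, a minimal sketch (mine, not from the PR) of the distinction being discussed: pa.string() stores offsets as 32-bit integers, while pa.large_string() uses 64-bit offsets, so casting between them changes the offset width but not the data:

```python
import pyarrow as pa

small = pa.array(["123 Main St", "456 Oak Ave"], type=pa.string())
large = small.cast(pa.large_string())  # lossless widening of the offsets

print(small.type)  # string
print(large.type)  # large_string

# Casting back down works as long as the column stays under the ~2 GiB
# offset limit of the 32-bit representation.
assert large.cast(pa.string()).equals(small)
```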

@raulcd (Member) commented Jul 19, 2024

btw, are dependabot PRs automatically merged? It seems it updated pyarrow (4282d2f)

@Fokko (Contributor, Author) commented Jul 19, 2024

> btw, are dependabot PRs automatically merged? It seems it updated pyarrow (4282d2f)

No, we still have to merge them by hand (#937). The good thing is that the current codebase runs with 17 :)

@sungwy (Collaborator) commented Jul 19, 2024

> @syun64 @HonahX @kevinjqliu This provides a nice cleanup of the types (and probably also a speed-up); the downside is that we have to raise the lower bound to PyArrow 17. PTAL

Great question @Fokko ... after thinking a lot about this the past week, here's my long answer, organized by different topics of consideration:

Benefits of 17.0.0

  • The Table Scan API to_arrow() will now be able to infer the type correctly based on the encoding in the parquet file (instead of always reading as large types). If there's a discrepancy in the encodings across the files, this will result in some of the table's data being cast to larger types. For to_arrow_batches() this means that we will read the batches with types matching the encoding in the parquet file, and then always cast to large types **
  • We will be able to conform with the Iceberg spec's physical type mapping for DecimalType: [Spec][Upstream] Mapping from DecimalType to Parquet physical type not aligned with spec #936 (see the sketch after this list)
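
To make the #936 item concrete, a minimal sketch, assuming pyarrow >= 17.0.0 and its store_decimal_as_integer Parquet writer option (the feature this fix relies on; the file name is illustrative):

```python
from decimal import Decimal

import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"price": pa.array([Decimal("19.99")], type=pa.decimal128(9, 2))})

# With the option enabled, decimals of precision <= 9 are stored as INT32 and
# precision <= 18 as INT64, matching the Iceberg spec's Parquet type mapping,
# instead of the FIXED_LEN_BYTE_ARRAY default.
pq.write_table(table, "prices.parquet", store_decimal_as_integer=True)

print(pq.ParquetFile("prices.parquet").schema)  # physical type: INT32
```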

Users' ability to use PyIceberg in applications

  • pyarrow 8.0.0 through 17.0.0 all share the same single subdependency (numpy) at the same version, which means it shouldn't be all that difficult for users to switch to a higher version of pyarrow in existing applications
  • However, switching versions of a core dependency comes with the risk of introducing changes to the data underneath. Although the Arrow representation under the hood won't change, I wonder if there would still be subtle differences in the API arguments that would need to be considered carefully in a version update. I'd imagine it would require a lot of effort for owners of existing production applications to update the PyArrow version and QC their output artifacts as a prerequisite for adding a PyIceberg dependency to their application (if they want to use pyarrow for scans and writes)
  • This would also mean that there is only one version of PyArrow available for users to use with PyIceberg; there's some element of risk in having just one version of a package available. For example, what if there's a really bad issue with 17.0.0 that affects a specific use case?

** -> I'm of the impression that while this change seems to make sense from the perspective of preserving type or encoding correctness, it will actually result in a performance regression, because we will be reading most batches as small types but then having to cast them to large types (infrequently for pa.Table, but always for pa.RecordBatchReader); see the sketch below. Another option is to always cast to a small type instead in to_arrow_batch_reader
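
A rough sketch of the per-batch cast described in this footnote (the helper names are hypothetical, only flat string/binary fields are handled, and this is not PyIceberg's actual implementation):

```python
import pyarrow as pa

def _to_large(t: pa.DataType) -> pa.DataType:
    # Hypothetical helper: widen only the flat string/binary cases.
    if t == pa.string():
        return pa.large_string()
    if t == pa.binary():
        return pa.large_binary()
    return t

def cast_batches_to_large(reader: pa.RecordBatchReader) -> pa.RecordBatchReader:
    schema = pa.schema([pa.field(f.name, _to_large(f.type), f.nullable) for f in reader.schema])
    # Every batch read with "small" types is cast before it is handed to the
    # caller; this per-batch copy of the offset buffers is the regression
    # discussed above.
    batches = (
        pa.Table.from_batches([batch]).cast(schema).to_batches()[0]
        for batch in reader
    )
    return pa.RecordBatchReader.from_batches(schema, batches)
```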

Based on these points, I'm leaning towards not aggressively increasing the lower bound to 17.0.0, at least for this minor release, but I'm very excited to hear what others think as well!

@kevinjqliu (Contributor) commented:

@syun64 already pointed out the costs/benefits of upgrading.

I lean more towards correctness than performance. What is the correctness issue if we do not upgrade? As I understand from the above, if the parquet file is of type string, we read it as large_string but write it back as string again (sketch below).
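
A small sketch (mine, not from the thread) of why that round trip is harmless at the file level: both Arrow string types map to the same Parquet physical type, so the column on disk is the same either way:

```python
import pyarrow as pa
import pyarrow.parquet as pq

small = pa.table({"s": pa.array(["x", "y"], type=pa.string())})
large = small.cast(pa.schema([pa.field("s", pa.large_string())]))

pq.write_table(small, "small.parquet")
pq.write_table(large, "large.parquet")

# Both files store the column as BYTE_ARRAY (String); large vs. small is
# purely an in-memory representation choice.
print(pq.ParquetFile("small.parquet").schema)
print(pq.ParquetFile("large.parquet").schema)
```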

As for updating the minimum dependency to pyarrow 17.0.0, I would prefer to wait for the new Arrow version to bake for a while before we require all new versions of PyIceberg to use it.

I also think the 0.7.0 release's feature set is getting massive. We can add this upgrade as a fast-follow release.

@Fokko changed the title from "Test again PyArrow 17.0.0" to "Bump to PyArrow 17.0.0" on Jul 19, 2024
@Fokko added this to the PyIceberg 0.8.0 release milestone on Jul 23, 2024
@Fokko force-pushed the fd-test-against-pyarrow-17 branch 4 times, most recently from 9969926 to 921cd84 on August 12, 2024 09:37