
Add dunder method for Arrow C Data Interface to DataFrame and Column objects #279

jorisvandenbossche opened this issue Oct 12, 2023 · 6 comments

@jorisvandenbossche
Member

jorisvandenbossche commented Oct 12, 2023

The Python Arrow community is adding a public way to interchange data through the C Data Interface, using PyCapsule objects holding the C structs, similarly to DLPack's Python interface: http://crossbow.voltrondata.com/pr_docs/37797/format/CDataInterface/PyCapsuleInterface.html

We have DLPack support at the Buffer level, and similarly, I think it would be useful to add Arrow support at the DataFrame and Column level.

Concretely, I would propose adding optional __arrow_c_schema__, __arrow_c_array__, and __arrow_c_stream__ methods to both the DataFrame and Column interchange objects. Those methods would be optional, with their presence indicating that the specific implementation of the interchange object supports the Arrow interface.
Consumers of the interchange protocol could then check for the presence of those methods and try them first for an easier and faster conversion, and otherwise fall back to the standard APIs through the Column and Buffer objects (example: pyarrow and polars interchanging data).
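As a rough consumer-side sketch (the to_pyarrow_table helper is hypothetical, and it assumes a recent pyarrow version where pa.table() can consume objects implementing __arrow_c_stream__):

```python
import pyarrow as pa
from pyarrow.interchange import from_dataframe

def to_pyarrow_table(df):
    """Convert a dataframe-like object to a pyarrow Table, preferring
    the (proposed, optional) Arrow capsule method when present."""
    interchange_df = df.__dataframe__()
    if hasattr(interchange_df, "__arrow_c_stream__"):
        # Fast path: let pyarrow consume the Arrow C stream capsule
        # directly (pa.table() accepts such objects on recent versions).
        return pa.table(interchange_df)
    # Fallback: the standard interchange APIs, going through the
    # Column and Buffer objects.
    return from_dataframe(df)
```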

It might be a bit strange to add both the array and stream interface methods, but that is because the interchange protocol hasn't really made a distinction between a single chunk vs a chunked object (#250). But I think the array method could then raise an error if the DataFrame or Column consists of more than one chunk.
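A minimal producer-side sketch of that behavior, for a hypothetical interchange Column backed by a pyarrow.ChunkedArray (the dunders simply forward to pyarrow, assuming a recent version where ChunkedArray, Array, and DataType implement the capsule protocol themselves):

```python
import pyarrow as pa

class Column:
    """Hypothetical interchange Column backed by a pyarrow.ChunkedArray."""

    def __init__(self, data: pa.ChunkedArray):
        self._data = data

    def num_chunks(self) -> int:
        return self._data.num_chunks

    def __arrow_c_schema__(self):
        # Export the column's type as an ArrowSchema PyCapsule.
        return self._data.type.__arrow_c_schema__()

    def __arrow_c_array__(self, requested_schema=None):
        # Array-level export only makes sense for a single chunk.
        if self.num_chunks() > 1:
            raise RuntimeError(
                "Column consists of multiple chunks; use __arrow_c_stream__"
            )
        return self._data.chunk(0).__arrow_c_array__(requested_schema)

    def __arrow_c_stream__(self, requested_schema=None):
        # Stream-level export handles chunked data directly.
        return self._data.__arrow_c_stream__(requested_schema)
```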

This would address #48, but without being tied to a specific library implementation, only to a memory layout.

@kkraus14
Collaborator

I think we'd want to use the C Device Data Interface as opposed to the C Data Interface in order to support non-CPU memory as well?

@jorisvandenbossche
Member Author

The current Python Arrow capsule protocol only supports the C Data Interface and not yet the device version, so for now it would only support CPU memory (libraries with GPU memory would thus not yet add such a method to their interchange object).

@kkraus14
Collaborator

I think we should just wait until there's a Python protocol for the C Device Data Interface then, instead of plumbing in the C Data Interface. That gives a single implementation that could be universally supported, as opposed to adding something that downstream consumers will have to either check for existence, check whether it throws, or handle in some other way.

cbourjau added a commit to cbourjau/dataframe-api that referenced this issue Oct 25, 2023
@rgommers
Member

I see the device data interface is still marked as experimental in Arrow (https://arrow.apache.org/docs/dev/format/CDeviceDataInterface.html). Is there a timeline for it appearing in PyArrow?

@jorisvandenbossche
Member Author

> Is there a timeline for it appearing in PyArrow?

(for the device interface) Hopefully in the next release (15.0), depending on the progress in apache/arrow#38325.


Some questions here on what the expected return data would be when the data does not match default Arrow types exactly.

I think in general DLPack is expected to be zero-copy, and if that is not possible (because of the data type, the device, being distributed, ...), you raise an error instead. The question is whether we want to define the same expectation here.

  • The stream protocol does support chunked data, so this matches the model of the interchange protocol (so if we want zero copy, we might want to add only __arrow_c_stream__ to the DataFrame object, or let __arrow_c_array__ raise an error if the DataFrame has more than one chunk).
  • The interchange protocol supports data type layouts that are not the native ones in Arrow (although everything can be represented). So for each of those data types, we have the choice between returning the closest native Arrow type (which might not be zero-copy) or returning the data as-is, potentially using an Arrow extension type. Examples (both are sketched in code below):
    • Assume you have a float column that uses NaN as a missing value sentinel. Converting that zero-copy to Arrow would give an array with different missing value semantics. One option is to create the matching validity bitmap on export to Arrow (not fully zero-copy), or otherwise return the data as-is (potentially defining an extension type to mark it as "float-with-nan").
    • Assume you have a boolean column stored as one byte per value. Do we convert that to Arrow's bit-packed boolean array (not zero-copy), or do we convert it zero-copy to (u)int8 and use an extension type to indicate it represents booleans? (Such an extension type might actually also be useful for cudf.)

For the types (native Arrow vs extension types), there is a trade-off between performance (keep everything zero-copy, let the consumer decide what conversion they need) and usability and compatibility (those extension types will not be understood by everyone, at least not initially).
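To make the two examples above concrete, here is a small illustration with numpy and pyarrow (the NaN handling and bit-packing shown are standard pyarrow behavior; the extension types mentioned in the comments do not exist yet):

```python
import numpy as np
import pyarrow as pa

# Example 1: a float column using NaN as missing value sentinel.
vals = np.array([1.0, np.nan, 3.0])
# Not zero-copy: build a validity bitmap on export (NaN -> null).
with_nulls = pa.array(vals, from_pandas=True)  # null_count == 1
# Zero-copy: keep NaN as a regular float value; note the missing
# value semantics now differ from native Arrow (no nulls).
as_is = pa.array(vals)                         # null_count == 0

# Example 2: a boolean column stored as one byte per value.
byte_bools = np.array([1, 0, 1, 1], dtype=np.uint8)
# Not zero-copy: bit-pack into Arrow's native boolean type.
native = pa.array(byte_bools.view(np.bool_))
# Zero-copy: expose the bytes as uint8; a (hypothetical) extension
# type would be needed to signal "these bytes represent booleans".
zero_copy = pa.Array.from_buffers(
    pa.uint8(), len(byte_bools), [None, pa.py_buffer(byte_bools)]
)
```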

MarcoGorelli added a commit that referenced this issue Dec 7, 2023
@rgommers
Member

> The interchange protocol supports data type layouts that are not the native ones in Arrow

This may not be a problem unless the __arrow_* methods are implemented by a library that does not actually use Arrow under the hood, which is probably not desirable anyway? Last time this was discussed, it was more a shortcut for "can we avoid the overhead of the interchange protocol if both producer and consumer already use Arrow under the hood?". That's also in your issue description above.
