[BUG] java.lang.ClassCastException: GpuCompressedColumnVector cannot be cast to GpuColumnVector #2378
Comments
@zhnin thanks for the details in this issue. I'll try to reproduce this locally and get back to you.
@zhnin I am able to reproduce thanks to the well documented issue. Looking into a fix.
@andygrove suggested looking at this one rule, and removing it makes @zhnin's case pass. @revans2 @jlowe @andygrove, we could either remove this, or perhaps test for compressed vectors in C2R. Any strong feelings?
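For context, here is a minimal, self-contained sketch of the "test for compressed vectors in C2R" idea. The types below are simplified stand-ins for the plugin's GpuColumnVector and GpuCompressedColumnVector classes; this is illustrative only, not the actual fix:

```scala
object C2RCheckSketch {
  // Stand-in types modeling the plugin's column vector hierarchy: a
  // compressed vector is NOT a GpuColumnVector, so a blind cast throws
  // ClassCastException.
  trait GpuColumnVectorBase
  class GpuColumnVector extends GpuColumnVectorBase
  class GpuCompressedColumnVector extends GpuColumnVectorBase

  // Instead of an unconditional cast, check the runtime type and
  // decompress first when needed.
  def toRows(col: GpuColumnVectorBase): String = col match {
    case _: GpuCompressedColumnVector => "decompress before row conversion"
    case _: GpuColumnVector           => "convert directly to rows"
  }

  def main(args: Array[String]): Unit = {
    println(toRows(new GpuCompressedColumnVector)) // a blind cast here would have thrown
    println(toRows(new GpuColumnVector))
  }
}
```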
I think there are basically three ways to tackle this:

1. Make the rule smarter so it does not remove a GpuCoalesceExec whose input is a shuffle.
2. Teach GpuColumnarToRow to detect and handle compressed batches itself.
3. Add a separate exec dedicated to decompressing batches.

I think the first option is the best one, at least in the short term. Getting good decompression performance requires some batching, which is what GpuCoalesceExec is already doing, and I'd rather not spread the knowledge and handling of compressed batches to something like GpuColumnarToRow. Having a separate exec for handling decompression could be a bit cleaner, but it's a more invasive change and would have a lot of overlap with coalesce exec, since we want to build bigger batches for better decompression parallelism. So my vote is to make the rule a bit smarter and not have it optimize a GpuCoalesceExec if its preceding node is a shuffle.
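As a rough illustration of that vote, here is a self-contained sketch of a rule check that skips removing a GpuCoalesceExec whose input is a shuffle. The plan node types are simplified stand-ins for the real exec classes, under the assumption (stated above) that shuffle output is where compressed batches come from:

```scala
object CoalesceRuleSketch {
  // Simplified stand-ins for Spark plan nodes; not the plugin's real classes.
  sealed trait Plan
  case class ShuffleExchange() extends Plan
  case class CoalesceExec(child: Plan) extends Plan
  case class OtherExec() extends Plan

  // The "smarter rule": a coalesce is only removable when its input is NOT
  // a shuffle, because shuffle output may contain compressed batches that
  // the coalesce is responsible for decompressing.
  def canRemoveCoalesce(plan: Plan): Boolean = plan match {
    case CoalesceExec(_: ShuffleExchange) => false
    case CoalesceExec(_)                  => true
    case _                                => false
  }

  def main(args: Array[String]): Unit = {
    println(canRemoveCoalesce(CoalesceExec(ShuffleExchange()))) // false: keep it
    println(canRemoveCoalesce(CoalesceExec(OtherExec())))       // true: safe to remove
  }
}
```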
@zhnin could you try again with the latest changes in branch-0.6 to make sure it works for you?
Yes, I have tested it, and it works for me.
Describe the bug
Hi, when I run TPCx-BB Query7 and Query15 (SF1, with decimal support enabled),
I got an exception:

java.lang.ClassCastException: GpuCompressedColumnVector cannot be cast to GpuColumnVector
Steps/Code to reproduce bug
spark-shell --master spark://master:7077
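The original query code was not preserved in this report; as a hypothetical sketch only, the following spark-shell snippet forces the same shape as the failing queries (a decimal column flowing through a shuffle into a row conversion), assuming the RAPIDS plugin is loaded and decimal support is enabled:

```scala
// Hypothetical reproduction shape, not the actual TPCx-BB Q7/Q15 code:
// decimal column -> shuffle (groupBy) -> collect (columnar-to-row).
val df = spark.range(1000000L)
  .selectExpr("cast(id as decimal(18, 2)) as amount", "id % 100 as key")
df.groupBy("key").sum("amount").collect()
```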
Environment details
Additional context
When spark.rapids.sql.decimalType.enabled=false is set, the queries run successfully.
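For anyone hitting this before picking up the fix, the workaround above can be applied in spark-shell (assuming the config is runtime-settable; otherwise pass it via --conf at launch):

```scala
// Workaround reported above: disable the plugin's decimal support.
spark.conf.set("spark.rapids.sql.decimalType.enabled", "false")
```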