
[BUG] TPC-DS query 77 at scale=1TB fails with maxResultSize exceeded error #1284

Closed
jlowe opened this issue Dec 4, 2020 · 1 comment

Labels: bug (Something isn't working), P0 (Must have for release)

jlowe (Member) commented Dec 4, 2020

Running TPC-DS query 77 against scale factor 1TB data with decimals on a recent 0.3-SNAPSHOT fails with the following error:

Job aborted due to stage failure: Total size of serialized results of 30095 tasks (1024.0 MiB) is bigger than spark.driver.maxResultSize (1024.0 MiB)

The same query with the same data runs successfully with version 0.2.
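As context for the error above (a hedged aside, not the fix that was ultimately adopted for this issue): when the total size of serialized task results sent back to the driver exceeds `spark.driver.maxResultSize` (default `1g`), Spark aborts the stage as shown. A common temporary workaround is to raise that limit at submit time. The class name and jar below are hypothetical placeholders.

```shell
# Workaround sketch only: raise the driver's serialized-result cap to 4 GiB.
# Setting spark.driver.maxResultSize=0 removes the limit entirely, but risks
# running the driver out of memory. The entry point and jar are hypothetical.
spark-submit \
  --conf spark.driver.maxResultSize=4g \
  --class com.example.Benchmark \
  benchmark.jar
```

Raising the limit only masks the symptom; the underlying regression here was excessive per-task result data, which is why the actual resolution was a code fix rather than a config change.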

@jlowe added the labels: bug (Something isn't working), ? - Needs Triage (Need team to review and classify) — Dec 4, 2020
@sameerz added P0 (Must have for release) and removed ? - Needs Triage — Dec 8, 2020
@sameerz added this to the Dec 7 - Dec 18 milestone — Dec 8, 2020
jlowe (Member, Author) commented Dec 9, 2020

This is fixed by #1310.

@jlowe jlowe closed this as completed Dec 9, 2020
tgravescs pushed a commit to tgravescs/spark-rapids that referenced this issue Nov 30, 2023
…IDIA#1284)

Signed-off-by: spark-rapids automation <70000568+nvauto@users.noreply.github.com>
2 participants