
[BUG] Spark321Shims.getParquetFilters failed with NoSuchMethodError #4665

Closed
jlowe opened this issue Jan 31, 2022 · 2 comments
Labels: bug (Something isn't working), build (Related to CI / CD or cleanly building)

Comments


jlowe commented Jan 31, 2022

test_join_bucketed_table failed in the nightly tests on Spark 3.2.1 with a NoSuchMethodError:

[2022-01-29T04:41:44.694Z] ________________________ test_join_bucketed_table[true] ________________________
[2022-01-29T04:41:44.694Z] 
[2022-01-29T04:41:44.694Z] repartition = 'true'
[2022-01-29T04:41:44.694Z] spark_tmp_table_factory = <conftest.TmpTableFactory object at 0x7fb2fc92da60>
[2022-01-29T04:41:44.694Z] 
[2022-01-29T04:41:44.694Z]     @ignore_order
[2022-01-29T04:41:44.694Z]     @allow_non_gpu('DataWritingCommandExec')
[2022-01-29T04:41:44.694Z]     @pytest.mark.xfail(condition=is_emr_runtime(),
[2022-01-29T04:41:44.694Z]         reason='https://github.com/NVIDIA/spark-rapids/issues/821')
[2022-01-29T04:41:44.694Z]     @pytest.mark.parametrize('repartition', ["true", "false"], ids=idfn)
[2022-01-29T04:41:44.694Z]     def test_join_bucketed_table(repartition, spark_tmp_table_factory):
[2022-01-29T04:41:44.694Z]         def do_join(spark):
[2022-01-29T04:41:44.694Z]             table_name = spark_tmp_table_factory.get()
[2022-01-29T04:41:44.694Z]             data = [("http://fooblog.com/blog-entry-116.html", "https://fooblog.com/blog-entry-116.html"),
[2022-01-29T04:41:44.694Z]                     ("http://fooblog.com/blog-entry-116.html", "http://fooblog.com/blog-entry-116.html")]
[2022-01-29T04:41:44.694Z]             resolved = spark.sparkContext.parallelize(data).toDF(['Url','ResolvedUrl'])
[2022-01-29T04:41:44.694Z]             feature_data = [("http://fooblog.com/blog-entry-116.html", "21")]
[2022-01-29T04:41:44.694Z]             feature = spark.sparkContext.parallelize(feature_data).toDF(['Url','Count'])
[2022-01-29T04:41:44.694Z]             feature.write.bucketBy(400, 'Url').sortBy('Url').format('parquet').mode('overwrite')\
[2022-01-29T04:41:44.694Z]                      .saveAsTable(table_name)
[2022-01-29T04:41:44.694Z]             testurls = spark.sql("SELECT Url, Count FROM {}".format(table_name))
[2022-01-29T04:41:44.694Z]             if (repartition == "true"):
[2022-01-29T04:41:44.694Z]                     return testurls.repartition(20).join(resolved, "Url", "inner")
[2022-01-29T04:41:44.694Z]             else:
[2022-01-29T04:41:44.694Z]                     return testurls.join(resolved, "Url", "inner")
[2022-01-29T04:41:44.694Z] >       assert_gpu_and_cpu_are_equal_collect(do_join, conf={'spark.sql.autoBroadcastJoinThreshold': '-1'})
[2022-01-29T04:41:44.694Z] 
[2022-01-29T04:41:44.694Z] ../../src/main/python/join_test.py:669: 
[2022-01-29T04:41:44.694Z] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
[2022-01-29T04:41:44.694Z] ../../src/main/python/asserts.py:505: in assert_gpu_and_cpu_are_equal_collect
[2022-01-29T04:41:44.694Z]     _assert_gpu_and_cpu_are_equal(func, 'COLLECT', conf=conf, is_cpu_first=is_cpu_first)
[2022-01-29T04:41:44.694Z] ../../src/main/python/asserts.py:425: in _assert_gpu_and_cpu_are_equal
[2022-01-29T04:41:44.694Z]     run_on_gpu()
[2022-01-29T04:41:44.694Z] ../../src/main/python/asserts.py:419: in run_on_gpu
[2022-01-29T04:41:44.695Z]     from_gpu = with_gpu_session(bring_back, conf=conf)
[2022-01-29T04:41:44.695Z] ../../src/main/python/spark_session.py:103: in with_gpu_session
[2022-01-29T04:41:44.695Z]     return with_spark_session(func, conf=copy)
[2022-01-29T04:41:44.695Z] ../../src/main/python/spark_session.py:70: in with_spark_session
[2022-01-29T04:41:44.695Z]     ret = func(_spark)
[2022-01-29T04:41:44.695Z] ../../src/main/python/asserts.py:198: in <lambda>
[2022-01-29T04:41:44.695Z]     bring_back = lambda spark: limit_func(spark).collect()
[2022-01-29T04:41:44.695Z] /home/jenkins/agent/workspace/jenkins-rapids_integration-pre_release-github-422-321/jars/spark-3.2.1-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/sql/dataframe.py:693: in collect
[2022-01-29T04:41:44.695Z]     sock_info = self._jdf.collectToPython()
[2022-01-29T04:41:44.695Z] /home/jenkins/agent/workspace/jenkins-rapids_integration-pre_release-github-422-321/jars/spark-3.2.1-bin-hadoop3.2/python/lib/py4j-0.10.9.3-src.zip/py4j/java_gateway.py:1321: in __call__
[2022-01-29T04:41:44.695Z]     return_value = get_return_value(
[2022-01-29T04:41:44.695Z] /home/jenkins/agent/workspace/jenkins-rapids_integration-pre_release-github-422-321/jars/spark-3.2.1-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/sql/utils.py:111: in deco
[2022-01-29T04:41:44.695Z]     return f(*a, **kw)
[2022-01-29T04:41:44.695Z] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
[2022-01-29T04:41:44.695Z] 
[2022-01-29T04:41:44.695Z] answer = 'xro622'
[2022-01-29T04:41:44.695Z] gateway_client = <py4j.clientserver.JavaClient object at 0x7fb3178615b0>
[2022-01-29T04:41:44.695Z] target_id = 'o621', name = 'collectToPython'
[2022-01-29T04:41:44.695Z] 
[2022-01-29T04:41:44.695Z]     def get_return_value(answer, gateway_client, target_id=None, name=None):
[2022-01-29T04:41:44.695Z]         """Converts an answer received from the Java gateway into a Python object.
[2022-01-29T04:41:44.695Z]     
[2022-01-29T04:41:44.695Z]         For example, string representation of integers are converted to Python
[2022-01-29T04:41:44.695Z]         integer, string representation of objects are converted to JavaObject
[2022-01-29T04:41:44.695Z]         instances, etc.
[2022-01-29T04:41:44.695Z]     
[2022-01-29T04:41:44.695Z]         :param answer: the string returned by the Java gateway
[2022-01-29T04:41:44.695Z]         :param gateway_client: the gateway client used to communicate with the Java
[2022-01-29T04:41:44.695Z]             Gateway. Only necessary if the answer is a reference (e.g., object,
[2022-01-29T04:41:44.695Z]             list, map)
[2022-01-29T04:41:44.695Z]         :param target_id: the name of the object from which the answer comes from
[2022-01-29T04:41:44.695Z]             (e.g., *object1* in `object1.hello()`). Optional.
[2022-01-29T04:41:44.695Z]         :param name: the name of the member from which the answer comes from
[2022-01-29T04:41:44.695Z]             (e.g., *hello* in `object1.hello()`). Optional.
[2022-01-29T04:41:44.695Z]         """
[2022-01-29T04:41:44.695Z]         if is_error(answer)[0]:
[2022-01-29T04:41:44.695Z]             if len(answer) > 1:
[2022-01-29T04:41:44.695Z]                 type = answer[1]
[2022-01-29T04:41:44.695Z]                 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
[2022-01-29T04:41:44.695Z]                 if answer[1] == REFERENCE_TYPE:
[2022-01-29T04:41:44.695Z] >                   raise Py4JJavaError(
[2022-01-29T04:41:44.695Z]                         "An error occurred while calling {0}{1}{2}.\n".
[2022-01-29T04:41:44.695Z]                         format(target_id, ".", name), value)
[2022-01-29T04:41:44.695Z] E                   py4j.protocol.Py4JJavaError: An error occurred while calling o621.collectToPython.
[2022-01-29T04:41:44.695Z] E                   : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 17.0 failed 1 times, most recent failure: Lost task 0.0 in stage 17.0 (TID 59) (10.233.67.241 executor 0): java.lang.NoSuchMethodError: org.apache.spark.sql.execution.datasources.DataSourceUtils$.datetimeRebaseMode(Lscala/Function1;Ljava/lang/String;)Lscala/Enumeration$Value;
[2022-01-29T04:41:44.695Z] E                   	at com.nvidia.spark.rapids.shims.v2.Spark320until322Shims.getParquetFilters(Spark320until322Shims.scala:40)
[2022-01-29T04:41:44.695Z] E                   	at com.nvidia.spark.rapids.shims.v2.Spark320until322Shims.getParquetFilters$(Spark320until322Shims.scala:29)
[2022-01-29T04:41:44.695Z] E                   	at com.nvidia.spark.rapids.shims.spark321.Spark321Shims.getParquetFilters(Spark321Shims.scala:29)
[2022-01-29T04:41:44.695Z] E                   	at com.nvidia.spark.rapids.GpuParquetFileFilterHandler.filterBlocks(GpuParquetScanBase.scala:346)
[2022-01-29T04:41:44.695Z] E                   	at com.nvidia.spark.rapids.GpuParquetMultiFilePartitionReaderFactory.$anonfun$buildBaseColumnarReaderForCoalescing$1(GpuParquetScanBase.scala:450)
[2022-01-29T04:41:44.695Z] E                   	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
[2022-01-29T04:41:44.695Z] E                   	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
[2022-01-29T04:41:44.695Z] E                   	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
[2022-01-29T04:41:44.695Z] E                   	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
[2022-01-29T04:41:44.695Z] E                   	at scala.collection.TraversableLike.map(TraversableLike.scala:286)
[2022-01-29T04:41:44.695Z] E                   	at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
[2022-01-29T04:41:44.695Z] E                   	at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:198)
[2022-01-29T04:41:44.695Z] E                   	at com.nvidia.spark.rapids.GpuParquetMultiFilePartitionReaderFactory.buildBaseColumnarReaderForCoalescing(GpuParquetScanBase.scala:449)
[2022-01-29T04:41:44.695Z] E                   	at com.nvidia.spark.rapids.MultiFilePartitionReaderFactoryBase.createColumnarReader(GpuMultiFileReader.scala:195)
[2022-01-29T04:41:44.695Z] E                   	at com.nvidia.spark.rapids.GpuDataSourceRDD.compute(GpuDataSourceRDD.scala:48)
[2022-01-29T04:41:44.695Z] E                   	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
[2022-01-29T04:41:44.695Z] E                   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
[2022-01-29T04:41:44.695Z] E                   	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
[2022-01-29T04:41:44.695Z] E                   	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
[2022-01-29T04:41:44.695Z] E                   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
[2022-01-29T04:41:44.695Z] E                   	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
[2022-01-29T04:41:44.695Z] E                   	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
[2022-01-29T04:41:44.695Z] E                   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
[2022-01-29T04:41:44.695Z] E                   	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
[2022-01-29T04:41:44.695Z] E                   	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
[2022-01-29T04:41:44.695Z] E                   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.scheduler.Task.run(Task.scala:131)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
[2022-01-29T04:41:44.696Z] E                   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[2022-01-29T04:41:44.696Z] E                   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[2022-01-29T04:41:44.696Z] E                   	at java.lang.Thread.run(Thread.java:748)
[2022-01-29T04:41:44.696Z] E                   
[2022-01-29T04:41:44.696Z] E                   Driver stacktrace:
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2454)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2403)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2402)
[2022-01-29T04:41:44.696Z] E                   	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
[2022-01-29T04:41:44.696Z] E                   	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
[2022-01-29T04:41:44.696Z] E                   	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2402)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1160)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1160)
[2022-01-29T04:41:44.696Z] E                   	at scala.Option.foreach(Option.scala:407)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1160)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2642)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2584)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2573)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:938)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2214)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2235)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2254)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2279)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1030)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.rdd.RDD.collect(RDD.scala:1029)
[2022-01-29T04:41:44.696Z] E                   	at com.nvidia.spark.rapids.GpuRangePartitioner$.sketch(GpuRangePartitioner.scala:47)
[2022-01-29T04:41:44.696Z] E                   	at com.nvidia.spark.rapids.GpuRangePartitioner$.createRangeBounds(GpuRangePartitioner.scala:132)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExecBase$.getPartitioner(GpuShuffleExchangeExecBase.scala:354)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExecBase$.prepareBatchShuffleDependency(GpuShuffleExchangeExecBase.scala:270)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExecBase.shuffleDependencyColumnar$lzycompute(GpuShuffleExchangeExecBase.scala:210)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExecBase.shuffleDependencyColumnar(GpuShuffleExchangeExecBase.scala:200)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExecBase.$anonfun$doExecuteColumnar$1(GpuShuffleExchangeExecBase.scala:225)
[2022-01-29T04:41:44.696Z] E                   	at com.nvidia.spark.rapids.shims.v2.Spark320PlusShims.attachTreeIfSupported(Spark320PlusShims.scala:1004)
[2022-01-29T04:41:44.696Z] E                   	at com.nvidia.spark.rapids.shims.v2.Spark320PlusShims.attachTreeIfSupported$(Spark320PlusShims.scala:999)
[2022-01-29T04:41:44.696Z] E                   	at com.nvidia.spark.rapids.shims.spark321.Spark321Shims.attachTreeIfSupported(Spark321Shims.scala:29)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExecBase.doExecuteColumnar(GpuShuffleExchangeExecBase.scala:222)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeColumnar$1(SparkPlan.scala:211)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:222)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
[2022-01-29T04:41:44.696Z] E                   	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:219)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.execution.SparkPlan.executeColumnar(SparkPlan.scala:207)
[2022-01-29T04:41:44.697Z] E                   	at com.nvidia.spark.rapids.GpuShuffleCoalesceExec.doExecuteColumnar(GpuShuffleCoalesceExec.scala:66)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeColumnar$1(SparkPlan.scala:211)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:222)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:219)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.execution.SparkPlan.executeColumnar(SparkPlan.scala:207)
[2022-01-29T04:41:44.697Z] E                   	at com.nvidia.spark.rapids.GpuSortExec.doExecuteColumnar(GpuSortExec.scala:128)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeColumnar$1(SparkPlan.scala:211)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:222)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:219)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.execution.SparkPlan.executeColumnar(SparkPlan.scala:207)
[2022-01-29T04:41:44.697Z] E                   	at com.nvidia.spark.rapids.GpuColumnarToRowExecParent.doExecute(GpuColumnarToRowExec.scala:319)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:184)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:222)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:219)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:180)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:325)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:391)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.Dataset.$anonfun$collectToPython$1(Dataset.scala:3538)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3706)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3704)
[2022-01-29T04:41:44.697Z] E                   	at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3535)
[2022-01-29T04:41:44.697Z] E                   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[2022-01-29T04:41:44.697Z] E                   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[2022-01-29T04:41:44.697Z] E                   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[2022-01-29T04:41:44.697Z] E                   	at java.lang.reflect.Method.invoke(Method.java:498)
[2022-01-29T04:41:44.697Z] E                   	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
[2022-01-29T04:41:44.697Z] E                   	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
[2022-01-29T04:41:44.697Z] E                   	at py4j.Gateway.invoke(Gateway.java:282)
[2022-01-29T04:41:44.697Z] E                   	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
[2022-01-29T04:41:44.697Z] E                   	at py4j.commands.CallCommand.execute(CallCommand.java:79)
[2022-01-29T04:41:44.697Z] E                   	at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
[2022-01-29T04:41:44.697Z] E                   	at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
[2022-01-29T04:41:44.697Z] E                   	at java.lang.Thread.run(Thread.java:748)
[2022-01-29T04:41:44.697Z] E                   Caused by: java.lang.NoSuchMethodError: org.apache.spark.sql.execution.datasources.DataSourceUtils$.datetimeRebaseMode(Lscala/Function1;Ljava/lang/String;)Lscala/Enumeration$Value;
[2022-01-29T04:41:44.697Z] E                   	at com.nvidia.spark.rapids.shims.v2.Spark320until322Shims.getParquetFilters(Spark320until322Shims.scala:40)
[2022-01-29T04:41:44.697Z] E                   	at com.nvidia.spark.rapids.shims.v2.Spark320until322Shims.getParquetFilters$(Spark320until322Shims.scala:29)
[2022-01-29T04:41:44.697Z] E                   	at com.nvidia.spark.rapids.shims.spark321.Spark321Shims.getParquetFilters(Spark321Shims.scala:29)
[2022-01-29T04:41:44.697Z] E                   	at com.nvidia.spark.rapids.GpuParquetFileFilterHandler.filterBlocks(GpuParquetScanBase.scala:346)
[2022-01-29T04:41:44.697Z] E                   	at com.nvidia.spark.rapids.GpuParquetMultiFilePartitionReaderFactory.$anonfun$buildBaseColumnarReaderForCoalescing$1(GpuParquetScanBase.scala:450)
[2022-01-29T04:41:44.697Z] E                   	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
[2022-01-29T04:41:44.697Z] E                   	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
[2022-01-29T04:41:44.697Z] E                   	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
[2022-01-29T04:41:44.697Z] E                   	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
[2022-01-29T04:41:44.697Z] E                   	at scala.collection.TraversableLike.map(TraversableLike.scala:286)
[2022-01-29T04:41:44.697Z] E                   	at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
[2022-01-29T04:41:44.697Z] E                   	at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:198)
[2022-01-29T04:41:44.697Z] E                   	at com.nvidia.spark.rapids.GpuParquetMultiFilePartitionReaderFactory.buildBaseColumnarReaderForCoalescing(GpuParquetScanBase.scala:449)
[2022-01-29T04:41:44.698Z] E                   	at com.nvidia.spark.rapids.MultiFilePartitionReaderFactoryBase.createColumnarReader(GpuMultiFileReader.scala:195)
[2022-01-29T04:41:44.698Z] E                   	at com.nvidia.spark.rapids.GpuDataSourceRDD.compute(GpuDataSourceRDD.scala:48)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.scheduler.Task.run(Task.scala:131)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
[2022-01-29T04:41:44.698Z] E                   	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
[2022-01-29T04:41:44.698Z] E                   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[2022-01-29T04:41:44.698Z] E                   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[2022-01-29T04:41:44.698Z] E                   	... 1 more
jlowe added the bug (Something isn't working) and ? - Needs Triage (Need team to review and classify) labels on Jan 31, 2022
jlowe self-assigned this on Jan 31, 2022
jlowe removed the ? - Needs Triage (Need team to review and classify) label on Jan 31, 2022

jlowe commented Jan 31, 2022

This looks like a case of stale jars being run by the nightly integration tests. Spark320until330Shims was removed in #4508.
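
For context, here is a minimal Scala sketch of why a stale jar produces a NoSuchMethodError like the one in the log. All names below (`LibV1`, `LibV2`, `StaleShim`, `rebaseMode`) are hypothetical stand-ins, not the actual Spark or spark-rapids classes; the point is only that a call site compiled against one method descriptor cannot link when the runtime classpath exposes a different one.

```scala
// Illustrative sketch only: hypothetical stand-ins, not real Spark/plugin code.

object LibV1 {
  // Signature the (stale) shim jar was compiled against.
  def rebaseMode(lookup: String => Option[String], conf: String): String = "CORRECTED"
}

object LibV2 {
  // Signature actually present on the runtime classpath: the return type changed,
  // so the descriptor recorded in the stale shim's bytecode no longer exists.
  case class RebaseSpec(mode: String)
  def rebaseMode(lookup: String => Option[String], conf: String): RebaseSpec =
    RebaseSpec("CORRECTED")
}

object StaleShim {
  // Compiled against LibV1, this call is linked by its full descriptor
  // (Lscala/Function1;Ljava/lang/String;)Ljava/lang/String;. If only LibV2's
  // version is on the classpath when the method first runs, the JVM throws
  // java.lang.NoSuchMethodError -- the same failure mode as in the log above,
  // even though everything compiled cleanly.
  def getParquetFilters(): String = LibV1.rebaseMode(_ => None, "parquet")
}
```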

jlowe added the build (Related to CI / CD or cleanly building) label on Jan 31, 2022

jlowe commented Jan 31, 2022

This passes with the latest code.
