"java.lang.NegativeArraySizeException" when using the native ORC impl

Problem description

The Spark native ORC reader does not work correctly. Details below:

import org.apache.spark.sql.{Dataset, Encoders, SparkSession}
case class GateDoc(var xml: Array[Byte], var cknid: String = null)
spark.conf.set("spark.sql.orc.impl", "native")
import spark.implicits._
val df = spark.read.schema(Encoders.product[GateDoc].schema).orc(inputFile).as[GateDoc] // the read here throws the exception shown below
df.write.orc(op)

Throws:

java.lang.NegativeArraySizeException
    at org.apache.orc.impl.TreeReaderFactory$BytesColumnVectorUtil.commonReadByteArrays(TreeReaderFactory.java:1506)
    at org.apache.orc.impl.TreeReaderFactory$BytesColumnVectorUtil.readOrcByteArrays(TreeReaderFactory.java:1528)
    at org.apache.orc.impl.TreeReaderFactory$BinaryTreeReader.nextVector(TreeReaderFactory.java:878)
    at org.apache.orc.impl.TreeReaderFactory$StructTreeReader.nextBatch(TreeReaderFactory.java:2012)
    at org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1284)
    at org.apache.spark.sql.execution.datasources.orc.OrcColumnarBatchReader.nextBatch(OrcColumnarBatchReader.java:227)
    at org.apache.spark.sql.execution.datasources.orc.OrcColumnarBatchReader.nextKeyValue(OrcColumnarBatchReader.java:109)
    at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:130)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:215)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:130)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.scan_nextBatch_0$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:232)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:123)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
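
The inputFile and op paths are not shown above; to make the failure reproducible end to end, a small dataset can be written first (a sketch with hypothetical paths, assuming spark-shell where spark and the imports above are already in scope):

// Hypothetical paths for illustration only.
val inputFile = "/tmp/gatedoc-orc"
val op        = "/tmp/gatedoc-orc-out"

// Write a tiny GateDoc dataset so the read above has input to fail on.
Seq(GateDoc("<doc/>".getBytes("UTF-8"), "ck1")).toDS
  .write.mode("overwrite").orc(inputFile)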

However, with spark.sql.orc.impl = hive it works fine:

import org.apache.spark.sql.{Dataset, Encoders, SparkSession}
case class GateDoc(var xml: Array[Byte], var cknid: String = null)
spark.conf.set("spark.sql.orc.impl", "hive")
import spark.implicits._
val df = spark.read.schema(Encoders.product[GateDoc].schema).orc(inputFile).as[GateDoc]
df.write.orc(op)
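
As an aside, in a standalone job the same setting can be pinned when the session is built rather than set afterwards (a minimal sketch; the app name is hypothetical):

import org.apache.spark.sql.SparkSession

// Sketch: select the Hive ORC reader up front when creating the session.
val spark = SparkSession.builder()
  .appName("gatedoc-orc-job") // hypothetical app name
  .config("spark.sql.orc.impl", "hive")
  .getOrCreate()

The same conf can also be passed at submit time via --conf spark.sql.orc.impl=hive.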

I understand well enough why a java.lang.NegativeArraySizeException can be thrown in general, but why would the array-typed field of my case class end up being allocated with a negative size in the first place?
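
Judging by the trace, the failure sits where the reader sizes one backing byte array for the whole column batch from the per-row lengths. One way such a computed size can go negative, sketched below with purely illustrative values (not confirmed to be the cause here), is the per-row lengths summing past Int.MaxValue in a 32-bit int and wrapping:

// Illustrative lengths only; a 32-bit sum of large values wraps negative,
// and allocating an array with that size throws the exception seen above.
val lengths = Array(Int.MaxValue / 2, Int.MaxValue / 2, 10)
val total   = lengths.sum    // wraps to a negative Int
// new Array[Byte](total)    // would throw java.lang.NegativeArraySizeException
println(total)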

I also inspected the data and metadata of some of the partition files:

java -jar /usr/lib/spark/jars/orc-tools-1.5.5-uber.jar data part-00.snappy.orc
java -jar /usr/lib/spark/jars/orc-tools-1.5.5-uber.jar meta part-00.snappy.orc

and both looked fine.
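
A further check worth running (a sketch, reusing inputFile and GateDoc from the snippets above): compare the schema Spark infers from the file with the one derived from the case class, since a mismatch on the binary column would be one plausible way the reader's byte-array sizing could go wrong.

import org.apache.spark.sql.Encoders

// Both schemas should agree on xml: binary and cknid: string.
val fileSchema = spark.read.orc(inputFile).schema
val caseSchema = Encoders.product[GateDoc].schema
println(fileSchema.treeString)
println(caseSchema.treeString)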

More details about the environment:

 Scala version: 2.11.12
 Spark version: 2.4.4
 ORC version: 1.5.5
 EMR release: emr-5.29.0

Please help; I suspect there is a bug in the native ORC reader.
