Do the latest Hudi versions (0.7.0, 0.6.0) work with Spark 2.3.0 when reading ORC files?

Problem Description

The documentation (https://hudi.apache.org/docs/quick-start-guide.html) says Hudi works with Spark 2.x and Spark 3.x, but I have not been able to get hudi-spark-bundle_2.11 version 0.7.0 working with Spark 2.3.0 and Scala 2.11.12. Is there a specific spark-avro package that must be used?
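
For reference, the quick-start guide for 0.7.0 pairs the Hudi bundle with a spark-avro package when launching spark-shell, along these lines (the exact spark-avro version below is my reading of the guide, so treat it as an assumption):

    # Launch roughly as in the 0.7.0 quick-start guide (spark-avro version assumed)
    spark-shell \
      --packages org.apache.hudi:hudi-spark-bundle_2.11:0.7.0,org.apache.spark:spark-avro_2.11:2.4.4 \
      --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'

Note that spark-avro_2.11:2.4.x is published as part of Spark 2.4 and targets Spark 2.4 APIs, which may be relevant to the failure below.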

The job fails with the following error: java.lang.NoSuchMethodError: org.apache.spark.sql.types.Decimal$.minBytesForPrecision()[I. Any input would be very helpful.

On the cluster I use, we have Spark 2.3.0 and there are no immediate plans to upgrade. Is there a way to make Hudi 0.7.0 work with Spark 2.3.0?

Note: I am able to use Spark 2.3.0 with hudi-spark-bundle-0.5.0-incubating.jar.
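
For example, a launch along these lines works (the flags are illustrative; the jar is the one named above, and the Kryo serializer setting is the one the quick-start guide calls for):

    # Known-working setup per the note above: Spark 2.3.0 + the 0.5.0-incubating bundle
    spark-shell \
      --jars hudi-spark-bundle-0.5.0-incubating.jar \
      --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'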

In spark-shell, I get the following error:

scala> transformedDF.write.format("org.apache.hudi").
         |         options(getQuickstartWriteConfigs).
         |         option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY,"col1").
         |         //option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY,"col2").
         |         option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY,"col3,col4,col5").
         |         option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY,"partitionpath").
         |         option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY,"org.apache.hudi.ComplexKeyGenerator").
         |         option("hoodie.upsert.shuffle.parallelism","20").
         |         option("hoodie.insert.shuffle.parallelism","20").
         |         option(HoodieCompactionConfig.PARQUET_SMALL_FILE_LIMIT_BYTES,128 * 1024 * 1024).
         |         option(HoodieStorageConfig.PARQUET_FILE_MAX_BYTES,128 * 1024 * 1024).
         |         option(HoodieWriteConfig.TABLE_NAME,"targetTableHudi").
         |         mode(SaveMode.Append).
         |         save(targetPath)
    21/02/22 07:14:03 WARN Utils: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.
    java.lang.NoSuchMethodError: org.apache.spark.sql.types.Decimal$.minBytesForPrecision()[I
      at org.apache.hudi.spark.org.apache.spark.sql.avro.SchemaConverters$.toAvroType(SchemaConverters.scala:156)
      at org.apache.hudi.spark.org.apache.spark.sql.avro.SchemaConverters$$anonfun$5.apply(SchemaConverters.scala:176)
      at org.apache.hudi.spark.org.apache.spark.sql.avro.SchemaConverters$$anonfun$5.apply(SchemaConverters.scala:174)
      at scala.collection.Iterator$class.foreach(Iterator.scala:893)
      at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
      at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
      at org.apache.spark.sql.types.StructType.foreach(StructType.scala:99)
      at org.apache.hudi.spark.org.apache.spark.sql.avro.SchemaConverters$.toAvroType(SchemaConverters.scala:174)
      at org.apache.hudi.AvroConversionUtils$.convertStructTypeToAvroSchema(AvroConversionUtils.scala:52)
      at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:139)
      at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:134)
      at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
      at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
      at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
      at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
      at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
      at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
      at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
      at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
      at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
      at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
      at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
      at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
      at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
      at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
      at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
      at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
      at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
      at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
      ... 62 elided
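
The top frame shows spark-avro code shaded into the Hudi bundle calling org.apache.spark.sql.types.Decimal$.minBytesForPrecision(), which is evidently missing from the Spark 2.3.0 runtime. A quick way to confirm this on a given cluster is a reflection probe in spark-shell; the snippet below is my own sketch, not part of Hudi:

    // Sketch: check whether the running Spark exposes the method that the
    // spark-avro code inside the Hudi bundle calls. Paste into spark-shell.
    import scala.util.Try

    val present = Try(
      org.apache.spark.sql.types.Decimal.getClass.getMethod("minBytesForPrecision")
    ).isSuccess

    println(s"Spark ${org.apache.spark.SPARK_VERSION}: " +
      s"Decimal.minBytesForPrecision present = $present")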

Solution

Could you please open a GitHub issue (https://github.com/apache/hudi/issues) so that the community can get back to you promptly?
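
In the meantime, if you need to stay on Spark 2.3.0, pinning to the combination you report as working is the pragmatic option. A minimal build.sbt sketch, assuming the usual Maven coordinates for that bundle (please verify them against Maven Central):

    // Pin the combination reported to work above (coordinates assumed from the jar name)
    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-sql"         % "2.3.0" % Provided,
      "org.apache.hudi"  %  "hudi-spark-bundle" % "0.5.0-incubating"
    )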