Problem description
I am trying to save a DataFrame to a TFRecord file from the spark-shell, which requires the spark-tensorflow-connector jar as a dependency, so I launched:
spark-shell --jars xxx/xxx/spark-tensorflow-connector_2.11-1.11.0.jar
Then I ran the following code in the spark-shell:
scala> import org.apache.spark.sql.{DataFrame,SaveMode,SparkSession}
import org.apache.spark.sql.{DataFrame,SaveMode,SparkSession}
scala> val df = Seq((8,"bat"),(8,"abc"),(1,"xyz"),(2,"aaa")).toDF("number","word")
df: org.apache.spark.sql.DataFrame = [number: int, word: string]
scala> df.show
+------+----+
|number|word|
+------+----+
| 8| bat|
| 8| abc|
| 1| xyz|
| 2| aaa|
+------+----+
scala> var s = df.write.mode(SaveMode.Overwrite).format("tfrecords").option("recordtype","Example")
s: org.apache.spark.sql.DataFrameWriter[org.apache.spark.sql.Row] = org.apache.spark.sql.DataFrameWriter@da1382f
scala> s.save("tmp/tfrecords")
java.lang.NoClassDefFoundError: scala/Product$class
at org.tensorflow.spark.datasources.tfrecords.TensorflowRelation.<init>(TensorflowRelation.scala:29)
at org.tensorflow.spark.datasources.tfrecords.DefaultSource.createRelation(DefaultSource.scala:78)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:90)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:175)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:213)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:210)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:171)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:122)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:121)
at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:944)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:763)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:944)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:396)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:380)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:269)
... 47 elided
Caused by: java.lang.ClassNotFoundException: scala.Product$class
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 70 more
The Spark version is 3.0.0, using Scala version 2.12.10 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_261).
Solution
The problem is that you are using a TensorFlow connector compiled with Scala 2.11 (note the _2.11 part of the jar name) together with Spark 3.0, which is compiled with Scala 2.12. The NoClassDefFoundError: scala/Product$class is the typical symptom of this mismatch: the ...$class companion classes are an artifact of how Scala 2.11 encodes traits, and they no longer exist in the Scala 2.12 runtime.
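To confirm which Scala version your spark-shell is running against, you can query the Scala standard library from the REPL; the echoed value below is what you would expect in the environment described above (Spark 3.0.0). The connector jar's target version is the suffix in its artifact name (_2.11 here), so the two can be compared directly:

scala> scala.util.Properties.versionString
res0: String = version 2.12.10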
As of this writing there is no TensorFlow connector build for Spark 3.0, so you will need to fall back to Spark 2.4.6, which is compiled with Scala 2.11.
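As a minimal sketch of the matching setup, assuming a Spark 2.4.6 distribution pre-built for Scala 2.11 and the same connector jar (the jar path and the output directory are placeholders carried over from the question; the option key is spelled recordType in the connector's documentation):

# launch the Scala 2.11 spark-shell (Spark 2.4.6) with the Scala 2.11 connector on the classpath
spark-shell --jars xxx/xxx/spark-tensorflow-connector_2.11-1.11.0.jar

scala> import org.apache.spark.sql.SaveMode
scala> val df = Seq((8,"bat"),(8,"abc"),(1,"xyz"),(2,"aaa")).toDF("number","word")
scala> df.write.mode(SaveMode.Overwrite).format("tfrecords").option("recordType","Example").save("tmp/tfrecords")

With the Scala versions aligned on both sides of the classpath, the write should succeed and produce TFRecord part files under tmp/tfrecords.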