Cannot create hive connection jdbc:hive2://localhost:10000 when running spark-submit in cluster mode

Problem description

I am running an Apache Hudi application on Apache Spark. The application works fine when I submit it in client mode, but it fails when I submit it in cluster mode.
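
Both runs use the same application; only the deploy mode differs (the script name and master below are placeholders):

    # Works: the driver runs on the submitting host
    spark-submit --master yarn --deploy-mode client hudi_app.py

    # Fails: the driver runs on an arbitrary cluster node
    spark-submit --master yarn --deploy-mode cluster hudi_app.py

In cluster mode the save call fails with: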

py4j.protocol.Py4JJavaError: An error occurred while calling o196.save.
: org.apache.hudi.hive.HoodieHiveSyncException: Cannot create hive connection jdbc:hive2://localhost:10000/
    at org.apache.hudi.hive.HoodieHiveClient.createHiveConnection(HoodieHiveClient.java:422)
    at org.apache.hudi.hive.HoodieHiveClient.<init>(HoodieHiveClient.java:95)
    at org.apache.hudi.hive.HiveSyncTool.<init>(HiveSyncTool.java:66)
    at org.apache.hudi.HoodieSparkSqlWriter$.org$apache$hudi$HoodieSparkSqlWriter$$syncHive(HoodieSparkSqlWriter.scala:321)
    at org.apache.hudi.HoodieSparkSqlWriter$$anonfun$metaSync$2.apply(HoodieSparkSqlWriter.scala:363)
    at org.apache.hudi.HoodieSparkSqlWriter$$anonfun$metaSync$2.apply(HoodieSparkSqlWriter.scala:359)
    at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
    at org.apache.hudi.HoodieSparkSqlWriter$.metaSync(HoodieSparkSqlWriter.scala:359)
    at org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:417)
    at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:205)
    at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:125)
    at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:173)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:169)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:197)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:194)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:169)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:114)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:112)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:696)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:696)
    at org.apache.spark.sql.execution.SQLExecution$.org$apache$spark$sql$execution$SQLExecution$$executeQuery$1(SQLExecution.scala:83)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1$$anonfun$apply$1.apply(SQLExecution.scala:94)
    at org.apache.spark.sql.execution.QueryExecutionMetrics$.withMetrics(QueryExecutionMetrics.scala:141)
    at org.apache.spark.sql.execution.SQLExecution$.org$apache$spark$sql$execution$SQLExecution$$withMetrics(SQLExecution.scala:178)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:93)
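
For context, the write that triggers the Hive sync looks roughly like the following. This is a minimal sketch: the source path, table name, and field names are hypothetical, and "hoodie.datasource.hive_sync.jdbcurl" is deliberately left unset, so Hudi falls back to its default of jdbc:hive2://localhost:10000:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("hudi-write").getOrCreate()
    df = spark.read.parquet("hdfs:///tmp/source")  # hypothetical input

    # Hive sync is enabled, but "hoodie.datasource.hive_sync.jdbcurl"
    # is not set, so Hudi uses jdbc:hive2://localhost:10000.
    (df.write.format("hudi")
        .option("hoodie.table.name", "my_table")
        .option("hoodie.datasource.write.recordkey.field", "id")
        .option("hoodie.datasource.write.precombine.field", "ts")
        .option("hoodie.datasource.hive_sync.enable", "true")
        .option("hoodie.datasource.hive_sync.table", "my_table")
        .mode("append")
        .save("hdfs:///tmp/hudi/my_table"))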

Solution

The job started working after I changed the Hudi option "hoodie.datasource.hive_sync.jdbcurl". Hudi's Hive sync defaults this option to jdbc:hive2://localhost:10000. In client mode the driver runs on the submitting machine, where HiveServer2 was presumably listening on localhost, so the default happened to work; in cluster mode the driver runs on an arbitrary cluster node, where nothing listens on localhost:10000, and the connection fails. Setting the option to the actual HiveServer2 host and port fixes it.
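
In PySpark that means adding something like the following to the writer shown earlier. This is again a sketch, with "hiveserver-host" as a placeholder for the machine where HiveServer2 actually runs:

    (df.write.format("hudi")
        # ... same table/key/sync options as in the sketch above ...
        .option("hoodie.datasource.hive_sync.enable", "true")
        # Point Hive sync at the real HiveServer2 host instead of the
        # default localhost, which only happens to work in client mode.
        .option("hoodie.datasource.hive_sync.jdbcurl",
                "jdbc:hive2://hiveserver-host:10000")
        .mode("append")
        .save("hdfs:///tmp/hudi/my_table"))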