Problem description
I am trying to write data from a Spark DataFrame into a Hive internal (managed) table using Spark 2.3.
The table was created in Hive as a transactional ORC table:

```sql
CREATE TABLE `g_interimc.grpxm31`(
  `gr98p_cf` bigint,
  `gr98p_cp` decimal(11,0),
  `grp98mmmb` string,
  `grp98oob` string,
  `srccd` string,
  `gp_n` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
  'hdfs://gwhdnha/mnoo1/raw/cat/eilkls/g_interimc/grpxm31'
TBLPROPERTIES (
  'bucketing_version'='2',
  'transactional'='true',
  'transactional_properties'='default')
```

The write that triggers the error:

```scala
dataframe.write.mode("overwrite").insertInto("g_interimc.grpxm31")
```
```
Exception in thread "main" org.apache.spark.sql.AnalysisException:
Spark has no access to table `g_interimc`.`grpxm31`. Clients can access this table only if
they have the following capabilities: CONNECTORREAD, HIVEFULLACIDREAD, HIVEFULLACIDWRITE,
HIVEMANAGESTATS, HIVECACHEINVALIDATE, CONNECTORWRITE.
This table may be a Hive-managed ACID table, or require some other capability that Spark
currently does not implement;
	at org.apache.spark.sql.catalyst.catalog.CatalogUtils$.throwIfNoAccess(ExternalCatalogUtils.scala:280)
	at org.apache.spark.sql.catalyst.catalog.CatalogUtils$.throwIfRO(ExternalCatalogUtils.scala:297)
	at org.apache.spark.sql.hive.HiveTranslationLayerCheck$$anonfun$apply$1.applyOrElse(HiveTranslationLayerStrategies.scala:93)
	at org.apache.spark.sql.hive.HiveTranslationLayerCheck$$anonfun$apply$1.applyOrElse(HiveTranslationLayerStrategies.scala:85)
	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:288)
	at org.apache.spark.sql.hive.HiveTranslationLayerCheck.apply(HiveTranslationLayerStrategies.scala:85)
	at org.apache.spark.sql.hive.HiveTranslationLayerCheck.apply(HiveTranslationLayerStrategies.scala:83)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
	at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
```
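The capability list in the exception (CONNECTORREAD, HIVEFULLACIDREAD, HIVEFULLACIDWRITE, …) is the key hint: the target is a Hive 3 managed full-ACID table, and Spark's built-in Hive support (including `insertInto`) cannot read or write such tables. On HDP 3.x platforms, the commonly documented route is the Hive Warehouse Connector (HWC). The sketch below is an illustration under assumptions, not a verified fix: it presumes the HWC jar is on the Spark classpath and that `spark.sql.hive.hiveserver2.jdbc.url` and the other HWC settings are configured; class and format names are taken from the HWC documentation and should be checked against your platform version.

```scala
// Hedged sketch: write a DataFrame to a Hive full-ACID table via the
// Hive Warehouse Connector instead of Spark's native insertInto.
// Assumes HDP 3.x, the HWC jar on the classpath, and HWC JDBC settings configured.
import com.hortonworks.hwc.HiveWarehouseSession

val hive = HiveWarehouseSession.session(spark).build()

dataframe.write
  .format("com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector")
  .mode("append")  // supported save modes vary by HWC version; verify before relying on "overwrite"
  .option("table", "g_interimc.grpxm31")
  .save()
```

Alternatively, if ACID semantics are not actually required, recreating the target as a non-transactional (external) ORC table lets Spark 2.3 write it directly with `insertInto`.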
Solution
No working solution for this problem has been found yet.