spark-submit fails to connect to the metastore due to Kerberos (GSSException: No valid credentials provided) but works in local client mode

Problem Description

It seems that in local client mode, from the pyspark shell inside the Docker container, everything works fine and it is able to connect to Hive. However, issuing spark-submit with all the dependencies fails with the error below:

20/08/24 14:03:01 INFO storage.BlockManagerMasterEndpoint: Registering block manager test.server.com:41697 with 6.2 GB RAM,BlockManagerId(3,test.server.com,41697,None)
20/08/24 14:03:02 INFO hive.HiveUtils: Initializing HivemetastoreConnection version 1.2.1 using Spark classes.
20/08/24 14:03:02 INFO hive.metastore: Trying to connect to metastore with URI thrift://metastore.server.com:9083
20/08/24 14:03:02 ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate Failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
        at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)

Running a simple pi example on pyspark works fine without any Kerberos issue, but a Kerberos error comes up when trying to access Hive.
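For reference, the working pi example was submitted roughly like this (a sketch; the examples path assumes a standard Spark installation under $SPARK_HOME):

spark-submit --master yarn --deploy-mode cluster $SPARK_HOME/examples/src/main/python/pi.py 10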

spark-submit command:

spark-submit --master yarn --deploy-mode cluster --files=/etc/hive/conf/hive-site.xml,/etc/hive/conf/yarn-site.xml,/etc/hive/conf/hdfs-site.xml,/etc/hive/conf/core-site.xml,/etc/hive/conf/mapred-site.xml,/etc/hive/conf/ssl-client.xml  --name fetch_hive_test --executor-memory 12g --num-executors 20 test_hive_minimal.py

test_hive_minimal.py is a simple pyspark script that lists the tables in a test database:

from pyspark.sql import SparkSession
#declaration
appName = "test_hive_minimal"
master = "yarn"
# Create the Spark session
sc = SparkSession.builder \
    .appName(appName) \
    .master(master) \
    .enableHiveSupport() \
    .config("spark.hadoop.hive.enforce.bucketing","True") \
    .config("spark.hadoop.hive.support.quoted.identifiers","none") \
    .config("hive.exec.dynamic.partition","True") \
    .config("hive.exec.dynamic.partition.mode","nonstrict") \
    .getOrCreate()
# Define the function to load data from teradata
#custom freeform query
sql = "show tables in user_tables"
df_new = sc.sql(sql)
df_new.show()
sc.stop()

Can anyone suggest a fix? Aren't Kerberos tickets managed automatically by YARN? All other Hadoop resources are accessible.
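One quick sanity check is whether a valid TGT actually exists inside the Docker container that launches spark-submit. A minimal sketch, assuming the keytab path and principal that appear in the update below (adjust to your setup):

klist                                                        # does the ticket cache hold a valid TGT?
kinit -kt /home/alias/.kt/alias.keytab [email protected]      # obtain one from the keytab if not
klist                                                        # confirm the TGT is now present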

UPDATE: The issue was fixed after sharing a volume mount with the Docker container and passing the keytab/principal along with hive-site.xml for accessing the metastore:

# the keytab below is mounted into the container and kept on a Docker-local path
spark-submit --master yarn \
--deploy-mode cluster \
--jars /srv/python/ext_jars/terajdbc4.jar \
--files=/etc/hive/conf/hive-site.xml \
--keytab /home/alias/.kt/alias.keytab \
--principal [email protected] \
--name td_to_hive_test \
--driver-cores 2 \
--driver-memory 2G \
--num-executors 44 \
--executor-cores 5 \
--executor-memory 12g \
td_to_hive_test.py
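The volume mount mentioned above can be set up when starting the container. A minimal sketch, assuming a host directory that holds the keytab and a hypothetical image name (both are placeholders, not taken from the original setup):

docker run -it \
    -v /path/on/host/keytabs:/home/alias/.kt:ro \
    -v /etc/hive/conf:/etc/hive/conf:ro \
    my-pyspark-image bash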

Solution

I think your driver has a ticket but your executors do not. Add the following parameters to your spark-submit:

  • --principal: you can get the principal with klist -k (see the sketch after this list)
  • --keytab: the path to the keytab
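A sketch of reading the principal out of a keytab with klist -k, using the keytab path from the update above (the exact output format varies slightly between Kerberos implementations):

klist -k /home/alias/.kt/alias.keytab
# Keytab name: FILE:/home/alias/.kt/alias.keytab
# KVNO Principal
# ---- ---------------------------------------------------------
#    1 [email protected]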



When running the job on the cluster, could you try the following property on the command line:

-Djavax.security.auth.useSubjectCredsOnly=false

You can add the above property to your spark-submit command.
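One way to pass it through spark-submit is via the standard extraJavaOptions settings, so that both the driver and the executors pick it up (a sketch; the rest of the command stays as in the question):

spark-submit --master yarn --deploy-mode cluster \
    --conf "spark.driver.extraJavaOptions=-Djavax.security.auth.useSubjectCredsOnly=false" \
    --conf "spark.executor.extraJavaOptions=-Djavax.security.auth.useSubjectCredsOnly=false" \
    test_hive_minimal.py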