Why does pyspark find the local Hive metastore only from the root directory?

Problem description

I have a question about Hive metastore support for Delta Lake. I have defined the metastore on a standalone Spark session with the following configuration:

pyspark --conf "spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog" --conf "spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension"

spark = SparkSession \
    .builder \
    .appName("Python Spark SQL Hive integration example") \
    .config("spark.sql.warehouse.dir",'/mnt/data/db/medilake/') \
    .config("spark.hadoop.datanucleus.autoCreateSchema",'true') \
    .enableHiveSupport() \
    .getOrCreate()
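
For reference, the same settings can be collected into a single launch command; this restates the question's own configuration, with spark.sql.catalogImplementation=hive standing in for enableHiveSupport():

pyspark \
  --conf "spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension" \
  --conf "spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog" \
  --conf "spark.sql.catalogImplementation=hive" \
  --conf "spark.sql.warehouse.dir=/mnt/data/db/medilake/" \
  --conf "spark.hadoop.datanucleus.autoCreateSchema=true"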
Everything worked while I stayed in that session, following the doc: https://docs.delta.io/latest/delta-batch.html#-control-data-location&language-python
Then, when I opened a new session, I got this error:
>>> spark.sql("SELECT * FROM BRONZE.user").show()
20/09/04 20:30:27 WARN ObjectStore: Failed to get database bronze, returning NoSuchObjectException
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/spark-3.0.0-bin-hadoop3.2/python/pyspark/sql/session.py", line 646, in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
  File "/opt/spark-3.0.0-bin-hadoop3.2/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
  File "/opt/spark-3.0.0-bin-hadoop3.2/python/pyspark/sql/utils.py", line 137, in deco
    raise_from(converted)
  File "<string>", line 3, in raise_from
pyspark.sql.utils.AnalysisException: Table or view not found: BRONZE.user; line 1 pos 14;
'Project [*]
+- 'UnresolvedRelation [BRONZE,user]

I can still see the data laid out in the folders, but the databases are now gone (previously there were bronze/silver/gold):

>>> spark.catalog.listDatabases()
[Database(name='default', description='Default Hive database', locationUri='file:/mnt/data/db/medilake/spark-warehouse')]

Folder listing:

root@m:/mnt/data/db/medilake# ll
total 16
drwxr-xr-x 9 root root 4096 Sep  4 19:05 bronze.db
-rw-r--r-- 1 root root  708 Sep  4 19:45 derby.log
drwxr-xr-x 4 root root 4096 Sep  4 19:36 gold.db
drwxr-xr-x 5 root root 4096 Sep  4 19:45 metastore_db

conf:

>>> spark.conf.get("spark.sql.warehouse.dir")
'/mnt/data/db/medilake/'

How can I set this property so that it works from any working directory?

.config("spark.sql.warehouse.dir",'/mnt/data/db/medilake/')

Solution

Barak, maybe I can help you with the following Spark SQL syntax:

spark.sql("CREATE TABLE BRONZE.user USING DELTA LOCATION '/mnt/data/db/medilake/spark-warehouse'")

As I recall, before you can read a Delta table it must first have been saved in Delta format (for example, converted from Parquet or CSV files). For more details, consult the documentation:

Delta Lake
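
For illustration, here is a minimal sketch of that flow in PySpark. The CSV source path is hypothetical; the database, table name, and warehouse path come from the question:

# Hypothetical raw input; any DataFrame source works the same way
df = spark.read.option("header", "true").csv("/mnt/data/raw/users.csv")

# First materialize the data in Delta format...
df.write.format("delta").mode("overwrite").save("/mnt/data/db/medilake/bronze.db/user")

# ...then register it in the metastore so later sessions can query it by name
spark.sql("CREATE DATABASE IF NOT EXISTS BRONZE")
spark.sql("CREATE TABLE IF NOT EXISTS BRONZE.user USING DELTA LOCATION '/mnt/data/db/medilake/bronze.db/user'")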

Another answer:

Just follow the instructions: setting spark.sql.warehouse.dir in spark-defaults.conf does the trick, as sketched below.
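
A minimal sketch of the relevant spark-defaults.conf entries, reusing the warehouse path from the question. Pinning the embedded Derby metastore to a fixed path via the standard Hive property javax.jdo.option.ConnectionURL (passed through Spark's spark.hadoop. prefix) is an added assumption here, but it stops Spark from creating a fresh, empty metastore_db in whatever directory pyspark happens to be launched from, which is what the folder listing above suggests happened:

# $SPARK_HOME/conf/spark-defaults.conf
spark.sql.warehouse.dir                      /mnt/data/db/medilake/

# Assumption: pin the embedded Derby metastore to a fixed path instead of ./metastore_db
spark.hadoop.javax.jdo.option.ConnectionURL  jdbc:derby:;databaseName=/mnt/data/db/medilake/metastore_db;create=true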
