DataFrame.show throws an error in Databricks

Problem description

I am trying to fetch data from Azure Data Warehouse using Azure Databricks.

The connection part is fine, since I can see the rows returned in the DataFrame, but an error is thrown when I try to save or display the records of the DataFrame. This is what I have tried:

df = spark.read \
  .format("com.databricks.spark.sqldw") \
  .option("url", sqlDwNew) \
  .option("tempDir", temDir_location) \
  .option("forwardSparkAzureStorageCredentials", "true") \
  .option("query", "select * from Accesspermission") \
  .load()
df.count()

Output

(1) Spark Jobs
df:pyspark.sql.dataframe.DataFrame
AccesspermissionId:integer
Accesspermission:string
Out[16]: 4

Error

df.show()

Output

com.databricks.spark.sqldw.SqlDWSideException: SQL DW failed to execute the JDBC query produced by the connector.

Solution

To know the exact cause, I would ask you to check the complete stack trace and try to find the root cause of the issue.
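
If the notebook cell only surfaces the first line of the exception, you can print the full chain yourself. A minimal sketch, assuming the DataFrame df from the question is already defined:

import traceback

try:
    df.show()
except Exception:
    # Py4J wraps the JVM error, and the printed traceback normally
    # includes the "Caused by:" chain coming from the sqldw connector.
    traceback.print_exc()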

As per my repro, I ran into exactly the same error message and was able to resolve it by going through the stack trace, which pointed to a configuration issue with the storage account.

com.databricks.spark.sqldw.SqlDWSideException: SQL DW failed to execute the JDBC query produced by the connector.
...

Caused by: java.lang.IllegalArgumentException: requirement failed: No access key found in the session conf or the global Hadoop conf for Azure Storage account name: chepra
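
This message means the connector could not find a key of the form fs.azure.account.key.<account>.blob.core.windows.net in the Spark configuration. A quick way to check, sketched with a placeholder account name:

# Returns None when no access key has been set in this session's Spark conf.
key = spark.conf.get(
    "fs.azure.account.key.<your-storage-account-name>.blob.core.windows.net",
    None)
print("access key configured:", key is not None)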

Step 1: Set up the Blob storage account access key in the notebook session config.

spark.conf.set(
  "fs.azure.account.key.<your-storage-account-name>.blob.core.windows.net",
  "<your-storage-account-access-key>")

Step 2: Load the data from the Azure Synapse query.

df = spark.read \
  .format("com.databricks.spark.sqldw") \
  .option("url", "jdbc:sqlserver://<the-rest-of-the-connection-string>") \
  .option("tempDir", "wasbs://<your-container-name>@<your-storage-account-name>.blob.core.windows.net/<your-directory-name>") \
  .option("forwardSparkAzureStorageCredentials", "true") \
  .option("query", "select * from table") \
  .load()
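
If you need the whole table rather than an ad-hoc query, the connector also accepts a table name; a variant of the same read with a placeholder table name:

df = spark.read \
  .format("com.databricks.spark.sqldw") \
  .option("url", "jdbc:sqlserver://<the-rest-of-the-connection-string>") \
  .option("tempDir", "wasbs://<your-container-name>@<your-storage-account-name>.blob.core.windows.net/<your-directory-name>") \
  .option("forwardSparkAzureStorageCredentials", "true") \
  .option("dbTable", "<your-table-name>") \
  .load()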

Step 3: Show or display the DataFrame.

df.show()
display(df)
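
Since the question also mentions saving records, the same connector can write a DataFrame back to Azure Synapse through the same tempDir staging location. A sketch with a placeholder name for the target table:

df.write \
  .format("com.databricks.spark.sqldw") \
  .option("url", "jdbc:sqlserver://<the-rest-of-the-connection-string>") \
  .option("tempDir", "wasbs://<your-container-name>@<your-storage-account-name>.blob.core.windows.net/<your-directory-name>") \
  .option("forwardSparkAzureStorageCredentials", "true") \
  .option("dbTable", "<your-target-table>") \
  .save()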

Reference: Azure Databricks - Azure Synapse Analytics