Error reading files from ADLS Gen2 - Configuration property xxx.dfs.core.windows.net not found

Problem Description

I am working with ADLS Gen2 from a Databricks notebook, trying to process files using an "abfss" path. I can read Parquet files without any problem, but when I try to load an XML file I get the error: Configuration property xxx.dfs.core.windows.net not found.

I have not tried mounting the file system; instead, I am trying to understand whether this is a known limitation for XML files, since I can read Parquet files just fine.

This is my XML library configuration: com.databricks:spark-xml_2.11:0.9.0

I have tried several suggestions from other posts, but I still get the same error.

  • Added a new scope to see whether it was a scope issue in the Databricks workspace.
  • Tried adding the configuration spark.conf.set("fs.azure.account.key.xxxxx.dfs.core.windows.net", "xxxx==")
df = spark.read.format("xml")
 .option("rootTag","BookArticle")
 .option("inferSchema","true")
 .option("error_bad_lines",True)
 .option("mode","DROPMALFORMED")
 .load(abfsssourcename)   ##abfsssourcename is the path of the source file name

Exception Details: Py4JJavaError: An error occurred while calling o1113.load. 
Configuration property xxxx.dfs.core.windows.net not found.
  at shaded.databricks.v20180920_b33d810.org.apache.hadoop.fs.azurebfs.AbfsConfiguration.getStorageAccountKey(AbfsConfiguration.java:392)
  at shaded.databricks.v20180920_b33d810.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.initializeClient(AzureBlobFileSystemStore.java:1008)
  at shaded.databricks.v20180920_b33d810.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.<init>(AzureBlobFileSystemStore.java:151)
  at shaded.databricks.v20180920_b33d810.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:106)
  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
  at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
  at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.setInputPaths(FileInputFormat.java:500)
  at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.setInputPaths(FileInputFormat.java:469)
  at org.apache.spark.SparkContext$$anonfun$newAPIHadoopFile$2.apply(SparkContext.scala:1281)
  at org.apache.spark.SparkContext$$anonfun$newAPIHadoopFile$2.apply(SparkContext.scala:1269)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
  at org.apache.spark.SparkContext.withScope(SparkContext.scala:820)
  at org.apache.spark.SparkContext.newAPIHadoopFile(SparkContext.scala:1269)
  at com.databricks.spark.xml.util.XmlFile$.withCharset(XmlFile.scala:46)
  at com.databricks.spark.xml.DefaultSource$$anonfun$createRelation$1.apply(DefaultSource.scala:71)
  at com.databricks.spark.xml.DefaultSource$$anonfun$createRelation$1.apply(DefaultSource.scala:71)
  at com.databricks.spark.xml.XmlRelation$$anonfun$1.apply(XmlRelation.scala:43)
  at com.databricks.spark.xml.XmlRelation$$anonfun$1.apply(XmlRelation.scala:42)
  at scala.Option.getOrElse(Option.scala:121)
  at com.databricks.spark.xml.XmlRelation.<init>(XmlRelation.scala:41)
  at com.databricks.spark.xml.XmlRelation$.apply(XmlRelation.scala:29)
  at com.databricks.spark.xml.DefaultSource.createRelation(DefaultSource.scala:74)
  at com.databricks.spark.xml.DefaultSource.createRelation(DefaultSource.scala:52)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:350)
  at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:311)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:297)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:214)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

Solution

I have summarized the solution as follows.

The package com.databricks:spark-xml appears to use the RDD API to read XML files. When Azure Data Lake Storage Gen2 is accessed through the RDD API, Hadoop configuration options set via spark.conf.set(...) are not visible. Therefore, the code should be updated to spark._jsc.hadoopConfiguration().set("fs.azure.account.key.xxxxx.dfs.core.windows.net", "xxxx=="). For more details, see here.
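A minimal sketch of that fix is shown below; the storage account name, account key, container, and file path are placeholders, not values from the original post.

# Set the ADLS Gen2 account key on the underlying Hadoop configuration so that
# RDD-based readers such as spark-xml can see it (account name and key are placeholders).
spark._jsc.hadoopConfiguration().set(
    "fs.azure.account.key.xxxxx.dfs.core.windows.net",
    "xxxx=="
)

# Placeholder abfss path; substitute the real container, account, and file path.
abfsssourcename = "abfss://<container>@xxxxx.dfs.core.windows.net/<folder>/books.xml"

df = (spark.read.format("xml")
      .option("rootTag", "BookArticle")
      .load(abfsssourcename))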

Alternatively, you can mount Azure Data Lake Storage Gen2 as a file system in Azure Databricks.
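For the mounting approach, a minimal sketch using a service principal (OAuth) follows; every identifier below (application id, secret scope and key, tenant id, container, storage account, mount point, and file path) is a placeholder, not a value from the original post.

# Minimal sketch: mount an ADLS Gen2 container in Databricks with a service principal.
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": "<application-id>",
    "fs.azure.account.oauth2.client.secret":
        dbutils.secrets.get(scope="<scope-name>", key="<key-name>"),
    "fs.azure.account.oauth2.client.endpoint":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

dbutils.fs.mount(
    source="abfss://<container>@xxxxx.dfs.core.windows.net/",
    mount_point="/mnt/adls",
    extra_configs=configs,
)

# After mounting, the XML file can be read through the mount point instead of the abfss:// URI.
df = (spark.read.format("xml")
      .option("rootTag", "BookArticle")
      .load("/mnt/adls/<folder>/books.xml"))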
