How to read table data from PostgreSQL over JDBC in Azure Databricks?

Problem description

I am trying to use pyspark to read a PostgreSQL table that lives in an Azure cloud subscription, but I get the error below. I know that when we use the load function we should also include the format. However, since this PostgreSQL instance lives in a different Azure subscription, I have no permission at all to access the PostgreSQL database; in that case, how can the schema be inferred? Or is there a better way to read the data from Databricks?

df = spark.read.option("url","jdbc:postgresql://{hostname}:5432&user={username}&password={xxxxx}&sslmode=require").option("dbtable",{tablename}).load()

Error

 ---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
/databricks/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:

/databricks/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:

Py4JJavaError: An error occurred while calling o1169.load.
: org.apache.spark.sql.AnalysisException: Unable to infer schema for Parquet. It must be specified manually.;
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:211)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:211)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:210)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:421)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:311)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:297)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:203)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
    at py4j.Gateway.invoke(Gateway.java:295)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:251)
    at java.lang.Thread.run(Thread.java:748)


During handling of the above exception, another exception occurred:

Solution

The error occurs because no format is specified, so the code implicitly assumes Parquet. Whatever options you define, the Parquet data source ignores them.

In other words, the structured query does not load the data over JDBC at all.

That is exactly what this part of the error message says:

org.apache.spark.sql.AnalysisException: Unable to infer schema for Parquet. It must be specified manually.;
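
You can verify where the Parquet assumption comes from in the session itself; a minimal check, assuming your cluster has not overridden the setting:

# The data source used when no format() is given comes from this setting;
# its out-of-the-box value is "parquet".
spark.conf.get("spark.sql.sources.default")  # returns 'parquet' unless overridden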

If you want to read from a JDBC data source, you should include format("jdbc") in the query:

spark.read.format("jdbc")