Error creating a Value and Timestamp DataFrame in Azure Databricks

Problem description

I'm not very familiar with Spark, but I have to use it to consume some data. I've tried basically every syntax I could find to get a DataFrame with a value and a timestamp that I can insert into a database, to track when updates are pulled from the data source. The errors are endless, I'm out of ideas, and I have no clue why I can't do something this simple. Here is an example of the code I've been trying:

import pyspark.sql.functions

sc = spark.sparkContext
df = sc.parallelize([[1, pyspark.sql.functions.current_timestamp()]]).toDF(("Value", "CreatedAt"))

The error isn't really helpful:

 py4j.Py4JException: Method __getstate__([]) does not exist
 ---------------------------------------------------------------------------
 Py4JError                                 Traceback (most recent call last)
 <command-1699228214903488> in <module>
      29 
      30 sc = spark.sparkContext
 ---> 31 df = sc.parallelize([[1,pyspark.sql.functions.current_timestamp()]]).toDF(("Value","CreatedAt"))

 /databricks/spark/python/pyspark/context.py in parallelize(self,c,numSlices)
     557                 return self._jvm.PythonParallelizeServer(self._jsc.sc(),numSlices)
     558 
 --> 559             jrdd = self._serialize_to_jvm(c,serializer,reader_func,createRDDServer)
     560 
     561         return RDD(jrdd,self,serializer)

 /databricks/spark/python/pyspark/context.py in _serialize_to_jvm(self,data,serializer,reader_func,createRDDServer)
     590             try:
     591                 try:
 --> 592                     serializer.dump_stream(data,tempFile)
     593                 finally:
     594                     tempFile.close()
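In hindsight, the __getstate__ message is the pickle serializer choking: pyspark.sql.functions.current_timestamp() returns a py4j-backed Column object, and parallelize() pickles every element to ship it to the JVM, which fails on anything JVM-backed. A minimal sketch of the same RDD route with a plain, picklable Python datetime instead, keeping the same column names:

import datetime

sc = spark.sparkContext
# datetime.datetime is an ordinary picklable Python object, so parallelize() works,
# and Spark infers a timestamp type for the second column
df = sc.parallelize([[1, datetime.datetime.now()]]).toDF(["Value", "CreatedAt"])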

I also tried this:

from pyspark.sql import SQLContext
from pyspark.sql.functions import current_timestamp

sqlContext = SQLContext(sc)  # sc is the spark context

df = sqlContext.createDataFrame(
    [(current_timestamp(), '12a345')], ['CreatedAt', 'Value']  # the row header/column labels go here
)

The error:

AssertionError: dataType <py4j.java_gateway.JavaMember object at 0x7f43d97c6ba8> should be an instance of <class 'pyspark.sql.types.DataType'>
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<command-2294571935273349> in <module>
     33 df = sqlContext.createDataFrame(
     34     [( current_timestamp(),'12a345')],
---> 35     ['CreatedAt','Value'] # the row header/column labels should be entered here
     36 )
     37 

/databricks/spark/python/pyspark/sql/context.py in createDataFrame(self,data,schema,samplingRatio,verifySchema)
    305         Py4JJavaError: ...
    306         """
--> 307         return self.sparkSession.createDataFrame(data,schema,samplingRatio,verifySchema)
    308 
    309     @since(1.3)

/databricks/spark/python/pyspark/sql/session.py in createDataFrame(self,data,schema,samplingRatio,verifySchema)
    815                 rdd,schema = self._createFromRDD(data.map(prepare),schema,samplingRatio)
    816             else:
--> 817                 rdd,schema = self._createFromLocal(map(prepare,data),schema)
    818             jrdd = self._jvm.SerDeUtil.toJavaArray(rdd._to_java_object_rdd())
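This second error is more telling: createDataFrame type-checks each cell and finds a py4j JavaMember where a DataType-backed value should be, because current_timestamp() is a Column expression, not a Python value, and a Column is only meaningful inside a DataFrame transformation. A minimal sketch of the idiomatic pattern, building the row from plain values and attaching the timestamp with withColumn:

from pyspark.sql.functions import current_timestamp

# Create the row from plain Python values, then let Spark stamp it
# with the cluster clock via a Column expression
df = spark.createDataFrame([('12a345',)], ['Value']) \
          .withColumn('CreatedAt', current_timestamp())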

Workaround

Well, I ended up just writing some code that works. I could not get it to work with TimestampType(); Spark freaks out when the data is inserted. I suspect that may be a runtime bug rather than a coding problem on my part.

 import adal
 import datetime
 from pyspark.sql.types import *

 # Set Access Token
 access_token = token["accessToken"]
 from pyspark.sql import SQLContext
 sqlContext = SQLContext(sc) # sc is the spark context

 schema = StructType([
     StructField("CreatedAt", StringType(), True),
     StructField("value", StringType(), True)
 ])

 da = datetime.datetime.now().strftime("%m/%d/%Y %H:%M:%S")  # timestamp rendered as a string, since TimestampType() would not work

 df = sqlContext.createDataFrame(
     [(da, '12a345')], schema
 )

 df.write \
   .format("com.microsoft.sqlserver.jdbc.spark") \
   .option("url", url) \
   .option("dbtable", "dbo.RunStart") \
   .option("accessToken", access_token) \
   .option("databaseName", database_name) \
   .option("encrypt", "true") \
   .option("hostNameInCertificate", "*.database.windows.net") \
   .option("applicationintent", "ReadWrite") \
   .mode("append") \
   .save()
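On the TimestampType() failure: that usually happens when the cell value is a pre-formatted string (strftime output) while the schema declares a timestamp. A hedged sketch of the variant I would expect to work, passing the datetime object itself and leaving the write above unchanged; untested against this exact SQL connector setup:

import datetime
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

schema = StructType([
    StructField("CreatedAt", TimestampType(), True),  # expects a datetime, not a string
    StructField("Value", StringType(), True)
])

# Pass the datetime object itself; Spark maps it to a SQL timestamp on insert
df = spark.createDataFrame([(datetime.datetime.now(), '12a345')], schema)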