PySpark Delta Lake OPTIMIZE - cannot parse SQL

Problem description

I have a Delta table created with Spark 3.x and Delta 0.7.x:

data = spark.range(0,5)
data.write.format("delta").mode("overwrite").save("tmp/delta-table")
# add some more files
data = spark.range(20,100)
data.write.format("delta").mode("append").save("tmp/delta-table")

df = spark.read.format("delta").load("tmp/delta-table")
df.show()

By now, a lot of files have been generated (in many cases parquet files that are too small).

%ls tmp/delta-table

I want to compact them:

df.createGlobalTempView("my_delta_table")
spark.sql("OPTIMIZE my_delta_table ZORDER BY (id)")

This fails with:

ParseException: 
mismatched input 'OPTIMIZE' expecting {'(','ADD','ALTER','ANALYZE','CACHE','CLEAR','COMMENT','COMMIT','CREATE','DELETE','DESC','DESCRIBE','DFS','DROP','EXPLAIN','EXPORT','FROM','GRANT','IMPORT','INSERT','LIST','LOAD','LOCK','MAP','MERGE','MSCK','REDUCE','REFRESH','REPLACE','RESET','REVOKE','ROLLBACK','SELECT','SET','SHOW','START','TABLE','TRUNCATE','UNCACHE','UNLOCK','UPDATE','USE','VALUES','WITH'}(line 1,pos 0)

== SQL ==
OPTIMIZE my_delta_table ZORDER BY (id)
^^^

Questions:

  1. How can I make this (OPTIMIZE) work without the query failing?
  2. Is there a more native API than calling text-based SQL?

Note:

Spark is started like this:

import pyspark
from pyspark.sql import SparkSession

spark = pyspark.sql.SparkSession.builder.appName("MyApp") \
    .config("spark.jars.packages","io.delta:delta-core_2.12:0.7.0") \
    .config("spark.sql.extensions","io.delta.sql.DeltaSparkSessionExtension") \
    .config("spark.sql.catalog.spark_catalog","org.apache.spark.sql.delta.catalog.DeltaCatalog") \
    .getOrCreate()

from delta.tables import *

Answers

OPTIMIZE is not available in OSS Delta Lake. If you want to compact files, you can follow the instructions in the "Compact files" section of the documentation. If you want to use OPTIMIZE, you currently need the Databricks Runtime.


If you are running Delta locally, that means you are using OSS Delta Lake. The OPTIMIZE command is only available in Databricks Delta Lake. To compact files in OSS, you can follow the steps here: https://docs.delta.io/latest/best-practices.html#compact-files
