Time series interpolation in Scala

Problem description

I need to interpolate a time series in Scala.
The original data looks like this:
2020-08-01,value1
2020-08-03,value3

I would like to insert rows for the intermediate dates, like this:
2020-08-01,value1
2020-08-02,value2
2020-08-03,value3

where value2 is the linear interpolation of value1 and value3.

Could someone help me with sample code to do this in Scala Spark? For performance reasons I would rather avoid UDFs and use spark.range, but I am open to your best solution.

Thanks!

Solution

0. You can aggregate the dataframe to get the min and max dates, build a date sequence between them, and explode it to get one row per day (a Scala version is sketched after the output below).

from pyspark.sql.functions import *
from pyspark.sql import Window

w1 = Window.orderBy('date').rowsBetween(Window.unboundedPreceding,Window.currentRow)
w2 = Window.orderBy('date').rowsBetween(Window.currentRow,Window.unboundedFollowing)

df.groupBy().agg(min('date').alias('date_min'),max('date').alias('date_max')) \
  .withColumn('date',sequence(to_date('date_min'),to_date('date_max'))) \
  .withColumn('date',explode('date')) \
  .select('date') \
  .join(df,['date'],'left') \
  .show(10,False)

+----------+-----+
|date      |value|
+----------+-----+
|2020-08-01|0    |
|2020-08-02|null |
|2020-08-03|null |
|2020-08-04|null |
|2020-08-05|null |
|2020-08-06|10   |
+----------+-----+
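Since the question asks for Scala, a rough Scala equivalent of this step might look like the sketch below. It is untested: it assumes a dataframe df with a DateType column date and an integer column value (as in the sample output above), plus Spark 2.4+ for the sequence function; the name filled is just a placeholder reused in the later sketches.

import org.apache.spark.sql.functions._

// min/max date over the whole frame, expanded into one row per calendar day
val allDates = df
  .agg(min("date").as("date_min"), max("date").as("date_max"))
  .withColumn("date", explode(sequence(col("date_min"), col("date_max"))))
  .select("date")

// left join brings the known values back in; missing days stay null
val filled = allDates.join(df, Seq("date"), "left")
filled.orderBy("date").show(10, false)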

1. This one works for your exact case only, and is the simplest option (a Scala version is sketched after the output below).

from pyspark.sql.functions import *
from pyspark.sql import Window

w1 = Window.orderBy('date').rowsBetween(Window.unboundedPreceding,Window.currentRow)
w2 = Window.orderBy('date').rowsBetween(Window.currentRow,Window.unboundedFollowing)

df.withColumn("value_m1",last('value',ignorenulls=True).over(w1)) \
  .withColumn("value_p1",first('value',ignorenulls=True).over(w2)) \
  .withColumn('value',coalesce(col('value'),expr('(value_m1 + value_p1) / 2'))) \
  .show(10,False)

+----------+-----+--------+--------+
|date      |value|value_m1|value_p1|
+----------+-----+--------+--------+
|2020-08-01|0.0  |0       |0       |
|2020-08-02|5.0  |0       |10      |
|2020-08-03|10.0 |10      |10      |
+----------+-----+--------+--------+
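A possible Scala counterpart of this snippet, again only a sketch, reusing the hypothetical filled dataframe from the previous sketch:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// w1 looks back to the previous known value, w2 looks ahead to the next one
val w1 = Window.orderBy("date").rowsBetween(Window.unboundedPreceding, Window.currentRow)
val w2 = Window.orderBy("date").rowsBetween(Window.currentRow, Window.unboundedFollowing)

filled
  .withColumn("value_m1", last("value", ignoreNulls = true).over(w1))   // last known value
  .withColumn("value_p1", first("value", ignoreNulls = true).over(w2))  // next known value
  .withColumn("value", coalesce(col("value"), (col("value_m1") + col("value_p1")) / 2))
  .show(10, false)

Like the PySpark version, this simple midpoint formula only gives the true linear interpolation when there is exactly one missing day between two known values.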

2. An improved version that handles an arbitrary number of null days. For example, when the dataframe is given as follows,

+----------+-----+
|date      |value|
+----------+-----+
|2020-08-01|0    |
|2020-08-02|null |
|2020-08-03|null |
|2020-08-04|null |
|2020-08-05|null |
|2020-08-06|10   |
|2020-08-07|null |
|2020-08-08|null |
+----------+-----+

then the code should be changed as follows (a Scala version is sketched after the final output):

from pyspark.sql.functions import *
from pyspark.sql import Window

w1 = Window.orderBy('date').rowsBetween(Window.unboundedPreceding,Window.currentRow)
w2 = Window.orderBy('date').rowsBetween(Window.currentRow,Window.unboundedFollowing)
w3 = Window.partitionBy('days_m1').orderBy('date')
w4 = Window.partitionBy('days_p1').orderBy(desc('date'))

df.withColumn("value_m1",ignorenulls=True).over(w2)) \
  .withColumn('days_m1',count(when(col('value').isNotNull(),1)).over(w1)) \
  .withColumn('days_p1',1)).over(w2)) \
  .withColumn('days_m1',count(lit(1)).over(w3) - 1) \
  .withColumn('days_p1',count(lit(1)).over(w4) - 1) \
  .withColumn('value',expr('(days_p1 * value_m1 + days_m1 * value_p1) / (days_m1 + days_p1)'))) \
  .orderBy('date') \
  .show(10,False)

+----------+-----+--------+--------+-------+-------+
|date      |value|value_m1|value_p1|days_m1|days_p1|
+----------+-----+--------+--------+-------+-------+
|2020-08-01|0.0  |0       |0       |0      |0      |
|2020-08-02|2.0  |0       |10      |1      |4      |
|2020-08-03|4.0  |0       |10      |2      |3      |
|2020-08-04|6.0  |0       |10      |3      |2      |
|2020-08-05|8.0  |0       |10      |4      |1      |
|2020-08-06|10.0 |10      |10      |0      |0      |
|2020-08-07|null |10      |null    |1      |1      |
|2020-08-08|null |10      |null    |2      |0      |
+----------+-----+--------+--------+-------+-------+
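
A sketch of the general case in Scala, under the same assumptions (the filled dataframe and the column names are carried over from the sketches above):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val w1 = Window.orderBy("date").rowsBetween(Window.unboundedPreceding, Window.currentRow)
val w2 = Window.orderBy("date").rowsBetween(Window.currentRow, Window.unboundedFollowing)
val w3 = Window.partitionBy("days_m1").orderBy("date")
val w4 = Window.partitionBy("days_p1").orderBy(desc("date"))

filled
  .withColumn("value_m1", last("value", ignoreNulls = true).over(w1))
  .withColumn("value_p1", first("value", ignoreNulls = true).over(w2))
  // running counts of known values, used only to group consecutive rows
  // that share the same previous/next known value
  .withColumn("days_m1", count(when(col("value").isNotNull, 1)).over(w1))
  .withColumn("days_p1", count(when(col("value").isNotNull, 1)).over(w2))
  // distance in rows to the previous / next known value
  .withColumn("days_m1", count(lit(1)).over(w3) - 1)
  .withColumn("days_p1", count(lit(1)).over(w4) - 1)
  // weighted average: the closer known value gets the larger weight
  .withColumn("value", coalesce(col("value"),
    (col("days_p1") * col("value_m1") + col("days_m1") * col("value_p1")) /
      (col("days_m1") + col("days_p1"))))
  .orderBy("date")
  .show(10, false)

Rows after the last known value (2020-08-07 and 2020-08-08 above) stay null because there is no next value to interpolate towards; if extrapolation is acceptable they could simply be filled with value_m1.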