Problem description
- Orders whose original_orderid is NULL can be treated as "parent orders".
- Some of these parent orders may have child orders, and a child's original_orderid maps to its parent's orderid.
- A child order can in turn spawn another child order, as shown color-coded in the figure.
The same data in text form:
+-----------+----------------+----------------------+----------+
|orderid    |original_orderid|ttime                 |price     |
+-----------+----------------+----------------------+----------+
|988782828  |0               |2020-09-04 06:00:09.09|3444.0    |
|37377373374|0               |2020-09-04 08:41:09.09|26262.0   |
|23222223378|37377373374     |2020-09-04 09:02:55.55|33434.0   |
|2111111    |0               |2020-09-04 09:05:55.55|44334.0   |
|2422244422 |0               |2020-09-04 09:07:14.14|343434.0  |
|66666663388|23222223378     |2020-09-04 09:10:14.14|1282.0    |
|44444443391|66666663388     |2020-09-04 09:11:34.34|27272.6363|
|22222393392|44444443391     |2020-09-04 09:13:38.38|333.0     |
|77777393397|22222393392     |2020-09-04 09:14:31.31|3422.0    |
|55656563397|77777393397     |2020-09-04 09:16:58.58|27272.0   |
+-----------+----------------+----------------------+----------+
As a transformation, we need to map every child to its original parent (the one whose original_orderid is NULL) and determine how many levels deep each order sits. The expected result is each order paired with its top-level parent and its level.
This is part of a migration effort from SQL Server to Spark. In SQL Server, it was implemented as a view that accesses its parent view recursively.
Our attempt at this transformation in Spark, sketched as pseudocode:
import org.apache.spark.sql.functions._
import scala.collection.mutable

val df = spark.read.parquet(raw_data_file) // reader format is a placeholder
val parents = df.filter(col("original_orderid").isNull)
  .select(col("orderid"), col("orderid").as("parent_orderid"))
val children = df.filter(col("original_orderid").isNotNull).sort(col("ttime"))

// Collect both sides on the driver: orderid -> top-level parent_orderid
val parentCollection = mutable.Map(
  parents.collect().map(r => (r.getString(0), r.getString(1))): _*)
val childrenCollection = children.collect()

// Traverse the ttime-sorted children; a child whose original_orderid is already
// resolved gets that entry's top-level parent (ttime and price would be carried
// along in the real job); string ids assumed
for (child <- childrenCollection) {
  val oid = child.getAs[String]("orderid")
  parentCollection.get(child.getAs[String]("original_orderid"))
    .foreach(top => parentCollection(oid) = top)
}
This solution requires collecting all of the data on the driver, so it cannot be distributed and does not work for large datasets.
Can you suggest another approach that works for larger datasets, or any improvement to the existing approach above?
Solution
You can join recursively and accumulate the parents in an array. Here is a quick prototype using Spark v2.1:
import org.apache.spark.sql.functions._
import org.apache.spark.sql.DataFrame
import spark.implicits._

// append an item to an array column
val addToArray = udf((seq: Seq[String], item: String) => seq :+ item)
// concatenate two array columns (on v2.4.0+ use array_union)
val concatArray = udf((seq1: Seq[String], seq2: Seq[String]) => seq1 ++ seq2)
// last element of an array column (on v2.4.0+ use element_at and size)
val lastInArray = udf((seq: Seq[String]) => seq.lastOption.getOrElse(null))
// array column without its last element (on v2.4.0+ use slice)
val dropLastInArray = udf((seq: Seq[String]) => seq.dropRight(1))
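If you are on Spark 2.4 or later you can drop these UDFs entirely in favor of built-in array functions. A minimal sketch, assuming the parents/grandParents column names used in the prototype below (note that concat keeps duplicates, matching the UDF semantics more closely than array_union):

import org.apache.spark.sql.functions._

// last element of the chain (replaces lastInArray)
val lastParent = element_at(col("parents"), -1)
// two chains concatenated (replaces concatArray)
val mergedParents = concat(col("parents"), col("grandParents"))
// chain without its final element (replaces dropLastInArray);
// the Column-typed slice overload only arrived in Spark 3.1, so use expr on 2.4
val allButLast = expr("slice(parents, 1, size(parents) - 1)")
// chain with one more id appended (replaces addToArray); placeholder literal
val extended = concat(col("parents"), array(lit("someOrderId")))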
val raw = """|988782828  |0          |2020-09-04 06:00:09.09|3444.0    |
|37377373374|0          |2020-09-04 08:41:09.09|26262.0   |
|23222223378|37377373374|2020-09-04 09:02:55.55|33434.0   |
|2111111    |0          |2020-09-04 09:05:55.55|44334.0   |
|2422244422 |0          |2020-09-04 09:07:14.14|343434.0  |
|66666663388|23222223378|2020-09-04 09:10:14.14|1282.0    |
|44444443391|66666663388|2020-09-04 09:11:34.34|27272.6363|
|22222393392|44444443391|2020-09-04 09:13:38.38|333.0     |
|77777393397|22222393392|2020-09-04 09:14:31.31|3422.0    |
|55656563397|77777393397|2020-09-04 09:16:58.58|27272.0   |"""
val df = raw.split("\\n")
  .map(_.split("\\|").map(_.trim).filter(_.nonEmpty)) // drop the empty token before each leading |
  .map(r => (r(0), r(1), r(2), r(3)))
  .toSeq.toDF("orderId", "parentId", "ttime", "price")
  .withColumn("parents", array(col("parentId"))) // seed each chain with the direct parent
def selfJoin(df: DataFrame): DataFrame = {
  // keep joining while some row's chain has not yet reached the root marker "0"
  if (df.filter(lastInArray(col("parents")) =!= lit("0")).count > 0)
    selfJoin(
      df.join(
          df.select(col("orderId").as("id"), col("parents").as("grandParents")),
          lastInArray(col("parents")) === col("id"),
          "left")
        // extend unfinished chains with the joined row's ancestor array
        .withColumn("parents",
          when(lastInArray(col("parents")) =!= lit("0"),
            concatArray(col("parents"), col("grandParents")))
          .otherwise(col("parents")))
        .drop("grandParents")
        .drop("id"))
  else
    df
}
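A caveat on the recursion: each pass joins the plan against itself, so the logical plan roughly doubles per iteration, and the count action re-executes it. If your hierarchies are deep, one option is to cut the lineage between passes; a sketch assuming Spark 2.3+ for localCheckpoint (on v2.1, cache() plus a checkpoint directory would play the same role):

// one join pass, identical to the step inside selfJoin above
def selfJoinStep(df: DataFrame): DataFrame =
  df.join(
      df.select(col("orderId").as("id"), col("parents").as("grandParents")),
      lastInArray(col("parents")) === col("id"),
      "left")
    .withColumn("parents",
      when(lastInArray(col("parents")) =!= lit("0"),
        concatArray(col("parents"), col("grandParents")))
      .otherwise(col("parents")))
    .drop("grandParents")
    .drop("id")

// truncate lineage each iteration so the plan does not grow exponentially
def selfJoinCheckpointed(df: DataFrame): DataFrame = {
  val current = df.localCheckpoint()
  if (current.filter(lastInArray(col("parents")) =!= lit("0")).count > 0)
    selfJoinCheckpointed(selfJoinStep(current))
  else current
}

Either variant is invoked the same way: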
selfJoin(df)
  .withColumn("level", size(col("parents"))) // chain length, including the "0" sentinel
  .withColumn("top parent", lastInArray(dropLastInArray(col("parents")))) // entry just before the sentinel
  .show
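Reading the result: every resolved chain ends with the "0" sentinel, so level is the number of ancestors plus one, and top parent is the array entry just before the sentinel (null for the root orders themselves). For the sample data, the longest chain belongs to order 55656563397, which walks up through 77777393397 ... 23222223378 to top parent 37377373374.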