Using groupBy and agg on multiple columns in Spark Scala

Problem description

I have a DataFrame with 4 columns. I want to group by 2 of the columns and collect each of the remaining columns into a list. Example: I have a DF like this

+---+-------+--------+-----------+
|id |fName  |lName   |dob        |
+---+-------+--------+-----------+
|1  |Akash  |Sethi   |23-05-1995 |
|2  |Kunal  |Kapoor  |14-10-1992 |
|3  |Rishabh|Verma   |11-08-1994 |
|2  |Sonu   |Mehrotra|14-10-1992 |
+---+-------+--------+-----------+

and I want output like this:

+---+-----------+-------------+------------------+
|id |dob        |fName        |lName             |
+---+-----------+-------------+------------------+
|1  |23-05-1995 |[Akash]      |[Sethi]           |
|2  |14-10-1992 |[Kunal, Sonu]|[Kapoor, Mehrotra]|
|3  |11-08-1994 |[Rishabh]    |[Verma]           |
+---+-----------+-------------+------------------+

Solution

You can do this with agg and collect_list. Note that collect_list must be imported from org.apache.spark.sql.functions, and aliasing the aggregated columns keeps the output schema readable:

import org.apache.spark.sql.functions.{col, collect_list}

df.groupBy("id", "dob")
  .agg(
    collect_list(col("fName")).as("fName"),
    collect_list(col("lName")).as("lName")
  )
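Putting it together, here is a minimal end-to-end sketch that rebuilds the sample DataFrame from the question and applies the aggregation. It assumes a local SparkSession (the `local[*]` master and app name are illustrative, not from the original post). One caveat worth knowing: collect_list does not guarantee element order within each group.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, collect_list}

// Local session for the example; in a real job this is usually provided.
val spark = SparkSession.builder()
  .appName("groupby-collect-list")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

// Recreate the sample data from the question.
val df = Seq(
  (1, "Akash",   "Sethi",    "23-05-1995"),
  (2, "Kunal",   "Kapoor",   "14-10-1992"),
  (3, "Rishabh", "Verma",    "11-08-1994"),
  (2, "Sonu",    "Mehrotra", "14-10-1992")
).toDF("id", "fName", "lName", "dob")

// Group on the two key columns and collect the others into lists.
val result = df.groupBy("id", "dob")
  .agg(
    collect_list(col("fName")).as("fName"),
    collect_list(col("lName")).as("lName")
  )

result.show(false)
```

If you later need one row per (id, dob) with the names joined into a single string instead of an array, concat_ws can be applied on top of collect_list.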