Problem description
Can I replace multiple values in a PySpark dataframe column in one line of code, using regexp_replace or some equivalent?
Here is the code that creates my dataframe:
from pyspark import SparkContext
from pyspark.sql import SQLContext
from datetime import datetime

sc = SparkContext.getOrCreate()
sqlContext = SQLContext(sc)

data1 = [
    ('George', datetime(2010, 3, 24, 7, 19, 58), 13),
    ('George', datetime(2020, 9, 24, 7, 19, 6), 8),
    ('George', datetime(2009, 12, 12, 22, 21, 30), 5),
    ('Micheal', datetime(2010, 11, 22, 18, 29, 40), 12),
    ('Maggie', datetime(2010, 2, 8, 8, 31, 23), 8),
    ('ravi', datetime(2009, 1, 1, 9, 19, 47), 2),
    ('Xien', datetime(2010, 3, 2, 9, 33, 51), 3),
]
df1 = sqlContext.createDataFrame(data1, ['name', 'trial_start_time', 'purchase_time'])
df1.show(truncate=False)
Here is the dataframe:
+-------+-------------------+-------------+
|name |trial_start_time |purchase_time|
+-------+-------------------+-------------+
|George |2010-03-24 07:19:58|13 |
|George |2020-09-24 07:19:06|8 |
|George |2009-12-12 22:21:30|5 |
|Micheal|2010-11-22 18:29:40|12 |
|Maggie |2010-02-08 08:31:23|8 |
|ravi |2009-01-01 09:19:47|2 |
|Xien |2010-03-02 09:33:51|3 |
+-------+-------------------+-------------+
from pyspark.sql.functions import regexp_replace,regexp_extract,col
df1.withColumn("name",regexp_replace('name',"ravi","ravi_renamed")).show()
Here is the output:
+------------+-------------------+-------------+
| name| trial_start_time|purchase_time|
+------------+-------------------+-------------+
| George|2010-03-24 07:19:58| 13|
| George|2020-09-24 07:19:06| 8|
| George|2009-12-12 22:21:30| 5|
| Micheal|2010-11-22 18:29:40| 12|
| Maggie|2010-02-08 08:31:23| 8|
|ravi_renamed|2009-01-01 09:19:47| 2|
| Xien|2010-03-02 09:33:51| 3|
+------------+-------------------+-------------+
In pandas, I can replace multiple strings in one line of code with a lambda expression:
df1['name'].apply(lambda x: x.replace('George', 'George_renamed1').replace('ravi', 'ravi_renamed2'))
I'm not sure whether this can be done in PySpark with regexp_replace. Perhaps there is another option? When I read about using lambda expressions in PySpark, it seems I would have to create a UDF, which feels a bit long-winded. But I'm curious whether I can simply run some kind of regex over multiple strings in one line of code, as above.
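As a side note, PySpark's regexp_replace returns a Column, so calls can be nested to perform several replacements in a single expression, much like the chained pandas replace above. A plain-Python sketch of the same nesting using re.sub (no Spark required; the function name rename is illustrative):

```python
import re

# Each re.sub returns a new string, so calls nest into one expression,
# the same way nested regexp_replace calls each return a new Column.
def rename(x):
    return re.sub('ravi', 'ravi_renamed2', re.sub('George', 'George_renamed1', x))

print([rename(n) for n in ['George', 'ravi', 'Xien']])
# ['George_renamed1', 'ravi_renamed2', 'Xien']
```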
Solution
Here is what you are looking for:
Using when() (most readable)
from pyspark.sql.functions import when, col

df1 = df1.withColumn('name', when(col('name') == 'George', 'George_renamed1')
                     .when(col('name') == 'Ravi', 'Ravi_renamed2')
                     .otherwise(col('name')))
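The when() chain behaves like an if/elif ladder with a fallback: the first matching branch wins, and unmatched rows keep their original value. A minimal plain-Python sketch of the same logic (no Spark required; the function name rename is illustrative):

```python
# Mirrors when(...).when(...).otherwise(...): first match wins,
# and names with no matching branch fall through unchanged.
def rename(name):
    if name == 'George':
        return 'George_renamed1'
    elif name == 'Ravi':
        return 'Ravi_renamed2'
    return name

print([rename(n) for n in ['George', 'Ravi', 'Micheal']])
# ['George_renamed1', 'Ravi_renamed2', 'Micheal']
```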
With a mapping expr (less explicit, but handy if there are many values to replace)
from pyspark.sql.functions import expr

df1 = df1.withColumn('name', expr("coalesce(map('George','George_renamed1','Ravi','Ravi_renamed2')[name], name)"))
or, if you already have a list you want to use, i.e. a flattened list of old/new name pairs:
name_changes = ['George', 'George_renamed1', 'Ravi', 'Ravi_renamed2']
# str()[1:-1] converts the list to a string and removes the [ ]
df1 = df1.withColumn('name', expr(f'coalesce(map({str(name_changes)[1:-1]})[name], name)'))
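The string-building step can be checked without Spark. This sketch shows the SQL expression that expr() ends up receiving, assuming name_changes holds flattened old/new pairs:

```python
# Assumed key/value pairs: each old name followed by its replacement.
name_changes = ['George', 'George_renamed1', 'Ravi', 'Ravi_renamed2']

# str() renders the list with brackets; [1:-1] strips the leading [ and trailing ]
map_args = str(name_changes)[1:-1]
sql_expr = f'coalesce(map({map_args})[name], name)'
print(sql_expr)
# coalesce(map('George', 'George_renamed1', 'Ravi', 'Ravi_renamed2')[name], name)
```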
The same as above, but using only functions imported from pyspark
from pyspark.sql.functions import create_map, lit, coalesce

mapping_expr = create_map([lit(x) for x in name_changes])
df1 = df1.withColumn('name', coalesce(mapping_expr[df1['name']], df1['name']))
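Conceptually, the map lookup yields NULL for names absent from the map, and coalesce then falls back to the original column. The same fallback logic in plain Python (a sketch, no Spark required):

```python
mapping = {'George': 'George_renamed1', 'Ravi': 'Ravi_renamed2'}

# dict.get(n, n) mirrors coalesce(mapping_expr[name], name):
# mapped names are renamed, everything else passes through unchanged.
names = ['George', 'Micheal', 'Ravi', 'Xien']
print([mapping.get(n, n) for n in names])
# ['George_renamed1', 'Micheal', 'Ravi_renamed2', 'Xien']
```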
Result
df1.show()
+---------------+-------------------+-------------+
| name| trial_start_time|purchase_time|
+---------------+-------------------+-------------+
|George_renamed1|2010-03-24 03:19:58| 13|
|George_renamed1|2020-09-24 03:19:06| 8|
|George_renamed1|2009-12-12 17:21:30| 5|
| Micheal|2010-11-22 13:29:40| 12|
| Maggie|2010-02-08 03:31:23| 8|
| Ravi_renamed2|2009-01-01 04:19:47| 2|
| Xien|2010-03-02 04:33:51| 3|
+---------------+-------------------+-------------+