Problem description
I ran a timing experiment, but I don't think I'm using dask.delayed correctly. Here is the code:
```python
import pandas as pd
import dask
import time

def my_operation(row_str: str):
    text_to_add = 'Five Michigan State University students—Ash Williams, his girlfriend, Linda; his sister, Cheryl; their friend Scott; and Scotts girlfriend Shelly—vacation at an isolated cabin in rural Tennessee. Approaching the cabin, the group notices the porch swing move on its own but suddenly stop as Scott grabs the doorknob. While Cheryl draws a picture of a clock, the clock stops, and she hears a faint, demonic voice tell her to "join us". Her hand becomes possessed, turns pale and draws a picture of a book with a demonic face on its cover. Although shaken, she does not mention the incident.'
    new_str = row_str + ' ' + text_to_add
    return new_str

def gen_sequential(n_rows: int):
    df = pd.read_csv('path/to/myfile.csv', nrows=n_rows)
    results_list = []
    tic = time.perf_counter()
    for ii in range(df.shape[0]):
        my_new_str = my_operation(df.iloc[ii, 0])
        results_list.append(my_new_str)
    toc = time.perf_counter()
    task_time = toc - tic
    return results_list, task_time

def gen_pandas_apply(n_rows: int):
    df = pd.read_csv('path/to/myfile.csv', nrows=n_rows)
    tic = time.perf_counter()
    df['gen'] = df['text'].apply(my_operation)
    toc = time.perf_counter()
    task_time = toc - tic
    return df, task_time

def gen_dask_compute(n_rows: int):
    df = pd.read_csv('path/to/myfile.csv', nrows=n_rows)
    results_list = []
    tic = time.perf_counter()
    for ii in range(df.shape[0]):
        my_new_str = dask.delayed(my_operation)(df.iloc[ii, 0])
        results_list.append(my_new_str)
    results_list = dask.compute(*results_list)
    toc = time.perf_counter()
    task_time = toc - tic
    return results_list, task_time

n_rows = 16
times = []
for ii in range(100):
    #_, t_dask_task = gen_sequential(n_rows)
    #_, t_dask_task = gen_pandas_apply(n_rows)
    _, t_dask_task = gen_dask_compute(n_rows)
    times.append(t_dask_task)
t_mean = sum(times) / len(times)
print('average time for 100 iterations: {}'.format(t_mean))
```
I tested this on 8, 64, 256, 1024, 32768, 262144, and 1048576 rows of the file (which only has about 2 million rows of text) and compared it with gen_sequential() and gen_pandas_apply(). Here are the results:
```
n_rows     sequential [s]       pandas_apply [s]     dask_compute [s]
===========================================================================
8          0.000288928459959    0.001460871489944    0.002077747459807
64         0.001723313619877    0.001805401749916    0.011105699519758
256        0.006383508619801    0.00198456062968     0.046899785500136
1024       0.022589521310038    0.002799118410258    0.197301750000333
32768      0.63460024946984     0.035047864249209    5.91377260136054
262144     5.28406698709983     0.254192861450574    50.5853837806704
1048576    21.1142608421401     0.967728560800169    195.71797474096
```
I don't think I'm using dask.delayed correctly, since the average compute time for larger values of n_rows is longer than for the other methods. I expected the advantage of dask.delayed to become apparent as the dataset grows. Does anyone know where I'm going wrong? Here is my setup:
- python: 3.7.6
- dask: 2.11.0
- pandas: 1.0.5
- OS: Pop!_OS 20.04 LTS
- virtual machine with 3 cores and 32 GB of RAM
I'm currently reading up on Vaex, but for the time being I'm limited to using dask.delayed for this project. Thanks in advance for your help!
Answer
The time my_operation takes to run on each row is tiny. Dask adds overhead to every task (even when using the "threaded" scheduler), and Python's GIL means a non-vectorised operation like this can't actually run in parallel anyway. Just as you should avoid iterating over a pandas dataframe row by row, you should also really avoid iterating over it and dispatching each row as a separate task for dask to process.
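If you do want to stay with dask.delayed, one way to amortise the per-task overhead is to hand each task a chunk of rows rather than a single row. A minimal sketch, reusing my_operation and myfile.csv from the question; the function my_operation_batch and the chunk size of 10,000 are illustrative choices of mine, not part of the original code:

```python
import pandas as pd
import dask

def my_operation_batch(rows):
    # One dask task processes a whole chunk of rows, so the scheduling
    # overhead is paid once per chunk instead of once per row.
    return [my_operation(row) for row in rows]

df = pd.read_csv('path/to/myfile.csv')
chunk_size = 10_000  # arbitrary; tune to your data

tasks = [
    dask.delayed(my_operation_batch)(df['text'].iloc[i:i + chunk_size].tolist())
    for i in range(0, df.shape[0], chunk_size)
]
# scheduler='processes' sidesteps the GIL for pure-Python string work,
# at the cost of pickling the chunks between processes.
chunks = dask.compute(*tasks, scheduler='processes')
results_list = [row for chunk in chunks for row in chunk]
```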
Did you know that dask has a pandas-like dataframe API? You could do:

```python
import dask.dataframe as dd

df = dd.read_csv('path/to/myfile.csv')
out = df['text'].map(my_operation)
```
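Note that out above is lazy; nothing runs until you compute it. As a usage sketch (the meta argument is optional and just tells dask the output name and dtype up front, so it doesn't have to infer them):

```python
# Executes the graph and returns a concrete pandas Series.
out = df['text'].map(my_operation, meta=('text', 'object'))
result = out.compute()
```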
But remember: pandas is fast and efficient, so splitting your work into chunks for dask to churn through will often not be any faster for data that fits comfortably into memory, particularly when your output data is as big as your input (as opposed to an aggregation).
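To make that concrete: for this particular operation, plain pandas can skip the per-row Python calls entirely, because string concatenation on a Series is vectorised. A minimal sketch, assuming the text column and the text_to_add string literal from the question:

```python
import pandas as pd

df = pd.read_csv('path/to/myfile.csv')
text_to_add = '...'  # the long string literal from my_operation above

# Broadcasts the scalar string across the whole column in one operation,
# instead of calling a Python function once per row as .apply() does.
df['gen'] = df['text'] + ' ' + text_to_add
```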