Problem description
On Node1 (4 CPU, 8 GB) I start the Dask scheduler:

dask-scheduler --host 0.0.0.0 --port 8786

On Node2 (8 CPU, 32 GB) and Node3 (8 CPU, 32 GB) I start the workers:

dask-worker tcp://xxx.xxx.xxx.xxx:8786 --nanny-port 3000:3004 --worker-port 3100:3104 --dashboard-address :8789
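For reference, a quick way to confirm that both workers actually registered with the scheduler is to connect a client and inspect the scheduler's worker list. This is a minimal sketch; the scheduler address is a placeholder for the real one above:

from dask.distributed import Client

# Connect to the scheduler started on Node1 (placeholder address).
client = Client('tcp://xxx.xxx.xxx.xxx:8786')

# scheduler_info() returns a dict describing the cluster; the 'workers'
# entry should contain one item per connected dask-worker process.
info = client.scheduler_info()
for addr, worker in info['workers'].items():
    print(addr, worker['nthreads'], 'threads')

If only one worker shows up here, the scheduler never saw the other node and no code change in the processing script will spread the load.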
Here is my prototype, with the some_private_processing and some_processing methods redacted:
import pandas as pd
import dask.dataframe as dd
from dask.distributed import Client

N_CORES = 16
THREADS_PER_WORKER = 2

dask_cluster = Client('127.0.0.1:8786')

def get_clean_str1(str1):
    # Default: (clean_str1, match_flag, bad_str1_flag, private_str1, private_match_flag)
    ret_tuple = None, False, True, None, False
    if not str1:
        return ret_tuple
    if string_validators(str1) is not True:
        return ret_tuple
    data = some_processing(str1)
    match_flag = False
    if str1 == data.get('formated_str1'):
        match_flag = True
    private_data = some_private_processing(str1)
    private_str1 = private_data.get('formated_private_str1')
    private_match_flag = False
    if str1 == private_str1:
        private_match_flag = True
    # Valid string: bad_str1_flag is False in the returned 5-tuple.
    ret_tuple = str1, match_flag, False, private_str1, private_match_flag
    return ret_tuple
files = [
    'part-00000-abcd.gz.parquet',
    'part-00001-abcd.gz.parquet',
    'part-00002-abcd.gz.parquet',
]
print('Starting...')
for idx, each_file in enumerate(files):
    dask_cluster.restart()
    print(f'Processing file {idx}: {each_file}')
    # Read on the client with pandas, then hand the frame to Dask.
    all_str1s_df = pd.read_parquet(each_file, engine='pyarrow')
    print(f'Read file {idx}: {each_file}')
    all_str1s_df = dd.from_pandas(all_str1s_df, npartitions=16000)
    print(f'Starting file processing {idx}: {each_file}')
    str1_res_tuple = all_str1s_df.map_partitions(
        lambda part: part.apply(
            lambda x: get_clean_str1(x['str1']), axis=1
        ),
        meta=('str1', 'object'),  # each partition yields a Series of 5-tuples
    )
    # NOTE: iterating the lazy series here pulls every result back to the client.
    (clean_str1, match_flag, bad_str1_flag,
     private_str1, private_match_flag) = zip(*str1_res_tuple)
    all_str1s_df = all_str1s_df.assign(clean_str1=pd.Series(clean_str1))
    all_str1s_df = all_str1s_df.assign(match_flag=pd.Series(match_flag))
    all_str1s_df = all_str1s_df.assign(bad_str1_flag=pd.Series(bad_str1_flag))
    all_str1s_df = all_str1s_df.assign(private_str1=pd.Series(private_str1))
    all_str1s_df = all_str1s_df.assign(private_match_flag=pd.Series(private_match_flag))
    # Keep only the rows whose string did not match.
    all_str1s_df = all_str1s_df[all_str1s_df['match_flag'] == False]
    all_str1s_df = all_str1s_df.repartition(npartitions=200)
    all_str1s_df.to_csv(f'results-str1s-{idx}-*.csv')
    print(f'Finished file {idx}: {each_file}')
This process takes more than 8 hours, and I can see that all the data gets processed on only one node, either Node2 or Node3, but never on both Node2 and Node3 at the same time.

I need help understanding what is happening here and figuring out what I am doing wrong, such that this simple data transformation runs for more than 8 hours and still does not finish.
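For reference, one way to see how tasks are actually being spread across the workers, besides watching the dashboard, is to ask the scheduler for its per-worker view. A minimal sketch using the distributed client (dask_cluster is the client from the prototype above):

# Snapshot of how work is distributed right now.
# processing() maps worker address -> task keys currently assigned to it;
# has_what() maps worker address -> keys whose results it holds in memory.
for addr, tasks in dask_cluster.processing().items():
    print(addr, len(tasks), 'tasks in flight')
for addr, keys in dask_cluster.has_what().items():
    print(addr, len(keys), 'results in memory')

If one worker address dominates both listings, the scheduler is effectively running the job on a single node.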
Solution
I increased the timeouts and increased the memory. After that, it started working and no longer failed or hung.
timeouts:
  connect: 180s          # time before connecting fails
  tcp: 180s              # time before calling an unresponsive connection dead
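The same timeouts can also be set programmatically before the client connects, which avoids editing the YAML configuration on every machine. A minimal sketch; the values mirror the YAML above, and in current Dask versions these settings live under the distributed.comm.timeouts.* config keys:

import dask

# Equivalent of the YAML snippet above: raise the comm timeouts
# before creating the Client / starting workers on this machine.
dask.config.set({
    'distributed.comm.timeouts.connect': '180s',
    'distributed.comm.timeouts.tcp': '180s',
})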