Getting exit status 137 when writing a 600MB file to output

Problem description

I am trying to write a 600MB parquet file as output, and I keep getting exit status 137, which I believe is related to exceeding the memory limit. I am running the script in a PythonScriptStep, with a 4-node STANDARD_D2_V2 cluster as the compute target. I have tried PipelineData and OutputFileDatasetConfig to write the file, as well as simply writing the file to "./outputs". In all cases I get the following error:

./outputs created
bash: line 1:   116 Killed                  /azureml-envs/azureml_.../bin/python $AZ_BATCHAI_JOB_MOUNT_ROOT/workspaceblobstore/azureml/.../azureml-setup/context_manager_injector.py "-i" "ProjectPythonPath:context_managers.ProjectPythonPath" "-i" "Dataset:context_managers.Datasets" "-i" "RunHistory:context_managers.RunHistory" "-i" "TrackUserError:context_managers.TrackUserError" "load_data.py" "--image_path" "DatasetConsumptionConfig:train_dataset" "--train_label" "$AZUREML_DATAREFERENCE_train_label_data" "--val_label" "$AZUREML_DATAREFERENCE_val_label_data" "--val_pixel" "$AZUREML_DATAREFERENCE_val_pixel_data" "--train_pixel" "./outputs"
2020/10/19 17:32:37 logger.go:297: Failed to run the wrapper cmd with err: exit status 137
2020/10/19 17:32:37 logger.go:297: Attempt 1 of http call to http://10.0.0.4:16384/sendlogstoartifacts/status
2020/10/19 17:32:38 sysutils_linux.go:221: mpirun version string: {
Intel(R) MPI Library for Linux* OS,Version 2018 Update 3 Build 20180411 (id: 18329)
copyright 2003-2018 Intel Corporation.
}
2020/10/19 17:32:38 sysutils_linux.go:225: MPI publisher: intel ; version: 2018
2020/10/19 17:32:38 logger.go:297: Process Exiting with Code:  137

Here is my code. I call the following in my source.py file:

# Prepare data
train_label_data = PipelineData("train_label_data", datastore=ds).as_dataset()
# train_pixel_data = PipelineData("train_pixel_data", datastore=ds).as_dataset()
# train_pixel_data = OutputFileDatasetConfig(
#     destination=(ds, 'outputdataset')).register_on_complete(
#         name='train_pixel_data')
val_label_data = PipelineData("val_label_data", datastore=ds).as_dataset()
val_pixel_data = PipelineData("val_pixel_data", datastore=ds).as_dataset()

train_pixel_data = "./outputs"
image_path = train_dataset.as_named_input('train_dataset').as_download()

loadDataStep = PythonScriptStep(
    name="Load pixel and label data",
    script_name="load_data.py",
    arguments=["--image_path", image_path,
               "--train_label", train_label_data,
               "--val_label", val_label_data,
               "--val_pixel", val_pixel_data,
               "--train_pixel", train_pixel_data],
    inputs=[train_csv_dataset.as_named_input('train_csv')],
    outputs=[train_label_data, val_pixel_data],
    compute_target=cpu_compute_target,
    runconfig=cpu_run_config,
    source_directory=script_folder,
    allow_reuse=False
)
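
For context, the OutputFileDatasetConfig variant mentioned in the commented-out lines above would be wired into the step roughly as in the sketch below (assuming a reasonably recent azureml-core SDK; the exact wiring here is illustrative, not copied from the real pipeline):

from azureml.data import OutputFileDatasetConfig

# Output that is written back to the datastore when the step completes,
# rather than being staged under the node-local ./outputs folder.
train_pixel_data = OutputFileDatasetConfig(
    name='train_pixel_data',
    destination=(ds, 'outputdataset')
).register_on_complete(name='train_pixel_data')

loadDataStep = PythonScriptStep(
    name="Load pixel and label data",
    script_name="load_data.py",
    # An OutputFileDatasetConfig passed via arguments is picked up as a
    # step output, so it normally does not also need to be listed in outputs.
    arguments=["--train_pixel", train_pixel_data],
    compute_target=cpu_compute_target,
    runconfig=cpu_run_config,
    source_directory=script_folder,
    allow_reuse=False
)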

Here is the code in "load_data.py":

import os

import pandas as pd


# Write dictionary to file
def write_dict(dict_file, output_path, parquet=False):
    df = pd.DataFrame(dict_file.items()).rename(
        columns={0: 'image', 1: 'array'})
    if parquet is True:
        df.to_parquet(output_path, engine='pyarrow',
                      compression='gzip', index=False)
    else:
        df.to_csv(output_path, header=['image', 'array'], index=False)


# Create output
def create_output(arg_output, output_file_name, dict_file, parquet):
    if arg_output is not None:
        os.makedirs(arg_output, exist_ok=True)
        print("%s created" % arg_output)
        output_path = arg_output + output_file_name
        write_dict(dict_file, output_path, parquet=parquet)
        print("{} has been created.".format(output_file_name))
        return output_path


# train_pixels
train_pixels = create_output(arg_output=args.train_pixel,
                             output_file_name="/train_pixels.parquet.gzip",
                             dict_file=train_pixels,
                             parquet=True)
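
As an aside on memory: write_dict materialises the whole dictionary as a single DataFrame and hands it to to_parquet in one call, so the source data, the frame and the parquet/gzip encoding buffers all coexist in RAM, which can be tight on a D2_V2 node. One way to lower the peak is to stream the file in row-group chunks with pyarrow's ParquetWriter. The sketch below is illustrative only; the helper name and chunk size are not from the original script, and the source dictionary itself still has to fit in memory.

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq


def write_dict_chunked(dict_file, output_path, chunk_size=10_000):
    # Hypothetical helper: writes the dict to parquet in row-group chunks
    # instead of building one large DataFrame and encoding it in one go.
    items = list(dict_file.items())
    writer = None
    try:
        for start in range(0, len(items), chunk_size):
            chunk = pd.DataFrame(items[start:start + chunk_size],
                                 columns=['image', 'array'])
            table = pa.Table.from_pandas(chunk, preserve_index=False)
            if writer is None:
                # Create the writer lazily so the schema is inferred from the data.
                writer = pq.ParquetWriter(output_path, table.schema,
                                          compression='gzip')
            writer.write_table(table)
    finally:
        if writer is not None:
            writer.close()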

My question is: how should one go about passing large datasets like this between pipeline steps? Any help would be greatly appreciated.
