How do I make my interactive job and my submitted job match 100% in Condor?

Problem description

I want my submitted (batch) script to run exactly the same as my interactive job. I don't know why I can't make them identical in Condor. I have tried:

getenv = True

as suggested in How do I have condor automatically import my conda environment when running my python jobs? (funnily enough, I asked a similar question before!).

The problem I am running into is that my pytorch script works in interactive mode but not as a submitted job. The jobs look identical to me (same pytorch version, same environment, same cuda version), but I cannot get both to work. Only the interactive job works.
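One way to pin down the difference is to dump the full environment from both an interactive session and a submitted job and diff the two. A minimal sketch (the script name, label argument, and output file names are my own choice):

```shell
#!/bin/bash
# dump_env.sh -- capture the full environment of a job so that an
# interactive run and a submitted run can be diffed afterwards.
label="${1:-unknown}"           # e.g. "interactive" or "batch"
env | sort > "env_${label}.txt"
echo "wrote env_${label}.txt ($(wc -l < "env_${label}.txt") variables)"
```

Running it once under `condor_submit -i` and once as the job's executable, then `diff env_interactive.txt env_batch.txt`, shows exactly which variables `getenv = True` did or did not carry over.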

Is there anything else, besides

getenv = True

that I can do to make them the same?


By the way, I can't even run module load in the submission script to make sure the right cuda version is loaded. The batch script really does seem to start from "scratch" and won't even let me do that, even with that flag. Interactive mode seems fine, though.
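`module` is usually a shell function defined by an Environment Modules (or Lmod) init file that only login shells source, so a batch job's non-login shell may simply never see it. A hedged check, assuming the init script lives at the typical `/etc/profile.d/modules.sh` path (adjust for your cluster):

```shell
#!/bin/bash
# Batch shells are often non-login shells, so the `module` function from
# Environment Modules / Lmod is never defined. Detect and source its init file.
MODULE_INIT=/etc/profile.d/modules.sh   # assumption: typical install path
if ! type module >/dev/null 2>&1 && [ -f "$MODULE_INIT" ]; then
    source "$MODULE_INIT"
fi
if type module >/dev/null 2>&1; then
    MODULE_STATUS=available             # `module load ...` should now work
else
    MODULE_STATUS=missing               # init file not found at assumed path
fi
echo "module command: $MODULE_STATUS"
```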


I also noticed that my PATH is not the same across login, interactive, and submitted jobs. See:


# export PATH=/home/miranda9/miniconda3/envs/automl-Meta-learning/bin:/home/miranda9/miniconda3/condabin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/miranda9/my_bins:/home/miranda9/bin
# export PATH=/usr/local/cuda/bin:/home/miranda9/miniconda3/envs/automl-Meta-learning/bin:/home/miranda9/miniconda3/condabin:/usr/local/cuda/bin:/usr/local/bin:/usr/bin:/home/miranda9/my_bins:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/miranda9/my_bins:/home/miranda9/bin
export PATH=/usr/local/cuda/bin:/home/miranda9/miniconda3/envs/Metalearning/bin:/home/miranda9/miniconda3/condabin:/usr/local/cuda/bin:/usr/local/bin:/usr/bin:/home/miranda9/my_bins:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/miranda9/my_bins:/home/miranda9/bin
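Colon-separated PATH strings like the ones above are hard to compare by eye; splitting them one entry per line and using `comm` surfaces exactly which directories one job has and the other lacks. A sketch with shortened example values (the real PATHs from above would be pasted in):

```shell
#!/bin/bash
# Compare two PATH values entry by entry; entries present in one but not
# the other are the likely culprits (e.g. the cuda bin directory).
path_a="/usr/local/cuda/bin:/usr/bin:/home/miranda9/bin"   # e.g. interactive PATH
path_b="/usr/bin:/home/miranda9/bin"                       # e.g. batch PATH
tr ':' '\n' <<< "$path_a" | sort -u > /tmp/path_a.txt
tr ':' '\n' <<< "$path_b" | sort -u > /tmp/path_b.txt
# comm -3 suppresses common lines: column 1 = only in a, column 2 = only in b
comm -3 /tmp/path_a.txt /tmp/path_b.txt
```

Here the output would show `/usr/local/cuda/bin` as present only in the first PATH.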

Cross-posted:


In case it is a pytorch issue, I found these links (but since the script runs in an interactive session, I doubt it is a pytorch problem):


Here is my current job.sub submit script:

####################
#
# Experiments script
# Simple HTCondor submit description file
#
# reference: https://gitlab.engr.illinois.edu/Vision/vision-gpu-servers/-/wikis/HTCondor-user-guide#submit-jobs
#
# chmod a+x test_condor.py
# chmod a+x experiments_Meta_model_optimization.py
# chmod a+x Meta_learning_experiments_submission.py
# chmod a+x download_miniImagenet.py
# chmod a+x ~/Meta-learning-lstm-pytorch/main.py
# chmod a+x /home/miranda9/automl-Meta-learning/automl-proj/Meta_learning/datasets/rand_fc_nn_vec_mu_ls_gen.py
# chmod a+x /home/miranda9/automl-Meta-learning/automl-proj/experiments/Meta_learning/supervised_experiments_submission.py
# chmod a+x /home/miranda9/automl-Meta-learning/results_plots/is_rapid_learning_real.py
# chmod a+x /home/miranda9/automl-Meta-learning/test_condor.py
# chmod a+x /home/miranda9/ML4Coq/main.sh
# chmod a+x /home/miranda9/ML4Coq/ml4coq-proj/PosEval/download_data.py
# chmod a+x /home/miranda9/ML4Coq/ml4coq-proj/pos_eval/create_pos_eval_dataset.sh
# chmod a+x /home/miranda9/ML4Coq/ml4coq-proj/embeddings_zoo/tree_nns/main_brando.py
# chmod a+x /home/miranda9/ML4Coq/main.sh
# condor_submit -i
# condor_submit job.sub
#
####################

# Executable = /home/miranda9/automl-Meta-learning/automl-proj/experiments/Meta_learning/supervised_experiments_submission.py

# Executable = /home/miranda9/automl-Meta-learning/automl-proj/experiments/Meta_learning/Meta_learning_experiments_submission.py
# SUBMIT_FILE = Meta_learning_experiments_submission.py

# Executable = /home/miranda9/Meta-learning-lstm-pytorch/main.py
# Executable = /home/miranda9/automl-Meta-learning/automl-proj/Meta_learning/datasets/rand_fc_nn_vec_mu_ls_gen.py

# Executable = /home/miranda9/automl-Meta-learning/results_plots/is_rapid_learning_real.py
# SUBMIT_FILE = is_rapid_learning_real.py

# Executable = /home/miranda9/automl-Meta-learning/test_condor.py

# Executable = /home/miranda9/ML4Coq/ml4coq-proj/embeddings_zoo/tree_nns/main_brando.py
# SUBMIT_FILE = main_brando.py

# Executable = /home/miranda9/ML4Coq/ml4coq-proj/PosEval/download_data.py
# SUBMIT_FILE = ml4coq-proj/PosEval/download_data.py

# Executable = /home/miranda9/ML4Coq/ml4coq-proj/pos_eval/create_pos_eval_dataset.sh
# SUBMIT_FILE = create_pos_eval_dataset.sh

Executable = /home/miranda9/ML4Coq/main.sh
SUBMIT_FILE = main.sh

# Output Files
Log          = $(SUBMIT_FILE).log$(CLUSTER)
Output       = $(SUBMIT_FILE).o$(CLUSTER)
Error        = $(SUBMIT_FILE).e$(CLUSTER)

getenv = True
# cuda_version = 10.2
# cuda_version = 11.0

# Use this to make sure 1 gpu is available. The keywords are case-insensitive.
# REquest_gpus = 1
REquest_gpus = 2
# Note: a later "requirements" line replaces an earlier one, so combine conditions with &&
requirements = (CUDADeviceName != "Tesla K40m") && (CUDADeviceName != "GeForce GTX TITAN X")
# requirements = (CUDADeviceName == "Quadro RTX 6000")
# requirements = ((CUDADeviceName != "Tesla K40m")) && (TARGET.Arch == "X86_64") && (TARGET.OpSys == "LINUX") && (TARGET.disk >= Requestdisk) && (TARGET.Memory >= RequestMemory) && (TARGET.cpus >= Requestcpus) && (TARGET.gpus >= Requestgpus) && ((TARGET.FileSystemDomain == MY.FileSystemDomain) || (TARGET.HasFileTransfer))
# requirements = (CUDADeviceName == "Tesla K40m")
# requirements = (CUDADeviceName == "GeForce GTX TITAN X")

# Note: to use multiple cpus instead of the default (one cpu), use request_cpus as well
# Request_cpus = 1
Request_cpus = 4
# Request_cpus = 5
# Request_cpus = 8
# Request_cpus = 16
# Request_cpus = 32

# E-mail option
Notify_user = [email protected]
Notification = always

Environment = MY_CONDOR_JOB_ID=$(CLUSTER)

# "Queue" adds the job described above to the queue (it needs to be at the end of the file).
Queue

The main.sh script:

#!/bin/bash

echo JOB STARTED

# export PATH=/home/miranda9/miniconda3/envs/automl-Meta-learning/bin:/home/miranda9/miniconda3/condabin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/miranda9/my_bins:/home/miranda9/bin
# export PATH=/usr/local/cuda/bin:/home/miranda9/miniconda3/envs/automl-Meta-learning/bin:/home/miranda9/miniconda3/condabin:/usr/local/cuda/bin:/usr/local/bin:/usr/bin:/home/miranda9/my_bins:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/miranda9/my_bins:/home/miranda9/bin
export PATH=/usr/local/cuda/bin:/home/miranda9/miniconda3/envs/Metalearning/bin:/home/miranda9/miniconda3/condabin:/usr/local/cuda/bin:/usr/local/bin:/usr/bin:/home/miranda9/my_bins:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/miranda9/my_bins:/home/miranda9/bin
# module load cuda-toolkit/10.2
# module load cuda-toolkit/11.1

# echo $PATH
# nvidia-smi
# conda list
which python

# - run script
python ~/ML4Coq/ml4coq-proj/embeddings_zoo/tree_nns/main_brando.py
# python ~/ML4Coq/ml4coq-proj/embeddings_zoo/tree_nns/main_brando.py --debug --num_epochs 5 --batch_size 2 --term_encoder_embedding_dim 8

echo JOB ENDED
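Rather than hard-coding the env's bin directories into PATH as main.sh does above, it is often more robust in a batch script to activate the conda environment explicitly. A sketch, assuming miniconda3 under the home directory and the env name `Metalearning` from the scripts above (guarded so it degrades gracefully if conda is not at that path):

```shell
#!/bin/bash
# Sketch: activate the conda env explicitly instead of hard-coding PATH.
# The miniconda location and env name ("Metalearning") are taken from the
# question's scripts and are assumptions about this particular setup.
CONDA_SH="$HOME/miniconda3/etc/profile.d/conda.sh"
if [ -f "$CONDA_SH" ]; then
    source "$CONDA_SH"        # defines the `conda` shell function
    conda activate Metalearning
    ACTIVATED=yes             # `which python` should now point into the env
else
    ACTIVATED=no              # conda not installed at the assumed path
fi
echo "conda activated: $ACTIVATED (python: $(which python))"
```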

Workaround

No effective solution has been found for this problem yet.
