Multiple outputs into a single list input in Nextflow

Problem description

I'm trying to merge x bam files, generated by a single execution that runs multiple alignments (in batches of y fastq files each), into a single bam file in Nextflow.

So far I have the following, which performs the alignments and sorts/indexes the resulting bam files:

//Run minimap2 on concatenated fastqs
process miniMap2Bam {
        publishDir "$params.bamDir"
        errorStrategy 'retry'
        cache 'deep'
        maxRetries 3
        maxForks 10
        memory { 16.GB * task.attempt }

        input:
        val dirstring from dirstr
        val runString from stringRun
        each file(batchFastq) from fastqBatch.flatMap()

        output:
        val runString into stringRun1
        file("${batchFastq}.bam") into bamFiles
        val dirstring into dirstrSam

        script:
        """
        minimap2 --secondary=no --MD -2 -t 10 -a $params.genome ${batchFastq} | samtools sort -o ${batchFastq}.bam
        samtools index ${batchFastq}.bam
        """
}

where ${batchFastq}.bam is a bam file built from a batch of y fastq files.

This pipeline completes just fine. However, when trying to run samtools merge on these bam files in another process (samToolsMerge), that process runs once per alignment (4 times in this case) instead of collecting all the bam files and running once:

//Run samtools merge
process samToolsMerge {
        echo true
        publishDir "$dirstring/aligned_minimap/", mode: 'copy', overwrite: false
        cache 'deep'
        errorStrategy 'retry'
        maxRetries 3
        maxForks 10
        memory { 14.GB * task.attempt }

        input:
        val runString from stringRun1
        file bamFile from bamFiles.collect()
        val dirstring from dirstrSam

        output:
        file("**")

        script:
        """
        samtools merge ${runString}.bam ${bamFile} 
        """
}

The output is:

executor >  lsf (9)
[49/182ec0] process > catFastqs (1)     [100%] 1 of 1 ✔
[-        ] process > nanoplotSummary   -
[0e/609a7a] process > miniMap2Bam (1)   [100%] 4 of 4 ✔
[42/72469d] process > samToolsMerge (2) [100%] 4 of 4 ✔

Completed at: 04-Mar-2021 14:54:21
Duration    : 5m 41s
cpu hours   : 0.2
Succeeded   : 9

How can I take just the bam files generated by miniMap2Bam and run them through samToolsMerge once, instead of running that process multiple times?

Thanks in advance!

EDIT: Thanks to Pallie in the comments below. The problem was feeding the runString and dirstring values from a previous process into miniMap2Bam and then samToolsMerge: each value passed on those queue channels caused the process to run again.
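The repeated invocation can be reproduced with a minimal standalone sketch (made-up channel names and values, DSL1 syntax). Pairing a queue channel that emits one value per batch with a collected channel re-runs the task once per queue item, because collect() produces a value channel that can be read any number of times, while each queue-channel value triggers a fresh task:

```nextflow
// Hypothetical reproduction (DSL1): 'runStrings' stands in for the vals
// passed between processes; 'bams' for the per-batch bam files.
bams       = Channel.from('a.bam', 'b.bam', 'c.bam', 'd.bam')
runStrings = Channel.from('run', 'run', 'run', 'run')

process demo {
    echo true

    input:
    val r from runStrings          // queue channel: 4 values -> 4 invocations
    val all from bams.collect()    // value channel: one list, reused by each task

    script:
    """
    echo would merge ${all} as ${r}.bam
    """
}
```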

The fix was as simple as removing the vals from miniMap2Bam (below):

//Run minimap2 on concatenated fastqs
process miniMap2Bam {
        errorStrategy 'retry'
        cache 'deep'
        maxRetries 3
        maxForks 10
        memory { 16.GB * task.attempt }

        input:
        each file(batchFastq) from fastqBatch.flatMap()

        output:
        file("${batchFastq}.bam") into bamFiles

        script:
        """
        minimap2 --secondary=no --MD -2 -t 10 -a $params.genome ${batchFastq} | samtools sort -o ${batchFastq}.bam
        samtools index ${batchFastq}.bam
        """
}

Solution

The simplest fix is probably to stop passing the static directory and run strings through channels:

// Instead of a hardcoded path use a parameter you passed via CLI like you did with bamDir
dirString = file("/path/to/fastqs/")
runString = file("/path/to/fastqs/").getParent()
fastqBatch = Channel.from("/path/to/fastqs/")

//Run minimap2 on concatenated fastqs
process miniMap2Bam {
        publishDir "$params.bamDir"
        errorStrategy 'retry'
        cache 'deep'
        maxRetries 3
        maxForks 10
        memory { 16.GB * task.attempt }

        input:
        each file(batchFastq) from fastqBatch.flatMap()

        output:
        file("${batchFastq}.bam") into bamFiles

        script:
        """
        minimap2 --secondary=no --MD -2 -t 10 -a $params.genome ${batchFastq} | samtools sort -o ${batchFastq}.bam
        samtools index ${batchFastq}.bam
        """
}

//Run samtools merge
process samToolsMerge {
        echo true
        publishDir "$dirString/aligned_minimap/", mode: 'copy', overwrite: false
        cache 'deep'
        errorStrategy 'retry'
        maxRetries 3
        maxForks 10
        memory { 14.GB * task.attempt }

        input:
        file bamFile from bamFiles.collect()

        output:
        file("**")

        script:
        """
        samtools merge ${runString}.bam ${bamFile} 
        """