CUDA thrust::sort runs into memory problems while I still have enough memory

Problem description

I am using CUDA 10.2 on Ubuntu 18.04. My GPU is a Tesla T4 with 16 GB of memory, and no other program is running on this GPU. A short reproducer is as follows:

#include <iostream>
#include <algorithm>
#include <random>
#include <vector>
#include <numeric>
#include <chrono>

#include <cuda_runtime.h>
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/sequence.h>
#include <thrust/for_each.h>
#include <thrust/sort.h>
#include <thrust/execution_policy.h>


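// Sorts the idx-th length-`stride` segment of `data` in place;
// invoked once per segment index by thrust::for_each below.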
struct sort_functor {

    thrust::device_ptr<float> data;
    int stride = 1;
    __host__ __device__
    void operator()(int idx) {
        thrust::sort(thrust::device,
                     data + idx * stride,
                     data + (idx + 1) * stride);
    }
};


int main() {
    std::random_device rd;
    std::mt19937 engine;
    engine.seed(rd());
    std::uniform_real_distribution<float> u(0,90.);

    int M = 8;
    int N = 8 * 384 * 300;

    std::vector<float> v(M * N);
    std::generate(v.begin(),v.end(),[&](){return u(engine);});
    thrust::host_vector<float> hv(v.begin(),v.end());
    thrust::device_vector<float> dv = hv;

    thrust::device_vector<float> res(dv.begin(),dv.end());

    thrust::device_vector<int> index(M);
    thrust::sequence(thrust::device, index.begin(), index.end(), 0, 1);

    // sort each of the M segments of length N independently, from device code
    thrust::for_each(thrust::device, index.begin(), index.end(),
            sort_functor{res.data(), N});
    cudaDeviceSynchronize();

    return 0;
}

The error message is:

temporary_buffer::allocate: get_temporary_buffer failed
temporary_buffer::allocate: get_temporary_buffer failed
temporary_buffer::allocate: get_temporary_buffer failed
temporary_buffer::allocate: get_temporary_buffer failed
temporary_buffer::allocate: get_temporary_buffer failed
temporary_buffer::allocate: get_temporary_buffer failed
terminate called after throwing an instance of 'thrust::system::system_error'
  what():  for_each: failed to synchronize: cudaErrorLaunchFailure: unspecified launch failure
Aborted (core dumped)

How can I fix this problem?

Solution

thrust::sort requires O(N) temporary memory. When you call it from device code (here, inside the functor), that temporary allocation is made for each call, i.e. for each of your 8 calls, using new or malloc under the hood on the device, and it comes out of the "device heap" space. The device heap is limited to 8 MB by default, but you can change that. You are hitting this limit.
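To make that mechanism concrete, here is a minimal standalone sketch of my own (not part of the original post; heap_alloc_demo is a hypothetical kernel name): in-kernel malloc/new is served from this separate device heap, and once the cudaLimitMallocHeapSize budget is exhausted it simply returns a null pointer, which is what thrust's temporary buffer allocation then reports as a failure.

#include <cstdio>
#include <cuda_runtime.h>

__global__ void heap_alloc_demo(size_t bytes, int *ok) {
    // In-kernel malloc draws from the device heap (cudaLimitMallocHeapSize),
    // not from ordinary cudaMalloc memory; it returns nullptr once that heap is full.
    char *p = (char *)malloc(bytes);
    *ok = (p != nullptr);
    if (p) free(p);
}

int main() {
    int *ok = nullptr;
    cudaMallocManaged(&ok, sizeof(int));
    heap_alloc_demo<<<1, 1>>>(16u << 20, ok);   // request 16 MB, more than the 8 MB default heap
    cudaDeviceSynchronize();
    std::printf("in-kernel allocation %s\n", *ok ? "succeeded" : "failed");   // expected: failed
    cudaFree(ok);
    return 0;
}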

If you add the following at the top of your main routine:

cudaError_t err = cudaDeviceSetLimit(cudaLimitMallocHeapSize,1048576ULL*1024);

your code runs for me without any runtime errors.
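As a side note (my addition, not part of the original answer), you can check that the call succeeded and that the new limit actually took effect by reading it back with the standard runtime call cudaDeviceGetLimit. A minimal sketch, placed at the top of main after the includes (requires <cstdio>):

    cudaError_t err = cudaDeviceSetLimit(cudaLimitMallocHeapSize, 1048576ULL * 1024);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaDeviceSetLimit failed: %s\n", cudaGetErrorString(err));
    }
    size_t heap_bytes = 0;
    cudaDeviceGetLimit(&heap_bytes, cudaLimitMallocHeapSize);   // read the limit back
    std::printf("device malloc heap size: %zu bytes\n", heap_bytes);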

I'm not suggesting that the 1 GB value above was carefully calculated. I simply picked a value that is much larger than 8 MB and much smaller than 16 GB, and it seems to work. In general, you should carefully estimate the temporary allocation size you need.
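For this particular program, one rough way to derive such an estimate is sketched below. This is my own rule of thumb, not a value verified by the answer: I assume each of the M in-functor sorts needs on the order of N floats of scratch space, that all M may allocate concurrently, and I add an arbitrary safety factor for allocator overhead.

    // Rough sizing: M segments, each sort needs on the order of N floats of scratch space.
    size_t per_sort   = size_t(N) * sizeof(float);        // ~3.7 MB for N = 8 * 384 * 300
    size_t safety     = 4;                                // assumed margin for allocator/bookkeeping overhead
    size_t heap_bytes = size_t(M) * per_sort * safety;    // ~118 MB here, well under the 16 GB card
    cudaDeviceSetLimit(cudaLimitMallocHeapSize, heap_bytes);

I have not verified that this particular figure is sufficient for thrust's internal sort; if a carefully sized value still fails, over-provisioning as in the answer (1 GB) is the pragmatic fallback.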
