OOM when allocating self-attention tensors on Colab

Problem description

I am trying to implement a Self-Attention GAN with Keras on Google Colab. When I test the attention layer, I get an OOM error. So, am I doing something wrong in the matrix multiplications, or is this simply too expensive an operation for the Colab GPU at higher resolutions (> 64 x 64)?

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Layer, Conv2D, Reshape, Lambda, Activation


def hw_flatten(x):
   # Input shape x: [BATCH, HEIGHT, WIDTH, CHANNELS]
   # flatten the feature volume across the width and height dimensions

   x = Reshape((x.shape[1]*x.shape[2], x.shape[3]))(x)  # in the Reshape layer the batch dim is implicit

   return x  # return [BATCH, H*W, CHANNELS]



def matmul(couple_t):
  tensor_1 = couple_t[0]
  tensor_2 = couple_t[1]
  transpose = couple_t[2]  # boolean: transpose the second tensor before multiplying

  return tf.matmul(tensor_1, tensor_2, transpose_b=transpose)



class SelfAttention(Layer):

  def __init__(self, ch, **kwargs):
    super(SelfAttention, self).__init__(**kwargs)
    self.ch = ch


  def attentionMap(self, feature_map):

    f = Conv2D(filters=feature_map.shape[3]//8, kernel_size=(1,1), strides=1, padding='same')(feature_map)  # [bs,h,w,c']
    g = Conv2D(filters=feature_map.shape[3]//8, kernel_size=(1,1), strides=1, padding='same')(feature_map)  # [bs,h,w,c']
    h = Conv2D(filters=feature_map.shape[3], kernel_size=(1,1), strides=1, padding='same')(feature_map)     # [bs,h,w,c]

    s = Lambda(matmul)([hw_flatten(g), hw_flatten(f), True])  # attention map [bs,N,N], with N = h*w
    beta = Activation("softmax")(s)

    o = Lambda(matmul)([beta, hw_flatten(h), False])  # [bs,N,C]

    gamma = self.add_weight(name='gamma', shape=[1], initializer='zeros', trainable=True)

    o = Reshape(feature_map.shape[1:])(o)  # back to [bs,h,w,C]

    x = gamma * o + feature_map

    print(x.shape)

    return x

Here is the test:

tensor = np.random.normal(0,1,size=(32,64,64,512)).astype('float64')
attention_o = SelfAttention(64)
a = attention_o.attentionMap(tensor)

And here is the error:

OOM when allocating tensor with shape[32,4096,4096] and type double

Thanks a lot for your attention :D

Solution

Your 32x4096x4096 tensor has 536,870,912 entries! Multiplied by the number of bytes in a double (8), that single attention map is 4,294,967,296 bytes, roughly 4.3 GB, and that is before counting the rest of the activations and the gradients, so it will not fit in the Colab GPU's memory. You may want to add some max pooling layers before applying self-attention, to reduce the dimensionality of your data.
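To put numbers on that, below is a small back-of-the-envelope check plus a sketch of one common mitigation: computing in float32 and pooling the key/value branches before the matmul, as many SAGAN implementations do. It assumes TensorFlow 2.x / tf.keras; the pooling placement and the pooled helper are illustrative assumptions, not part of the answer above.

from tensorflow.keras.layers import MaxPooling2D

# Size of the attention map s = [bs, N, N] for a 64x64x512 input
bs, h, w = 32, 64, 64
n = h * w                         # 4096 spatial positions
print(bs * n * n * 8 / 1024**3)   # float64 -> 8 bytes per entry -> 4.0 GiB

# Two ways to shrink it:
# 1) cast the input to float32 (4 bytes/entry), halving the attention map;
# 2) pool the f (key) and h (value) branches so the map becomes [bs, N, N/4].
def pooled(x, pool_size=2):
  # hypothetical helper: [bs, h, w, c'] -> [bs, h/2, w/2, c']
  return MaxPooling2D(pool_size=pool_size)(x)

# e.g. inside attentionMap:
#   s = Lambda(matmul)([hw_flatten(g), hw_flatten(pooled(f)), True])   # [bs, N, N/4]
#   beta = Activation("softmax")(s)
#   o = Lambda(matmul)([beta, hw_flatten(pooled(h)), False])           # [bs, N, C]

With pool_size=2, the attention map for a 64x64 input drops from [32, 4096, 4096] to [32, 4096, 1024], i.e. from about 4 GiB to about 1 GiB in double precision, and half that again in float32.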
