Problem description
The AttentionQKV layer as implemented in Trax is shown below:
# `cb` and `core` below refer to trax.layers.combinators and trax.layers.core.
def AttentionQKV(d_feature, n_heads=1, dropout=0.0, mode='train'):
  """Returns a layer that maps (q, k, v, mask) to (activations, mask).

  See `Attention` above for further context/details.

  Args:
    d_feature: Depth/dimensionality of feature embedding.
    n_heads: Number of attention heads.
    dropout: Probabilistic rate for internal dropout applied to attention
        activations (based on query-key pairs) before dotting them with values.
    mode: One of `'train'`, `'eval'`, or `'predict'`.
  """
  return cb.Serial(
      cb.Parallel(
          core.Dense(d_feature),
          core.Dense(d_feature),
          core.Dense(d_feature),
      ),
      PureAttention(  # pylint: disable=no-value-for-parameter
          n_heads=n_heads, dropout=dropout, mode=mode),
  )
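For reference, here is a minimal NumPy sketch (not the Trax source) of what this composition computes: the three core.Dense(d_feature) layers inside cb.Parallel each project one of the incoming streams, and PureAttention then applies scaled dot-product attention to the projected q, k, v. The names attention_qkv_sketch, W_q, b_q, etc. are hypothetical stand-ins for the learned Dense parameters, and heads/dropout are omitted for brevity.

import numpy as np

def dense(x, w, b):
    # The affine map a single core.Dense layer applies to its input.
    return x @ w + b

def attention_qkv_sketch(q, k, v, mask, W_q, b_q, W_k, b_k, W_v, b_v):
    # Each of q, k, v passes through its own projection (the three parallel Dense layers).
    q_p = dense(q, W_q, b_q)
    k_p = dense(k, W_k, b_k)
    v_p = dense(v, W_v, b_v)
    d = q_p.shape[-1]
    # Scaled dot-product attention on the projected streams (single-head, no dropout).
    scores = q_p @ np.swapaxes(k_p, -1, -2) / np.sqrt(d)
    scores = np.where(mask, scores, -1e9)                        # suppress masked positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)      # softmax over keys
    return weights @ v_p, mask                                   # (activations, mask)

# Tiny usage example with random weights.
rng = np.random.default_rng(0)
batch, seq, d_feature = 2, 5, 8
q = k = v = rng.normal(size=(batch, seq, d_feature))
mask = np.ones((batch, seq, seq), dtype=bool)
W = [rng.normal(size=(d_feature, d_feature)) for _ in range(3)]
b = [np.zeros(d_feature) for _ in range(3)]
activations, _ = attention_qkv_sketch(q, k, v, mask, W[0], b[0], W[1], b[1], W[2], b[2])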
In particular, what is the purpose of the three parallel Dense layers? The layer's inputs are q, k, v, and mask. Why do q, k, and v each pass through a Dense layer?
Solution
No effective solution to this question has been found yet.