PyTorch: how to give aggregated features more weight (attention mechanism)

Problem description

    def forward(self, nodes_batch):
        """
        ...
        """
        # Initialize every node's self feature from the raw feature matrix
        pre_hidden_embs = self.raw_features
        for index in range(1, self.num_layers + 1):
            # nb = lower-layer nodes of the 1st order, followed by the 2nd order
            nb = nodes_batch_layers[index][0]
            # Extract the 3-tuple of the 2nd order, followed by the 1st
            pre_neighs = nodes_batch_layers[index - 1]
            # self.dc.logger.info('aggregate_feats.')
            # Aggregate the unique nodes of the current hop with their
            # direct neighbours + self (2nd-layer nodes first, then 1st)
            aggregate_feats = self.aggregate(nb, pre_hidden_embs, pre_neighs)
            sage_layer = getattr(self, 'sage_layer' + str(index))
            if index > 1:
                # _nodes_map returns indices into lower_layer_nodes_dict
                # --> unique center nodes of layer 0
                nb = self._nodes_map(nb, pre_neighs)
                # aggregate_feats = 2 * self.aggregate(nb, pre_hidden_embs, pre_neighs)
            # self.dc.logger.info('sage_layer.')
            # With the nb indices, combine the self embeddings with
            # aggregate_feats (2nd order + 1st, then 1st + zero layers)
            cur_hidden_embs = sage_layer(self_feats=pre_hidden_embs[nb], aggregate_feats=aggregate_feats)
            # Aggregate neighbours of the 2nd layer into the 1st layer,
            # then the (already aggregated) 1st layer into layer zero:
            # working from the outside in
            pre_hidden_embs = cur_hidden_embs

        return pre_hidden_embs
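
For context on where aggregate_feats enters the computation: a concat-style GraphSAGE layer typically joins the node's own embedding with the aggregated neighbour embedding and pushes the result through a linear map. The sketch below is a hypothetical reconstruction (the poster's actual sage_layer may differ), but it shows the combination point that the weighting question is about:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SageLayer(nn.Module):
        # Hypothetical reconstruction of a concat-style GraphSAGE layer
        def __init__(self, input_size, out_size):
            super().__init__()
            self.weight = nn.Linear(2 * input_size, out_size, bias=False)

        def forward(self, self_feats, aggregate_feats):
            # Concatenate the self embedding with the aggregated neighbour
            # embedding; any scaling applied to aggregate_feats beforehand
            # changes how much the neighbourhood contributes to the output
            combined = torch.cat([self_feats, aggregate_feats], dim=1)
            return F.relu(self.weight(combined))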

Please note the disabled line in the forward() code above: if the aggregation goes from the first layer to the center nodes, I plan to assign more weight to aggregate_feats, and if it goes from the second layer to the first layer, I want to assign less weight. How can I achieve this? The commented-out line is: #aggregate_feats = 2*self.aggregate(nb, pre_hidden_embs, pre_neighs)

In other words, how can I assign more weight to a particular embedding, in this case aggregate_feats?

Solution
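
One straightforward option is a fixed, layer-dependent scalar on aggregate_feats. The sketch below is a minimal rework of the forward() loop above; the constants 2.0 and 0.5 are illustrative assumptions, not values from the original post. Note that in this loop the final iteration (index == self.num_layers) is the one that aggregates the 1st-layer neighbours into the center nodes, so it is the one that gets the larger weight:

    for index in range(1, self.num_layers + 1):
        nb = nodes_batch_layers[index][0]
        pre_neighs = nodes_batch_layers[index - 1]
        # Last iteration = 1st layer -> center nodes: emphasise it.
        # Earlier iterations = outer hops (e.g. 2nd layer -> 1st layer):
        # de-emphasise them. The constants are illustrative; tune as needed.
        layer_weight = 2.0 if index == self.num_layers else 0.5
        aggregate_feats = layer_weight * self.aggregate(nb, pre_hidden_embs, pre_neighs)
        sage_layer = getattr(self, 'sage_layer' + str(index))
        if index > 1:
            nb = self._nodes_map(nb, pre_neighs)
        cur_hidden_embs = sage_layer(self_feats=pre_hidden_embs[nb], aggregate_feats=aggregate_feats)
        pre_hidden_embs = cur_hidden_embs

Because aggregate_feats is concatenated with self_feats before the linear transform, scaling it rescales the neighbourhood half of the layer input. Keep in mind that a purely constant factor can in principle be absorbed by the following linear weights during training, which is one argument for making the weight learnable instead.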

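If the goal is closer to the attention mechanism mentioned in the title, the per-layer weight can be made learnable, so the model itself decides how much each hop's aggregation matters. A hedged sketch (the attribute name agg_weights is hypothetical, and it assumes torch, torch.nn as nn, and torch.nn.functional as F are imported):

    # In __init__: one learnable scalar per layer, initialised to 1.0
    self.agg_weights = nn.Parameter(torch.ones(self.num_layers))

    # In forward(), inside the loop over index:
    aggregate_feats = self.aggregate(nb, pre_hidden_embs, pre_neighs)
    # softplus keeps the effective weight positive; index - 1 picks
    # this layer's scalar
    aggregate_feats = F.softplus(self.agg_weights[index - 1]) * aggregate_feats

Unlike a fixed constant, a learnable scalar is trained jointly with the layer weights, and its final value shows how strongly each hop is used. A full attention mechanism would go one step further and compute the weight from the embeddings themselves, for example a small MLP over self_feats and aggregate_feats followed by a sigmoid, in the spirit of GAT-style models.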