Problem description
I want to do something similar to this:
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0]  # The last hidden-state is the first element of the output tuple
(from this thread), but using Longformer.
The documentation example seems to do something similar, but it is confusing (particularly w.r.t. how the attention mask is set; I think I want to set it for the [CLS] token, whereas the example sets global attention on what I believe are random positions):
>>> import torch
>>> from transformers import LongformerModel, LongformerTokenizer
>>> model = LongformerModel.from_pretrained('allenai/longformer-base-4096', return_dict=True)
>>> tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
>>> SAMPLE_TEXT = ' '.join(['Hello World! '] * 1000)  # long input document
>>> input_ids = torch.tensor(tokenizer.encode(SAMPLE_TEXT)).unsqueeze(0)  # batch of size 1
>>> # Attention mask values -- 0: no attention, 1: local attention, 2: global attention
>>> attention_mask = torch.ones(input_ids.shape, dtype=torch.long, device=input_ids.device)  # initialize to local attention
>>> attention_mask[:, [1, 4, 21,]] = 2  # Set global attention based on the task. For example,
...                                     # classification: the <s> token
...                                     # QA: question tokens
...                                     # LM: potentially on the beginning of sentences and paragraphs
>>> outputs = model(input_ids, attention_mask=attention_mask)
>>> sequence_output = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output
(from here)
Solution
You don't need to mess with those values (unless you want to tune how Longformer attends to different tokens). In the example listed above, it forces global attention on the 1st, 4th and 21st tokens. Those indices were put in as placeholders, but sometimes you may want to attend globally to a certain type of token, e.g. the question tokens in a QA-style sequence (question + context, but with global attention only on the first part).
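For the classification/embedding use case in the question, a common choice is to put global attention only on the first token (<s>, Longformer's counterpart of BERT's [CLS]). Below is a minimal sketch along those lines, reusing the model and tokenizer from the example above; note that current versions of transformers accept a separate global_attention_mask argument rather than the value 2 inside attention_mask, which is what this sketch uses:

import torch
from transformers import LongformerModel, LongformerTokenizer

model = LongformerModel.from_pretrained('allenai/longformer-base-4096', return_dict=True)
tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')

text = ' '.join(['Hello World! '] * 1000)  # long input document
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)  # batch of size 1

# Local attention everywhere (1 = attend, 0 = padding) ...
attention_mask = torch.ones(input_ids.shape, dtype=torch.long, device=input_ids.device)
# ... plus global attention only on the first token (<s>), passed as its own mask.
global_attention_mask = torch.zeros(input_ids.shape, dtype=torch.long, device=input_ids.device)
global_attention_mask[:, 0] = 1

outputs = model(input_ids, attention_mask=attention_mask,
                global_attention_mask=global_attention_mask)
sequence_output = outputs.last_hidden_state  # shape (1, seq_len, 768)
pooled_output = outputs.pooler_output        # shape (1, 768)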
If you are just looking for embeddings, you can follow what is discussed here: The last layers of longformer for document embeddings.
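As a rough illustration of that idea (a sketch of one common approach, not necessarily the exact recipe from that thread), the last hidden state from the sketch above can be reduced to a single document vector, e.g. by taking the <s> token's vector or by mean-pooling over the sequence:

# Continuing from the sketch above (illustrative choices, not the only valid ones):
cls_embedding = outputs.last_hidden_state[:, 0, :]      # vector of the <s> token, shape (1, 768)
mean_embedding = outputs.last_hidden_state.mean(dim=1)  # average over all tokens, shape (1, 768)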