Torch tensor and input conflict: "'Tensor' object is not callable"

Problem description

Because my code creates a tensor with `torch.tensor`, I get the error "'Tensor' object is not callable" when I add an `input` call. Does anyone know how I can fix this?

import torch
from torch.nn import functional as F
from transformers import GPT2Tokenizer,GPT2LMHeadModel


tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

text0 = "In order to"
text = tokenizer.encode("In order to")
input,past = torch.tensor([text]),None


logits,past = model(input,past = past)
logits = logits[0,-1]
probabilities = torch.nn.functional.softmax(logits)
best_logits,best_indices = logits.topk(5)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()

for i in range(5):
    f = ('Generated {}: {}'.format(i,best_words[i]))
    print(f)


option = input("Pick a Option:")
z = text0.append(option)
print(z)

Error traceback:

TypeError                                 Traceback (most recent call last)

<ipython-input-2-82e8d88e81c1> in <module>()
     25 
     26 
---> 27 option = input("Pick a Option:")
     28 z = text0.append(option)
     29 print(z)

TypeError: 'Tensor' object is not callable

Solution

The problem is that you have defined a variable named `input`, which shadows the built-in `input` function, so the tensor is used in its place. Simply use a different name for the variable and the code will run as expected.

Also, Python strings do not have an `append` method.
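Since `str` is immutable, the fix builds the new prompt by concatenation instead (the value of `option` below is just an example; in the script it comes from the user):

```python
text0 = "In order to"
option = "succeed"  # example value standing in for the user's choice

# Strings have no append(); concatenate (or use an f-string) instead.
z = text0 + ' ' + option
print(z)  # In order to succeed

# text0.append(option) would raise:
# AttributeError: 'str' object has no attribute 'append'
```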

import torch
from torch.nn import functional as F
from transformers import GPT2Tokenizer,GPT2LMHeadModel


tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

text0 = "In order to"
text = tokenizer.encode("In order to")
myinput,past = torch.tensor([text]),None


logits,past = model(myinput,past = past)
logits = logits[0,-1]
probabilities = torch.nn.functional.softmax(logits, dim=-1)  # pass dim explicitly; implicit dim is deprecated
best_logits,best_indices = logits.topk(5)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()

for i in range(5):
    f = ('Generated {}: {}'.format(i,best_words[i]))
    print(f)


option = input("Pick a Option:")
z = text0 + ' ' + option
print(z)