Problem Description
Goal: I want to build a text classifier for my custom dataset, similar to (and following) this (now deleted) tutorial from mlexplained.
What happened: I successfully formatted my data, created training, validation and test datasets, and formatted them to match the "toxic tweets" dataset they use (one column per label, with 1/0 for True/False). Most other parts work as expected too, but I get the following error when iterating:
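For reference, here is a minimal sketch of the layout I mean, with label_a/label_b standing in for my real tags (pandas is just one convenient way to write such a file):

import pandas as pd

# Hypothetical illustration of the layout described above: one text column
# plus one 0/1 column per label, mirroring the "toxic tweets" csv.
df = pd.DataFrame({
    'ID': [0, 1],
    'text': ['some tweet', 'another tweet'],
    'label_a': [1, 0],  # 1 = True, 0 = False
    'label_b': [0, 1],
})
df.to_csv('train.csv', index=False)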
The `device` argument should be set by using `torch.device` or passing a string as an argument. This behavior will be deprecated soon and currently defaults to cpu.
The `device` argument should be set by using `torch.device` or passing a string as an argument. This behavior will be deprecated soon and currently defaults to cpu.
The `device` argument should be set by using `torch.device` or passing a string as an argument. This behavior will be deprecated soon and currently defaults to cpu.
The `device` argument should be set by using `torch.device` or passing a string as an argument. This behavior will be deprecated soon and currently defaults to cpu.
0%|          | 0/25517 [00:01<?, ?it/s]
Traceback (most recent call last):
... (traceback messages)
AttributeError: 'Example' object has no attribute 'text'
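(Side note: I believe the repeated device warning is unrelated to the crash; it seems to appear when a torchtext iterator is created with an integer device argument. A sketch of how I would silence it, assuming a BucketIterator as in the tutorial and a text field named text:)

import torch
from torchtext.data import BucketIterator

# Pass a torch.device (or a string such as 'cpu') instead of an int like -1.
train_iter, val_iter = BucketIterator.splits(
    (trn, vld),
    batch_sizes=(64, 64),
    device=torch.device('cpu'),        # or torch.device('cuda') if available
    sort_key=lambda ex: len(ex.text),  # assumes the text field is named 'text'
    sort_within_batch=False,
)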
The line the traceback points to:
import torch.nn as nn
import torch.optim as optim
import tqdm

opt = optim.Adam(model.parameters(), lr=1e-2)
loss_func = nn.BCEWithLogitsLoss()
epochs = 2

for epoch in range(1, epochs + 1):
    running_loss = 0.0
    running_corrects = 0
    model.train()  # turn on training mode
    for x, y in tqdm.tqdm(train_dl):  # **THIS LINE CONTAINS THE ERROR**
        opt.zero_grad()
        preds = model(x)
        loss = loss_func(preds, y)  # BCEWithLogitsLoss expects (input, target)
        loss.backward()
        opt.step()
        running_loss += loss.item() * x.size(0)
    epoch_loss = running_loss / len(trn)

    # calculate the validation loss for this epoch
    val_loss = 0.0
    model.eval()  # turn on evaluation mode
    for x, y in valid_dl:
        preds = model(x)
        loss = loss_func(preds, y)
        val_loss += loss.item() * x.size(0)
    val_loss /= len(vld)
    print('Epoch: {}, Training Loss: {:.4f}, Validation Loss: {:.4f}'.format(epoch, epoch_loss, val_loss))
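(For context: train_dl and valid_dl are not the raw torchtext iterators; following the tutorial, they wrap the iterators so that they yield (x, y) tuples. A sketch of such a wrapper from memory, where x_var names the text field and y_vars lists the label columns; treat it as an approximation of the now-deleted tutorial's code:)

import torch

class BatchWrapper:
    """Wrap a torchtext iterator so it yields (x, y) tuples."""
    def __init__(self, dl, x_var, y_vars):
        self.dl, self.x_var, self.y_vars = dl, x_var, y_vars

    def __iter__(self):
        for batch in self.dl:
            x = getattr(batch, self.x_var)  # text tensor: (seq_len, batch)
            # stack all label columns into one float tensor: (batch, n_labels)
            y = torch.cat(
                [getattr(batch, v).unsqueeze(1) for v in self.y_vars], dim=1
            ).float()
            yield (x, y)

    def __len__(self):
        return len(self.dl)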
What I have already tried, and what I think the reason might be:
I know others have run into this problem; there are even two questions about it on here, both of which came down to columns or rows being skipped in the dataset (I checked for empty rows/columns and found none). Another suggested solution was that the fields passed in must appear in the same order as in the .csv file (with none missing).
However, here is the relevant code (the loading and creation of the tst, trn and vld sets):

import pickle
from torchtext.data import Field, TabularDataset

def createTestTrain():
    # Create a Tokenizer
    tokenize = lambda x: x.split()
    # Defining Tag and Text
    TEXT = Field(sequential=True, tokenize=tokenize, lower=True)
    LABEL = Field(sequential=False, use_vocab=False)
    # Our Datafield
    tv_datafields = [("ID", None), ("text", TEXT)]
    # Loading our Additional columns we added earlier
    with open(PATH + 'columnList.pickle', 'rb') as handle:
        addColumns = pickle.load(handle)
    # Adding the extra columns, no way we are defining 1000 tags by hand
    for column in addColumns:
        tv_datafields.append((column, LABEL))
    #tv_datafields.append(("split", None))
    # Loading Train/Test Split we created
    trn = TabularDataset(
        path=PATH + 'train.csv', format='csv', skip_header=True, fields=tv_datafields)
    vld = TabularDataset(
        path=PATH + 'train.csv', format='csv', skip_header=True, fields=tv_datafields)
    # Creating Test Datafield
    tst_datafields = [("id", TEXT)]
    # Using TabularDataset, as we want to Analyse Text on it
    tst = TabularDataset(
        path=PATH + "test.csv",  # the file path
        format='csv', fields=tst_datafields)
    return trn, vld, tst
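One way I could rule out the two causes mentioned above is a check along these lines, using only the standard library and the same field list the function builds:

import csv

# Compare the csv header to the field list, and flag rows that do not parse
# into the expected number of columns (e.g. due to quoting or embedded newlines).
with open(PATH + 'train.csv', newline='') as f:
    reader = csv.reader(f)
    header = next(reader)
    print(header == [name for name, _ in tv_datafields])
    for i, row in enumerate(reader):
        if len(row) != len(tv_datafields):
            print('row', i, 'parses into', len(row), 'columns')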
This uses the same list, in the same order, as my csv; tv_datafields is structured exactly like the file. Moreover, since the dataset objects are just dictionaries of data points, I read out the dictionary keys, just like in the tutorial, via:
trn[0].__dict__.keys()
What should happen: the Example should behave like this:
trn[0]
<torchtext.data.example.Example at 0x10d3ed3c8>
trn[0].__dict__.keys()
dict_keys(['comment_text', 'toxic', 'severe_toxic', 'threat', 'obscene', 'insult', 'identity_hate'])
My result:
trn[0].__dict__.keys()
Out[19]: dict_keys([])
trn[1].__dict__.keys()
Out[20]: dict_keys([])
trn[2].__dict__.keys()
Out[21]: dict_keys([])
trn[3].__dict__.keys()
Out[22]: dict_keys(['text'])
So trn[0] contains nothing at all, and the keys that do appear are scattered across entries 3 to 15 instead; normally far more columns should be present, in every single example.
Now I am at a loss as to where I went wrong. The data fits, the function apparently works, but TabularDataset() seems to read my columns in the wrong way, if at all. Did I define

# Defining Tag and Text
TEXT = Field(sequential=True, lower=True)
LABEL = Field(sequential=False, use_vocab=False)

the wrong way? At least that is what my debugging seems to indicate.
Since the documentation on Torchtext is rather sparse, I have a hard time tracking this down; when I look at the definitions of Data or Fields, I cannot see anything wrong with them either.
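Given how sparse the docs are, the best idea I have left is a tiny self-contained repro to test whether the field-to-column mapping works at all; a sketch using the same legacy torchtext.data API as above:

import csv, os, tempfile
from torchtext.data import Field, TabularDataset

# Build a two-row csv and load it exactly the way createTestTrain() does.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, 'mini.csv')
with open(path, 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['ID', 'text', 'label1'])
    writer.writerow(['0', 'hello world', '1'])

TEXT = Field(sequential=True, lower=True)
LABEL = Field(sequential=False, use_vocab=False)
fields = [('ID', None), ('text', TEXT), ('label1', LABEL)]
ds = TabularDataset(path=path, format='csv', skip_header=True, fields=fields)
print(ds[0].__dict__.keys())  # expected: dict_keys(['text', 'label1'])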
Thanks for any help.
Solution
No working solution for this problem has been found yet.