PyTorch DataLoader is loading the entire dataset instead of batches

Problem description

I am building a Bi-LSTM in PyTorch for text classification on the Jigsaw Unintended Bias dataset. The input is tokenized comment text with a fixed length of 300, and the model has to predict 18 output labels. I took a sample dataframe of 1000 rows for training and tried to feed the model batches of size 4, i.e. tensors of shape [4, 300] (4 being the batch size and 300 the fixed length of the tokenized text). But the DataLoader spits out all the data at once, i.e. a tensor of shape [1, 900, 300] (900 being the size of the training set and 300 the fixed length of the tokenized text). I have attached my code below. Any insight into this would be very helpful. Thanks in advance :)

import torch
import torch.optim as optim
from torch.utils.data import Dataset


class LSTMDatasetTraining(Dataset):

    def __init__(self, comment, targets):
        self.comment = comment,
        self.targets = targets

    def __len__(self):
        return len(self.comment)

    def __getitem__(self, item):
        return {
            "comment_text": torch.tensor(self.comment[item].tolist(), dtype=torch.long),
            "targets": torch.tensor(self.targets[item, :], dtype=torch.float),
        }

def train_loop_fn(data_loader, model, optimizer, device, scheduler=None):
    epoch_loss = 0
    model.train()

    for bi, batch in enumerate(data_loader):
        print("hi I'm Here")
        comments = batch["comment_text"]
        shape_comments = comments.shape
        print(shape_comments)

        targets = batch["targets"]
        break  # debugging: stop after inspecting the first batch

        shape_targets = targets.shape
        print(shape_targets)
        optimizer.zero_grad()
        outputs = model(comments)
        loss = loss_fn(outputs, targets)  # loss_fn is defined elsewhere in the notebook
        print("this is loss")
        print(loss)
        loss.backward()
        optimizer.step()

        if scheduler is not None:
            scheduler.step()

# df_train, train_targets, TRAIN_BATCH_SIZE, EPOCH and the BiLSTM class
# (with input_size, hidden_size, num_layers, text_field) are defined earlier in the notebook.
train_dataset = LSTMDatasetTraining(comment=df_train.comment_text, targets=train_targets)
train_data_loader = torch.utils.data.DataLoader(train_dataset, batch_size=TRAIN_BATCH_SIZE)

lr = 3e-5
device = "cuda"
num_train_steps = int(df_train.shape[0] / TRAIN_BATCH_SIZE * EPOCH)
model = BiLSTM(input_size=input_size, hidden_size=hidden_size, num_layers=num_layers, text_field=text_field).to(device)
optimizer = optim.AdamW(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.1)
train_loop_fn(data_loader=train_data_loader, model=model, optimizer=optimizer, device=device, scheduler=scheduler)
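
For comparison, this is the batching behaviour I am expecting from the DataLoader. The sketch below is self-contained and uses random token ids instead of the real Jigsaw data; ToyDataset and its sizes are stand-ins for illustration only:

import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    # Stand-in data: 900 comments, each a fixed-length sequence of 300 token ids,
    # with 18 float targets per comment.
    def __init__(self, num_samples=900, seq_len=300, num_labels=18):
        self.tokens = torch.randint(0, 10000, (num_samples, seq_len))
        self.targets = torch.rand(num_samples, num_labels)

    def __len__(self):
        return self.tokens.size(0)  # 900 items, one per comment

    def __getitem__(self, item):
        return {
            "comment_text": self.tokens[item].long(),  # shape [300]
            "targets": self.targets[item].float(),     # shape [18]
        }

loader = DataLoader(ToyDataset(), batch_size=4)
batch = next(iter(loader))
print(batch["comment_text"].shape)  # torch.Size([4, 300])
print(batch["targets"].shape)       # torch.Size([4, 18])

With 900 items of length 300 and batch_size=4, the default collate function stacks the per-item tensors, so each batch has shape [4, 300] rather than [1, 900, 300].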

Solution

No working solution for this problem has been found yet.

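One detail in the code as posted would produce exactly this behaviour and may be worth double-checking: in __init__, the line "self.comment = comment," ends with a comma, which makes self.comment a one-element tuple wrapping the whole comment column. len(self.comment) is then 1, so the DataLoader sees a dataset with a single item, and __getitem__(0) converts every comment at once, which matches the reported [1, 900, 300] shape. Below is a minimal, unverified sketch of the dataset without the trailing comma, assuming comment is the tokenized comment column and targets is the 18-column label array, as in the question:

import torch
from torch.utils.data import Dataset

class LSTMDatasetTraining(Dataset):

    def __init__(self, comment, targets):
        # No trailing comma: keep the containers themselves, not one-element tuples.
        self.comment = comment
        self.targets = targets

    def __len__(self):
        # One entry per comment, so the DataLoader can form real mini-batches.
        return len(self.comment)

    def __getitem__(self, item):
        return {
            "comment_text": torch.tensor(self.comment[item].tolist(), dtype=torch.long),
            "targets": torch.tensor(self.targets[item, :], dtype=torch.float),
        }

With len(train_dataset) equal to the number of comments, DataLoader(train_dataset, batch_size=4) should then yield comment_text tensors of shape [4, 300].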