Problem Description
I am trying to train a named entity recognition model with the Flair framework (https://github.com/flairNLP/flair), using TransformerWordEmbeddings('emilyalsentzer/Bio_ClinicalBERT') as the embeddings. However, training always fails with OverflowError: int too big to convert. The same happens with some other transformer word embeddings, such as XLNet, whereas BERT and RoBERTa work fine.
Here is the full traceback of the error:
2021-04-15 09:34:48,106 ----------------------------------------------------------------------------------------------------
2021-04-15 09:34:48,106 Corpus: "Corpus: 778 train + 259 dev + 260 test sentences"
2021-04-15 09:34:48,106 Parameters:
2021-04-15 09:34:48,106 - learning_rate: "0.1"
2021-04-15 09:34:48,106 - mini_batch_size: "32"
2021-04-15 09:34:48,106 - patience: "3"
2021-04-15 09:34:48,106 - anneal_factor: "0.5"
2021-04-15 09:34:48,106 - max_epochs: "200"
2021-04-15 09:34:48,106 - shuffle: "True"
2021-04-15 09:34:48,106 - train_with_dev: "False"
2021-04-15 09:34:48,106 - batch_growth_annealing: "False"
2021-04-15 09:34:48,107 ----------------------------------------------------------------------------------------------------
2021-04-15 09:34:48,107 Model training base path: "/home/xxx/data/xxx-clinical-bert"
2021-04-15 09:34:48,107 Device: cuda:0
2021-04-15 09:34:48,107 Embeddings storage mode: gpu
2021-04-15 09:34:48,116 ----------------------------------------------------------------------------------------------------
Traceback (most recent call last):
  File "train_medical_2.py", line 144, in <module>
    train_ner(d + '-base-ent', corpus_base)
  File "train_medical_2.py", line 136, in train_ner
    max_epochs=200)
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/flair/trainers/trainer.py", line 381, in train
    loss = self.model.forward_loss(batch_step)
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/flair/models/sequence_tagger_model.py", line 637, in forward_loss
    features = self.forward(data_points)
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/flair/models/sequence_tagger_model.py", line 642, in forward
    self.embeddings.embed(sentences)
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/flair/embeddings/token.py", line 81, in embed
    embedding.embed(sentences)
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/flair/embeddings/base.py", line 60, in embed
    self._add_embeddings_internal(sentences)
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/flair/embeddings/token.py", line 923, in _add_embeddings_internal
    self._add_embeddings_to_sentence(sentence)
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/flair/embeddings/token.py", line 999, in _add_embeddings_to_sentence
    truncation=True,
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 2438, in encode_plus
    **kwargs,
  File "/home/d111199102201607101/flair/lib/python3.6/site-packages/transformers/tokenization_utils_fast.py", line 472, in _encode_plus
    **kwargs,
  line 379, in _batch_encode_plus
    pad_to_multiple_of=pad_to_multiple_of,
  line 330, in set_truncation_and_padding
    self._tokenizer.enable_truncation(max_length, stride=stride, strategy=truncation_strategy.value)
OverflowError: int too big to convert
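For context, the failing call (self._tokenizer.enable_truncation(max_length, ...)) crosses into the Rust-backed tokenizers library, which expects max_length to fit in an unsigned 64-bit integer. My understanding (an assumption based on reading the transformers source, which may vary by version) is that when a model's tokenizer config does not set a maximum input length, transformers falls back to a sentinel value VERY_LARGE_INTEGER = int(1e30) for model_max_length, and that sentinel cannot be converted to a 64-bit integer. A minimal sketch of that arithmetic:

```python
# Assumed sentinel used by transformers when a tokenizer has no configured
# maximum length (name and value taken from my reading of the source).
VERY_LARGE_INTEGER = int(1e30)

# Largest value an unsigned 64-bit integer (Rust u64/usize) can hold.
U64_MAX = 2**64 - 1

# If the sentinel is forwarded as max_length to the Rust-backed
# enable_truncation(), converting it to u64 fails, which would surface in
# Python as "OverflowError: int too big to convert".
print(VERY_LARGE_INTEGER > U64_MAX)  # True
```

If this is indeed the cause, it would also explain the pattern I am seeing: tokenizers that ship an explicit maximum length (e.g. 512 for BERT and RoBERTa) work, while those without one (apparently Bio_ClinicalBERT and XLNet) overflow.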
I have already tried changing embedding_storage_mode, hidden_size, and mini_batch_size, but none of these resolved the issue.
Has anyone run into the same problem? Is there any way to fix it?
Thanks