HuggingFace - GPT2 Tokenizer configuration in config.json

Problem Description

A fine-tuned GPT2 model was uploaded to the Hugging Face model hub for inference.

The following error is observed during inference:

Can't load tokenizer using from_pretrained, please update its configuration: Can't load tokenizer for 'bala1802/model_1_test'. Make sure that: - 'bala1802/model_1_test' is a correct model identifier listed on 'https://huggingface.co/models' - or 'bala1802/model_1_test' is the correct path to a directory containing relevant tokenizer files

Below is the config.json file of the fine-tuned Hugging Face model:

{
  "_name_or_path": "gpt2",
  "activation_function": "gelu_new",
  "architectures": [
    "GPT2LMHeadModel"
  ],
  "attn_pdrop": 0.1,
  "bos_token_id": 50256,
  "embd_pdrop": 0.1,
  "eos_token_id": 50256,
  "gradient_checkpointing": false,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "model_type": "gpt2",
  "n_ctx": 1024,
  "n_embd": 768,
  "n_head": 12,
  "n_inner": null,
  "n_layer": 12,
  "n_positions": 1024,
  "resid_pdrop": 0.1,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "task_specific_params": {
    "text-generation": {
      "do_sample": true,
      "max_length": 50
    }
  },
  "transformers_version": "4.3.2",
  "use_cache": true,
  "vocab_size": 50257
}

Should the GPT2 Tokenizer be configured the same way, inside the config.json file shown above?

Solution

Your repository does not contain the files required to create a tokenizer; it appears you only uploaded the model files. Create the tokenizer object that was used to train the model and save the required files with save_pretrained():

from transformers import GPT2Tokenizer

# Recreate the tokenizer that was used during training
t = GPT2Tokenizer.from_pretrained("gpt2")
# Write vocab.json, merges.txt, tokenizer_config.json, etc. to the folder
t.save_pretrained('/SOMEFOLDER/')

Output:

('/SOMEFOLDER/tokenizer_config.json',
 '/SOMEFOLDER/special_tokens_map.json',
 '/SOMEFOLDER/vocab.json',
 '/SOMEFOLDER/merges.txt',
 '/SOMEFOLDER/added_tokens.json')
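These files, not config.json, carry the tokenizer configuration; uploading them to the 'bala1802/model_1_test' repository alongside the model files resolves the error. As a rough illustration (the exact contents depend on the transformers version), the generated tokenizer_config.json for GPT-2 can be as minimal as:

```json
{
  "model_max_length": 1024
}
```

The vocabulary itself lives in vocab.json and merges.txt, while special_tokens_map.json records special tokens such as GPT-2's <|endoftext|> bos/eos token.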