Problem description
I followed this excellent tutorial: https://towardsdatascience.com/building-a-multi-label-text-classifier-using-bert-and-tensorflow-f188e0ecdc5d
I understand most of it, except for the part where the model is created. I would like to understand that part and migrate it to TF2 BERT.
def create_model(bert_config, is_training, input_ids, input_mask, segment_ids,
                 labels, num_labels, use_one_hot_embeddings):
  """Creates a classification model."""
  model = modeling.BertModel(
      config=bert_config,
      is_training=is_training,
      input_ids=input_ids,
      input_mask=input_mask,
      token_type_ids=segment_ids,
      use_one_hot_embeddings=use_one_hot_embeddings)

  # [CLS] pooled output, shape [batch_size, hidden_size]
  output_layer = model.get_pooled_output()
  hidden_size = output_layer.shape[-1].value

  # Classification head: one weight row per label
  output_weights = tf.get_variable(
      "output_weights", [num_labels, hidden_size],
      initializer=tf.truncated_normal_initializer(stddev=0.02))
  output_bias = tf.get_variable(
      "output_bias", [num_labels], initializer=tf.zeros_initializer())

  with tf.variable_scope("loss"):
    if is_training:
      # I.e., 0.1 dropout
      output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)

    logits = tf.matmul(output_layer, output_weights, transpose_b=True)
    logits = tf.nn.bias_add(logits, output_bias)

    # probabilities = tf.nn.softmax(logits, axis=-1)  # multi-class case
    probabilities = tf.nn.sigmoid(logits)  # multi-label case

    labels = tf.cast(labels, tf.float32)
    tf.logging.info("num_labels:{};logits:{};labels:{}".format(num_labels, logits, labels))
    per_example_loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
    loss = tf.reduce_mean(per_example_loss)

  return (loss, per_example_loss, probabilities)
I have gone through the TF2 fine-tuning tutorial for BERT, but how do I achieve the same thing there? I am able to train other models where step 1 is not required.
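For context, here is the rough TF2/Keras equivalent I had in mind. It is only a sketch of what I think the code above maps to: the create_model_tf2 name, the max_seq_len default, and the TF Hub encoder handle are my own assumptions, not taken from the tutorial. Is this the right way to reproduce the dropout, sigmoid head, and sigmoid cross-entropy loss from the TF1 version?

import tensorflow as tf
import tensorflow_hub as hub

def create_model_tf2(num_labels, max_seq_len=128,
                     bert_url="https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4"):
    """Sketch of a Keras equivalent: BERT pooled output -> dropout -> dense sigmoid head."""
    input_word_ids = tf.keras.Input(shape=(max_seq_len,), dtype=tf.int32, name="input_word_ids")
    input_mask = tf.keras.Input(shape=(max_seq_len,), dtype=tf.int32, name="input_mask")
    input_type_ids = tf.keras.Input(shape=(max_seq_len,), dtype=tf.int32, name="input_type_ids")

    # Assumed TF Hub BERT encoder; trainable=True so the encoder is fine-tuned end to end.
    encoder = hub.KerasLayer(bert_url, trainable=True, name="bert_encoder")
    outputs = encoder(dict(input_word_ids=input_word_ids,
                           input_mask=input_mask,
                           input_type_ids=input_type_ids))
    pooled = outputs["pooled_output"]  # [CLS] representation, like model.get_pooled_output()

    x = tf.keras.layers.Dropout(0.1)(pooled)  # same 0.1 dropout as the TF1 code
    probabilities = tf.keras.layers.Dense(
        num_labels, activation="sigmoid",
        kernel_initializer=tf.keras.initializers.TruncatedNormal(stddev=0.02),
        name="output")(x)  # sigmoid per label -> multi-label

    model = tf.keras.Model(
        inputs=[input_word_ids, input_mask, input_type_ids], outputs=probabilities)
    # Binary cross-entropy over the sigmoid outputs plays the role of
    # tf.nn.sigmoid_cross_entropy_with_logits + reduce_mean in the TF1 code.
    model.compile(optimizer=tf.keras.optimizers.Adam(2e-5),
                  loss=tf.keras.losses.BinaryCrossentropy(),
                  metrics=["binary_accuracy"])
    return model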