Sentence similarity with the Universal Sentence Encoder using a common threshold

Problem description

I have a dataset with more than 1500 rows, each containing a sentence. I am trying to find the best way to identify the most similar sentences among all of them. I have tried this example, but it is so slow that processing 1500 rows takes about 20 minutes.

I used the code from a previous question and tried several variations to improve the speed, but with little effect. Then I came across TensorFlow's Universal Sentence Encoder, which looks fast and reasonably accurate. I am working in Colab; you can check it here.

import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns

module_url = "https://tfhub.dev/google/universal-sentence-encoder/4" #@param ["https://tfhub.dev/google/universal-sentence-encoder/4","https://tfhub.dev/google/universal-sentence-encoder-large/5","https://tfhub.dev/google/universal-sentence-encoder-lite/2"]
model = hub.load(module_url)
print ("module %s loaded" % module_url)
def embed(input):
  return model(input)

df = pd.DataFrame(columns=["ID", "DESCRIPTION"],
                  data=[[10, "Cancel ASN WMS Cancel ASN"],
                        [11, "MAXPREDO Validation is corect"],
                        [12, "Move to QC"],
                        [13, "Cancel ASN WMS Cancel ASN"],
                        [14, "MAXPREDO Validation is right"],
                        [15, "Verify files are sent every hours for this interface from Optima"],
                        [16, "MAXPREDO Validation are correct"],
                        [17, "Move to QC"],
                        [18, "Verify files are not sent"]])

messages = list(df['DESCRIPTION'])
message_embeddings = embed(messages)

for i,message_embedding in enumerate(np.array(message_embeddings).tolist()):
  print("Message: {}".format(messages[i]))
  print("Embedding size: {}".format(len(message_embedding)))
  message_embedding_snippet = ",".join(
      (str(x) for x in message_embedding[:3]))
  print("Embedding: [{},...]\n".format(message_embedding_snippet))

What I am looking for

I would like to pass a threshold (for example 0.90) and get back, as the result, all rows whose similarity to another row is above that threshold.

Data Sample
ID    |   DESCRIPTION
-----------------------------
10    | Cancel ASN WMS Cancel ASN   
11    | MAXPREDO Validation is corect
12    | Move to QC  
13    | Cancel ASN WMS Cancel ASN   
14    | MAXPREDO Validation is right
15    | Verify files are sent every hours for this interface from Optima
16    | MAXPREDO Validation are correct
17    | Move to QC  
18    | Verify files are not sent 

Expected result

From the data above, rows with a similarity of 0.90 or higher should be returned together with their IDs:

ID    |   DESCRIPTION
-----------------------------
10    | Cancel ASN WMS Cancel ASN
13    | Cancel ASN WMS Cancel ASN
11    | MAXPREDO Validation is corect  # even spelling is not correct
14    | MAXPREDO Validation is right
16    | MAXPREDO Validation are correct
12    | Move to QC  
17    | Move to QC 

Solution

There are multiple ways to measure the similarity between two embedding vectors. The most common is cosine similarity.
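For two vectors u and v, cosine similarity is their dot product divided by the product of their norms; it is 1 for parallel vectors and 0 for orthogonal ones. A minimal NumPy sketch (the vectors here are made-up examples, not actual sentence embeddings):

```python
import numpy as np

def cosine_sim(u, v):
    # cos(u, v) = (u . v) / (||u|| * ||v||)
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

u = np.array([1.0, 2.0, 3.0])
v = np.array([2.0, 4.0, 6.0])  # v = 2u, so the vectors are parallel
print(cosine_sim(u, v))  # -> 1.0
```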

So the first thing to do is compute the similarity matrix:

Code:

from sklearn.metrics.pairwise import cosine_similarity

message_embeddings = embed(list(df['DESCRIPTION']))
cos_sim = cosine_similarity(message_embeddings)

You will get a 9*9 matrix of similarity values. You can create a heatmap of this matrix to visualize it.

Code:

def plot_similarity(labels, corr_matrix):
  sns.set(font_scale=1.2)
  g = sns.heatmap(corr_matrix, xticklabels=labels, yticklabels=labels,
                  vmin=0, vmax=1, cmap="YlOrRd")
  g.set_xticklabels(labels, rotation=90)
  g.set_title("Semantic Textual Similarity")

plot_similarity(list(df['DESCRIPTION']), cos_sim)

Output:

(heatmap of the similarity matrix)

Darker squares indicate higher similarity.

Finally, iterate over this cos_sim matrix and use the threshold to collect all the similar sentences:

threshold = 0.8
row_index = []
for i in range(cos_sim.shape[0]):
  if i in row_index:  # row already collected as part of an earlier group
    continue
  # indices of all sentences similar to sentence i (including i itself)
  similar = [index for index in range(cos_sim.shape[1]) if (cos_sim[i][index] > threshold)]
  if len(similar) > 1:  # keep only sentences that have at least one match
    row_index += similar

sim_df = pd.DataFrame()
sim_df['ID'] = [df['ID'][i] for i in row_index]
sim_df['DESCRIPTION'] = [df['DESCRIPTION'][i] for i in row_index]
sim_df

The resulting DataFrame looks like this.
Output:

(DataFrame of similar sentences)
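The loop above returns matches in scan order. If, as in the expected result, you want each cluster of similar sentences listed together, one way is to collect each group separately and then flatten. A sketch with toy stand-ins for `df` and `cos_sim` (the similarity values below are invented for illustration):

```python
import numpy as np
import pandas as pd

# toy stand-ins for the df and cos_sim computed above
df = pd.DataFrame({"ID": [10, 11, 12],
                   "DESCRIPTION": ["Cancel ASN", "Cancel ASN WMS", "Move to QC"]})
cos_sim = np.array([[1.0, 0.95, 0.1],
                    [0.95, 1.0, 0.2],
                    [0.1, 0.2, 1.0]])
threshold = 0.8

seen, groups = set(), []
for i in range(cos_sim.shape[0]):
    if i in seen:
        continue
    # all rows similar to row i (including i itself)
    similar = [j for j in range(cos_sim.shape[1]) if cos_sim[i][j] > threshold]
    if len(similar) > 1:          # keep only sentences with at least one match
        seen.update(similar)
        groups.append(similar)

# flatten the groups so matching sentences appear next to each other
order = [j for g in groups for j in g]
print(df.iloc[order][["ID", "DESCRIPTION"]])
```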

There are multiple methods to generate the similarity matrix. You can check this for more of them.
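For instance, when the embeddings are unit-normalized, the plain inner product already equals cosine similarity, so the whole matrix can be computed with `np.inner` and no sklearn dependency. A sketch with toy unit-length vectors standing in for the model output:

```python
import numpy as np

# toy embeddings standing in for model(...) output; each row has norm 1,
# so np.inner gives the same values as cosine similarity
emb = np.array([[0.6, 0.8],
                [0.8, 0.6],
                [1.0, 0.0]])
sim = np.inner(emb, emb)   # pairwise similarity matrix, shape (3, 3)
print(np.round(sim, 2))
```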