How to simplify text comparison on a large data set where texts share the same meaning but are not identical (de-duplicating text data)

Problem Description

I have a text data set of roughly 1.8 million records (menu items such as chocolate, cake, coke, etc.) spread across six categories (A, B, C, D, E, F). One of the categories alone holds about 700,000 records. Many menu items are mixed into categories they do not belong to; for example, cake belongs to category "A" but can also be found in categories "B" and "C".

I want to identify those misclassified items and report them to a person, but the challenge is that the item names are not always consistent, because the text is entirely human-entered. For example, chocolate may be entered as hot chocolate, sweet chocolate, chocolate, etc. There can also be items such as chocolate cake ;)

So, to solve this, I tried a simple approach using cosine similarity, comparing category against category to flag the anomalies. But since every item is compared against the 1.8 million records, it takes a very long time (sample code below). Can anyone suggest a better way to approach this problem?

#Function
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

# NOTE: requires the NLTK 'punkt' and 'stopwords' data
# (nltk.download('punkt'); nltk.download('stopwords'))

def cos_similarity(a, b):
    # tokenization
    X_list = word_tokenize(a)
    Y_list = word_tokenize(b)

    # sw contains the list of stopwords
    sw = stopwords.words('english')
    l1 = []
    l2 = []

    # remove stop words from the strings
    X_set = {w for w in X_list if w not in sw}
    Y_set = {w for w in Y_list if w not in sw}

    # form a set containing keywords of both strings
    rvector = X_set.union(Y_set)
    for w in rvector:
        l1.append(1 if w in X_set else 0)  # binary presence vector for a
        l2.append(1 if w in Y_set else 0)  # binary presence vector for b

    # cosine formula: dot product divided by the product of vector norms
    c = 0
    for i in range(len(rvector)):
        c += l1[i] * l2[i]
    if float((sum(l1) * sum(l2)) ** 0.5) > 0:
        cosine = c / float((sum(l1) * sum(l2)) ** 0.5)
    else:
        cosine = 0
    return cosine
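
For example (assuming the NLTK 'punkt' and 'stopwords' data are installed), overlapping item names from the question score like this; "hot chocolate" and "sweet chocolate" share one of two tokens each, giving a cosine of 0.5:

print(cos_similarity('hot chocolate', 'sweet chocolate'))  # 0.5
print(cos_similarity('chocolate', 'chocolate cake'))       # ~0.71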

#Base code: compare every category-B item against every category-A item
cos_sim_list = []
for i in category_B.index:
    ln_i = str(category_B['item_name'][i])
    for j in category_A.index:
        ln_j = str(category_A['item_name'][j])
        degreeOfSimilarity = cos_similarity(ln_j, ln_i)
        if degreeOfSimilarity > 0.5:
            cos_sim_list.append([ln_j, ln_i, degreeOfSimilarity])

Assume the text has already been cleaned.

Solution

I used scikit-learn's NearestNeighbors (k-NN) together with cosine similarity to handle this case. Although I had to run the code several times to compare the categories pair by pair, it still works because the number of categories is small. Please suggest if there is a better solution. (On L2-normalized TF-IDF vectors, the Euclidean k-NN distance is a monotonic function of cosine similarity, so a smaller distance means a better match.)

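The TfidfVectorizer below is built with a custom `ngrams` analyzer that the post never defines. A character trigram analyzer along these lines (an assumption, following the usual TF-IDF string-matching recipe) makes the snippet runnable:

import re

def ngrams(string, n=3):
    # assumed helper: split each item name into character trigrams,
    # e.g. 'cake' -> ['cak', 'ake']
    string = re.sub(r'[,-./]', '', str(string))
    return [string[i:i + n] for i in range(len(string) - n + 1)]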

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

cat_A_clean = category_A['item_name'].unique()

print('Vectorizing the data - this could take a few minutes for large datasets...')
vectorizer = TfidfVectorizer(min_df=1, analyzer=ngrams, lowercase=False)
tfidf = vectorizer.fit_transform(cat_A_clean)
print('Vectorizing completed...')

from sklearn.neighbors import NearestNeighbors
nbrs = NearestNeighbors(n_neighbors=1, n_jobs=-1).fit(tfidf)

# use a list (not a set) so the query order is stable and lines up with unique_B below
unique_B = list(set(category_B['item_name'].values))

def getNearestN(query):
    queryTFIDF_ = vectorizer.transform(query)
    distances, indices = nbrs.kneighbors(queryTFIDF_)
    return distances, indices

import time
t1 = time.time()
print('getting nearest n...')
distances, indices = getNearestN(unique_B)
t = time.time() - t1
print("COMPLETED IN:", t)

print('finding matches...')
matches = []
for i, j in enumerate(indices):
    # cat_A_clean is a plain array of unique names, so index it directly
    # (the original cat_A_clean['item_name'].values[j] would raise an error)
    temp = [round(distances[i][0], 2), cat_A_clean[j[0]], unique_B[i]]
    matches.append(temp)

print('Building data frame...')
matches = pd.DataFrame(matches, columns=['Match confidence (lower is better)', 'ITEM_A', 'ITEM_B'])
print('Done')

# second pass: recompute a word-level cosine similarity for each matched pair
# as a sanity check on the k-NN distances
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def clean_string(text):
    text = str(text)
    text = text.lower()
    return text

def cosine_sim_vectors(vec1, vec2):
    vec1 = vec1.reshape(1, -1)
    vec2 = vec2.reshape(1, -1)
    return cosine_similarity(vec1, vec2)[0][0]

def cos_similarity(sentences):
    # vectorize the pair of names with bag-of-words counts
    cleaned = list(map(clean_string, sentences))
    vectors = CountVectorizer().fit_transform(cleaned).toarray()
    return cosine_sim_vectors(vectors[0], vectors[1])

cos_sim_list = []
for ind in matches.index:
    a = matches['Match confidence (lower is better)'][ind]
    b = matches['ITEM_A'][ind]
    c = matches['ITEM_B'][ind]
    degreeOfSimilarity = cos_similarity([b, c])
    cos_sim_list.append([a, b, c, degreeOfSimilarity])
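
From here, one possible way to turn cos_sim_list into a report for human review is sketched below; the column names and the 0.5 cutoff (borrowed from the question) are illustrative choices, not part of the original code:

report = pd.DataFrame(cos_sim_list,
                      columns=['knn_distance', 'ITEM_A', 'ITEM_B', 'cosine_similarity'])
# category-B items that closely match a category-A item are candidates
# for misclassification review
suspects = report[report['cosine_similarity'] > 0.5]
suspects = suspects.sort_values('cosine_similarity', ascending=False)
print(suspects.head(20))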