Applying FuzzyLogic to a very large file

Problem Description

Hi, I'm trying to apply fuzzy logic (fuzzywuzzy) to compare strings across two different files. The code works fine on smaller datasets, but when I apply it to a large dataset I run into a memory problem: running the code reports "Low memory: Unable to create 6.6GB array".

from fuzzywuzzy import fuzz
import difflib
import pandas as pd

# Note: pd.read_excel no longer accepts an `encoding` argument, so it is dropped here
df_To_beMatched = pd.read_excel('vendor_file.xlsx', usecols=["vendOR_NAME"])
df_To_beMatched['vendOR_NAME'] = df_To_beMatched['vendOR_NAME'].fillna('')
original_list = df_To_beMatched['vendOR_NAME'].tolist()

df_exceptionlist = pd.read_excel('Exceptionfile.xlsx', usecols=["Entity_Name"])
df_exceptionlist['Entity_Name'] = df_exceptionlist['Entity_Name'].fillna('')
exception_list = df_exceptionlist['Entity_Name'].tolist()

result = []
result_difflib = []
for to_delete in exception_list:
    # take the first vendor name that clears all three score thresholds
    for original in original_list:
        ratio = fuzz.ratio(to_delete, original)
        token = fuzz.token_set_ratio(to_delete, original)
        partial_ratio = fuzz.partial_ratio(to_delete, original)
        if ratio > 75 and token > 75 and partial_ratio > 85:
            print(ratio, original)
            result.append({'Entity_Name': to_delete, 'vendOR_NAME': original,
                           'Ratio': ratio, 'Token': token, 'Status': 'Match'})
            break

    difflib_result = difflib.get_close_matches(to_delete, original_list)
    matches = "^".join(difflib_result)
    # collected here but never written out in the original code
    result_difflib.append({'Entity_Name': to_delete, 'Matches': matches})

# Build and write the result once, after the loop finishes, rather than
# rebuilding the DataFrame and rewriting the CSV on every iteration
fuzzy_df = pd.DataFrame(result)
fuzzy_df.to_csv('FuzzyLogic_Results.csv', index=False)
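A side note on the memory profile (not part of the original post): the code above accumulates every result in RAM before writing. A minimal sketch of the alternative, streaming each match straight to disk with the stdlib `csv` module, is below. `similarity` and `match_streaming` are hypothetical names, and `difflib.SequenceMatcher` stands in for fuzzywuzzy's scorer in case that library is unavailable:

```python
import csv
import difflib

def similarity(a: str, b: str) -> int:
    """0-100 similarity score; a stdlib stand-in for fuzz.ratio."""
    return round(difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio() * 100)

def match_streaming(exception_list, original_list, out_path, threshold=75):
    """Write each match row as soon as it is found, so no result list
    (or giant score array) ever has to fit in memory at once."""
    with open(out_path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['Entity_Name', 'vendOR_NAME', 'Ratio', 'Status'])
        for to_delete in exception_list:
            for original in original_list:
                ratio = similarity(to_delete, original)
                if ratio > threshold:
                    writer.writerow([to_delete, original, ratio, 'Match'])
                    break  # first acceptable match per exception name

# usage with made-up names
match_streaming(['Acme Corporation'], ['ACME Corporation', 'Globex'],
                'FuzzyLogic_Results.csv')
```

Whether this removes the 6.6GB allocation depends on where that array is actually created, but per-row writes keep the Python-side memory flat regardless of input size.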


Workaround

No effective fix for this problem has been found yet.
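While the post records no accepted answer, one commonly suggested direction for cutting an O(N×M) comparison down to size is blocking: index the candidates by a cheap key (here, the first token of each name) and only score within the matching bucket. A stdlib-only sketch, where `build_block_index` and `best_match` are illustrative names and `difflib` again stands in for fuzzywuzzy's scorer:

```python
from collections import defaultdict
import difflib

def build_block_index(names):
    """Group candidate names by their first token so each query is only
    compared against a small bucket instead of the whole list."""
    index = defaultdict(list)
    for name in names:
        tokens = name.lower().split()
        if tokens:
            index[tokens[0]].append(name)
    return index

def best_match(query, index, cutoff=0.75):
    """Return the best candidate sharing the query's first token, or None."""
    tokens = query.lower().split()
    if not tokens:
        return None
    best, best_score = None, cutoff
    for cand in index.get(tokens[0], []):
        score = difflib.SequenceMatcher(None, query.lower(), cand.lower()).ratio()
        if score >= best_score:
            best, best_score = cand, score
    return best

# usage with made-up names
idx = build_block_index(['ACME Corporation', 'Acme Corp', 'Globex Inc'])
print(best_match('Acme Corporation', idx))  # prints: ACME Corporation
```

The trade-off is recall: names whose first tokens differ ("Acme Corp" vs. "The Acme Corp") never get compared, so the blocking key should be chosen with the data's quirks in mind.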
