NLTK TypeError: unhashable type: 'list'

Problem description

I am currently working on lemmatizing the words in a CSV file: I convert all the words to lowercase, remove all punctuation, and split the column.

I only use two of the CSV columns. Here is the output of analyze.info():

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4637 entries, 0 to 4636
Data columns (total 2 columns):
 #   Column          Non-Null Count  Dtype
---  ------          --------------  -----
 0   Comments        4637 non-null   object
 1   Classification  4637 non-null   object

import string
import pandas as pd
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

analyze = pd.read_csv('C:/Users/(..)/Talk London/ALL_dataset.csv',delimiter=';',low_memory=False,encoding='cp1252',usecols=['Comments','Classification'])

lower_case = analyze['Comments'].str.lower()

cleaned_text = lower_case.str.translate(str.maketrans('','',string.punctuation))

tokenized_words = cleaned_text.str.split()

final_words = []
for word in tokenized_words:
    if word not in stopwords.words('english'):
       final_words.append(word)

wnl = WordNetLemmatizer()
lemma_words = []
lem = ' '.join([wnl.lemmatize(word) for word in tokenized_words])
lemma_words.append(lem)

When I run the code, this error is returned:

Traceback (most recent call last):
  File "C:/Users/suiso/PycharmProjects/SA_working/SA_Main.py", line 52, in <module>
    lem = ' '.join([wnl.lemmatize(word) for word in tokenized_words])
  File "C:\Users\suiso\PycharmProjects\SA_working\venv\lib\site-packages\nltk\stem\wordnet.py", line 38, in lemmatize
    lemmas = wordnet._morphy(word, pos)
  File "C:\Users\suiso\PycharmProjects\SA_working\venv\lib\site-packages\nltk\corpus\reader\wordnet.py", line 1897, in _morphy
    if form in exceptions:
TypeError: unhashable type: 'list'

Solution

tokenized_words is a column of lists, not a column of strings, because you used the split method on it. So you need a double for-loop (a nested comprehension), like this:

lem = ' '.join([wnl.lemmatize(word) for word_list in tokenized_words for word in word_list])
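
If you would rather keep one lemmatized string per comment instead of joining the whole column into a single string, a minimal sketch could look like the following. It reuses wnl and tokenized_words from the code above; the variable name lemma_per_comment is just illustrative.

# Sketch: lemmatize each comment separately, keeping one string per row.
# Assumes wnl = WordNetLemmatizer() and tokenized_words = cleaned_text.str.split()
# as defined in the question; lemma_per_comment is a hypothetical name.
lemma_per_comment = tokenized_words.apply(
    lambda word_list: ' '.join(wnl.lemmatize(word) for word in word_list)
)

This gives you a pandas Series aligned with the Comments column, which is usually more convenient if you later want to put the lemmatized text back into the DataFrame.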