Is 0.7 - 0.75 an acceptable accuracy for Naive Bayes sentiment analysis?

Problem Description

I apologize in advance for the amount of code I'm posting.

I'm trying to use NLTK's Naive Bayes classifier to sort YouTube comments into those that contain an opinion (whether positive or negative) and those that don't, but no matter what I do during the preprocessing stage I can't get the accuracy above 0.75. That seems a little low compared to other examples I've seen - this tutorial, for instance, ends up with an accuracy of around 0.98.

Here is my complete code:

import nltk,re,json,random

from nltk.stem.wordnet import WordNetLemmatizer
from nltk.corpus import stopwords
from nltk.tag import pos_tag
from nltk.tokenize import TweetTokenizer
from nltk import FreqDist,classify,NaiveBayesClassifier

from contractions import CONTRACTION_MAP
from abbreviations import abbrev_map
from tqdm.notebook import tqdm

def expand_contractions(text,contraction_mapping=CONTRACTION_MAP):
    text = re.sub(r"’","'",text)
    if text in abbrev_map:
        return(abbrev_map[text])
    text = re.sub(r"\bluv","lov",text)
    
    contractions_pattern = re.compile('({})'.format('|'.join(contraction_mapping.keys())),flags=re.IGNORECASE|re.DOTALL)
    def expand_match(contraction):
        match = contraction.group(0)
        first_char = match[0]
        expanded_contraction = contraction_mapping.get(match)\
                                if contraction_mapping.get(match)\
                                else contraction_mapping.get(match.lower())                       
        expanded_contraction = first_char+expanded_contraction[1:]
        return expanded_contraction
        
    expanded_text = contractions_pattern.sub(expand_match,text)
    return expanded_text

def reduce_lengthening(text):
    # Collapse runs of three or more repeated characters down to two, e.g. "soooo" -> "soo"
    pattern = re.compile(r"(.)\1{2,}")
    return pattern.sub(r"\1\1",text)

def lemmatize_sentence(tokens):
    lemmatizer = WordNetLemmatizer()
    lemmatized_sentence = []
    # Map Penn Treebank tags to WordNet POS: nouns -> 'n', verbs -> 'v', everything else -> 'a'
    for word,tag in pos_tag(tokens):
        if tag.startswith('NN'):
            pos = 'n'
        elif tag.startswith('VB'):
            pos = 'v'
        else:
            pos = 'a'
        lemmatized_sentence.append(lemmatizer.lemmatize(word,pos))
    return lemmatized_sentence

def processor(comments_list):
    
    new_comments_list = []
    for com in tqdm(comments_list):
        com = com.lower()
        
        #expand out contractions
        tok = com.split(" ")
        z = []
        for w in tok:
            ex_w = expand_contractions(w)
            z.append(ex_w)
        st = " ".join(z)
        
        
        tokenized = tokenizer.tokenize(st)
        reduced = [reduce_lengthening(token) for token in tokenized]
        new_comments_list.append(reduced)
        
    lemmatized = [lemmatize_sentence(new_com) for new_com in new_comments_list]
    
    return(lemmatized)

def get_all_words(cleaned_tokens_list):
    for tokens in cleaned_tokens_list:
        for token in tokens:
            yield token

def get_comments_for_model(cleaned_tokens_list):
    # Turn each token list into an NLTK feature dict of the form {token: True}
    for comment_tokens in cleaned_tokens_list:
        yield dict([token,True] for token in comment_tokens)
        
if __name__ == "__main__":
    #=================================================================================
    tokenizer = TweetTokenizer(strip_handles=True,reduce_len=True)        
    
    with open ("english_lang/samples/training_set.json","r",encoding="utf8") as f:
        train_data = json.load(f)
        
    pos_processed = processor(train_data['pos'])
    neg_processed = processor(train_data['neg'])
    neu_processed = processor(train_data['neu'])
    
    emotion = pos_processed + neg_processed
    random.shuffle(emotion)
    
    em_tokens_for_model = get_comments_for_model(emotion)
    neu_tokens_for_model = get_comments_for_model(neu_processed)

    em_dataset = [(comment_dict,"Emotion")
                         for comment_dict in em_tokens_for_model]

    neu_dataset = [(comment_dict,"Neutral")
                             for comment_dict in neu_tokens_for_model]

    dataset = em_dataset + neu_dataset


    random.shuffle(dataset)
    x = 700
    tr_data = dataset[:x]
    te_data = dataset[x:]
    classifier = NaiveBayesClassifier.train(tr_data)
    print(classify.accuracy(classifier,te_data))

I can post my training dataset if needed, but it's probably worth mentioning that the English in the YouTube comments themselves is of very poor and inconsistent quality (which I suspect is the reason for the model's low accuracy). In any case, would this be considered acceptable accuracy? Alternatively, I may well have gotten this all wrong and there's a much better model to use, in which case feel free to tell me I'm an idiot! Thanks in advance.

Solution

Comparing your results against those of an unrelated tutorial is not statistically valid. Before you panic, do some proper research into the factors that limit a model's accuracy. To start with, your model cannot be more accurate than the information inherent in the data set allows. For example, no model can do better than 50% in the long run at predicting random binary events, no matter what the data set looks like.
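Before anything else, it's worth computing the accuracy floor for your own test split: a "classifier" that always predicts the majority class. Here is a minimal sketch, assuming the te_data list of (feature_dict, label) pairs built in your code:

from collections import Counter

def majority_baseline_accuracy(test_data):
    # Accuracy of always predicting the most frequent label in test_data,
    # where test_data is a list of (feature_dict, label) pairs.
    labels = [label for _, label in test_data]
    _, count = Counter(labels).most_common(1)[0]
    return count / len(labels)

print(majority_baseline_accuracy(te_data))

If, say, 60% of your test comments are labeled "Emotion", then 0.6 is the floor, and 0.70 - 0.75 already represents a real (if modest) improvement over guessing.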

There is no reasonable way for us to evaluate the theoretical information content of your data. If you need a sanity check, try applying a few other model types to the same data and see what accuracies they produce. Running such experiments is a normal part of data science.
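One cheap way to run that experiment is to train a couple of scikit-learn models on the exact same feature dicts via NLTK's SklearnClassifier wrapper. This is only a sketch, not a recommendation of these particular models; it assumes scikit-learn is installed and that tr_data / te_data are the splits from your code:

from nltk import classify
from nltk.classify.scikitlearn import SklearnClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

# Train each candidate on the same (feature_dict, label) pairs and
# compare test accuracies against the Naive Bayes result.
candidates = {
    "logistic regression": SklearnClassifier(LogisticRegression(max_iter=1000)),
    "linear SVM": SklearnClassifier(LinearSVC()),
}

for name, model in candidates.items():
    model.train(tr_data)
    print(name, classify.accuracy(model, te_data))

If several quite different models all plateau around 0.7 - 0.75, that is good evidence the ceiling lives in the data, not in your choice of classifier.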