Problem description
I want to adapt this TextRank code to extract keywords from a text, with the values normalized between 0 and 1. Here is a short snippet:
# Parse text with spaCy
doc = nlp(text)
# Filter sentences
sentences = self.sentence_segment(doc, candidate_pos, lower)  # list of lists of words
# Build vocabulary
vocab = self.get_vocab(sentences)
# Get token pairs from windows
token_pairs = self.get_token_pairs(window_size, sentences)
# Get normalized matrix
g = self.get_matrix(vocab, token_pairs)
# Initialization of weights (PageRank values)
pr = np.array([1] * len(vocab))
# Iteration
previous_pr = 0
for epoch in range(self.steps):
    pr = (1 - self.d) + self.d * np.dot(g, pr)
    if abs(previous_pr - sum(pr)) < self.min_diff:
        break
    else:
        previous_pr = sum(pr)
# Get weight for each node
node_weight = dict()
for word, index in vocab.items():
    node_weight[word] = pr[index]
self.node_weight = node_weight
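The iteration above can be reproduced on a toy matrix to see why the scores are not bounded by 1. This is a minimal self-contained sketch, not the original class: `g`, `d`, `min_diff` and `steps` stand in for `self.get_matrix(...)`, `self.d`, `self.min_diff` and `self.steps`, and the 3×3 matrix is invented for illustration (column-normalized, as `get_matrix` is assumed to produce).

```python
import numpy as np

# Assumed toy setup: a 3-word vocabulary with a column-normalized
# co-occurrence matrix g (each column sums to 1).
g = np.array([
    [0.0, 0.5, 0.5],
    [0.5, 0.0, 0.5],
    [0.5, 0.5, 0.0],
])
d = 0.85          # damping factor (self.d)
min_diff = 1e-5   # convergence threshold (self.min_diff)
steps = 100       # max iterations (self.steps)

pr = np.array([1.0] * g.shape[0])
previous_pr = 0
for epoch in range(steps):
    pr = (1 - d) + d * np.dot(g, pr)
    if abs(previous_pr - sum(pr)) < min_diff:
        break
    previous_pr = sum(pr)

# With this update rule the scores sum to roughly len(vocab),
# not to 1, so individual values can exceed 1.
print(pr, sum(pr))
```

Because the damping term is `(1 - d)` per node rather than `(1 - d) / N`, the converged vector sums to about the vocabulary size, which is why raw TextRank scores like `1.717...` appear in the output.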
The output I see looks like this:
# Output
# science - 1.717603106506989
# fiction - 1.6952610926181002
# filmmaking - 1.4388798751402918
# China - 1.4259793786986021
# Earth - 1.3088154732297723
# tone - 1.1145002295684114
# Chinese - 1.0996896235078055
# Wandering - 1.0071059904601571
# weekend - 1.002449354657688
# America - 0.9976329264870932
# budget - 0.9857269586649321
# north - 0.9711240881032547
I want to normalize the TextRank values between 0 and 1, so that the maximum is 1.
On Wikipedia I found these two formulas.
However, if I add (1-self.d)/g.shape[0] to the previous formula:
pr = (1-self.d)/g.shape[0] + self.d * np.dot(g, pr)
I still get some values greater than 1. What is the error?
Solution
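One likely reason values still exceed 1: the Wikipedia formula with the `(1-d)/N` damping term assumes the score vector is initialized to `1/N` and that the transition matrix is column-stochastic, while the snippet initializes `pr` to a vector of ones (and isolated words can leave zero columns in `g`). A simpler, robust alternative, shown here as a sketch rather than as the original author's method, is to rescale the converged scores afterwards; `normalize_scores` is a hypothetical helper, and the sample values are taken from the question's output.

```python
import numpy as np

def normalize_scores(pr):
    """Rescale PageRank-style scores into [0, 1].

    Two common options: divide by the maximum (the largest score
    becomes exactly 1) or min-max scaling (the smallest becomes 0
    and the largest becomes 1). Assumes not all scores are equal.
    """
    pr = np.asarray(pr, dtype=float)
    by_max = pr / pr.max()
    min_max = (pr - pr.min()) / (pr.max() - pr.min())
    return by_max, min_max

# Example with three of the scores from the question's output
scores = np.array([1.7176, 1.6953, 0.9711])
by_max, min_max = normalize_scores(scores)
print(by_max)   # largest value is now exactly 1.0
print(min_max)  # smallest is 0.0, largest is 1.0
```

Rescaling after convergence keeps the relative ranking of the keywords unchanged, which is usually all that matters for keyword extraction, and avoids having to verify the stochasticity assumptions of the textbook PageRank formula.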