Problem description

There is a Python library, Newspaper3k, that makes it easier to fetch web page content: [newspaper][1]

For title retrieval:
```python
from newspaper import Article

a = Article(url)
a.download()   # fetch the HTML first
a.parse()      # then extract the title
print(a.title)
```
For content retrieval:

```python
from newspaper import Article

url = 'http://fox13Now.com/2013/12/30/new-year-new-laws-obamacare-pot-guns-and-drones/'
article = Article(url)
article.download()
article.parse()
print(article.text)
```
I want to extract information from web pages (sometimes the title, sometimes the actual content). Here is my code to get the content/text of each page:
```python
from newspaper import Article
import nltk

nltk.download('punkt')

fil = open("laborURLsml2.csv", "r")
# read every line in fil
Lines = fil.readlines()
for line in Lines:
    print(line)
    article = Article(line)
    article.download()
    article.html
    article.parse()
    print("[[[[[")
    print(article.text)
    print("]]]]]")
```
The contents of the "laborURLsml2.csv" file are: [laborURLsml2.csv][2]

My problem: the code reads the first URL and prints its content, but fails on the second URL.
Solution

I noticed that some of the URLs in your CSV file have trailing whitespace, which is what causes the problem. I also noticed that one of your links is unavailable, and that the other links point to the same story distributed to affiliates for republication.

The code below handles the first two issues, but not the data-redundancy one.
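A minimal illustration of the whitespace issue (the URLs here are placeholders): `readlines()` keeps each line's trailing newline, so the raw line is not a clean URL until it is stripped.

```python
# lines as readlines() would return them: the newline is kept
lines = ["http://example.com/a\n", "http://example.com/b \n"]

print(lines[0] == "http://example.com/a")  # False: trailing "\n" remains
print([line.strip() for line in lines])    # ['http://example.com/a', 'http://example.com/b']
```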
```python
from newspaper import Config
from newspaper import Article
from newspaper import ArticleException

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'

config = Config()
config.browser_user_agent = USER_AGENT
config.request_timeout = 10

with open('laborURLsml2.csv', 'r') as file:
    csv_file = file.readlines()
    for url in csv_file:
        try:
            article = Article(url.strip(), config=config)
            article.download()
            article.parse()
            print(article.title)
            # the replace is used to remove newlines
            article_text = article.text.replace('\n', ' ')
            print(article_text)
        except ArticleException:
            print('***FAILED TO DOWNLOAD***', article.url)
```
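The data-redundancy issue is left open above. One possible approach (a sketch, not part of the original answer) is to key the parsed results on their titles and skip repeats, since the syndicated copies share a headline. The sample data here is invented:

```python
# Invented sample: (title, text) pairs as the loop above might collect them;
# affiliate reprints of a story share the same headline.
parsed = [
    ("New year, new laws", "Story body A"),
    ("New year, new laws", "Story body A"),   # affiliate reprint
    ("Labor market update", "Story body B"),
]

seen_titles = set()
unique = []
for title, text in parsed:
    if title in seen_titles:
        continue  # skip the duplicate story
    seen_titles.add(title)
    unique.append((title, text))

print([t for t, _ in unique])  # ['New year, new laws', 'Labor market update']
```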
You may also find useful the newspaper3K overview document that I created and shared on GitHub.