Python: multithreading with the Newspaper3k library hangs indefinitely

Problem Description

I am working on a project that extracts articles from gaming media sites, and I am doing a basic test run. According to VSCode's debugger, the process consistently hangs after I set up multithreaded extraction across two sites (changing the thread count does not help). I am honestly not sure what I am doing wrong; I followed the examples as listed. One of the sites, Gamespot, is even used in someone's tutorial, and I tried dropping the other site (Polygon), which did not seem to help. I created a virtual environment and tried both Python 3.8 and 3.7. All dependencies appear to be satisfied, and I also tested on repl.it and got the same result.

I would love to hear what I am doing wrong so I can fix it; I really want to do some data science on these particular sites and their articles! It seems, though, that there may be some kind of bug in the multithreading, at least for OS X users. Here is my code:

#import system functions
import sys
import requests
sys.path.append('/usr/local/lib/python3.8/site-packages/')
#import basic HTTP handling processes
#import urllib
#from urllib.request import urlopen
#import scraping libraries

#import newspaper and BS dependencies

from bs4 import BeautifulSoup
import newspaper
from newspaper import Article 
from newspaper import Source 
from newspaper import news_pool

#import broad data libraries
import pandas as pd

#import gaming related news sources as newspapers
gamespot = newspaper.build('https://www.gamespot.com/news', memoize_articles=False)
polygon = newspaper.build('https://www.polygon.com/gaming', memoize_articles=False)

#organize the gaming related news sources using a list
gamingPress = [gamespot, polygon]
print("About to set the pool.")
#parallel process these articles using multithreading (store in mem)
news_pool.set(gamingPress, threads_per_source=4)
print("Setting the pool")
news_pool.join()
print("Pool set")
#create the interim pandas dataframe based on these sources
final_df = pd.DataFrame()

#a cap on articles pulled per source is set here
limit = 10

for source in gamingPress:
    #these are temporary placeholder lists for elements to be extracted
    list_title = []
    list_text = []
    list_source = []

    count = 0

    for article_extract in source.articles:
        article_extract.parse()
        
        #stop once the per-source article limit is reached
        if count > limit:
            break

        list_title.append(article_extract.title)
        list_text.append(article_extract.text)
        list_source.append(article_extract.source_url)

        print(count)
        count += 1 #advance the loop counter

    temp_df = pd.DataFrame({'Title': list_title, 'Text': list_text, 'Source': list_source})
    #Append this to the final DataFrame
    final_df = final_df.append(temp_df, ignore_index=True)

#export to CSV, placeholder for deeper analysis/more limited scope, may remain
final_df.to_csv('gaming_press.csv')
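
A side note on the DataFrame-building loop above: DataFrame.append was deprecated and then removed in pandas 2.0, so that part of the script will eventually fail even once the hang is fixed. Here is a minimal sketch of the same accumulation using pd.concat, assuming the gamingPress list from above with articles already downloaded and parsed:

import pandas as pd

frames = []
for source in gamingPress:
    #each row assumes the article was already downloaded and parsed
    rows = [{'Title': a.title, 'Text': a.text, 'Source': a.source_url}
            for a in source.articles[:10]]
    frames.append(pd.DataFrame(rows))

#concatenate once at the end instead of appending inside the loop
final_df = pd.concat(frames, ignore_index=True)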

Here is what came back when I finally gave up and sent an interrupt at the console:


About to set the pool.
Setting the pool
^X^X^CTraceback (most recent call last):
  File "scraper1.py",line 31,in <module>
    news_pool.join()
  File "/usr/local/lib/python3.8/site-packages/newspaper3k-0.3.0-py3.8.egg/newspaper/mthreading.py",line 103,in join
    self.pool.wait_completion()
  File "/usr/local/lib/python3.8/site-packages/newspaper3k-0.3.0-py3.8.egg/newspaper/mthreading.py",line 63,in wait_completion
    self.tasks.join()
  File "/usr/local/Cellar/python@3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/queue.py",line 89,in join
    self.all_tasks_done.wait()
  File "/usr/local/Cellar/python@3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/threading.py",line 302,in wait
    waiter.acquire()
KeyboardInterrupt

Solution

I decided to look into this multithreading problem with Newspaper. I reviewed Newspaper's source code on GitHub and devised the answer below. In testing, I was able to obtain the article titles.

The processing seems time-consuming: it took about six minutes on average. After more research, the delay appears to be directly related to the articles being downloaded in the background. I am not sure how to speed this process up within Newspaper itself, though see the sketch after the code below for one workaround.

import newspaper
from newspaper import Config
from newspaper import news_pool

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'

config = Config()
config.browser_user_agent = USER_AGENT
config.request_timeout = 10

gamespot = newspaper.build('https://www.gamespot.com/news', config=config, memoize_articles=False)
polygon = newspaper.build('https://www.polygon.com/gaming', config=config, memoize_articles=False)

gamingPress = [gamespot, polygon]

# this setting is adjustable 
news_pool.config.number_threads = 2

# this setting is adjustable 
news_pool.config.thread_timeout_seconds = 2

news_pool.set(gamingPress)
news_pool.join()

for source in gamingPress:
    for article_extract in source.articles:
        article_extract.parse()
        print(article_extract.title)
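
Since most of those six minutes appears to go into downloading every article Newspaper discovers, one workaround is simply to cap how many articles you fetch per source. This is a minimal sketch, not part of the answer above; it assumes the gamingPress list from the previous snippet and a hypothetical ARTICLE_CAP of ten:

ARTICLE_CAP = 10  # hypothetical cap on articles fetched per source

for source in gamingPress:
    # slice the discovered article list before downloading anything
    for article_extract in source.articles[:ARTICLE_CAP]:
        article_extract.download()  # fetch only this article's HTML
        article_extract.parse()
        print(article_extract.title)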

Truthfully, I am still trying to determine the benefit of using news_pool. From the comments in Newspaper's source code, news_pool's main purpose appears to be related to rate-limiting connections. I also noticed that several attempts have been made to improve the threading model, but those code updates have not been pushed into the production code.
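
As a rough illustration of that rate-limiting idea, giving each source its own small worker pool keeps any single domain from being hit with unbounded concurrency. This is a hand-rolled sketch using only the standard library, not Newspaper's actual implementation:

from concurrent.futures import ThreadPoolExecutor

def download_source(source, workers=2):
    # one small, per-domain pool: at most `workers` simultaneous requests per site
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda article: article.download(), source.articles))

for source in gamingPress:
    download_source(source)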

That said... the answer below starts processing within about one minute and does not use news_pool. More testing is needed to see whether the sources are rate-limiting the connections or something else is going wrong.

import newspaper
from newspaper import Config

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'

config = Config()
config.browser_user_agent = USER_AGENT
config.request_timeout = 10

gamespot = newspaper.build('https://www.gamespot.com/news', config=config, memoize_articles=False)
polygon = newspaper.build('https://www.polygon.com/gaming', config=config, memoize_articles=False)

gamingPress = [gamespot, polygon]
for source in gamingPress:
    source.download_articles()
    for article_extract in source.articles:
        article_extract.parse()
        print(article_extract.title)

One note concerning the news_pool code section: for some reason, in my limited tests against the target sources, I noticed duplicate article titles.
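
If those duplicates matter for your analysis, a simple de-duplication pass on the article URL (or title) filters them out. A minimal sketch, assuming the gamingPress list from above with the articles already downloaded:

seen_urls = set()

for source in gamingPress:
    for article_extract in source.articles:
        article_extract.parse()
        if article_extract.url in seen_urls:
            continue  # skip articles listed under more than one section
        seen_urls.add(article_extract.url)
        print(article_extract.title)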
