Scraping certain URLs from a particular page

Problem description

I am trying to scrape all the URLs on a webpage that relate to one particular topic.

I am doing this with Beautiful Soup.

My current attempt is:

urls = soup.find_all('a', href=True)

But this picks up many extraneous URLs on the page that I don't want.

The page is: https://www.basketball-reference.com/players/

For example, I want to scrape all the player names and their reference codes, e.g.

 <a href="/players/a/allenra02.html">Ray Allen</a>,

and add "Ray Allen/allenra02" to a list.

How can I restrict the URL search to a required prefix, e.g. "/players/", using Beautiful Soup?

Solution

You can pass a compiled regular expression as the href= parameter of .find_all().

For example:

import re
import requests
from bs4 import BeautifulSoup


url = 'https://www.basketball-reference.com/players/'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')

# Match hrefs like /players/a/allenra02.html; the group captures the reference code:
r = re.compile(r'/players/.+/(.*?)\.html')
out = []
for a in soup.find('ul', class_="page_index").find_all('a', href=r):
    out.append('{}/{}'.format(a.get_text(strip=True), r.search(a['href']).group(1)))

from pprint import pprint
pprint(out)

Prints:

['Kareem Abdul-Jabbar/abdulka01',
 'Ray Allen/allenra02',
 'LaMarcus Aldridge/aldrila01',
 'Paul Arizin/arizipa01',
 'Carmelo Anthony/anthoca01',
 'Tiny Archibald/architi01',
 'Charles Barkley/barklch01',
 'Kobe Bryant/bryanko01',
 'Larry Bird/birdla01',
 'Walt Bellamy/bellawa01',
 'Rick Barry/barryri01',
 'Chauncey Billups/billuch01',
 'Wilt Chamberlain/chambwi01',
 'Vince Carter/cartevi01',
 'Maurice Cheeks/cheekma01',
 'Stephen Curry/curryst01',
 ...and so on.
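As a side note, the compiled pattern does both the filtering (find_all keeps only matching hrefs) and the extraction (the capture group grabs the reference code). A quick stand-alone check of the pattern itself, with sample hrefs made up for illustration:

```python
import re

# Same pattern as in the answer: the group captures everything between
# the last slash and ".html".
r = re.compile(r'/players/.+/(.*?)\.html')

# Sample hrefs for illustration; only the first one matches.
hrefs = ['/players/a/allenra02.html', '/leagues/NBA_2020.html', '/players/b/']
for h in hrefs:
    m = r.search(h)
    if m:
        print(m.group(1))  # -> allenra02
```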

Try this:

import requests
from bs4 import BeautifulSoup

url = 'https://www.basketball-reference.com/players/'
soup = BeautifulSoup(requests.get(url).text, "html.parser")

ul = soup.find("ul", attrs={'class': "page_index"})

for li in ul.find_all("li"):
    # skip the first link in each <li> (the A, B, ... letter index)
    for player in li.select("a")[1:]:
        print(
            player.text + "/" + player['href'].split("/")[-1].replace(".html","")
        )

Kareem Abdul-Jabbar/abdulka01
Ray Allen/allenra02
LaMarcus Aldridge/aldrila01
...
...
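If you only need prefix filtering rather than full regex extraction, a CSS attribute selector is another option: `a[href^="/players/"]` matches hrefs that begin with `/players/`, and adding `[href$=".html"]` drops the letter-index links. A minimal sketch on an inline snippet (the markup is an assumed, simplified version of the page structure described above):

```python
from bs4 import BeautifulSoup

# Assumed, simplified markup modeled on the page described in the question.
html = '''
<ul class="page_index">
  <li><a href="/players/a/">A</a>
      <a href="/players/a/allenra02.html">Ray Allen</a></li>
</ul>
<a href="/about/">About</a>
'''
soup = BeautifulSoup(html, 'html.parser')

# ^= matches the prefix, $= matches the suffix; together they keep only
# player pages, not the "/players/a/" letter-index links.
for a in soup.select('a[href^="/players/"][href$=".html"]'):
    code = a['href'].split('/')[-1].replace('.html', '')
    print(a.get_text(strip=True) + '/' + code)  # -> Ray Allen/allenra02
```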