How to extract the title from an h5 a href link using BeautifulSoup

Problem description

I am trying to extract link titles with BeautifulSoup. The code I am using is as follows:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
import pandas as pd

hdr={'User-Agent':'Chrome/84.0.4147.135'}

frame=[]

for page_number in range(19):
    http= "https://www.epa.wa.gov.au/media-statements?page={}".format(page_number+1)

    print('Downloading page %s...' % http)

    url= requests.get(http,headers=hdr)
    soup = BeautifulSoup(url.content,'html.parser')

    for row in soup.select('.view-content .views-row'):

        content = row.select_one('.views-field-body').get_text(strip=True)
        title = row.text.strip(':')
        link = 'https://www.epa.wa.gov.au' + row.a['href']
        date = row.select_one('.date-display-single').get_text(strip=True)

        frame.append({
            'title': title,'link': link,'date': date,'content': content
        })

dfs = pd.DataFrame(frame)
dfs.to_csv('epa_scrapper.csv',index=False,encoding='utf-8-sig')

However, after running the code above, nothing is displayed. How can I extract the value stored in the title attribute of the anchor tag inside the link?

Also, I just want to know how to append "title", "link", "dt", "content" to a CSV file.

Thank you very much.

Solution

To get the link text, you can use the selector "h5 a". For example:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
import pandas as pd

hdr={'User-Agent':'Chrome/84.0.4147.135'}

frame=[]
for page_number in range(1,20):
    http= "https://www.epa.wa.gov.au/media-statements?page={}".format(page_number)

    print('Downloading page %s...' % http)

    url= requests.get(http,headers=hdr)
    soup = BeautifulSoup(url.content,'html.parser')

    for row in soup.select('.view-content .views-row'):

        # use a newline separator so paragraphs inside the body text stay separated
        content = row.select_one('.views-field-body').get_text(strip=True,separator='\n')
        # the statement title is the text of the link inside the h5 heading
        title = row.select_one('h5 a').get_text(strip=True)
        link = 'https://www.epa.wa.gov.au' + row.a['href']
        date = row.select_one('.date-display-single').get_text(strip=True)

        frame.append({
            'title': title,'link': link,'date': date,'content': content
        })

dfs = pd.DataFrame(frame)
dfs.to_csv('epa_scrapper.csv',index=False,encoding='utf-8-sig')
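
The question also asks how to get the value stored in the title attribute of the anchor tag. Assuming the <a> element inside the h5 heading actually carries a title attribute (an assumption about the page markup, not verified here), it can be read with .get() instead of the link text; this fragment would go inside the for row in ... loop:

# hypothetical: only useful if the <a> element really has a title attribute
a_tag = row.select_one('h5 a')
# .get() returns None when the attribute is missing, so fall back to the visible link text
title = a_tag.get('title') or a_tag.get_text(strip=True)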

The script creates epa_scrapper.csv; a screenshot of the file opened in LibreOffice shows the title, link, date and content columns.

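As for appending "title", "link", "date", "content" to a CSV file instead of overwriting it on every run, one option is pandas' to_csv with mode='a'. A minimal sketch, reusing the frame list and file name from the script above:

import os
import pandas as pd

dfs = pd.DataFrame(frame, columns=['title','link','date','content'])
# write the header only if the file does not exist yet, then append the new rows
write_header = not os.path.exists('epa_scrapper.csv')
dfs.to_csv('epa_scrapper.csv', mode='a', header=write_header, index=False, encoding='utf-8-sig')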