Problem description
I'm trying to scrape the content of every URL in a list. Building the list itself works fine; the original link looks like this: https://www.lamudi.com.mx/nuevo-leon/departamento/for-rent/
tags = soup('a', {'class': 'js-listing-link'})
for tag in tags:
    linktag = tag.get('href').strip()
    if linktag not in linklist:
        linklist.append(linktag)
The result of the above is a list of URLs as strings. But then I try this:
for link in linklist[0]:
    page2 = urllib.request.Request(link, headers={'User-Agent': 'Mozilla/5.0'})
    myhtml2 = urllib.request.urlopen(page2).read()
    soupfl = BeautifulSoup(myhtml2, 'html.parser')
and it raises:

    raise ValueError("unknown url type: %r" % self.full_url)
ValueError: unknown url type: 'h'
Solution
The error happens because "for link in linklist[0]:" iterates over the characters of the first URL string rather than over the list, so urllib is handed the single character 'h' as a URL. Loop over the list itself instead. To fetch all the links, you can use the following example:
import urllib.request
from bs4 import BeautifulSoup

URL = "https://www.lamudi.com.mx/nuevo-leon/departamento/for-rent/"
HEADERS = {
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36"
}

r = urllib.request.Request(URL, headers=HEADERS)
soup = BeautifulSoup(urllib.request.urlopen(r).read(), "html.parser")
tags = soup.find_all("a", {"class": "js-listing-link"})

# Collect the hrefs, skipping duplicates while preserving order
links = []
for tag in tags:
    if tag["href"] not in links:
        links.append(tag["href"])

for link in links:
    print("Getting:", link)
    r2 = urllib.request.Request(link, headers=HEADERS)
    soup2 = BeautifulSoup(urllib.request.urlopen(r2).read(), "html.parser")
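To see why the original loop produced "unknown url type: 'h'", here is a minimal sketch of the difference between iterating a list element and iterating the list (the URLs are placeholder values, not taken from the site):

```python
# Iterating a string yields its characters, not whole URLs.
linklist = ["https://example.com/a", "https://example.com/b"]

chars = [c for c in linklist[0]]  # walks the first URL character by character
print(chars[0])                   # prints h -- the "URL" urllib tried to open

# Iterating the list itself yields complete URL strings.
urls = [u for u in linklist]
print(urls[0])                    # prints https://example.com/a
```

So dropping the [0] (or using linklist[0:1] if only the first page is wanted) gives urllib a full URL string each time.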