BeautifulSoup Webscrape Problem

Problem Description

I'm new to Python and to coding in general. I've been following a YouTube video tutorial on web scraping, and I'm running into a problem when I try to run the code.

What I'm trying to do: scrape the product listings from cigarmonster.com and write them to a CSV file.

What I've tried:

  • I have tested the code in Anaconda, and each attribute returns the correct text/HTML.
  • I can't run the whole script. I get an error message and I'm not sure what's going wrong.

Here is the script:

from urllib.request import Request,urlopen
from bs4 import BeautifulSoup as soup

#opens up connection and grabs the webpage
url = 'https://www.cigarmonster.com/'
req = Request(url,headers={'User-Agent': 'Mozilla/5.0'})

web_byte = urlopen(req).read()

webpage = web_byte.decode('utf-8')

#parses html
page_soup = soup(webpage,"html.parser")

# grabs each of the products
containers = page_soup.findAll("div",{"class":"quickview-pop launchModal"})

filename = "cigar_list.csv"

f = open(filename,"w")

headers = "cigar_brand,product_size,famous_price,monster_price,percent_off"

f.write(headers)

for container in containers:
    try:
        cigar_brand = container.find("div",{"class":"item-grid-product-title"}).text
    except Exception as e:
        cigar_brand = "NA"
    else:
        pass
    finally:
        pass

    size_container = container.findAll("span",{"class":"product-subtitle"})

    product_size = size_container[0].text

    famous_price_container = container.findAll("div",{"class":"col-xs-12 item-grid-product-fss-price"})

    famous_price = famous_price_container[0].text

    monster_price_container = container.findAll("div",{"class":"col-xs-12 item-grid-product-monster-price"})

    monster_price = monster_price_container[0].text

    percent_off_container = container.findAll("div",{"class":"col-xs-12 item-grid-product-fss-pct"})

    percent_off = percent_off_container[0].text

    #print("cigar_brand: " +  cigar_brand)
    #print("product_size: " +  product_size)
    #print("famous_price: " + famous_price)  
    #print("monster_price: " +  monster_price)
    #print("percent_off: " +  percent_off)
    f.write(cigar_brand + "," + product_size + "," + famous_price + "," + monster_price + "," + percent_off + "\n")
 
f.close()

I get the following error when I run the script:

Traceback (most recent call last):
  File "cigar_monster_scrape.py", line 8, in <module>
    uClient = urlopen(uReq).read()
  File "C:\Users\nmbuc\anaconda3\lib\urllib\request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Users\nmbuc\anaconda3\lib\urllib\request.py", line 525, in open
    response = self._open(req, data)
  File "C:\Users\nmbuc\anaconda3\lib\urllib\request.py", line 547, in _open
    return self._call_chain(self.handle_open, 'unknown', 'unknown_open', req)
  File "C:\Users\nmbuc\anaconda3\lib\urllib\request.py", line 502, in _call_chain
    result = func(*args)
  File "C:\Users\nmbuc\anaconda3\lib\urllib\request.py", line 1421, in unknown_open
    raise URLError('unknown url type: %s' % type)
urllib.error.URLError: <urlopen error unknown url type: https>

At first I also ran into a lot of indentation errors.

Solution

Change the two lines of code below and it works. In the code below I used requests instead of urllib; everything else stays the same.

import requests   

url = 'https://www.cigarmonster.com/'
req = requests.get(url,headers={'User-Agent': 'Mozilla/5.0'})    

#parses html
page_soup = soup(req.text,"html.parser")
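
For reference, here is a minimal sketch of the whole script with the requests change folded in. The class names and CSV columns are the ones from the question's code, so this is only a sketch and will break if the site's markup changes.

import requests
from bs4 import BeautifulSoup as soup

url = 'https://www.cigarmonster.com/'

# requests handles the https URL and the custom User-Agent header directly
req = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
req.raise_for_status()  # fail early if the request did not succeed

#parses html
page_soup = soup(req.text, "html.parser")

# grabs each of the products
containers = page_soup.findAll("div", {"class": "quickview-pop launchModal"})

with open("cigar_list.csv", "w", encoding="utf-8") as f:
    f.write("cigar_brand,product_size,famous_price,monster_price,percent_off\n")

    for container in containers:
        try:
            cigar_brand = container.find("div", {"class": "item-grid-product-title"}).text
        except AttributeError:
            cigar_brand = "NA"

        product_size = container.findAll("span", {"class": "product-subtitle"})[0].text
        famous_price = container.findAll("div", {"class": "col-xs-12 item-grid-product-fss-price"})[0].text
        monster_price = container.findAll("div", {"class": "col-xs-12 item-grid-product-monster-price"})[0].text
        percent_off = container.findAll("div", {"class": "col-xs-12 item-grid-product-fss-pct"})[0].text

        f.write(cigar_brand + "," + product_size + "," + famous_price + "," + monster_price + "," + percent_off + "\n")

The with block also makes sure the CSV file gets closed even if one of the lookups raises an exception.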