Problem Description
I'm practicing scraping data from websites and I'm stuck on the site at https://www.ilan.gov.tr/ilan/kategori/12/iflas-hukuku-davalari?txv=12&currentPage=1. I want to get the Kurum – İlan Numarası – Şehir (Corporation – Announcement Number – City) data. I don't seem to be able to scrape the div: when I run the code containing the selector div.search-results-header row, it doesn't work. I'd also like to scrape the first 20 pages of the site. How can I do that? There is a lot of complicated code involved, so I was going to attach images. If you can at least show me how to get the Kurum, I think I can handle the rest. Thanks.
Anyway, here is the code I'm working on for the project.
public static void main(String[] args) throws Exception {
    File iflasHukuku = new File("/Users/Berkan/Desktop/Iflas Hukuku.txt");
    iflasHukuku.createNewFile();
    FileWriter fileWriter = new FileWriter(iflasHukuku);
    BufferedWriter bufferedWriter = new BufferedWriter(fileWriter);
    final Document document = Jsoup.connect("https://www.ilan.gov.tr/ilan/kategori/12/iflas-hukuku-davalari?txv=12&currentPage=1").get();
    for (Element x : document.select(".search-results-table-container container mb-4 ng-tns-c6-3 ng-star-inserted")) {
        final String kurumAdi = x.select("div.search-results-header row").text();
        System.out.println(kurumAdi);
    }
}
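A side note on the selector itself: in CSS, an element carrying several classes is matched by chaining the classes with dots and no spaces; a space is the descendant combinator, so "div.search-results-header row" looks for a (nonexistent) <row> element inside the div. A minimal sketch of the difference, using a simplified inline HTML stand-in for the real page's markup (the snippet is an assumption, not the live HTML):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class SelectorDemo {
    public static void main(String[] args) {
        // Simplified stand-in for the real page's markup.
        String html = "<div class=\"search-results-header row\"><span>Kurum A</span></div>";
        Document doc = Jsoup.parse(html);

        // A space means "descendant": this looks for a <row> element inside the div -> matches nothing.
        System.out.println(doc.select("div.search-results-header row").size()); // 0

        // Chained classes match a single div carrying BOTH classes.
        System.out.println(doc.select("div.search-results-header.row").text()); // Kurum A
    }
}
```

The same dot-chaining applies to the long class list in the outer select call.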
Solution
The web page is rendered as an Angular app, so you cannot simply fetch the HTML with Jsoup.connect: a browser has to execute the JavaScript that renders the page. Instead, load the page with a WebDriver, grab its pageSource, and hand that to Jsoup.
See this:
import io.github.bonigarcia.wdm.WebDriverManager;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class JSoupTest {
    public static void main(String[] args) {
        WebDriverManager.chromedriver().setup(); // downloads the matching ChromeDriver binary
        ChromeOptions chromeOptions = new ChromeOptions();
        chromeOptions.setHeadless(true);
        WebDriver driver = new ChromeDriver(chromeOptions);
        driver.get("https://www.ilan.gov.tr/ilan/kategori/12/iflas-hukuku-davalari?txv=12&currentPage=1");
        WebDriverWait wait = new WebDriverWait(driver, 30);
        // Wait until Angular has rendered the results into the DOM
        wait.until(webDriver -> driver.getPageSource().contains("İlan Açıklaması"));
        final Document document = Jsoup.parse(driver.getPageSource());
        for (Element x : document.select(".search-results-row")) {
            System.out.println(x.text());
            // parse it further
        }
        driver.quit();
    }
}
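To cover the first 20 pages you asked about, run the same load-wait-parse cycle once per value of the currentPage query parameter. Building the URLs is plain string work; the helper below is a hypothetical sketch (pageUrl is my name for it, not part of any library), with the Selenium calls indicated only in comments:

```java
import java.util.ArrayList;
import java.util.List;

public class PageUrls {
    static final String BASE =
        "https://www.ilan.gov.tr/ilan/kategori/12/iflas-hukuku-davalari?txv=12&currentPage=";

    // Build the URL for one result page (pages appear to be 1-based on this site).
    static String pageUrl(int page) {
        return BASE + page;
    }

    public static void main(String[] args) {
        List<String> urls = new ArrayList<>();
        for (int page = 1; page <= 20; page++) {
            urls.add(pageUrl(page));
            // In the scraper: driver.get(pageUrl(page));
            //                 wait.until(...);
            //                 Document doc = Jsoup.parse(driver.getPageSource());
            //                 ... extract Kurum / İlan Numarası / Şehir from each row ...
        }
        System.out.println(urls.size()); // 20
    }
}
```

Reuse a single ChromeDriver instance for all 20 pages rather than starting a fresh browser per page; starting Chrome is by far the slowest step.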
Required dependencies:
<dependency>
    <groupId>io.github.bonigarcia</groupId>
    <artifactId>webdrivermanager</artifactId>
    <version>4.2.2</version>
</dependency>
<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-chrome-driver</artifactId>
    <version>3.141.59</version>
</dependency>
<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-support</artifactId>
    <version>3.141.59</version>
</dependency>
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>28.2-jre</version>
</dependency>
<dependency>
    <groupId>org.jsoup</groupId>
    <artifactId>jsoup</artifactId>
    <version>1.13.1</version>
</dependency>