Implementing a Simple Web Crawler in Python
2020-11-27 14:27:18 Editor: 小采


Introduction

A crawler is a program that automatically fetches information from the internet; its value is that it puts web data in your hands. With the crawled data you can do many things, for example: run statistics and comparisons, build an app around a particular topic, or build a news reader, and so on.

Crawler Architecture

1) URL manager
2) Web page downloader
3) Web page parser
4) Crawler scheduler
5) Use of the extracted data

Crawler Implementation

1) Scheduler implementation

# coding:utf-8
import url_manager
import html_downloader
import html_parser
import html_outputer


class SpiderMain(object):
    def __init__(self):
        # Wire up the four components: URL manager, downloader, parser and outputer
        self.urls = url_manager.UrlManager()
        self.downloader = html_downloader.HtmlDownloader()
        self.parser = html_parser.HtmlParser()
        self.outputer = html_outputer.HtmlOutputer()

    def craw(self, root_url):
        count = 1
        self.urls.add_new_url(root_url)
        while self.urls.has_new_url():
            try:
                new_url = self.urls.get_new_url()
                print "craw %d : %s" % (count, new_url)
                html_cont = self.downloader.download(new_url)
                # Extract follow-up URLs and the data we care about from the page
                new_urls, new_data = self.parser.parse(new_url, html_cont)
                self.urls.add_new_urls(new_urls)
                self.outputer.collect_data(new_data)
                if count == 1000:  # stop after 1000 pages
                    break
                count = count + 1
            except:
                print "craw failed"
        self.outputer.output_html()


if __name__ == "__main__":
    root_url = "http://baike.baidu.com/view/21087.htm"
    obj_spider = SpiderMain()
    obj_spider.craw(root_url)
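
One caveat about the scheduler: the bare except: swallows every error, so when a page fails you never learn why. Below is a minimal, self-contained illustration of a friendlier pattern (Python 2 syntax to match the article; the URL and the error handling are only an example, not part of the original code):

# Illustration only: report the exception instead of hiding it
import urllib2

url = "http://baike.baidu.com/view/21087.htm"  # example URL
try:
    html_cont = urllib2.urlopen(url).read()
except Exception as e:
    # In craw(), this line would replace the bare "print 'craw failed'"
    print "craw failed: %s" % e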

2) URL manager implementation

class UrlManager(object):
    def __init__(self):
        self.new_urls = set()   # URLs waiting to be crawled
        self.old_urls = set()   # URLs that have already been crawled

    def add_new_url(self, url):
        if url is None:
            return
        # Only accept URLs we have never seen before
        if url not in self.new_urls and url not in self.old_urls:
            self.new_urls.add(url)

    def add_new_urls(self, urls):
        if urls is None or len(urls) == 0:
            return
        for url in urls:
            self.add_new_url(url)

    def has_new_url(self):
        return len(self.new_urls) != 0

    def get_new_url(self):
        # Hand out a URL and move it from the "new" set to the "old" set
        new_url = self.new_urls.pop()
        self.old_urls.add(new_url)
        return new_url
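
To see how the de-duplication works, here is a quick usage sketch (assuming the class above is saved as url_manager.py, which is what the scheduler's import suggests; the URLs are just examples):

# Quick check of UrlManager's de-duplication (Python 2, matching the article)
from url_manager import UrlManager

manager = UrlManager()
manager.add_new_url("http://baike.baidu.com/view/21087.htm")
manager.add_new_url("http://baike.baidu.com/view/21087.htm")     # duplicate, ignored
manager.add_new_urls(["http://baike.baidu.com/view/12345.htm"])  # example URL

while manager.has_new_url():
    print manager.get_new_url()   # each URL is handed out once, then moved to old_urls

# A URL that has already been crawled is not queued again
manager.add_new_url("http://baike.baidu.com/view/21087.htm")
print manager.has_new_url()       # False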

3) Web page downloader implementation

import urllib2


class HtmlDownloader(object):
    def download(self, url):
        if url is None:
            return None
        response = urllib2.urlopen(url)
        # Only return content for successful responses
        if response.getcode() != 200:
            return None
        return response.read()
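
Note that urllib2 only exists on Python 2. If you want to run this part on Python 3, a roughly equivalent downloader (a sketch under that assumption, not part of the original article) uses urllib.request instead:

# Python 3 sketch of the same downloader, using urllib.request instead of urllib2
import urllib.request


class HtmlDownloader(object):
    def download(self, url):
        if url is None:
            return None
        response = urllib.request.urlopen(url)
        # getcode() and read() behave the same as in the urllib2 version
        if response.getcode() != 200:
            return None
        return response.read()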

4) Web page parser implementation

from bs4 import BeautifulSoup
import re
import urlparse


class HtmlParser(object):
    def _get_new_urls(self, page_url, soup):
        new_urls = set()
        # Baike entry links look like /view/123.htm
        links = soup.find_all('a', href=re.compile(r"/view/\d+\.htm"))
        for link in links:
            new_url = link['href']
            # Turn the relative link into an absolute URL
            new_full_url = urlparse.urljoin(page_url, new_url)
            new_urls.add(new_full_url)
        return new_urls

    def _get_new_data(self, page_url, soup):
        res_data = {}
        res_data['url'] = page_url
        # <dd class="lemmaWgt-lemmaTitle-title"><h1> holds the entry title
        title_node = soup.find('dd', class_="lemmaWgt-lemmaTitle-title").find("h1")
        res_data['title'] = title_node.get_text()
        # <div class="lemma-summary"> holds the summary paragraph
        summary_node = soup.find('div', class_="lemma-summary")
        res_data['summary'] = summary_node.get_text()
        return res_data

    def parse(self, page_url, html_cont):
        if page_url is None or html_cont is None:
            return
        soup = BeautifulSoup(html_cont, 'html.parser', from_encoding='utf-8')
        new_urls = self._get_new_urls(page_url, soup)
        new_data = self._get_new_data(page_url, soup)
        return new_urls, new_data
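
The same Python 2 caveat applies here: urlparse lives at urllib.parse on Python 3, and only the urljoin call changes. A tiny runnable sketch of that difference (the relative link is just an example):

# Python 3 sketch: urlparse.urljoin becomes urllib.parse.urljoin
from urllib.parse import urljoin

page_url = "http://baike.baidu.com/view/21087.htm"
new_url = "/view/12345.htm"           # example relative link
print(urljoin(page_url, new_url))     # http://baike.baidu.com/view/12345.htm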

5) Outputting the extracted data

# coding:utf-8
class HtmlOutputer(object):
    def __init__(self):
        self.datas = []   # records collected from the parser

    def collect_data(self, data):
        if data is None:
            return
        self.datas.append(data)

    def output_html(self):
        # Write every collected record into a simple HTML table
        fout = open('output.html', 'w')
        fout.write("<html>")
        fout.write("<meta charset='UTF-8'>")
        fout.write("<body>")
        fout.write("<table>")
        for data in self.datas:
            fout.write("<tr>")
            fout.write("<td>%s</td>" % data['url'])
            fout.write("<td>%s</td>" % data['title'].encode('utf-8'))
            fout.write("<td>%s</td>" % data['summary'].encode('utf-8'))
            fout.write("</tr>")
        fout.write("</table>")
        fout.write("</body>")
        fout.write("</html>")
        fout.close()
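
On Python 3 the manual .encode('utf-8') calls would write byte-string literals such as b'...' into the table, so a ported version should drop them and open the file with an explicit encoding instead. A sketch of output_html under that assumption (the rest of the class is unchanged):

    # Python 3 sketch of output_html: explicit file encoding, no manual .encode()
    def output_html(self):
        with open('output.html', 'w', encoding='utf-8') as fout:
            fout.write("<html>")
            fout.write("<meta charset='UTF-8'>")
            fout.write("<body>")
            fout.write("<table>")
            for data in self.datas:
                fout.write("<tr>")
                fout.write("<td>%s</td>" % data['url'])
                fout.write("<td>%s</td>" % data['title'])
                fout.write("<td>%s</td>" % data['summary'])
                fout.write("</tr>")
            fout.write("</table>")
            fout.write("</body>")
            fout.write("</html>")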

Running the Crawler

This crawler fetches 1000 static Baidu Baike pages related to the keyword "Python". From each page it mainly extracts the entry title and its summary, stores the results as an HTML file, and the output can then be viewed by opening that file in a browser.
