For example, to get all div elements with class=test:
1. Using Scrapy, sample code (a fuller spider sketch follows below):
from scrapy.selector import Selector

def parse(self, response):
    hxs = Selector(response)
    items = []  # collected items would go here
    # select every div whose class attribute is exactly "test"
    divs = hxs.xpath('//div[@class="test"]')
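For context, a minimal complete spider might look like the sketch below; the spider name, start URL, and the choice to yield only each div's text are illustrative assumptions, not part of the original snippet:

import scrapy

class TestDivSpider(scrapy.Spider):
    # hypothetical spider name and start URL, for illustration only
    name = 'testdiv'
    start_urls = ['http://yourdomain.com']

    def parse(self, response):
        # the response object supports .xpath() directly,
        # so constructing an explicit Selector is optional
        for div in response.xpath('//div[@class="test"]'):
            # yield the full text content of each matched div
            yield {'text': div.xpath('string(.)').get()}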
2. Using lxml, sample code:
from lxml import etree

# alternative approach (kept commented out): fetch the page with mechanize,
# then parse it with lxml.html
# import mechanize
# import cookielib
# import lxml.html
# br = mechanize.Browser()
# r = br.open('http://yourdomain.com')
# html = br.response().read()
# root = lxml.html.fromstring(html)
# divs = root.xpath("//div[@class='test']")

# use an explicit utf-8 HTML parser to avoid unicode codec problems
hparser = etree.HTMLParser(encoding='utf-8')
htree = etree.parse('http://yourdomain.com', hparser)
htree.write('/tmp/bi.html')  # optionally save a local copy for inspection
divs = htree.xpath("//div[@class='test']")
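Once divs is populated, a typical follow-up is to read attributes or text from each element; a minimal sketch, assuming you only need the class attribute and the text content:

for div in divs:
    # .get() reads an attribute from the element;
    # itertext() works on plain etree elements (text_content() does not)
    print(div.get('class'), ''.join(div.itertext()))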
To get all divs whose class merely contains test, e.g. <div class="test website"></div>, just change the XPath expression above to "//div[contains(@class,'test')]".
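Applied to the two snippets above (same hypothetical URLs and variables as before), the change looks like this:

# Scrapy
divs = hxs.xpath("//div[contains(@class, 'test')]")

# lxml
divs = htree.xpath("//div[contains(@class, 'test')]")

Note that contains() is a substring match, so it would also match a class like "testing".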
Scrapy's internal parsing engine is also built on lxml.
References:
http://lxml.de/dev/lxmlhtml.html#examples
http://doc.scrapy.org/en/latest/
by iefreer