C#/VB – Automated WebSpider / WebRobot


Introduction

What is a WebSpider

A WebSpider or crawler is an automated program that follows links on websites and calls a WebRobot to handle the contents of each link.

What is a WebRobot

A WebRobot is a program that processes the content found through a link. A WebRobot can be used for indexing a page or for extracting useful information based on a predefined query; common examples are link checkers, e-mail address extractors, multimedia extractors and update watchers.

 

Background

I recently had a contract to build a web page link checker. This component had to be able to check links that were stored in a database as well as links on a website, both through the local file system and over the Internet.

This article explains the WebRobot, the WebSpider and how to enhance the WebRobot through specialized content handlers. The code shown has some superfluous parts, such as try blocks, variable initialization and minor methods, removed.

Class overview

The classes that make up the WebRobot are WebPageState, which represents a URI and its current state in the processing chain, and an implementation of IWebPageProcessor, which performs the actual reading of the URI, calls the content handlers and deals with page errors.

The WebSpider has only one class, WebSpider, which maintains a list of pending/processed URIs as WebPageState objects and runs the WebPageProcessor against each WebPageState to extract links to other pages and to test whether the URIs are valid.

Using the code - WebRobot

Web page processing is handled by an object that implements IWebPageProcessor. The Process method expects to receive a WebPageState; this will be updated during page processing, and if all is successful the method will return true. Any number of content handlers can also be called after the page has been read, by assigning WebPageContentDelegate delegates to the processor.

<span class="cs-keyword">public</span> <span class="cs-keyword">delegate</span> <span class="cs-keyword">void</span> WebPageContentDelegate( WebPageState state );
<span class="cs-keyword">public</span> <span class="cs-keyword">interface</span> IWebPageProcessor
{
<span class="cs-keyword">bool</span> Process( WebPageState state );
WebPageContentDelegate ContentHandler { <span class="cs-keyword">get</span>; <span class="cs-keyword">set</span>; }
}
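
For readers who want to drive the processor directly, rather than through the spider, the following is a minimal usage sketch based on the interface above; the example URL and the PrintContentLength handler are assumptions for illustration, not part of the original code.

// Minimal usage sketch - the URL and handler below are assumptions.
IWebPageProcessor processor = new WebPageProcessor( );

// Any number of handlers can be chained onto the multicast delegate.
processor.ContentHandler += new WebPageContentDelegate( PrintContentLength );

WebPageState state = new WebPageState( "http://www.example.com/" );

if ( processor.Process( state ) )
{
    Console.WriteLine( state.StatusCode + " - " + state.Uri );
}

private void PrintContentLength( WebPageState state )
{
    // Content is only populated when the page was read successfully.
    Console.WriteLine( state.Content.Length );
}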

The WebPageState object holds state and content information for the URI being processed. All properties of this object are read/write except for the Uri, which must be passed in through the constructor.

<span class="cs-keyword">public</span> <span class="cs-keyword">class</span> WebPageState
{
<span class="cs-keyword">private</span> WebPageState( ) {}
<span class="cs-keyword">public</span> WebPageState( Uri uri )
{
m_uri             = uri;
}
<span class="cs-keyword">public</span> WebPageState( <span class="cs-keyword">string</span> uri )
: <span class="cs-keyword">this</span>( <span class="cs-keyword">new</span> Uri( uri ) ) { }
Uri      m_uri;                           <span class="cs-comment">// URI to be processed</span>
<span class="cs-keyword">string</span>   m_content;                       <span class="cs-comment">// Content of webpage</span>
<span class="cs-keyword">string</span>   m_processInstructions   = <span class="cpp-string">""</span>;    <span class="cs-comment">// User defined instructions </span>
<span class="cs-comment">// for content handlers</span>
<span class="cs-keyword">bool</span>     m_processStarted        = <span class="cs-keyword">false</span>;
<span class="cs-comment">// Becomes true when processing starts</span>
<span class="cs-keyword">bool</span>     m_processSuccessfull    = <span class="cs-keyword">false</span>;
<span class="cs-comment">// Becomes true if process was successful</span>
<span class="cs-keyword">string</span>   m_statusCode;
<span class="cs-comment">// HTTP status code</span>
<span class="cs-keyword">string</span>   m_statusDescription;
<span class="cs-comment">// HTTP status description, or exception message</span>
<span class="cs-comment">// Standard Getters/Setters....</span>
}

The WebPageProcessor is an implementation of the IWebPageProcessor that does the actual work of reading in the content, handling error codes/exceptions and calling the content handlers. WebPageProcessor may be replaced or extended to provide additional functionality, though adding a content handler is generally a better option.

   <span class="cs-keyword">public</span> <span class="cs-keyword">class</span> WebPageProcessor : IWebPageProcessor
{
<span class="cs-keyword">public</span> <span class="cs-keyword">bool</span> Process( WebPageState state )
{
state.ProcessStarted       = <span class="cs-keyword">true</span>;
state.ProcessSuccessfull   = <span class="cs-keyword">false</span>;
<span class="cs-comment">// Use WebRequest.Create to handle URI's for </span>
<span class="cs-comment">// the following schemes: file, http &amp; https</span>
WebRequest  req = WebRequest.Create( state.Uri );
WebResponse res = <span class="cs-keyword">null</span>;
<span class="cs-keyword">try</span>
{
<span class="cs-comment">// Issue a response against the request. </span>
<span class="cs-comment">// If any problems are going to happen they</span>
<span class="cs-comment">// they are likly to happen here in the form of an exception.</span>
res = req.GetResponse( );
<span class="cs-comment">// If we reach here then everything is likly to be OK.</span>
<span class="cs-keyword">if</span> ( res <span class="cs-keyword">is</span> HttpWebResponse )
{
state.StatusCode        =
((HttpWebResponse)res).StatusCode.ToString( );
state.StatusDescription =
((HttpWebResponse)res).StatusDescription;
}
<span class="cs-keyword">if</span> ( res <span class="cs-keyword">is</span> FileWebResponse )
{
state.StatusCode        = <span class="cpp-string">"OK"</span>;
state.StatusDescription = <span class="cpp-string">"OK"</span>;
}
<span class="cs-keyword">if</span> ( state.StatusCode.Equals( <span class="cpp-string">"OK"</span> ) )
{
<span class="cs-comment">// Read the contents into our state </span>
<span class="cs-comment">// object and fire the content handlers</span>
StreamReader   sr    = <span class="cs-keyword">new</span> StreamReader(
res.GetResponseStream( ) );
state.Content        = sr.ReadToEnd( );
<span class="cs-keyword">if</span> ( ContentHandler != <span class="cs-keyword">null</span> )
{
ContentHandler( state );
}
}
state.ProcessSuccessfull = <span class="cs-keyword">true</span>;
}
<span class="cs-keyword">catch</span>( Exception ex )
{
HandleException( ex, state );
}
<span class="cs-keyword">finally</span>
{
<span class="cs-keyword">if</span> ( res != <span class="cs-keyword">null</span> )
{
res.Close( );
}
}
<span class="cs-keyword">return</span> state.ProcessSuccessfull;
}
}
<span class="cs-comment">// Store any content handlers</span>
<span class="cs-keyword">private</span> WebPageContentDelegate m_contentHandler = <span class="cs-keyword">null</span>;
<span class="cs-keyword">public</span> WebPageContentDelegate ContentHandler
{
<span class="cs-keyword">get</span> { <span class="cs-keyword">return</span> m_contentHandler; }
<span class="cs-keyword">set</span> { m_contentHandler = value; }
}

There are additional private methods in the WebPageProcessor to handle HTTP error codes and file-not-found errors when dealing with the "file://" scheme, as well as more severe exceptions.
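
Those private methods are not listed in the article. The sketch below is only an assumption of how a HandleException helper might map exceptions onto the state object; it is not the author's actual code.

// A hedged sketch only - the real HandleException is not shown in the
// article, so the behaviour here is an assumption.
private void HandleException( Exception ex, WebPageState state )
{
    WebException webEx = ex as WebException;

    if ( webEx != null && webEx.Response is HttpWebResponse )
    {
        // HTTP errors such as 404 or 500 still carry a response object.
        HttpWebResponse errRes  = (HttpWebResponse)webEx.Response;
        state.StatusCode        = errRes.StatusCode.ToString( );
        state.StatusDescription = errRes.StatusDescription;
    }
    else
    {
        // File-not-found under the "file://" scheme, DNS failures,
        // timeouts and other severe errors end up here.
        state.StatusCode        = "Exception";
        state.StatusDescription = ex.Message;
    }
}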

Using the code - WebSpider

The WebSpider class is really just a harness for calling the WebRobot in a particular way. It provides the robot with a specialized content handler for crawling through web links and maintains a list of both pending pages and already visited pages. The current WebSpider is designed to start from a given URI and to limit full page processing to a base path.

<span class="cs-comment">// CONSTRUCTORS</span>
<span class="cs-comment">//</span>
<span class="cs-comment">// Process a URI, until all links are checked, </span>
<span class="cs-comment">// only add new links for processing if they</span>
<span class="cs-comment">// point to the same host as specified in the startUri.</span>
<span class="cs-keyword">public</span> WebSpider(
<span class="cs-keyword">string</span>            startUri
) : <span class="cs-keyword">this</span> ( startUri, -<span class="cs-literal">1</span> ) { }
<span class="cs-comment">// As above only limit the links to uriProcessedCountMax.</span>
<span class="cs-keyword">public</span> WebSpider(
<span class="cs-keyword">string</span>            startUri,
<span class="cs-keyword">int</span>               uriProcessedCountMax
) : <span class="cs-keyword">this</span> ( startUri, <span class="cpp-string">""</span>, uriProcessedCountMax,
<span class="cs-keyword">false</span>, <span class="cs-keyword">new</span> WebPageProcessor( ) ) { }
<span class="cs-comment">// As above, except new links are only added if</span>
<span class="cs-comment">// they are on the path specified by baseUri.</span>
<span class="cs-keyword">public</span> WebSpider(
<span class="cs-keyword">string</span>            startUri,
<span class="cs-keyword">string</span>            baseUri,
<span class="cs-keyword">int</span>               uriProcessedCountMax
) : <span class="cs-keyword">this</span> ( startUri, baseUri, uriProcessedCountMax,
<span class="cs-keyword">false</span>, <span class="cs-keyword">new</span> WebPageProcessor( ) ) { }
<span class="cs-comment">// As above, you can specify whether the web page</span>
<span class="cs-comment">// content is kept after it is processed, by</span>
<span class="cs-comment">// default this would be false to conserve memory</span>
<span class="cs-comment">// when used on large sites.</span>
<span class="cs-keyword">public</span> WebSpider(
<span class="cs-keyword">string</span>            startUri,
<span class="cs-keyword">string</span>            baseUri,
<span class="cs-keyword">int</span>               uriProcessedCountMax,
<span class="cs-keyword">bool</span>              keepWebContent,
IWebPageProcessor webPageProcessor )
{
<span class="cs-comment">// Initialize web spider ...</span>
}

Why is there a base path limit?

Since there are trillions of pages on the Internet, this spider will check all links that it finds to see if they are valid, but it will only add new links to the pending queue if those links belong within the context of the initial website or a sub-path of that website.

 

So if we are starting from www.myhost.com/index.html and this page has links to www.myhost.com/pageWithSomeLinks.html and www.google.com/pageWithManyLinks.html, then the WebRobot will be called against both links to check that they are valid, but it will only add new links found within www.myhost.com/pageWithSomeLinks.html to the pending queue.
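
In code, that base-path test comes down to a simple prefix comparison on the absolute URIs, the same check used later in AddWebPage. A minimal illustration using the example addresses above (the URLs are illustrative only):

Uri baseUri = new Uri( "http://www.myhost.com/" );

Uri inside  = new Uri( "http://www.myhost.com/pageWithSomeLinks.html" );
Uri outside = new Uri( "http://www.google.com/pageWithManyLinks.html" );

// Both links get checked for validity, but only "inside" qualifies
// for further link extraction because it starts with the base path.
Console.WriteLine( inside.AbsoluteUri.StartsWith( baseUri.AbsoluteUri ) );   // True
Console.WriteLine( outside.AbsoluteUri.StartsWith( baseUri.AbsoluteUri ) );  // False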

Call the Execute method to start the spider. This method will add the startUri to a Queue of pending pages and then call the IWebPageProcessor until there are no pages left to process.

<span class="cs-keyword">public</span> <span class="cs-keyword">void</span> Execute( )
{
AddWebPage( StartUri, StartUri.AbsoluteUri );
<span class="cs-keyword">while</span> ( WebPagesPending.Count &gt; <span class="cs-literal">0</span> &amp;&amp;
( UriProcessedCountMax == -<span class="cs-literal">1</span> || UriProcessedCount
&lt; UriProcessedCountMax ) )
{
WebPageState state = (WebPageState)m_webPagesPending.Dequeue( );
m_webPageProcessor.Process( state );
<span class="cs-keyword">if</span> ( ! KeepWebContent )
{
state.Content = <span class="cs-keyword">null</span>;
}
UriProcessedCount++;
}
}
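
The specialized link-crawling content handler that the spider registers with the processor is not listed in the article. The following is a hedged sketch of what such a handler might look like, assuming a simple href-extracting regular expression and the AddWebPage method shown below.

// A hedged sketch only - the spider's real link handler is not shown
// in the article; the regex and handler name are assumptions.
private void HandleLinks( WebPageState state )
{
    // Only extract links from pages flagged for link handling.
    if ( state.ProcessInstructions.IndexOf( "Handle Links" ) == -1 )
    {
        return;
    }

    // Crude href extractor; a production spider would use a more
    // robust expression or an HTML parser.
    Regex hrefRegex = new Regex( "href\\s*=\\s*[\"']([^\"'#]+)",
        RegexOptions.IgnoreCase );

    foreach ( Match m in hrefRegex.Matches( state.Content ) )
    {
        // Resolve relative links against the page that contained them
        // and queue anything that has not been seen before.
        AddWebPage( state.Uri, m.Groups[1].Value );
    }
}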

A web page is only added to the queue if the URI (excluding any anchor) points to a path or a valid page (e.g. .html, .aspx, .jsp, etc.) and has not already been seen before.

<span class="cs-keyword">private</span> <span class="cs-keyword">bool</span> AddWebPage( Uri baseUri, <span class="cs-keyword">string</span> newUri )
{
Uri      uri      = <span class="cs-keyword">new</span> Uri( baseUri,
StrUtil.LeftIndexOf( newUri, <span class="cpp-string">"#"</span> ) );
<span class="cs-keyword">if</span> ( ! ValidPage( uri.LocalPath ) || m_webPages.Contains( uri ) )
{
<span class="cs-keyword">return</span> <span class="cs-keyword">false</span>;
}
WebPageState state = <span class="cs-keyword">new</span> WebPageState( uri );
<span class="cs-keyword">if</span> ( uri.AbsoluteUri.StartsWith( BaseUri.AbsoluteUri ) )
{
state.ProcessInstructions += <span class="cpp-string">"Handle Links"</span>;
}
m_webPagesPending.Enqueue  ( state );
m_webPages.Add             ( uri, state );
<span class="cs-keyword">return</span> <span class="cs-keyword">true</span>;
}

Examples of running the spider

The following code shows three examples of calling the WebSpider. The paths shown are examples only; they don't represent the true structure of this website. Note: the Bondi Beer website in the example is a site that I built using my own SiteGenerator. This easy-to-use program produces static websites from dynamic content such as proprietary data files, XML / XSLT files, databases, RSS feeds and more.

<span class="cs-comment">/*
* Check for broken links found on this website, limit the spider to 100 pages.
*/</span>
WebSpider spider = <span class="cs-keyword">new</span> WebSpider( <span class="cpp-string">"http://www.bondibeer.com.au/"</span>, <span class="cs-literal">100</span> );
spider.execute( );
<span class="cs-comment">/*
* Check for broken links found on this website,
* there is no limit on the number
* of pages, but it will not look for new links on
* pages that are not within the
* path http://www.bondibeer.com.au/products/.  This
* means that the home page found
* at http://www.bondibeer.com.au/home.html may be
* checked for existence if it was
* called from the somepub/index.html but any
* links within that page will not be
* added to the pending list, as there on an a lower path.
*/</span>
spider = <span class="cs-keyword">new</span> WebSpider(
<span class="cpp-string">"http://www.bondibeer.com.au/products/somepub/index.html"</span>,
<span class="cpp-string">"http://www.bondibeer.com.au/products/"</span>, -<span class="cs-literal">1</span> );
spider.execute( );
<span class="cs-comment">/*
* Check for pages on the website that have funny
* jokes or pictures of sexy women.
*/</span>
spider = <span class="cs-keyword">new</span> WebSpider( <span class="cpp-string">"http://www.bondibeer.com.au/"</span> );
spider.WebPageProcessor.ContentHandler +=
<span class="cs-keyword">new</span> WebPageContentDelegate( FunnyJokes );
spider.WebPageProcessor.ContentHandler +=
<span class="cs-keyword">new</span> WebPageContentDelegate( SexyWomen );
spider.execute( );
<span class="cs-keyword">private</span> <span class="cs-keyword">void</span> FunnyJokes( WebPageState state )
{
<span class="cs-keyword">if</span>( state.Content.IndexOf( <span class="cpp-string">"Funny Joke"</span> ) &gt; -<span class="cs-literal">1</span> )
{
<span class="cs-comment">// Do something</span>
}
}
<span class="cs-keyword">private</span> <span class="cs-keyword">void</span> SexyWomen( WebPageState state )
{
Match       m     = RegExUtil.GetMatchRegEx(
RegularExpression.SrcExtractor, state.Content );
<span class="cs-keyword">string</span>      image;
<span class="cs-keyword">while</span>( m.Success )
{
m     = m.NextMatch( );
image = m.Groups[<span class="cs-literal">1</span>].ToString( ).toLowerCase( );
<span class="cs-keyword">if</span> ( image.indexOf( <span class="cpp-string">"sexy"</span> ) &gt; -<span class="cs-literal">1</span> ||
image.indexOf( <span class="cpp-string">"women"</span> ) &gt; -<span class="cs-literal">1</span> )
{
DownloadImage( image );
}
}
}

Conclusion

The WebSpider is flexible enough to be used in a variety of useful scenarios, and could be a powerful tool for data mining websites on the Internet and on intranets. I would like to hear how people have used this code.

Outstanding Issues

These issues are minor but if anyone has any ideas then please share them.

  • state.ProcessInstructions - This is really just a quick hack to provide instructions that the content handlers can use as they see fit. I am looking for a more elegant solution to this problem.
  • MultiThreaded Spider - This project first started off as a multi-threaded spider, but that soon fell by the wayside when I found that performance was much slower using threads to process each URI. It seems that the bottleneck is in GetResponse, which does not seem to run well across multiple threads.
  • Valid URI, but query data that returns a bad page - The current processor does not handle the scenario where the URI points to a valid page but the page returned by the web server is considered to be bad, e.g. http://www.validhost.com/validpage.html?opensubpage=invalidid. One idea to resolve this problem is to read the contents of the returned page and look for key pieces of information, but that technique is a little flaky.


About David Cruwys

I have been programming commercially since 1990, with the last 4 years spent mainly in Java. I made the transition to .NET six months ago and have not looked back. I have written e-commerce solutions, desktop and mobile phone applications in a variety of languages (VB6, Delphi, Java, Foxpro, Clipper 87 etc...) and am currently developing a Web Application Framework in C#.

I have just launched www.offyourbutt.com for showcasing my products and services, and this will become a test bed for my C# framework.

