Friday, October 23, 2009

Search Engine Crawler

When you are learning SEO, a few questions naturally come to mind: “What is a search engine crawler or bot?”, “How does it visit a website?” and “What criteria does it use to examine a webpage?”

A search engine crawler is an automated program used to discover and examine webpages. The process by which it crawls is called web crawling or spidering. It follows fixed algorithms, sets of constraints, and specific instructions when examining a webpage. Some well-known crawlers: GoogleBot is used by the Google search engine, MSNBot by the MSN search engine, Slurp by the Yahoo search engine, and the Teoma crawler by Ask Jeeves.
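
Each of these bots announces itself through the User-Agent header it sends with every request. To make the names concrete, here is a minimal, illustrative Python sketch (not any search engine's actual code) that guesses which crawler is visiting based on that header:

    # Illustrative mapping from User-Agent substrings to search engines.
    # The tokens below are the commonly seen bot names, not an official list.
    KNOWN_CRAWLERS = {
        "googlebot": "Google (GoogleBot)",
        "msnbot": "MSN (MSNBot)",
        "slurp": "Yahoo (Slurp)",
        "teoma": "Ask Jeeves (Teoma)",
    }

    def identify_crawler(user_agent):
        """Return the search engine name if the User-Agent matches a known bot."""
        ua = user_agent.lower()
        for token, engine in KNOWN_CRAWLERS.items():
            if token in ua:
                return engine
        return "unknown visitor"

    print(identify_crawler("Mozilla/5.0 (compatible; Googlebot/2.1)"))
    # -> Google (GoogleBot)

Web server logs record this header for every request, so a simple lookup like this is how site owners usually spot crawler visits in their logs.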

The crawler visits a webpage through its URL, then stores and sends that URL to the search engine for indexing. First, it checks the site's robots.txt file; only if that file allows access does the crawler enter the page. It examines the webpage from top to bottom and then from left to right, checking the meta tags, the total size of the page, the total number of links, the body text, distinct words, and the uniqueness of the content. So the content of the webpage should be SEO-friendly: avoid overly complex page designs, heavy scripts, and images without alt tags. In the end, we can say that crawlers are the best friends of website owners.
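
To tie those steps together, here is a minimal sketch of that crawl flow in Python. It is an illustration under assumptions, not a real search engine's implementation: the bot name "MyBot" and the PageExaminer class are made up for this example, and real crawlers do far more.

    import urllib.request
    import urllib.robotparser
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse

    class PageExaminer(HTMLParser):
        """Collects the on-page signals mentioned above:
        meta tags, links, and images that lack alt text."""
        def __init__(self):
            super().__init__()
            self.meta_tags = []
            self.links = []
            self.images_without_alt = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "meta":
                self.meta_tags.append(attrs)
            elif tag == "a" and attrs.get("href"):
                self.links.append(attrs["href"])
            elif tag == "img" and not attrs.get("alt"):
                self.images_without_alt.append(attrs.get("src", "?"))

    def crawl(url, user_agent="MyBot"):
        # Step 1: check the site's robots.txt before touching the page.
        root = "{0.scheme}://{0.netloc}".format(urlparse(url))
        robots = urllib.robotparser.RobotFileParser()
        robots.set_url(urljoin(root, "/robots.txt"))
        robots.read()
        if not robots.can_fetch(user_agent, url):
            return None  # robots.txt disallows this crawler; stay out.

        # Step 2: fetch the page and record its total size.
        request = urllib.request.Request(url, headers={"User-Agent": user_agent})
        with urllib.request.urlopen(request) as response:
            raw = response.read()

        # Step 3: examine the page top to bottom for the signals above.
        examiner = PageExaminer()
        examiner.feed(raw.decode("utf-8", errors="replace"))
        return {
            "size_bytes": len(raw),
            "meta_tags": examiner.meta_tags,
            "total_links": len(examiner.links),
            "images_without_alt": examiner.images_without_alt,
        }

    # Example: crawl("http://example.com/") returns a summary dict,
    # or None if robots.txt keeps this crawler out.

Note that robots.txt lives at the root of the site, not on each page, which is why the sketch builds its URL from the scheme and host rather than from the page path.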