
SEO – What is Crawling and Indexing


Search engines crawl and index billions of documents, pages, files, and news items, calculate relevancy and rankings, and serve results.

Imagine the World Wide Web as a network of stops in a big city subway system. Each stop is its own unique document (usually a web page, but sometimes a PDF, JPG or other type of file).

When you sit down at your computer and do a Google search, you’re almost instantly presented with a list of results from all over the web. How does Google find web pages matching your query, and determine the order of search results?

In the simplest terms, you could think of searching the web as looking in a very large book with an impressive index telling you exactly where everything is located. When you perform a Google search, Google’s programs check its index to determine the most relevant search results to be returned (“served”) to you.

There are three key processes in delivering search results to you:

  1. Crawling: Does Google know about your site, and can it find it?
  2. Indexing: Can Google index your site?
  3. Serving: Does the site have good, useful content that is relevant to a user’s search?

Crawling

Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index.

Google uses a huge set of computers to fetch (or “crawl”) billions of pages on the web. The program that does the fetching is called Googlebot (also known as a robot, bot, or spider). Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site.

Google’s crawl process begins with a list of web page URLs, generated from previous crawl processes, and augmented with Sitemap data provided by webmasters. As Googlebot visits each of these websites it detects links on each page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.
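To make that loop concrete, here is a minimal, hypothetical sketch in Python: start from a seed list of URLs, fetch each page, extract its links, and queue any newly discovered URLs for crawling. It only illustrates the general process described above; it is not how Googlebot actually works.

```python
# Minimal illustrative crawler: seed URLs -> fetch -> extract links -> queue new URLs.
# A toy sketch of the crawl loop described above, not Googlebot itself.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects the href value of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_urls, max_pages=50):
    frontier = deque(seed_urls)   # URLs waiting to be crawled
    seen = set(seed_urls)         # URLs already discovered
    pages = {}                    # url -> raw HTML

    while frontier and len(pages) < max_pages:
        url = frontier.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except Exception:
            continue              # dead link: skip (a real crawler would record it)
        pages[url] = html

        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)  # newly discovered page joins the queue
    return pages


# Example: pages = crawl(["https://example.com/"])
```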

It should be noted: Google doesn’t accept payment to crawl a site more frequently, and they keep the search side of the business separate from their revenue-generating AdWords service.

Indexing

Googlebot processes each of the pages it crawls in order to compile a massive index of all the words it sees and their location on each page. In addition, Google processes information included in key content tags and attributes, such as Title tags and ALT attributes. Googlebot can process many, but not all, content types. For example, it cannot process the content of some rich media files or dynamic pages.
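As a rough illustration, that “index of words and their locations” can be thought of as an inverted index: a map from each word to the pages (and positions) where it appears. The snippet below is a simplified, hypothetical sketch of that data structure, not Google’s actual implementation.

```python
# Toy inverted index: word -> list of (page_url, position) pairs.
# A heavily simplified sketch of the kind of index described above.
import re
from collections import defaultdict


def build_index(pages):
    """pages: dict mapping url -> plain text content."""
    index = defaultdict(list)
    for url, text in pages.items():
        words = re.findall(r"[a-z0-9]+", text.lower())
        for position, word in enumerate(words):
            index[word].append((url, position))
    return index


def search(index, word):
    """Return the URLs containing the word, most frequent first."""
    counts = defaultdict(int)
    for url, _position in index.get(word.lower(), []):
        counts[url] += 1
    return sorted(counts, key=counts.get, reverse=True)


pages = {
    "https://example.com/a": "Crawling and indexing the web",
    "https://example.com/b": "Indexing makes search results fast",
}
index = build_index(pages)
print(search(index, "indexing"))  # both pages, most frequent first
```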

Once the engines find these pages, their next job is to parse the code from them and store selected pieces of the pages on massive hard drives, to be recalled when needed for a query. To accomplish the monumental task of holding billions of pages that can be accessed in a fraction of a second, the search engines have constructed massive datacenters in cities all over the world.

These monstrous storage facilities hold thousands of machines processing unimaginably large quantities of information. After all, when a person performs a search at any of the major engines, those engines work hard to provide answers as fast as possible.

Relevancy is determined by over 200 factors, one of which is the PageRank for a given page. PageRank is the measure of the importance of a page based on the incoming links from other pages. In simple terms, each link to a page on your site from another site adds to your site’s PageRank. Not all links are equal: Google works hard to improve the user experience by identifying spam links and other practices that negatively impact search results. The best types of links are those that are given based on the quality of your content. In order for your site to rank well in search results pages, it’s important to make sure that Google can crawl and index your site correctly.
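The core PageRank idea can be approximated with a short power-iteration loop: each page repeatedly passes a share of its score to the pages it links to. The sketch below uses the classic published formulation with a 0.85 damping factor; the damping factor and the tiny link graph are illustrative assumptions, and Google’s production ranking involves far more than this.

```python
# Simplified PageRank via power iteration over a tiny link graph.
# Illustrative only; real ranking combines PageRank with 200+ other factors.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}

    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue  # dangling page: ignored here for simplicity
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share  # incoming links add to a page's rank
        rank = new_rank
    return rank


graph = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
}
print(pagerank(graph))  # "home" ends up with the highest score
```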

Importance is an equally tough concept to quantify, but search engines must do their best. Generally, the more popular a site, page, or document, the more valuable the information contained therein is assumed to be. This assumption has proven fairly successful in practice, and the engines have continued to refine their algorithms, which, as we said before, are often comprised of hundreds of components.

Prediction Engines

Google’s “Did you mean” and Google Autocomplete features are designed to help users save time by displaying related terms, common misspellings, and popular queries. Like google.com search results, the keywords used by these features are automatically generated by web crawlers and search algorithms. Google displays these predictions only when it thinks they might save the user time. If a site ranks well for a keyword, it’s because Google has algorithmically determined that its content is more relevant to the user’s query.
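As a rough analogy for how such predictions can work, the toy sketch below suggests completions by matching a typed prefix against a list of past queries and ordering the matches by popularity. The query list and counts are made-up examples; Google’s actual prediction systems are far more sophisticated.

```python
# Toy query prediction: suggest past queries that start with the typed prefix,
# ordered by how popular they are. Purely illustrative data and logic.
from collections import Counter

past_queries = Counter({
    "crawling and indexing": 120,
    "crawl budget": 45,
    "crawling definition": 30,
    "indexing in seo": 80,
})


def suggest(prefix, limit=3):
    prefix = prefix.lower()
    matches = [q for q in past_queries if q.startswith(prefix)]
    return sorted(matches, key=past_queries.get, reverse=True)[:limit]


print(suggest("craw"))  # ['crawling and indexing', 'crawl budget', 'crawling definition']
```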

Hopefully these concepts help you better understand how crawling and indexing work, so you can make better use of keywords when writing articles and improve your website’s and blog’s rankings.

William