Understanding How Google Crawls Your Site

Most people think of content, backlinks and on-site SEO as the most important parts of getting a site to rank on Google, but there is something even more fundamental: your site’s crawlability. Google’s bots “crawl” your site to read the content and see what is on each page, so that page can then be ranked on the search results pages. Problems arise when your site is not easy for these bots to crawl.

Google’s bots are primarily used to find and index new web pages and content. This is what Google says on the matter:

“Crawlers look at webpages and follow links on those pages, much like you would if you were browsing content on the web. They go from link to link and bring data about those webpages back to Google’s servers.”
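
To make that “link to link” behaviour concrete, here is a minimal sketch of a breadth-first crawler in Python. It is an illustration, not Googlebot itself: it assumes the third-party `requests` and `beautifulsoup4` packages are installed, and the seed URL is a hypothetical example.

```python
# A minimal sketch of link-following crawling, not Googlebot itself.
# Assumes the `requests` and `beautifulsoup4` packages are installed;
# the seed URL is a hypothetical example.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl(seed_url, max_pages=20):
    """Visit pages breadth-first, following <a href> links like a crawler."""
    seen = {seed_url}
    queue = deque([seed_url])
    site = urlparse(seed_url).netloc
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        try:
            page = requests.get(url, timeout=5)
        except requests.RequestException:
            continue  # unreachable page: a dead end, just like a broken link
        for anchor in BeautifulSoup(page.text, "html.parser").find_all("a", href=True):
            link = urljoin(url, anchor["href"])  # resolve relative links
            if urlparse(link).netloc == site and link not in seen:
                seen.add(link)       # record the discovered page
                queue.append(link)   # follow the link later, breadth-first
    return seen

print(crawl("https://example.com/"))
```

Notice that every page the crawler discovers, it discovers by following a link from a page it already knows about. That is exactly why link structure matters for crawlability.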

In essence, crawlability describes a search engine’s ability to access and crawl the content on a page. If a site has no crawlability issues, web crawlers can reach all of its content easily by following links between pages. Broken links or dead ends, however, cause crawlability issues: the search engine becomes unable to access specific content on the site.
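
As a quick illustration, a single HTTP request is enough to spot a dead end of this kind. The sketch below assumes the `requests` package is installed, and the URL is a hypothetical example of a broken internal link:

```python
# Spotting a dead end with one request. The URL is a hypothetical
# example of a broken internal link.
import requests

response = requests.head("https://example.com/deleted-page",
                         allow_redirects=True, timeout=5)
if response.status_code >= 400:
    print(f"Dead end ({response.status_code}): crawlers cannot reach this content")
```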

A number of things can affect crawlability:

  • Site structure
  • Internal link structure
  • Redirects
  • Blocking access to the crawler (see the robots.txt sketch after this list)
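
On that last point, the standard way a site blocks (or allows) crawler access is a robots.txt file. The sketch below uses Python’s standard-library robotparser to show how a well-behaved crawler decides whether it may fetch a URL; the rules and URLs are hypothetical examples:

```python
# How a well-behaved crawler checks robots.txt before fetching a page.
# The rules and URLs here are hypothetical examples.
from urllib.robotparser import RobotFileParser

rules = RobotFileParser()
rules.parse([
    "User-agent: *",
    "Disallow: /private/",   # block every crawler from /private/
])

# A blocked URL: the crawler must skip it, so it can never be indexed.
print(rules.can_fetch("Googlebot", "https://example.com/private/report"))  # False
# An allowed URL: the crawler may fetch it and pass it on for ranking.
print(rules.can_fetch("Googlebot", "https://example.com/blog/post"))       # True
```

A page that robots.txt blocks is invisible to the crawler no matter how good its content is, which is why checking these rules is one of the first steps in any crawlability audit.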