Ever wondered what would happen if you prevented Google from crawling your website for a few weeks? Technical SEO expert Kristina Azarenko has published the results of such an experiment.
Six surprising things that happened. What happened when Googlebot couldn't crawl Azarenko's site from Oct. 5 to Nov. 7:
- Favicon was removed from Google Search results.
- Video search results took a huge hit and still haven't recovered post-experiment.
- Positions remained relatively stable, except they were slightly more volatile in Canada.
- Traffic saw only a slight decrease.
- An increase in reported indexed pages in Google Search Console. Why? Pages with noindex meta robots tags ended up being indexed because Google couldn't crawl the site to see those tags (see the sketch after this list).
- Multiple alerts in GSC (e.g., "Indexed, though blocked by robots.txt", "Blocked by robots.txt").
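The write-up doesn't include the exact file Azarenko used, but the GSC alerts ("Blocked by robots.txt") suggest the crawl block was implemented via robots.txt. A minimal sketch of what a site-wide block like that might look like, under that assumption:

```
# Hypothetical robots.txt for a site-wide crawl block (illustrative,
# not the actual file from the experiment).
#
# Disallow stops crawling, not indexing: Google can still index a
# blocked URL from links pointing to it, but it can no longer fetch
# the page to see on-page directives such as
# <meta name="robots" content="noindex">.
User-agent: Googlebot
Disallow: /
```

This is why the noindex pages got indexed anyway: a noindex directive only works if Googlebot is allowed to crawl the page and read it.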
Why we care. Testing is a crucial element of SEO. All changes (intentional or unintentional) can impact your rankings, traffic and bottom line, so it's good to understand how Google might react. Also, most companies aren't able to run this kind of experiment, so this is good information to know.
The experiment. You can read all about it in Unexpected Results of My Google Crawling Experiment.
Another similar experiment. Patrick Stox of Ahrefs has also shared the results of blocking two high-ranking pages with robots.txt for five months. The impact on rankings was minimal, but the pages lost all their featured snippets.
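A page-level block like the one in the Ahrefs test could be set up with rules along these lines; the paths below are hypothetical placeholders, not the actual URLs Stox blocked:

```
# Hypothetical page-level crawl block in the spirit of the Ahrefs
# test: disallow two individual pages rather than the whole site.
User-agent: *
Disallow: /example-page-1/
Disallow: /example-page-2/
```

The loss of featured snippets is consistent with the mechanics above: Google keeps the URLs indexed but can no longer read their content, which snippets depend on.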