
The Story of Blocking 2 High-Ranking Pages With Robots.txt


I blocked two of our ranking pages using robots.txt. We lost a position here or there, along with all of the featured snippets for the pages. I expected a lot more impact, but the world didn’t end.

Warning

I don’t recommend doing this, and it’s entirely possible that your results may be different from ours.

I was trying to see the impact that removing content would have on rankings and traffic. My theory was that if we blocked the pages from being crawled, Google would have to rely on the link signals alone to rank the content.

However, I don’t think what I saw was actually the impact of removing the content. Maybe it was, but I can’t say that with 100% certainty, as the impact feels too small. I’ll be running another test to confirm this. My new plan is to delete the content from the page and see what happens.

My working theory is that Google must still be using the content it used to see on the page to rank it. Google Search Advocate John Mueller has confirmed this behavior in the past.

So far, the test has been running for almost five months. At this point, it doesn’t seem like Google will stop ranking the page. I suspect that, after a while, it will likely stop trusting that the content that was on the page is still there, but I haven’t seen evidence of that happening.

Keep reading to see the test setup and impact. The main takeaway is that accidentally blocking pages (that Google already ranks) from being crawled using robots.txt probably isn’t going to have much impact on your rankings, and they’ll likely still show in the search results.

I chose the same pages used in the “impact of links” study, except for the article on SEO pricing because Joshua Hardwick had just updated it. I had seen the impact of removing the links to these articles and wanted to test the impact of removing the content. As I said in the intro, I’m not sure that’s actually what happened.

I blocked these two pages on January 30, 2023:

These lines were added to our robots.txt file:

  • Disallow: /blog/top-bing-searches/
  • Disallow: /blog/top-youtube-searches/
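If you want to sanity-check Disallow rules like these before (or after) deploying them, Python’s standard library includes a robots.txt parser. This is just a quick sketch; the domain and the wildcard user-agent line are placeholders, not our actual file:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt with the two Disallow rules under a
# wildcard user-agent (example.com stands in for the real site).
rules = """\
User-agent: *
Disallow: /blog/top-bing-searches/
Disallow: /blog/top-youtube-searches/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Blocked paths are not fetchable; everything else still is.
print(parser.can_fetch("Googlebot", "https://example.com/blog/top-bing-searches/"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/blog/"))                    # True
```

Note that `can_fetch` matches by path prefix, so the rules above also block anything nested under those two URLs.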

As you can see in the charts below, both pages lost some traffic. But it didn’t result in much change to our traffic estimate like I was expecting.

Organic traffic chart for the "Top YouTube Searches" article showing a bit of a drop
Traffic for the “Top YouTube Searches” article.
Organic traffic chart for the "Top Bing Searches" article showing a bit of a drop
Traffic for the “Top Bing Searches” article.

Looking at the individual keywords, you can see that some keywords lost a position or two and others actually gained ranking positions while the page was blocked from crawling.

The most interesting thing I noticed is that they lost all featured snippets. I assume that having the pages blocked from crawling made them ineligible for featured snippets. When I later removed the block, the article on Bing searches quickly regained some snippets.

"Top Bing Searches" keywords were down one or two positions and lost featured snippets
Organic keywords for the “Top Bing Searches” article.
"Top YouTube Searches" keywords had mixed results (some up and some down) and also lost featured snippets
Organic keywords for the “Top YouTube Searches” article.

The most noticeable impact to the pages is on the SERP. The pages lost their custom titles and displayed a message saying that no information was available instead of the meta description.

SERP listing for "Top YouTube Searches" when blocked
SERP listing for "Top Bing Searches" when blocked

This was expected. It happens when a page is blocked by robots.txt. Additionally, you’ll see the “Indexed, though blocked by robots.txt” status in Google Search Console if you inspect the URL.

"Indexed, though blocked by robots.txt" shown in the GSC Inspection Tool

I believe that the message on the SERPs hurt clicks to the pages more than the ranking drops did. You can see some drop in impressions, but a larger drop in the number of clicks for the articles.

Traffic for the “Top YouTube Searches” article:

Traffic drop for the "Top YouTube Searches" article, via Google Search Console

Traffic for the “Top Bing Searches” article:

Traffic drop for the "Top Bing Searches" article, via Google Search Console

Final thoughts

I don’t think any of you will be surprised by my observations here. Don’t block pages you want indexed. It hurts. Not as badly as you might think it does, but it still hurts.
