SEO, in its most fundamental sense, relies on one thing above all others: search engine spiders crawling and indexing your site.
But almost every site has pages you don't want included in this exploration.
In a best-case scenario, these pages do nothing to actively drive traffic to your site, and in a worst-case, they divert traffic away from more important pages.
Fortunately, Google allows webmasters to tell search engine bots which pages and content to crawl and which to ignore. There are several ways to do this, the most common being a robots.txt file or the meta robots tag.
We have an excellent and in-depth explanation of the ins and outs of robots.txt, which you should certainly check out.
But in top-level terms, it's a plain text file that lives in your site's root and follows the Robots Exclusion Protocol (REP).
Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags contain instructions for specific pages.
Some meta robots tags you might employ include:
- index – tells search engines to add the page to their index.
- noindex – tells them not to add a page to the index or include it in search results.
- follow – instructs a search engine to follow the links on a page.
- nofollow – tells it not to follow links.
There are a whole host of others as well.
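For reference, a meta robots tag carrying these directives sits in a page's <head>. A minimal, illustrative example:

```html
<!-- Tells all crawlers not to index this page or follow its links -->
<meta name="robots" content="noindex, nofollow">
```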
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there's also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
What Is The X-Robots-Tag?
The X-Robots-Tag is another way for you to control how your web pages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it can control indexing for an entire page, as well as for specific elements on that page.
And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.
But this, of course, raises the question:
When Should You Use The X-Robots-Tag?
According to Google, "Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag."
While you can set robots.txt-related directives in the headers of an HTTP response with both the meta robots tag and the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag, with the two most common being when:
- You want to control how your non-HTML files are being crawled and indexed.
- You want to serve directives site-wide instead of at a page level.
For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.
The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or use a comma-separated list of directives to specify them.
Maybe you don't want a certain page to be cached and also want it to be unavailable after a certain date. You can use a combination of the "noarchive" and "unavailable_after" directives to instruct search engine bots to follow these instructions.
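As a sketch of what that combination could look like in practice, here is an Apache .htaccess rule (the filename and date are purely illustrative, and the mod_headers module must be enabled):

```apache
<Files "report.pdf">
  Header set X-Robots-Tag "noarchive, unavailable_after: 25 Jun 2023 15:00:00 PST"
</Files>
```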
Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.
The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML files, as well as apply directives on a larger, global level.
To help you understand the difference between these directives, it's helpful to categorize them by type. That is, are they crawler directives or indexer directives?
Here's a handy cheat sheet:
Crawler Directives:
- Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed and not allowed to crawl.

Indexer Directives:
- Meta robots tag – allows you to specify and prevent search engines from showing particular pages on a site in search results.
- Nofollow – allows you to specify links that should not pass on authority or PageRank.
- X-Robots-Tag – allows you to control how specified file types are indexed.
Where Do You Put The X-Robots-Tag?
Let's say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or a .htaccess file.
The X-Robots-Tag can be added to a site's HTTP responses in an Apache server configuration via the .htaccess file.
Real-World Examples And Uses Of The X-Robots-Tag
So that sounds great in theory, but what does it look like in the real world? Let's take a look.
Let's say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:
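A sketch of such a rule in an Apache configuration or .htaccess file, assuming the mod_headers module is enabled:

```apache
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>
```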
In Nginx, it would look like the below:
location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}
Now, let's look at a different scenario. Let's say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:
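A sketch of such a rule for Apache, again assuming mod_headers is enabled (the list of extensions is illustrative):

```apache
<FilesMatch "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```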
Please note that understanding how these directives work and the impact they have on one another is crucial.
For example, what happens if both the X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?
If that URL is blocked by robots.txt, then certain indexing and serving directives cannot be discovered and will not be followed.
If directives are to be followed, then the URLs containing them cannot be disallowed from crawling.
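To make the interplay concrete: with a hypothetical robots.txt like the one below, crawlers never fetch anything under /downloads/, so any noindex X-Robots-Tag served on those files is never seen. If you want the noindex honored, that path must stay crawlable.

```
User-agent: *
Disallow: /downloads/
```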
Check For An X-Robots-Tag
There are a few different methods you can use to check for an X-Robots-Tag on a site.
The easiest way to check is to install a browser extension that will show you X-Robots-Tag information about the URL.
Screenshot of Robots Exclusion Checker, December 2022
Another plugin you can use to determine whether an X-Robots-Tag is being used is the Web Developer plugin.
By clicking on the plugin in your browser and navigating to "View Response Headers," you can see the various HTTP headers being used.
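As a rough illustration of what such a check is looking at, the small Python sketch below (the header values are made up) extracts X-Robots-Tag directives from a raw response header block:

```python
def parse_x_robots_tag(raw_headers: str) -> list[str]:
    """Extract X-Robots-Tag directives from a raw HTTP response header block.

    Returns a flat list of directive strings; the header may carry several
    comma-separated directives and may appear more than once.
    """
    directives = []
    for line in raw_headers.splitlines():
        name, _, value = line.partition(":")
        if name.strip().lower() == "x-robots-tag":
            directives.extend(d.strip() for d in value.split(",") if d.strip())
    return directives


# Example: a response header block like one you might see in a browser
# extension's "View Response Headers" view (values are illustrative).
raw = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: application/pdf\r\n"
    "X-Robots-Tag: noindex, nofollow\r\n"
)
print(parse_x_robots_tag(raw))  # ['noindex', 'nofollow']
```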
Another method that can be used at scale, to pinpoint issues on websites with millions of pages, is Screaming Frog.
After running a site through Screaming Frog, you can navigate to the "X-Robots-Tag" column.
This will show you which sections of the site are using the tag, along with which specific directives.
Screenshot of Screaming Frog Report. X-Robots-Tag, December 2022

Using X-Robots-Tags On Your Website

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: It's not without its dangers. It is very easy to make a mistake and deindex your entire site.

That said, if you're reading this piece, you're probably not an SEO beginner. So long as you use it wisely, take your time, and check your work, you'll find the X-Robots-Tag to be a useful addition to your arsenal.

Featured Image: Song_about_summer/