Robots.txt is a standard plain-text file for communicating with web crawlers (also called spiders or bots) and telling them which areas or pages of a site may or may not be crawled. Robots.txt is a publicly accessible file, so anyone can easily see which parts or URLs of a site are open to crawlers and which are blocked. By default, search engine crawlers crawl everything they can reach.
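Because the file always sits at the root of the domain, you can read any site's rules by opening its /robots.txt URL, or query them programmatically. Here is a minimal sketch using Python's built-in urllib.robotparser module; example.com and the page URL are placeholders, not a real site:

import urllib.robotparser

# robots.txt always lives at the site root (placeholder domain here)
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the live file

# Ask whether a given crawler may fetch a given URL
print(rp.can_fetch("Googlebot", "https://example.com/some-page.html"))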
How to Create robots.txt?
There are many online tools for generating a robots.txt file, and you can also create the file manually. Before creating a robots.txt file, you should know a few rules and conventions. I have explained here how to create a robots.txt file.
In the robots.txt file, the User-agent: line identifies the web crawler and the Disallow: line defines which part of the site is off limits to it.
[1] Here is the basic robots.txt file:
User-agent: *
Disallow: /
In the above statement, “*” denotes all crawlers/spiders/bots, and “/” declares that every page is disallowed. In other words, the entire site is blocked for all crawlers.
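You can verify this behaviour with a few lines of Python; urllib.robotparser is the standard-library parser, and the domain is just an example:

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /"])

# Both print False: the whole site is closed to every crawler
print(rp.can_fetch("Googlebot", "https://example.com/"))
print(rp.can_fetch("SomeOtherBot", "https://example.com/page.html"))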
[2] If you want to block one particular web crawler or spider from crawling your site, the robots.txt file will be:
User-agent: Slurp
Disallow: /
In the above example I used Slurp, Yahoo's crawler; use the published user-agent name of whichever crawler you do not want crawling your site (for example, Googlebot for Google or Bingbot for Bing).
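A quick check of the rule, again with urllib.robotparser and an example domain, shows the named crawler is blocked while everyone else is untouched:

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse(["User-agent: Slurp", "Disallow: /"])

print(rp.can_fetch("Slurp", "https://example.com/page.html"))      # False: blocked
print(rp.can_fetch("Googlebot", "https://example.com/page.html"))  # True: unaffected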
[3] If you want to disallow all crawlers from a particular folder or from specific web pages (note that every path must begin with a /), then:
User-agent: *
Disallow: /cgi-bin/
Disallow: /abc.html
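Sketched with the same parser (the folder, page, and domain are examples), the listed paths are blocked and everything else stays crawlable:

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /cgi-bin/", "Disallow: /abc.html"])

print(rp.can_fetch("Googlebot", "https://example.com/cgi-bin/form.cgi"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/abc.html"))          # False
print(rp.can_fetch("Googlebot", "https://example.com/index.html"))        # True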
[4] If you want to block all crawlers from the whole site but permit one specific crawler to crawl it (apart from a few paths you still keep off limits), then:
User-agent: *
Disallow: /

User-agent: Googlebot
Disallow: /cgi-bin
Disallow: /abc.php
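Checked with urllib.robotparser (example domain again): every crawler except Googlebot is shut out, and Googlebot itself is kept away only from the two listed paths, because the more specific Googlebot group wins over the * group for Googlebot:

import urllib.robotparser

rules = [
    "User-agent: *",
    "Disallow: /",
    "",
    "User-agent: Googlebot",
    "Disallow: /cgi-bin",
    "Disallow: /abc.php",
]
rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("Bingbot", "https://example.com/page.html"))        # False: catch-all group
print(rp.can_fetch("Googlebot", "https://example.com/page.html"))      # True: specific group wins
print(rp.can_fetch("Googlebot", "https://example.com/cgi-bin/x.cgi"))  # False: still blocked here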
[5] In the next case, you can turn a Disallow into an allow rule by entering no value at all (not even /) after the colon (:). An empty Disallow: means nothing is disallowed, so everything is allowed for that crawler:
User-agent: *
Disallow: /

User-agent: Googlebot
Disallow:
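Verified the same way with urllib.robotparser (example domain): the empty Disallow: lets Googlebot in everywhere, while the catch-all group still blocks everyone else:

import urllib.robotparser

rules = [
    "User-agent: *",
    "Disallow: /",
    "",
    "User-agent: Googlebot",
    "Disallow:",
]
rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("Googlebot", "https://example.com/any/page.html"))  # True: empty value allows all
print(rp.can_fetch("Bingbot", "https://example.com/any/page.html"))    # False: catch-all blocks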