An introduction to robots.txt, XML sitemaps and duplicate content

Today at work I gave a presentation about robots.txt, XML sitemaps, and how search engines treat duplicate content.
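For anyone who wasn't there, each of the three topics comes down to a small text file or tag. Here are minimal illustrative examples, with example.com standing in for a real domain:

robots.txt (placed at the site root; tells well-behaved bots which URLs they may crawl):

User-agent: *
Disallow: /admin/
Sitemap: https://www.example.com/sitemap.xml

sitemap.xml (lists the URLs you want search engines to discover):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
  </url>
</urlset>

And for duplicate content, a canonical tag in each duplicate page's <head> points search engines at the preferred version of the page:

<link rel="canonical" href="https://www.example.com/preferred-page/">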

Upon review, I realised there were a few minor points that could have been clarified or explained in more detail. In places I used the word “index” where “crawl” would have been more appropriate, and the assumption that bot traffic to your site makes up about half of its total traffic, extrapolated from the statistic that bots generate 61.5% of overall internet traffic, is only a rough estimate.
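To make that distinction clear: crawling is a bot fetching a URL, indexing is a search engine including that page in its results, and the two are controlled by different mechanisms. robots.txt only restricts crawling; to keep a page out of the index you use a meta robots tag, which a bot can only read if it is allowed to crawl the page in the first place. Roughly (the paths here are placeholders):

Block crawling, in robots.txt:

User-agent: *
Disallow: /private/

Block indexing, in the page's <head>:

<meta name="robots" content="noindex">

Note that a URL disallowed in robots.txt can still show up in search results if other pages link to it, because the bot never gets to see the noindex tag.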

Take it as an informal presentation on the topic: the majority of the information is accurate, but some points could have been made clearer.
