Duplicate content

Duplicate content is content that appears in more than one place on the internet. This can include identical or substantially similar text, images, or other media. Duplicate content is a problem for several reasons:

  • Search engines may struggle to decide which version of a page to show in search results, which can lower the ranking and visibility of the site that published the original content.
  • Users may encounter multiple versions of the same information, which makes for a poor experience.
  • Website operators can lose traffic and revenue if search engines send users to copies on other sites instead of their own.

Duplicate content can occur for several reasons:

  • Content scraping: bots automatically copy text from one website and republish it on another.
  • URL variations: the same page is reachable via several URLs, e.g. with and without www, or over http and https (see the example after this list).
  • Content syndication: a website republishes content from another site with its permission, e.g. via RSS feeds, press releases, or article directories.
  • Session IDs: session identifiers appended to the URL to track visitors create a distinct URL for every visit, even though the page itself is unchanged.
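
For example, all of the following addresses may serve the same page yet be treated as distinct URLs by a search engine (example.com stands in for a real domain, and the session ID is made up):

    http://example.com/page
    https://example.com/page
    https://www.example.com/page
    https://www.example.com/page?sessionid=abc123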

To avoid duplicate content, webmasters have several options (sketched below):

  • Add a rel="canonical" link element to tell search engines which version is the preferred original.
  • Set up 301 (permanent) redirects so that every URL variant resolves to a single correct URL.
  • Use robots.txt to keep crawlers away from pages that would otherwise be indexed as duplicates. Note that robots.txt only instructs compliant crawlers; it does not prevent scraping.
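
The sketches below show what each measure can look like in practice, assuming an Apache server with mod_rewrite enabled and example.com as a placeholder domain.

A canonical link element is placed in the <head> of every duplicate variant and names the preferred URL:

    <link rel="canonical" href="https://www.example.com/page/">

A 301 redirect can consolidate the http/https and www/non-www variants onto a single host. This is a sketch for an Apache .htaccess file; nginx and other servers use their own syntax:

    RewriteEngine On
    # Redirect if the request is not HTTPS, or the host lacks the www prefix
    RewriteCond %{HTTPS} off [OR]
    RewriteCond %{HTTP_HOST} !^www\. [NC]
    RewriteRule ^(.*)$ https://www.example.com/$1 [L,R=301]

Finally, a robots.txt entry can keep compliant crawlers away from duplicate-prone pages, such as a hypothetical print-friendly directory:

    User-agent: *
    Disallow: /print/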
