
How to find and eliminate duplicate site pages?

Crocoapps editorial

Reading time: 7 minutes

Duplicate content on a site, whether full or partial, leads to a number of problems: it becomes harder for search engines to index and rank pages, behavioral factors deteriorate, and the site risks falling under filters. Duplicates should therefore be monitored constantly and removed promptly.


Types of duplicates

Not all duplicates are alike. They are usually divided into two types, full and partial. Let's consider each in more detail.

Full duplicates

What are full site page duplicates?

These are pages with exactly the same content but different URLs. Full duplicates include:

  1. pages with and without www;
  2. versions of the site on the http and https protocols;
  3. URLs with and without a trailing slash;
  4. technical duplicates such as index.php, index.html, default.aspx;
  5. addresses with UTM tags used to track traffic statistics;
  6. URLs with referral-program tags (where a user receives bonuses for bringing in new clients);
  7. duplicates caused by hierarchy errors, typical for online stores whose product cards are repeated in different sections;
  8. duplicates resulting from a misconfigured 404 page.

Partial duplicates

These are pages at different addresses whose content partially overlaps. For example:

  • category texts repeated on catalog URLs generated by filters or pagination;
  • duplicated specifications in the cards of similar products;
  • customer reviews shown both in product cards and on a separate page;
  • copied content blocks on service pages (such as “About Us”, “Our Benefits”, etc.).

Why do duplicates occur?

  1. Automatic generation by the CMS.

The content management system can create technical duplicates of category pages or product cards.

  2. Structure changes.

When the URLs of site sections change, 301 redirects must be configured correctly. Otherwise the same page will open at both the new and the old address.

  3. Incorrect 404 page behavior.

Due to a technical error, mistyped URLs may serve content copied from existing pages and end up in the index.

  4. An initially incorrect structure.

Some product cards may be uploaded into several sections and open at different addresses.

  5. Unclosed mirrors.

For search engines, https://www.example.com and http://example.com are two separate resources, even though their content is identical. Such duplicates must be “glued together” with 301 redirects before the site's main launch.
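
A sketch of such gluing in .htaccess, assuming an Apache server with mod_rewrite and https://example.com chosen as the main mirror:

    RewriteEngine On
    # Send the http:// and www. variants to the single https://example.com mirror
    RewriteCond %{HTTPS} off [OR]
    RewriteCond %{HTTP_HOST} ^www\. [NC]
    RewriteRule ^(.*)$ https://example.com/$1 [R=301,L]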

How dangerous are duplicates for the site?

  1. Indexing difficulty.

Search engines have a so-called crawl budget: a limit on the number of URLs they crawl per day. If the robots spend it processing duplicates, it becomes harder for the right pages to get into the index.

  2. Problems with determining relevance.

Search engines have a hard time deciding which of two pages with identical content is relevant to a user's query. Because of this, the site on average ranks worse in the SERP.

  3. Incorrect link weight distribution.

External links may point to a duplicate URL instead of the main page, so the main page loses part of its link weight.

  4. The threat of sanctions.

Due to the large amount of non-original content, search engines may place the site under filters and exclude it from search results.

How to find duplicates?

There are many ways to detect duplicate pages on a site.

  1. With crawlers such as Netpeak Spider, Screaming Frog, or Megaindex. The program scans the entire site and flags the duplicates it finds.
  2. If the resource is small, pages can be checked manually with a special search operator: enter the site: command followed by the site address in the Google or Yandex search bar. The results will list all indexed URLs with their titles and descriptions (see the examples after this list).
  3. You can add a fragment of text before the operator from the previous point. The results will then show every page containing that fragment.
  4. Via Google advanced search. If you enter the address of a specific page, you can see its duplicates with similar addresses.
  5. With webmaster panels. In Yandex.Webmaster, go to the "Indexing" and "Titles and Descriptions" sections; in Google Search Console, check the "Coverage" report.
  6. Duplicate texts, both on external sources and within the site, can be found with plagiarism checkers such as text.ru or Content Watch.
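
For instance, a manual check with the search operator might look like this (example.com stands in for your domain, and the quoted fragment is any snippet copied from the page being checked):

    site:example.com
    site:example.com "a fragment of text from the page"

The first query lists every indexed page of the domain; the second narrows the list to pages containing the quoted fragment.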

How to remove duplicates?

The way to eliminate a duplicate depends on its type and on why it appeared.

  1. Blocking via a directive in robots.txt.

This suits bulk duplicates such as catalog filter pages. To exclude all of them from the index, write a rule for the common part of their addresses (Disallow: *filter*). At the same time, make sure that landing-page URLs do not fall under the ban.
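
A minimal robots.txt sketch, assuming the duplicates share "filter" in their URLs (the parameter name and the Allow path are illustrative; the leading /* form of the pattern is the more portable spelling):

    User-agent: *
    # Block every URL that contains "filter"
    Disallow: /*filter*
    # Re-allow a landing page that would otherwise match the pattern
    Allow: /catalog/water-filters/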

  2. 301 redirect.

It is used to glue mirrors together or to move pages when the site structure changes. Redirects are usually configured via the .htaccess file.
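
For example, a renamed section could be redirected like this in .htaccess (Apache mod_alias; both paths are made up for illustration):

    # Permanently redirect the old section to its new address
    Redirect 301 /old-catalog/ https://example.com/catalog/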

  3. Rel="canonical".

A line with this tag in the page's HTML code tells Googlebot the address of the canonical, that is, the main page. This way you can keep addresses with UTM tags, pagination pages, or individual pages and sections out of the index.
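
For example, a page opened with UTM tags could point to its clean version like this (the URL is a placeholder):

    <!-- in the <head> of https://example.com/catalog/?utm_source=ads -->
    <link rel="canonical" href="https://example.com/catalog/">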

  4. Meta name="robots" tag.

A direct instruction to the robot, written in the HTML code. Suitable for print versions of pages or technical duplicates.
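
A sketch for a print version of a page: noindex keeps it out of the index, while follow still lets the robot follow its links.

    <!-- in the <head> of the duplicate (e.g., print) page -->
    <meta name="robots" content="noindex, follow">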

Author

Crocoapps editorial