Defining Duplicate Web Content
Even the most discerning digital marketing professionals and hoteliers need to be cognizant of the dangers of duplicate content. Content that appears on the Internet in more than one place (i.e. on more than one URL) and is identical or very similar in nature is regarded as duplicate content.
Google defines duplicate content as “substantive blocks of content within or across domains that either completely match other content or are appreciably similar.” Whether these two pages exist on the same website or on two different websites, duplicate content makes it difficult for search engines to determine which version is more relevant to a search query. When duplicate content exists, the search engine is forced to choose which page is the original in an attempt to provide the best results for its users.
Identifying Duplicate Content Sources
Duplicate content is not always created with intent; it is often the result of hoteliers’ “innocent ignorance.” For example, when a hotel signs up with an online travel agency (OTA), it may be required to fill out a questionnaire. If the hotelier provides the OTA with the same hotel descriptions that appear on the property’s own website, the OTA may publish those descriptions without editing the copy or altering the content significantly. This causes external duplication across two separate domains. The same is true of a brand website that reuses identical copy from its property pages on the vanity sites for each of the brand’s hotels.
Internal duplication exists when multiple URLs within the same domain show identical content, as may occur on printer-only versions of web pages on a hotel website. Often, internal duplication can stem from poor site architecture or programming. When a site is not structured properly, duplicate content problems may come to the surface. This causes self-competition that is especially damaging when the content is link-worthy. Since each duplicate URL will likely receive inbound links, neither page will receive the full value of link equity pointed towards that content. Consolidating that content onto a single page would enhance its value and the chances of it ranking prominently on organic search results pages.
In some cases, content may be deliberately duplicated across the Internet in an attempt to manipulate search engine rankings. This deceptive practice aims to win more traffic and is generally performed by less-than-authoritative SEO vendors or self-taught “experts.” In every case, the search engines view serving such duplicate content within their results as a poor user experience.
A Search Engine’s Perspective
In general, duplicate content creates a variety of issues for search engines. For example, Google tries to index and display pages with distinct information, which means that publishing duplicate information is ultimately not beneficial to search engine users. Google’s own guidelines to webmasters are very clear: “Don’t create multiple pages, subdomains, or domains with substantially duplicate content.”
While search engines are now hesitant to use terminology approaching the word ‘penalty,’ they are less likely to rank a site that consists of republished or duplicate content. When duplicate content appears on a site, we often see a page that was once highly ranked fail to rank again. Google’s main priority is delivering diverse results to its searchers. To do that, the search engine needs to know which page should rank and which should not. Duplicate content complicates this process, dissuading Google from rewarding duplicate content sources.
When search engines filter content, they crawl, index and process each content page, filtering out some duplicate content before even adding it to the engine’s index. This often results in duplicate content being left out of search results entirely.
Responding to Duplicate Content Issues
When duplicate content exists on the same website (i.e. ‘regular’ and ‘printer-friendly’ pages) and neither page is blocked with a noindex meta tag, the search engines will choose one of them to list. However, if your site contains multiple pages with largely identical content, as is often the case with poorly developed e-commerce sites or poorly constructed hotel websites, there are a variety of ways to indicate a preferred page (and its URL) to Google and other search engines, including:
- 301 Redirects: When the same content on a site can be found at multiple URLs, the best way to combat the duplication is to set up a 301 (permanent) redirect from each page that displays the duplicate content to the original page. This ensures that the pages no longer compete with each other and consolidates their relevancy signals – ultimately aiding the original page’s ability to rank well in search engine results.
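As a sketch, on an Apache web server a 301 redirect can be declared in the site’s .htaccess file; the paths below are illustrative, not from any actual hotel site:

```apache
# .htaccess – send visitors and search engines from the duplicate URL
# to the preferred (original) page with a permanent 301 redirect.
Redirect 301 /rooms-printer.html /rooms.html
```

Other servers have equivalents (e.g. nginx’s `return 301`); what matters is that the redirect is permanent, so search engines transfer the duplicate URL’s link equity to the target page.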
- Rel=”canonical” tag: The rel=canonical tag passes roughly the same ranking power as a 301 redirect, but can take less time to implement, making it an attractive option for hoteliers dealing with duplicate content. The tag tells search engines that the given page should be treated as a copy of the specified URL. All links and content metrics therefore get credited toward the URL provided, and the chance of the search engine interpreting the page as a duplicate is lowered.
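For illustration, the canonical tag is a single line placed in the `<head>` of the duplicate page; the domain and path here are hypothetical:

```html
<!-- In the <head> of the duplicate page: point search engines
     to the preferred URL that should receive the ranking credit. -->
<link rel="canonical" href="https://www.example-hotel.com/rooms/deluxe-suite">
```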
External duplication can be caused by something as simple as the example mentioned above – a hotelier providing a generic property description in a hotel questionnaire to an OTA. But which version prevails in search results? Generally, the site that is better known, has a larger audience and commands a stronger inbound link portfolio will take precedence. This can result in the OTA’s page about the hotel ranking in search results while the hotel’s own website is ignored. That is why it is critical for hoteliers to provide original content on their own site and distinct content to outside sources.
On the rare occasion that a copied version of your content outranks your original (i.e. if a travel blog or other publisher copied content from your site), Google wants you to tell them about it using the Scraper Report Tool.
Deliberate Content Duplication
When the search engines determine that duplicate content was purposely published to manipulate rankings and deceive users, search engines like Google will make “appropriate adjustments in the indexing and ranking of the sites involved.” Ultimately this can cause rankings to suffer and result in the site being removed in its entirety from the search engine index – causing it to disappear from search results.
Hotel Websites and Duplicate Content
Hoteliers – to one degree or another – must provide content about their properties to other sites on the web. This inevitability is nothing new, considering all hotels provide brand, room and property descriptions to various third-party sites and their distribution partners, including:
- Major hotel brands, soft brands and hotel rep companies
- OTAs, GDSs, booking engine vendors, etc.
- Hotel listings on CVB and Chamber of Commerce sites, hotel directories, destination portals, etc.
- Travel blogs and publishers
These content listings offer a variety of added benefits to the hotel’s website, in addition to distributing the hotel inventory to a more diverse audience. Since hotel listings on directories, CVB sites and similar portals often feature a link to your own hotel’s website, they can impact how your hotel ranks on search engines. Incoming links to your site are considered a vote of confidence in your website’s content – positioning it as an authority.
Since hoteliers cannot avoid sharing content about their hotel with other sites, they need smart strategies for how they share that content so they do not create duplicates. Failure to do so could damage their website’s SEO rankings as a result of external duplication on a distribution or marketing partner’s site. So if we know all of this, why does duplicate content persist? Because hoteliers find it easier to copy and paste descriptions straight from their own website than to write creative hotel and room descriptions that are “significantly different.”
If hoteliers continue these existing practices many third-party sites (primarily OTAs) will end up higher in the search engine rankings than the property’s original content.
Strategies for Success
So what can hoteliers do to avoid having their sites excluded from search engine results and suffering the detrimental SEO impacts of duplicate content?
As obvious as it may seem, hoteliers should avoid duplicate content at all costs. Make sure that all content descriptions provided to outside sources are “significantly different” across the Internet. This includes descriptions that appear on the brand website and the property’s own vanity sites. “Creating” duplicate content on a separate vanity site or domain is a waste of time, money and energy: Google and other search engines are likely to rank only the page deemed most trustworthy – in most cases the older page or the one with the most inbound links.
The greatest threat comes from using the same content on CVB sites as on your own website. Your site should have a clear advantage in the eyes of search engines and Internet users alike – something that is highly achievable by providing unique hotel and product descriptions there. Your hotel website should offer the best, most unique and relevant content (both textual and visual) about your property. Integrating this content with SEO best practices should be a top priority for any hotelier in 2016 and beyond.
Other strategies for success include:
- Maintain consistency when linking internally through your site. In addition, you’ll want to ensure that canonical tags and 301 redirects are in place where appropriate.
- When syndicating content, you’ll want to ensure that the syndicating page links back to the original source.
- Remove duplicate content from Google’s index with a meta robots noindex tag or via URL removal in Google Search Console (formerly Google Webmaster Tools).
- Minimize similar content. If two pages share very similar content, you may want to combine the two pages into one that is highly relevant. Alternatively, the content on both pages could be expanded to provide unique content on each.
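The meta robots approach mentioned above amounts to a one-line tag in the `<head>` of the page to be excluded; the optional “follow” directive shown here still lets crawlers pass link value through the page:

```html
<!-- In the <head> of a printer-friendly or otherwise duplicate page:
     keep it out of the index while still allowing its links to be crawled. -->
<meta name="robots" content="noindex, follow">
```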
There is no way to avoid sharing textual and visual content about your hotel with third parties. However, hoteliers must avoid having the content from their own website duplicated verbatim by third-party distribution and marketing sites. Doing so will keep the hotel website from being marginalized on – or missing entirely from – the search engine results pages.
Creating the best unique and SEO-friendly content on your own hotel website, complemented with a variety of action steps in 2016 will aid in positioning your hotel website as the most sought-after source of information about your property. Hoteliers should consider the following recommended action steps:
- Re-design the hotel website with a fully responsive design as per the industry’s best practices
- Create original and deep textual and visual content on the sites
- Implement robust functionalities on the hotel website:
- Customer reviews
- Calendar of events and special offers modules
- Photo and experience sharing capabilities
- Interactive contests and sweepstakes
- Implement solid SEO on the website
- Content optimizations and regular audits
- Organic visibility tracking
- Strategic internal linking
- Local listing management
- Undertake a comprehensive marketing strategy:
- Email marketing
- Paid search marketing
- Banner advertising
Don’t let your hotel website fall victim to poor content strategy and duplicate content. When you partner with hotel Internet marketing experts, you’re aligned with individuals who can make your website work for you.
Source: HeBS Digital