Yesterday, Stone Temple's Eric Enge released an interview he had with Matt Cutts at SMX Advanced. The big takeaway from the post in the social sphere was that infographics might get devalued. People on Twitter started freaking out, the world shook, babies cried. But I believe the biggest piece of this interview was missed entirely: that the content rewrite, both as an idea and as an SEO practice, is effectively dead.
Enge raises an example in the interview where two sites each write about frogs: how they're green and how they like lily pads. The second blog says essentially the same thing as the first, just in a rewritten fashion. It is not a duplicate; it's a rewrite. For the past few years, this kind of content has ranked well on Google. Cutts then talks about sites that rewrite existing content to make it their own, and how they might be viewed negatively by the search engines.
As Cutts put it in the interview: "Those other sites are not bringing additional value. While they're not duplicates they bring nothing new to the table. It's not that there's anything wrong with what these people have done, but they should not expect this type of content to rank. Google would seek to detect that there is no real differentiation between these results and show only one of them, so we could offer users different types of sites in the other search results."
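We can only guess at what Google actually runs, but even a crude bag-of-words comparison shows why a rewrite offers so little cover. Here is a minimal sketch in Python (using scikit-learn; the frog posts are invented stand-ins for Enge's example, and this is an illustration, not Google's method):

```python
# Toy illustration: two "unique" rewrites of the same frog post still look
# nearly identical to a vocabulary-level similarity measure, because
# rewording rarely changes which words a page is about.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

post_a = "Frogs are green and they love to sit on lily pads in the pond."
post_b = "In the pond, green frogs love sitting on lily pads."  # a rewrite

vectors = TfidfVectorizer(stop_words="english").fit_transform([post_a, post_b])
score = cosine_similarity(vectors)[0, 1]
print(f"similarity: {score:.2f}")  # well above what two unrelated pages score
```

The exact number doesn't matter; the point is that rewording barely moves the needle when the comparison happens at the level of what a page is about rather than how it is phrased.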
For people attempting to build dozens or hundreds of sites at scale (or even one or two, lazily), this is a real and legitimate concern. Content rewrites were the go-to tactic, and even when done well, it makes perfect sense that Google would crack down on a "1B" version of a search result that already exists.
I believe a new iteration of "content quality" has quietly arrived through this concept, without much awareness of it. First duplicate content was bad, then spun content was bad, and now, I believe, even content that is totally unique in its wording, but a restatement of another, currently ranking web document, is a clear negative signal.
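To make that progression concrete, here is a hypothetical sketch of the older, purely lexical layer of detection. Word-shingle overlap, a classic near-duplicate technique, flags copies and most spun text, but a true rewrite sails past it, which is why treating restatements as a negative signal implies comparing meaning rather than strings (the frog texts below are invented for illustration):

```python
# Word-shingle overlap: the lexical check that catches duplicates and
# spun content, but not genuine rewrites.
def shingles(text, k=3):
    """Return the set of k-word shingles in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b, k=3):
    """Jaccard overlap of two texts' shingle sets, from 0.0 to 1.0."""
    sa, sb = shingles(a, k), shingles(b, k)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

original = "frogs are green and they like to sit on lily pads"
spun     = "frogs are green and they love to sit on lily pads"
rewrite  = "the typical frog is a green amphibian that rests on a lily pad"

print(jaccard(original, original))  # 1.0 -> exact duplicate
print(jaccard(original, spun))      # 0.5 -> one swapped word, half the shingles survive
print(jaccard(original, rewrite))   # 0.0 -> a full rewrite shares no shingles
```

A rewrite scoring 0.0 here is exactly the content that used to rank; Cutts's comments suggest that free pass is over.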
We simply can't be lazy anymore as SEOs. If we look at our sites that were hit by Penguin (and Panda) under this exact condition, we may find that what we initially blamed on external links alone was also Google judging the "unique" nature of our rewritten content as a drag on overall SERP quality.
While this signal was previously hard to detect with the data available to us, Cutts's statement makes it obvious that creating genuinely original content is a must for the future health of our organic search rankings. If you're not one of the people who picked up on this recent change, you're probably also the person still wondering where your long-tail rankings to deep pages went over the last few months.
Hint: they’re not coming back when you fix a few links.