Will Google index invisible text with negative position:absolute


This is a very old trick. The last time I saw it working was in 2013; the first time was probably somewhere around 2008. It's mostly employed in black SEO, usually to monetize hacked websites in an SEO way.

How does the hack stage work?

An image of a random conveyor

The idea here is simple: hackers set up special conveyors to automate hacking. Here's the approximate process, if you're interested:

  • Get a number of exploits
  • Generate Google dorks
  • Parse the search results
  • Automate exploitation
  • Go through the whole list of vulnerable sites and hack them
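For illustration, the "dorks" in step two are just search queries that footprint sites running vulnerable software. A couple of generic patterns (the plugin and file names here are made up, not real targets):

```
inurl:"/wp-content/plugins/some-vulnerable-plugin/"
intitle:"index of" "config.bak"
```

Feed queries like these into a search engine, scrape the result URLs, and you have a target list ready for the exploitation step.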

As a result, hackers end up with a bunch of completely unrelated sites with different levels of traffic.

What would you do with that? Well, there are many ways to monetize it, but the main idea is to sell the traffic.

Who would buy a bunch of random traffic? Either networks serving broadly targeted, cheap ads (you might've seen those on the web) or hackers who build botnets. The latter redirect the traffic to smart traffic distribution systems, which then send people to applicable exploit packs.

But selling this kind of traffic directly is really a waste of resources. At some point, SEOs started working with hackers. That's where black SEO techniques become hugely profitable, sometimes increasing the potential of hacked websites exponentially.

Black SEO and hacked sites in conjunction

SEOs recognized the real SEO value of hacked websites and came up with ways to monetize them indirectly. Here are the steps they perform to squeeze more money from the sites:

  1. Find a nice official affiliate partnership program with a good history and high margins (gambling, pharma, porn, etc.)
  2. Create a legit website with good on-page optimization and nice-looking landing pages/navigation.
  3. Build links to the top of the conversion funnel from the hacked websites.

Sounds simple, right? The tricky part is making sure the hacked websites stay hacked and the links stay hidden from the admins. Keeping sites hacked is mostly the hackers' job; they have a lot of low-level techniques for it. But we're concentrating on the SEO part here, right?

How do black SEOs make sure their links are hidden?

You guessed it: cloaking. Sounds really primitive, right? The basic definition of cloaking involves user agents and nothing else. Not really. Grown-up cloaking is a lot more advanced than just user agents. Usually it consists of two layers. The first layer does just what the definition says: serves different content to different users. Let's take a look at some first-layer signals:

  • User agents
  • IPs
  • Referrers
  • Screen Resolutions
  • JS/Flash execution
  • etc…
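A first-layer check like this can be sketched in a few lines. This is a minimal illustration, not anyone's actual implementation; the bot patterns and the decision logic are my assumptions about how such a cloak typically behaves:

```python
import re

# Hypothetical first-layer cloak: decide whether to include the hidden
# links in the served HTML. Real cloaks also check IP ranges, screen
# resolution (via JS), Flash/JS execution, and more.
BOT_UA = re.compile(r"googlebot|bingbot|yandex", re.I)
SEARCH_REFERRER = re.compile(r"https?://(www\.)?google\.", re.I)

def serve_links(user_agent: str, referrer: str) -> bool:
    """Return True if the injected links should appear in the page."""
    if BOT_UA.search(user_agent):
        return True   # looks like a crawler: expose the links for indexing
    if SEARCH_REFERRER.search(referrer):
        return False  # human arriving from search: serve the clean page
    return False      # default: serve the clean page to everyone else
```

Only requests that look like crawlers ever see the links; everyone else, including the site's admin, gets the clean page.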

All of these, and many more, are often used in conjunction at the first stage, just to hide the next step from everything that isn't a bot and increase safety. The next step is more front-endish. The idea here is to hide the links from a human observer while exposing them to parsers/bots:

  • Text has the color of the background, hence is invisible
  • Text is behind some other element
  • Text is positioned outside of the visible screen (this is what this page is testing)
  • Links are placed on rarely accessed pages
  • etc…

What we have here is double protection. Correctly implemented cloaking can live for years. It often outlives the landing pages themselves, leaving links that point to useless 404s or to nowhere.

And don't forget: by the time admins find these things, the backdoors are all over the backups, so it's possible to keep restoring the links while rotating the cloaking techniques.

Nope, I don't do this. I do white-hat SEO. But it never hurts to know, right?

Okay, so here's our ridiculous position:absolute that places some text beyond the top-left corner of the screen. Let's see if G indexes the text. By the way, this is really easy to detect, but I believe G doesn't detect even the most obvious cases. Let's check it with a few different spans on this page, below this text. I picked some rarely used words for these pieces of text and put them in h3s to make sure they're important enough to get indexed:
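The markup for one of these spans probably looks something like this (the exact offsets on the live page are my guess; anything past the viewport edge works the same way):

```html
<!-- An h3 pushed far beyond the top-left corner of the viewport.
     Visually invisible, but fully present in the HTML for any parser. -->
<h3 style="position:absolute; top:-5000px; left:-5000px;">
  catoptromancy – foretelling the future by means of a mirror
</h3>
```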

left top corner: catoptromancy – foretelling the future by means of a mirror
right top corner: exequies – funeral rites
left bottom corner: floccinaucinihilipilification – the action or habit of estimating something as worthless (a word generally only quoted as a curiosity)
middle up: gobemouche – a gullible or credulous listener
far away: puddysticks – S. African children's word: very easy

— I quoted the Oxford dictionary for this experiment. (You can find part of one of them visually at the very bottom of this super-stretched page.)

Okay, it’s time to check the results out, right?

Here we go:

Google Search Result - Top Right Testing

Google Search Result - Bottom Left Testing
These two cases got their snippets.

The other three were indexed too, but got generic snippets:

Google Search Result - Middle Up Testing

Google Search Result - Top Left Testing

Google Search Result - Far Away Testing

Does this mean G conducts some super-primitive rendering to find text around the keyword? Or maybe it depends on what was saved to the DB. I don't know, but what I do know is that G indexes and ranks text that is impossible to see on a screen.

Google, this is quite simple to detect. I wonder why you don't do it on the fly, but you know better.
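To back up the "simple to detect" claim, here's a naive detector sketched under my own assumptions: it only inspects inline style attributes (a real crawler would also have to resolve external CSS), and the 1000px threshold is arbitrary. It flags elements that combine position:absolute with a large negative top/left offset:

```python
import re
from html.parser import HTMLParser

# Matches inline styles like "position:absolute; top:-5000px" and captures
# the magnitude of the negative offset. Deliberately naive: it assumes
# position comes before the offset and ignores external stylesheets.
OFFSCREEN = re.compile(
    r"position\s*:\s*absolute.*?(?:top|left)\s*:\s*-(\d+)", re.I | re.S
)

class OffscreenTextFinder(HTMLParser):
    def __init__(self, threshold_px: int = 1000):
        super().__init__()
        self.threshold = threshold_px
        self.flagged = []  # (tag, offset_px) pairs that look hidden

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style", "")
        m = OFFSCREEN.search(style)
        if m and int(m.group(1)) >= self.threshold:
            self.flagged.append((tag, int(m.group(1))))

def find_offscreen(html: str):
    """Return the tags whose inline style pushes them far offscreen."""
    finder = OffscreenTextFinder()
    finder.feed(html)
    return finder.flagged
```

Even something this crude catches the most obvious cases, like the h3s on this page.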


Google indexes nonsensical position:absolutes
