
When User Agent Sniffing Goes Horribly Wrong

Posted in: Browsers, Google, Search Engine Optimisation by Richard Hearne on February 14, 2007

Ok, this serves as a good example of how not to do UA sniffing.

User Agent sniffing is the process of detecting the User Agent (the browser, in most cases) of the client visiting your page. Historically, designers and developers used UA sniffing to decide which hacks to apply to ensure consistency across browsers and platforms.
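
For illustration, here is roughly what a crude browser-sniffing routine of that era might look like. This is a minimal sketch in Python; the function names and stylesheet paths are made up for the example rather than taken from any real site.

    # Minimal UA sniffing sketch: guess the browser family from the raw
    # User-Agent header and pick a browser-specific stylesheet for it.
    # (Hypothetical helper names and paths, for illustration only.)

    def detect_browser(user_agent: str) -> str:
        """Return a rough browser family for a raw User-Agent string."""
        ua = user_agent.lower()
        if "opera" in ua:      # Opera historically also claimed "MSIE", so test it first
            return "opera"
        if "firefox" in ua:
            return "firefox"
        if "msie" in ua:
            return "ie"
        return "other"         # unknown agents fall into a generic bucket

    def stylesheet_for(user_agent: str) -> str:
        """Map the sniffed browser family to a browser-specific stylesheet."""
        return {
            "opera": "/css/opera.css",
            "firefox": "/css/firefox.css",
            "ie": "/css/ie.css",
        }.get(detect_browser(user_agent), "/css/default.css")

    print(stylesheet_for("Mozilla/5.0 (Windows; U) Gecko/20070214 Firefox/2.0.0.1"))

The point to hold onto is simply that the server branches on the User-Agent header, so getting that branch wrong means different visitors can see very different things.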

UA sniffing is also used for many so-called ‘black-hat’ SEO techniques. You sniff for Google’s UA, which is unique, or the IP addresses of Google’s spiders and serve up a ‘special’ version of your page for the GoogleBot. In search engine terms this is known as ‘cloaking’, and is possibly the worst offence you can commit. (If you get caught :mrgreen:)
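
To make the cloaking pattern concrete, here is a sketch of the kind of logic involved: check the request for GoogleBot’s User-Agent (or a crawler IP) and serve it a different page. It is shown purely as an example of what not to do; the function names, page bodies and IP prefix are assumptions for the illustration, not taken from any real site.

    # Cloaking sketch: serve the GoogleBot UA (or a suspected Google crawler IP)
    # a 'special' page while real visitors get the normal one.  Illustrative
    # only -- this is exactly the behaviour search engines penalise.

    GOOGLEBOT_TOKEN = "googlebot"    # substring that appears in GoogleBot's UA string
    CRAWLER_IP_PREFIX = "66.249."    # prefix often associated with Google's crawlers (assumption)

    def is_googlebot(user_agent: str, remote_ip: str) -> bool:
        """Crude GoogleBot check by UA substring or crawler IP prefix."""
        return (GOOGLEBOT_TOKEN in user_agent.lower()
                or remote_ip.startswith(CRAWLER_IP_PREFIX))

    def render_page(user_agent: str, remote_ip: str) -> str:
        """Return the 'special' page to the bot and the real page to everyone else."""
        if is_googlebot(user_agent, remote_ip):
            return "<html>keyword-stuffed version for the spider</html>"
        return "<html>the page real visitors actually see</html>"

As the example below shows, the same branch can just as easily misfire and serve the bot something broken instead of something ‘special’.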

When User Agent Sniffing Goes Wrong

In this Blind People Can’t Eat Chocolate post I alluded to some issues with the new Lily O’Briens Chocolates website.

Well, screen readers weren’t the only user agents that had difficulties with the new website:

[Screenshot: Lily O'Briens Cloaking]

That’s what the server was returning to the GoogleBot UA. Absolutely one of the worst cases of arsing up UA sniffing I have ever seen.

Thankfully they seem to have fixed the issue. (Hello Magico :grin:)

Can A Short-lived Mistake Cost Dearly?

I’m not sure how long the site was returning that response to GoogleBot. The problem was site-wide, so if Google did pick it up the damage will likely be site-wide too. Michele already alluded to the changed page URLs, but if they were very unlucky, and GoogleBot tried to crawl the site while the server was serving up that crappy message, all of their pages may quickly find themselves entering supplemental hell.

I hope they weren’t this unlucky – getting out of the supps can be a nightmare of horrible proportions.

Oh, and a hat-tip to eagle-eyed David Doran :grin:


5 Comments »

  1. they may have fixed the UA issue, but they’re still screwing up redirecting the old pages. it’s unbelievable that anyone would simply flush what little Google juice the old pages earned.

    Comment by fmk — February 14, 2007 @ 11:46 pm

  2. Why should people detect the user agent? For example, you design 3 different versions of your CSS or HTML (because of compatibility): MIE, Firefox, Opera. OK… and what about other browsers/user agents? It’s senseless.

    It’s better to write “Do you see this page incorrectly? Use this browser at no cost!” and place a link to the browser’s download area there. Just my opinion…

    BTW, I always check the page under Win OS (MIE,Firefox,Opera) and under Linux before publishing any HTML/CSS update.

    Comment by Jan — September 6, 2007 @ 9:22 am

  3. Jan, yeah using the default behaviour is a good suggestion.

    Meanwhile you can use http://www.browserobject.com/useragent.aspx to detect for the browser type to make sure your CSS works correctly.

    Comment by Tim — February 12, 2008 @ 8:30 am

  4. [...] In a follow-up post Richard reveals that for a period of time after the site going online the Google Bot was also blocked from the site. [...]

    Pingback by David Doran’s Media Blog » Blog Archive » Hiding and Cloaking — October 13, 2008 @ 7:02 pm

  5. I have had a few clients who have gotten in trouble with UA sniffing. I always try to avoid sniffing the UA and if I really need to I will only make minor changes to the page.

    Comment by Casey — January 3, 2009 @ 5:12 am

