
The Link Graph Is Broken

February 17 2011 // SEO // 14 Comments

Is the SEO community really surprised by the link revelations that have been in the news recently?

Anyone who is surprised clearly hasn’t done much (if any) link research lately. Because link abuse is rampant. For example, using Open Site Explorer I reviewed backlinks to a specific page for a top 500 trafficked site.

Lo and behold a very large percentage of those links came from splogs.

Splogs

What’s a splog? It’s a blog full of spam. A spam blog. You’ll know a splog when you see it.

Splog

They’re amazingly easy to spot: they have no topic focus, frequently contain large paragraphs of gibberish, are stuffed with links and usually redirect their About and Contact pages to the home page.

Do Splogs Work?

Well, common sense tells you splogs wouldn’t exist if they didn’t work. In addition, many of these splogs have posts dating back years. I doubt these folks would keep splogging if it weren’t paying off.

Let’s look at the 10 splogs I identified. I did a site: query to see if they were in Google, and if so how many pages were returned. I also captured the Open Site Explorer Page Authority and Domain Authority for each splog.

Splogs Work

Out of the 10 splogs, 5 have been removed from Google’s index. 5 out of 10. Sorry, I just don’t think 50% is good enough. And look at the decent Page and Domain Authority for these splogs!

More to the point, there seems to be little to no consequence for those who acquire links from splogs. No one is even bothering to pull these people over and give them a warning.

Instead it’s akin to an empty police car on the side of the road. The first few times you zip by you might be nervous. But then you figure it out. No one is in the police car. No one is there to catch you.

Splog Success

The company receiving links from these specific splogs ranks 4th on a term with 1.2 million monthly searches and 2nd on a term with 60,000 monthly searches. So, this practice certainly doesn’t seem to be hurting them.

In reviewing these splogs I wanted to find another instance where the site or sites choosing to participate were succeeding in search. It didn’t take long. Here’s a September 19, 2010 splog post about leather bikinis.

Leather Bikini Splog Post

And here’s the search result for the term ‘leather bikini’.

Leather Bikini Google SERP

The two sites participating in that splog post rank 1st, 2nd and 3rd for the target term! Now, of course they may rank highly for other reasons as well, but splog links certainly didn’t hurt them. And that is a problem.

Are Splog Links Paid Links?

Are these paid links? I can’t say for certain, but if it looks like a paid link and it behaves like a paid link, it’s probably a paid link.

There’s a method to their madness. Some of the links go to other splogs littered with AdSense, some go to Wikipedia. Make no mistake, splogs are manipulating trust and authority and are succeeding.

The Link Graph is Broken

Splogs are just one small part of the crumbling link graph. Any SEO worth his or her salt could quickly find proof of link abuse.

We all know this is happening. We’ve watched it happen. Everyone has been pitched by a paid link broker or an agency who has a ‘proprietary’ link building tool or an offshore company that can secure 100 links for $100.

We know the link graph is broken. So why are we somehow shocked and surprised when that fact is revealed?

Google Should Follow NoFollow Links

February 12 2011 // SEO // 5 Comments

What would Google look like if it relied exclusively on the nofollow link graph? It would be a sort of Bizarro Google. We’ll call it Elgoog.

Elgoog Logo

In this search engine Matt Cutts would help ensure every SERP contained at least one result for Viagra.

Sounds like the stuff of comic books. Yet, I think the nofollow link graph could actually be useful in improving search quality.

Types of NoFollow Links

First, let’s think a bit about the nofollow link graph. It isn’t homogeneous. There are different types of nofollow links and they should carry different weights within the Elgoog algorithm. I see six general classes of nofollow links: Administrative, Sculpting, Advertising, Editorial, Social and Comment.

Administrative nofollow links are internal links used for areas such as log-in, privacy, ratings or feedback. They’re generally benign.

Sculpting nofollow links are internal links used to ‘hoard PageRank’ or to ensure anchor text focus. The latter doesn’t get discussed much in the debate over abandoning PageRank sculpting. If Google isn’t passing anchor text through nofollow links, you might find it useful to put a nofollow on vague, extraneous links such as ‘read more’. Whatever your position on sculpting, it’s a type of manipulation.

Advertising nofollow links are external links used to identify paid links.

Editorial nofollow links are external links used when one site is explicitly not endorsing another site. If Aaron Wall ever linked to Mahalo you better believe he’d use a nofollow. These links are incredibly rare and, as such, quite valuable.

Social nofollow links are external links used by social platforms such as Twitter and Facebook. They could conceivably be deemed sculpting or editorial nofollow links but I’ll split them out here since they’re probably a big piece of the nofollow link graph.

Comment nofollow links are external links used in blog comments and forums. Over the last two years it has become clear that Google can determine the location of a link – whether it sits in body text, the footer, navigation or comments. The comment nofollow graph is where you’ll find an enormous amount of spam.

NoFollow Link Graph

The nofollow link graph would power the Elgoog algorithm. Sites with a high sculpting to total link ratio would rank well. Similarly, a high advertising to total link ratio may boost your Elgoog rank. Editorial nofollow links would provide a lift to the linked site. And of course, sites that receive a large amount of comment nofollow links would shoot to the top of Elgoog results.

In looking at just the comment nofollow link graph it seems like it should be pretty easy to identify spam. Elgoog loves spam! Remember my comment spam examples? Those comments all have links that would help that site rank well in Elgoog. As such, they should rank nowhere near the top in Google.

See how that works? If it ranks high in Elgoog, it ranks low in Google. Your Elgoog score is a signal. Sounds crazy, but on some level I feel like it might work.
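To make the thought experiment concrete, here’s a rough Python sketch of how an Elgoog score might be computed and then inverted as a demotion signal. The link classes, weights and threshold are all invented for illustration; nothing here describes how Google actually works.

```python
# A minimal sketch of the Elgoog idea: score a site by the makeup of its
# nofollow backlinks and treat a high score as a negative signal.
# Weights and threshold are hypothetical.

NOFOLLOW_WEIGHTS = {
    "administrative": 0.0,   # benign internal housekeeping links
    "sculpting": 0.5,        # internal PageRank / anchor text manipulation
    "advertising": 0.5,      # labeled paid links
    "editorial": -1.0,       # explicit non-endorsements are rare and telling
    "social": 0.1,           # Twitter/Facebook style links
    "comment": 1.0,          # where most of the spam lives
}

def elgoog_score(nofollow_counts, total_links):
    """Higher score means the site would 'rank well' in Elgoog."""
    if total_links == 0:
        return 0.0
    weighted = sum(NOFOLLOW_WEIGHTS.get(kind, 0.0) * count
                   for kind, count in nofollow_counts.items())
    return max(weighted / total_links, 0.0)

def google_penalty(nofollow_counts, total_links, threshold=0.3):
    """If it ranks high in Elgoog, demote it in Google."""
    return elgoog_score(nofollow_counts, total_links) > threshold

# Example: a backlink profile dominated by comment nofollows.
profile = {"comment": 4000, "social": 500, "administrative": 200}
print(elgoog_score(profile, total_links=6000))    # ~0.68
print(google_penalty(profile, total_links=6000))  # True
```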

Measure Unnatural Link Behavior

If people no longer naturally link, why not measure the inverse? Measure the unnatural link behavior.

At SMX West 2010 Bing gave a presentation on Social Search that included an interesting slide showing the difference between a natural community and a spam community.

How To Identify Link Spam

The difference is rather stark. Spam was highly visible.

Elgoog would reward sites with obvious link exchange schemes. It would seek out uniform anchor text. And those that received links from sites completely off topic would also benefit. In fact, Elgoog might require topic diffusion in your link graph. (Sites about lawn mowers better have links from wedding dress and online chess sites.) Of course, large numbers of links should be built in a short time frame. Slow and steady link profiles would be frowned upon.
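Two of those unnatural behaviors, uniform anchor text and sudden link velocity, are easy to sketch. Here’s a small illustration in Python; the input format, anchor text and thresholds are made up for the example, and this is not a description of any real engine’s signals.

```python
# Rough measurements of 'unnatural link behavior': anchor text uniformity
# and link velocity. Thresholds and sample data are illustrative only.

from collections import Counter
from datetime import date

def anchor_uniformity(anchors):
    """Share of backlinks using the single most common anchor text."""
    if not anchors:
        return 0.0
    counts = Counter(a.strip().lower() for a in anchors)
    return counts.most_common(1)[0][1] / len(anchors)

def link_velocity(link_dates, window_days=30):
    """Fraction of all known backlinks acquired in the most recent window."""
    if not link_dates:
        return 0.0
    newest = max(link_dates)
    recent = sum(1 for d in link_dates if (newest - d).days <= window_days)
    return recent / len(link_dates)

anchors = ["cheap widgets"] * 180 + ["widgets", "shop"] * 10
dates = [date(2011, 1, 15)] * 150 + [date(2009, 6, 1)] * 50

print(anchor_uniformity(anchors))  # 0.9 -- suspiciously uniform
print(link_velocity(dates))        # 0.75 -- most links built very recently
```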

Elgoog and the nofollow link graph could help measure manipulated trust and authority or, at a minimum, trust and authority disinformation. As the link graph degrades, wouldn’t that be a valuable signal?

Maybe Google does this already. Or they’ve poked at it and found it flawed. (I can already poke holes in it myself.) Yet, I feel like there’s a thread of potential here. Rooting out manipulated trust and authority seems a far better initiative than judging content farm quality.

What do you think?

How To Quickly Identify Comment Spam

February 10 2011 // SEO // 5 Comments

Most of the time comment spam is pretty obvious, but every now and again you want to make certain you’re trashing the right comments.

Google Your Comments

The fastest way to identify comment spam is to copy the comment and search for it on Google. Make sure to use quotes around the entire comment, which tells Google to look for the query in that exact order. If you get any results on the exact comment, it’s most likely comment spam.
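If you wanted to automate that check, here’s a small sketch using the Google Custom Search JSON API to run the exact-phrase query and look at the result count. The API key, search engine ID and spam threshold are assumptions you’d have to supply, and quotas apply; the post itself only describes doing this by hand in the search box.

```python
# Sketch: search for the exact comment text and treat a large result
# count as a spam signal. Assumes Google Custom Search API credentials.

import requests

API_KEY = "YOUR_API_KEY"          # assumption: you have Custom Search API access
CX = "YOUR_SEARCH_ENGINE_ID"      # assumption: a web-wide custom search engine

def looks_like_comment_spam(comment, threshold=100):
    """Return True if the exact comment text appears widely across the web."""
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": CX, "q": f'"{comment}"'},
        timeout=10,
    )
    resp.raise_for_status()
    total = int(resp.json()["searchInformation"]["totalResults"])
    return total >= threshold

print(looks_like_comment_spam("Great post, I really enjoyed reading this!"))
```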

Here’s one I received a few weeks ago.

Finding Comment Spam

I’ve highlighted the fact that Google is returning about 26,400 results for this bit of comment spam.

Here’s another I plucked off of another blog during my morning reading.

Comment Spam Example

Two things to note here. First is that this technique still works even when you reach the 32 word query limit. Second, this gem returned about 90,500 results. (Is it just me or does the ‘about’ sort of sound like ‘allegedly’ in this instance?)

Comment Spam

This sad and pathetic practice gives SEO a bad name. I sometimes think a database of comment spam should be created. Who knows, maybe it already exists.

I can think of a number of ways it could be used to help sites rid themselves of this parasite. Why couldn’t Google alert owners (via Google Webmaster Tools) when it identified a comment that reached a certain spam threshold? Seeing the same comment even 100 times should be more than enough, never mind the 26K and 90K examples I’ve provided.
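As a toy illustration of that database idea, you could fingerprint each comment and flag anything seen more than a threshold number of times. The class name, normalization and threshold below are invented; a real system would normalize text far more aggressively and share data across sites.

```python
# Toy comment spam database: fingerprint comments and flag repeats.

import hashlib
from collections import defaultdict

class SpamDatabase:
    def __init__(self, threshold=100):
        self.threshold = threshold
        self.counts = defaultdict(int)

    @staticmethod
    def fingerprint(comment):
        # Collapse whitespace and lowercase before hashing.
        normalized = " ".join(comment.lower().split())
        return hashlib.sha1(normalized.encode("utf-8")).hexdigest()

    def record(self, comment):
        """Store a sighting; return True once the comment crosses the threshold."""
        key = self.fingerprint(comment)
        self.counts[key] += 1
        return self.counts[key] >= self.threshold

db = SpamDatabase(threshold=3)
for _ in range(3):
    flagged = db.record("Thanks for sharing, very informative blog post!")
print(flagged)  # True -- same comment seen 3 times
```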

What about abandoned blogs or splogs where there is an abundance of comment spam? Couldn’t a comment spam database help flag these sites and remove them from the index and link graph?

Of course it’s more difficult than it sounds but wouldn’t it be worth it?

Retailers Slow To Adopt Like Button

February 09 2011 // eCommerce + SEO + Social Media // 4 Comments

In April 2010 Facebook launched the Open Graph and Like button, allowing sites to better control how their pages are displayed in Facebook News Feeds and search results.

Retailers Slow To Adopt Like Button

Yesterday I visited all of Internet Retailer’s Top 100 retailers to see if they were using the Like button. I didn’t count retailers using the Like button only on their Facebook Page; I was looking for Like button usage on product pages.

Adoption Rate of Facebook Like Button by Retailers

The adoption rate of the Like button for eCommerce seems low, with only 27% of the Top 100 online retailers using the Facebook Like button.

Like vs Share

Facebook Share was not included in the above numbers, but it is more widely used by retailers. Yet the share functionality is no longer promoted or recommended by Facebook. Searching for it on the Facebook developer platform returns very little, and what does appear usually points to the Like button and Open Graph documentation.

Though I didn’t specifically measure it, I’m unsure whether any of these retailers (even those with the Like button installed) were using the related social plugins. In particular, the Recommendations plugin could be an interesting cross-sell feature for retailers.

Facebook Insights

Of those using the Like button, only 35% were tracking usage via Facebook Insights. My methodology for validating this was to use the Facebook Linter tool on a retailer’s domain. I counted those who had the appropriate Facebook Insights for Domains verification (fb:admins, fb:app_id or fb:page_id) enabled.
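A scripted version of that check might look something like the sketch below: fetch a retailer’s home page and look for the fb:admins, fb:app_id or fb:page_id meta properties that Facebook Insights for Domains relied on. The URL is hypothetical and the regex parsing is deliberately crude; a real audit would use a proper HTML parser.

```python
# Sketch: check a page for the meta tags used to verify
# Facebook Insights for Domains.

import re
import requests

FB_PROPERTIES = ("fb:admins", "fb:app_id", "fb:page_id")

def has_insights_verification(url):
    html = requests.get(url, timeout=10).text
    meta_tags = re.findall(r"<meta[^>]+>", html, flags=re.IGNORECASE)
    return any(prop in tag for tag in meta_tags for prop in FB_PROPERTIES)

# Hypothetical retailer domain used for illustration.
print(has_insights_verification("https://www.example-retailer.com"))
```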

This is somewhat less surprising given the difficulty of verification, the lack of robust data in Facebook Insights and the fact that retailers can track downstream traffic from Facebook as a benchmark for success instead. However, this metric should be of concern to Facebook.

eCommerce Opportunity

Facebook Money

The Like button and Open Graph present a huge opportunity for retailers and eCommerce. Using Facebook SEO, retailers can optimize the way their products are presented on Facebook.

Each Like is a type of micro-review and an opportunity for retailers to leverage brand affinity. In addition, sites can publish stream updates to users who have Liked pages via the Open Graph API.

There are 500 million active users who spend 700 billion minutes a month on Facebook. When will retailers decide to dedicate more effort to reach this captive audience?

Blekko Censors Search

February 03 2011 // Rant + SEO // 11 Comments

The needs of the many outweigh the needs of the few.

Blekko Doesn't Grok Spock

Blekko Spam

Just prior to Farsight 2011, Blekko removed twenty sites from its search results.

“These sites are the worst spam publishers on the Web according to our users,” said Rich Skrenta, CEO of Blekko. “They are literally responsible for millions of pages on the Web that our users say are just not helpful and they’d prefer they were banned permanently. So we’re going to do that for them.”

Blekko has some interesting functionality around spam so I can see why they’d want to highlight it based on the recent spam/content farm meme surrounding search. That’s understandable. But censorship is not the answer.

Blekko Users

There is precious little data as part of this announcement. How big is Blekko? Quantcast and Compete show that the monthly unique visitor count is anywhere between 16,000 and 143,000. However, to mark anything as spam you have to be a Blekko user.

The November 2010 public launch of Blekko provided some insight into numbers and usage.

Blekko has been testing its solution to search with roughly 8,000 beta testers who have created more than 3,000 different slashtags. Blekko tells us that 11% of its existing user base come back to the site on a weekly basis.

I was a beta tester. So were a number of my colleagues – innovators, technologists and SEOs. As search marketers, we were eager to try out a competing search engine. I’m not a Google apologist.

Without hard data the math gets fuzzy, but the total number of registered users seems relatively small and is likely still composed of innovators. Do these people represent everyone?

Blekko Searches

The other missing piece of data is the searches related to these spam complaints. We don’t know the types of searches that were performed, nor the result set that was presented to users. Are the spam complaints a measure of the sites or a measure of the quality of results returned by Blekko?

Are spam complaints produced on general search queries or long tail queries? Is the incidence of spam complaints for specific sites different based on query type? (Information vs Transaction vs Navigation.)

The spam interface also leads to another question. How many of the spam complaints were made without visiting the site in question?

Blekko Censorship

Aaron Bradley took the words out of my mouth in his Blekko, Can I Please Have My Spam Back? post.

At the end of the day, I have no respect for a search engine that censors my results based on notions of quality, rather than relevancy.  It ceases to be comprehensive, it smacks of elitist righteousness and – most of all – decisions about the validity of content are being made on my behalf by people I don’t know.

Quality and taste are subjective. The fact that Blekko has chosen to use the feedback from a biased minority to censor results for the majority is unfortunate. Is the message that mainstream users don’t know enough to make their own decisions, the right decisions? If I search for ‘food’, should unhealthy foods be removed from search results?

In all seriousness, would Blekko remove specific books that users had marked as spam? According to the American Library Association, this would mean Catcher in the Rye, To Kill a Mockingbird and The Color Purple would vanish from the landscape.

Use spam feedback to reorder results, but let me make up my own mind. I don’t need a nanny search engine.

Disclosure: While I consult for Buzzillions, this post is my personal opinion and does not reflect those of Buzzillions.

Google Bait and Switch

February 02 2011 // Rant + SEO // 5 Comments

Does Google truly understand SEO? One would hope so but in the last few weeks Google took one step forward and two steps back.

What Google Says

Matt Cutts gave SEO a sort of backhanded compliment in a recent post about search neutrality.

I don’t believe all search engine optimization (SEO) is spam. Plenty of SEOs do a great job making their clients’ websites more accessible, relevant, useful, and fast.

I like Matt and I think he does understand and may even appreciate SEO.

And a recent Google Webmaster Help video titled Using Webmaster Tools Like an SEO was also a positive sign. The content is very basic and Maile seems to be talking like Mr. Rogers, but that’s probably to ensure the video helps beginners and those for whom English is a second language. So, they talk the talk.

What Google Does

Does Google walk the walk? The new Google Engage program recently launched and I’m seeing ads on Google promoting it.

Google SEO Search Ads

The keyword targeting seems focused around any term containing SEO. I got this one to fire when I searched for ‘seo services’.

A different version popped up during my morning Google Reader review.

Are you an SEO?

What’s the problem? Google Engage has pretty much NOTHING to do with SEO. Here’s the landing page.

Google Engage Landing Page

The highlighting is my own, but is there to underscore the fact that they’re equating search engine optimization with AdWords services. I find this disturbing.

I would give most people outside of the industry a pass on distinguishing between SEO, SEM and PPC. Google is no outsider. I think it’s pretty clear that SEO is about optimizing a site and pages for natural search. SEO is not about paid search.

Yet here they are advertising against SEO keywords, using an SEO-focused display URL to drum up AdWords business. I’m left to believe that those behind Google Engage either don’t understand what SEO really is, or that they know exactly what it is and are trying to convince people to spend on paid search traffic instead of optimizing for free search traffic.

Bait and Switch

So which is it? When I search for ‘sem services’ I get a different ad.

SEM Services Adwords Ad

That ad takes me to an interesting page.

Google Defines SEO and SEM

Huh. Looks like Google’s got the definitions down pat. So I’m left to assume Google Engage is purposefully muddying the waters.

Am I blowing this out of proportion or are you disturbed by this bait and switch technique?

Google Testing Supersized Sitelinks

January 29 2011 // SEO // 6 Comments

Is Google testing a larger font size for sitelinks?

Supersized Sitelinks

Supersize Sitelinks

It looks like the sitelinks in both paid and organic listings have been supersized. Anyone else seeing supersized sitelinks?

Also, if you’re up for a chuckle, take a gander at the ‘Something different’ selections for apple.

Google Search Quality Decline or Elitism?

January 27 2011 // Marketing + SEO + Technology // 8 Comments

Are content farms really the problem or are you just a snob?

The recent complaints about Google’s search quality (here, here, here and here) range from real spam to indictments of content farms. I think we can all agree that spam (cloaking, scrapers, splogs, status code manipulation etc.) should be weeded out. But that leaves us with the larger issue: the quality of results.

Quality

The definition of quality usually refers to a ‘degree of excellence’ or ‘superiority of kind’. It’s often associated with grade. Think back to your time in school. Did you ever get back a paper you thought deserved a higher grade? You were certain it was an A paper and you got a B+ instead!

B+ Grade

Quality is a matter of taste.

Taste

Ruination IPA or Coors Light

What about beer? I adore Stone’s Ruination IPA. But I’m certain a lot more Coors Light is sold in a day than Ruination IPA in a month, maybe even a year. Even if I were to try to determine the best IPA, there would be many conflicting and passionate opinions on the topic.

Value

Perhaps it’s about value instead? Ruination IPA costs a pretty penny while Coors Light is cheap. Maybe Coors Light is the best value because of the ratio of price to quality. But people value things in very different ways. This is clear when looking at restaurant reviews.

Applebees vs The French Laundry

When I read restaurant reviews I can tell whether the reviewer has the same food bias as I do. I treat reviews which laud huge portions, or rock bottom prices, or extol the virtues of never-ending refills differently. Their view of what a good meal is differs from mine. They’re looking for quantity, no matter how mediocre the food. I’m looking for quality and generally don’t want a pound and a half of garlic mashed potatoes.

There’s nothing wrong with either perspective. But they are different.

Popularity

Google Serves Lots of People

Look around folks. What do you see more of? Fast food or fine dining? It’s fast food hands down.

And you can see this in nearly every area of life. Justin Bieber and Miley Cyrus are wildly popular musicians but I’m listening to Kasabian and Kaiser Chiefs. I haven’t touched Internet Explorer in years but it’s (sadly) still the most popular browser.

Mahalo, Squidoo and eHow get millions of visitors a month. These sites are popular, and while you might find them distasteful, lacking quality or providing little value, many others (clearly) disagree.

Do I like these sites? No. Perhaps I’m a snob. Maybe you are too.

Numbers

The number of searches has skyrocketed in the last five years. Using comScore’s monthly numbers, core searches have gone from 6.9 billion at the beginning of 2007 to 16.4 billion at the beginning of 2011.

US Search Volume 2007 to 2011

At the same time Pew reports a growing percentage of adults are now online and using search engines on a daily basis.

Audience

The search audience has changed. One way to measure this is to plot daily search engine usage by adults against the innovation curve.

Diffusion of Innovation

The U.S. Census Bureau puts the population of the US at around 300 million. Using the CIA World Factbook we can estimate that 80% of those are over the age of 14. I’m going to use the resulting number (240 million) as my adult population number.

In 2007 Pew reported that 70% of adults were online and that 40% of them used search on a daily basis.

  • 240,000,000 X 70% X 40% = 67,200,000

In 2010 Pew reported that 79% of adults were online and that 49% of them used search on a daily basis.

  • 240,000,000 X 79% X 49% = 92,904,000

innovation adoption of search

In both 2007 and 2010 daily search usage penetrated the Early Majority. The difference is that the Early Majority now outnumber the Innovator and Early Adopter groups combined.

Early Majority Rule Search Volume

And that’s just three years; imagine the difference between 2005 and 2010. The picture of a daily search user is very different today.

Mental Models

The nature of our searches (as a whole) is likely changing because of who is now searching. The mental model of an Innovator or Early Adopter is going to be different than that of someone in the Early Majority.

Each group is going to approach search with different ideas and baggage. The Innovator and Early Adopter are more likely to be open to new experiences and to explore. They are more risk tolerant.

The Early Majority and Late Majority are more likely to apply their information seeking behaviors from other mediums to search. They’re looking for the familiar.

Brands

Many seemed surprised when Google Instant revealed a ‘bias’ toward brands. It has since been confirmed that Google is not engaging in any internal bias. That bias is a user bias. It’s a prediction based, in large part, on the volume of searches.

Should we really be surprised? Many of these companies are spending a fortune to advertise and market their brand. Their goal is to capture mindshare and they are succeeding. So much so that people, particularly the Early and Late Majority, go online to search for those brands.

Brand Search Acceleration

In 2005, a DoubleClick report (Search Before The Purchase) showed relatively low levels of brand search. While it accelerated closer to the actual purchase, in some instances only 27% of searches were on brand. Do you honestly think that’s still true today?

eCommerce has certainly grown in that time. The number of navigational searches has climbed, which is closely related to brand. People continue to search (a lot) for Facebook or Craigslist as a way to get to those destinations. But last year Bing also reported that Walmart was the 8th most searched term.

Users

Matt Cutts tells us not to chase the algorithm but to chase the user. But who is the user really? The audience has changed! And if the algorithm is trying to use human feedback as a signal, wouldn’t the results reflect that new composition?

Might that be why, in October of 2010, many people noticed an algorithm change that seemed to skew toward bigger brands? It’s what Jonathan Mendez called the ‘gentrification of the SERPs’. (I wish I’d come up with that term!)

I may not think the results got better, but perhaps someone from the Early Majority or Late Majority did. They look at those results and see a lot of familiar brands and that instills confidence.

Content Farms

So when you see eHow at the top of a result and cringe, others might be thinking Google has led them to the easiest and best result. When you find a Mahalo page you might grind your teeth, but others could walk away thinking they got exactly what they needed.

I may enjoy reading the works of Shakespeare but plenty of others will be super happy to have the CliffsNotes version instead.

Which User is Google Optimizing For?

McGoogle

I believe Google when they say they want to provide the most relevant results. But there is a fair bit of subjectivity involved because the user is not some monolithic, homogeneous blob. Quality, taste, value and popularity are all going to inform what people think is relevant.

If Google is optimizing for the majority, that may mean a very different interpretation of relevancy. There’s nothing really wrong with that, but if you’re an Innovator or Early Adopter, you might think things are getting worse and not better.

There’s usually a better place to eat right down the street from a McDonald’s, but it’s McDonald’s that still gets most of the business. There are some places (North Beach in San Francisco for instance) that have a ‘no-chains’ policy.

Google could certainly do that. They could stand up and say that fast food content from Demand Media wouldn’t gain prime SERP real estate. Google could optimize for better instead of good enough. They could pick fine dining over fast food.

But is that what the ‘user’ wants?

SEO Status Codes

January 20 2011 // Analytics + SEO // 5 Comments

One of the more technical aspects of SEO is to understand, monitor and manage status codes.

Soup Nazi 400 Bad Request

What are Status Codes?

Status codes are an essential part of HTTP, the request-response protocol that powers the web. Each time someone (including Googlebot) visits a page, the client asks the server for information. The status code is the numeric response to that request and provides guidance on how to proceed. You might be familiar with status codes such as 404 and 301.

SEO Status Codes

I recommend bookmarking the status code definitions documented by the W3C. However, I want to provide a quick reference guide specifically for SEO.

200

OK or Success. This is the response code you want to see most often. At a minimum, I want Googlebot to see a 200 response code on 90% or more of requests during a crawl.

301

Moved permanently. This is the right way to redirect, telling search engines to index that content in a new location.

302

Moved temporarily. This is the wrong way to redirect (except in very rare cases). You’re essentially putting this content into limbo because it’s not at the current location but search engines won’t index the temporary location.

304

Not modified. This can be used for crawl efficiency, telling search engines that the content has not changed. You’re basically telling Googlebot not to bother and to move on to other content. The advent of Caffeine may have made this unnecessary but I think it’s still worthwhile.
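If you want to verify that a URL actually honors conditional requests, here’s a small sketch: request the page once, then re-request it with the validators (ETag or Last-Modified) it returned and see whether you get a 304 back. The URL is a placeholder and the result depends entirely on how the server is configured.

```python
# Sketch: check whether a URL returns 304 Not Modified for conditional requests.

import requests

def supports_304(url):
    first = requests.get(url, timeout=10)
    headers = {}
    if "ETag" in first.headers:
        headers["If-None-Match"] = first.headers["ETag"]
    if "Last-Modified" in first.headers:
        headers["If-Modified-Since"] = first.headers["Last-Modified"]
    if not headers:
        return False  # server exposes no validators, so it can never send 304
    second = requests.get(url, headers=headers, timeout=10)
    return second.status_code == 304

print(supports_304("https://www.example.com/some-article"))  # placeholder URL
```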

404

Not found. This happens when the server can’t find any content at that specific location. Too many 404s are bad. In my experience, having too many is a negative algorithmic signal. Google simply doesn’t trust that sending a user to that site will be a positive experience.

I don’t have a hard and fast number for when 404s become problematic. I believe it’s probably based on a percentage of total requests to that site. As such, it’s just good practice to reduce the number of 404s.

That does not mean zero! I don’t recommend putting a 301 in place when it should return a 404. A request for domain.com/foo should return a 404. Ditto for returning a 200 when it should be a 404. (Yes, I’ve seen this lately.) I’d be surprised if having no 404s wasn’t also some sort of red flag.

410

Gone. If you know that content no longer exists, just say so. Don’t encourage Googlebot to come back again and again and again via a 404 which doesn’t tell it why that page no longer exists.

500

Internal Server Error. This generally means the server encountered a problem and never returned an appropriate response. 500 errors basically tell the search engine that the site isn’t available. Too many 500 errors call into question the reliability of that site. Google doesn’t want to send users to a site that ultimately times out and doesn’t load.

How to Track Status Codes

There are a number of ways you can track status codes. For spot checking purposes, I recommend installing one of two Firefox add-ons: HttpFox or Live HTTP Headers. These add-ons let you look at the communication between the user agent and the server. For example, what happens when I type ‘www.searchengineland.com’ directly into my browser bar.

HttpFox Example

Using HttpFox I see that it performs a 301 redirect to the non-www version and then resolves successfully. Google Webmaster Tools also provides you with nice insight through the Crawl Errors reporting interface.
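For a scriptable version of that HttpFox check, here’s a generic sketch that follows a URL and prints each hop in the redirect chain with its status code. It uses the requests library rather than anything described in the post, and the actual behavior of any given site may have changed since this was written.

```python
# Sketch: print each hop in a URL's redirect chain.

import requests

def print_redirect_chain(url):
    resp = requests.get(url, allow_redirects=True, timeout=10)
    for hop in resp.history:
        # Each hop is a redirect response; Location points at the next URL.
        print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
    print(resp.status_code, resp.url)

print_redirect_chain("http://www.searchengineland.com")
# e.g. 301 http://www.searchengineland.com/ -> http://searchengineland.com/
#      200 http://searchengineland.com/
```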

But if you really want to use status codes to your benefit you’ll need to count and track them every day via log file analysis. I recommend creating a daily output that provides the count of status codes encountered by Googlebot and Bingbot.
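A minimal version of that daily output might look like the sketch below: parse a combined-format access log and count status codes seen by Googlebot and Bingbot. Log formats and bot detection vary (and serious setups verify bots by reverse DNS), so treat this as a starting point rather than a finished report.

```python
# Sketch: count status codes per search engine bot from an access log.

import re
from collections import Counter

LINE_RE = re.compile(
    r'"\w+ (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)
BOTS = {"Googlebot": "Googlebot", "bingbot": "Bingbot"}

def count_bot_status_codes(log_path):
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            match = LINE_RE.search(line)
            if not match:
                continue
            for needle, bot in BOTS.items():
                if needle in match.group("agent"):
                    counts[(bot, match.group("status"))] += 1
    return counts

for (bot, status), count in sorted(count_bot_status_codes("access.log").items()):
    print(bot, status, count)
# e.g. Bingbot 200 9033 / Googlebot 200 48210 / Googlebot 404 312 ...
```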

Status Code Reports

Using those daily numbers you can construct insightful and actionable dashboard graphs.

Sample Status Code Reports

While this may take some doing, the investment is worthwhile. You can quickly identify and resolve 404s and 500s. Many will find it helpful to have this data (concrete numbers!) so you can prioritize issues within a larger organization.

You’ll also gain insight into how long it takes search engines to ‘digest’ a 301 and much more. Status code management can be a valuable part of an advanced SEO program.

Stop Writing for People

January 17 2011 // SEO // 46 Comments

Stop writing for people. Start writing for search engines.

I’ll wait while you run to get your pitchforks and light your torches. I know it sounds like heresy but I ask you to hold your judgment.

Search Engines Emulate Human Evaluation

Search Engines Want to be Human

The goal of search engine algorithms is to emulate the human evaluation of a site or page. This is not an easy task. In fact, it’s a really difficult task. Think of all the things that you tap into when you evaluate a new website. The amount of analysis that goes on in just a few seconds is astounding.

The thing to remember is that search engines want to be a proxy for human evaluation. They’re trying to be … human. Don’t lose sight of this.

Search Engines Are Not Smart

Doh!

But for all of that effort, search engines aren’t smart. The name of my blog is my opinion of search engines: a search engine is like a blind five year old.

The blind part comes in because they don’t care about how pretty your site is or the gorgeous color palette you’ve selected. Mind you, visual assessment is a factor humans use in evaluating a website, but search engines aren’t able to do this.

Why a five year old? For all of the advances search engines have made, they’re still not ‘reading’. They’re performing text and language analysis. That’s a huge distinction. Really. A Grand Canyon type of distinction.

A search engine would likely fail a basic reading comprehension test.

Knowing this, you need to take steps to make it very clear what that page is about and where the search engine should go next. This helps the search engine and … ultimately helps people too.

First Impressions Matter

For a number of years I ran telemarketing programs. (University fundraising if that makes you feel better about me.) What you find out is that you have only 7 seconds to convince a person to stay on the phone. They better hear something worthwhile fast or else you’ll get the dial tone.

It’s no different online. With high speed connections, tabbed browsing, real-time information and an environment where anyone can publish anything, that first impression is incredibly important. In a few seconds users are determining if that content is authoritative and relevant.

Is it any wonder why tools like FiveSecondTest and Clue have become popular?

People Scan Text

Did you know that the P.S. line is one of the most read parts (if not the most read) of a direct mail solicitation? It is. People naturally gravitate toward it. They’re far more willing to read the P.S. line than any of the body copy.

And we see this behavior online too. Research by Jakob Nielsen shows that most readers scan instead of reading word for word.

People rarely read Web pages word by word; instead, they scan the page, picking out individual words and sentences. In research on how people read websites we found that 79 percent of our test users always scanned any new page they came across; only 16 percent read word-by-word.

Another study showed that even those who ‘look’ at your content are only reading between 18% and 28% of it.

tl;dr

Have you seen this spring up around the web lately? It stands for ‘too long; didn’t read’. It’s used to summarize content into a sentence or two. It can be used at the top of content or at the bottom. At the bottom, it serves as a close cousin to the traditional direct mail P.S. line.

But why exactly are we seeing tl;dr? Could it be that the content we’re writing just isn’t concise enough? That it’s not formatted for readability? It’s your job to make it easy for people to understand and engage with your content. Keep it simple, stupid.

SEO is more than tags and links. Today SEO is also about User Experience (UX).

The Brain Craves Repetition

There’s an old adage in public speaking that only a third of the audience is listening to you at any given time. This means that you have to repeat yourself at least three times to get your point across. A recent Copyblogger post touched on this subject.

The brain can’t pay attention to everything and it doesn’t let everything in. It figures anything that is repeated constantly must be important, so it holds on to that information.

I also believe in a type of visual osmosis. When evaluating a page for the first time, words that are repeated frequently make an impression, whether they’re specifically read or not.

People instinctively want consistency. They want to know that they’re reading the right thing, in the right way, in the right order. They want to group things. That’s one reason why ‘list’ posts are so popular.

Apply Steve Krug’s ‘Don’t Make Me Think‘ philosophy to your content. Not just for search engines but for people.

Stop Using Pronouns

Don't Use Pronouns

Why use that pronoun when you can use the actual noun instead? Sure, you know what you’re talking about, and the reader might too, but putting the noun in ensures that the reader (who is not nearly as invested in your content) is following along. And our dear friend the search engine is also better served.

Having that keyword noun in your content frequently doesn’t make it worse; it makes it better. When you read it, it may feel bloated. But the majority of your readers are skimming, while the minority who are truly reading will simply not see those extra nouns. In fact, they become a bit like sign posts.

Here’s an example from the world of books: dialog! What if you didn’t attribute dialog to a specific person?

“I want Mexican food,” he said.

“No, let’s get Italian food,” he replied.

“Can’t we meet in the middle?” he queried.

How many people are talking? Two? Three? Perhaps one if you’re a Fight Club fan.

Now let’s add the names back in.

“I want Mexican food,” Harry said.

“No, let’s get Italian food,” Ron replied.

“Can’t we meet in the middle?” Tom queried.

Now I know there are three people talking. And as that dialog continues (as dull as it might be) I’ll use those names as sign posts so I know who’s saying what. But will I actually ‘read’ each instance of that name? Probably not. You’ll pick up the cadence of the dialog and essentially become blind to the actual name. Blind until it becomes unclear and then you’ll seek that name out to clarify exactly who said what.

When people scan they need those sign posts. They need to see that keyword so they can quickly follow along.

Web Writing is Different

When people say you should write for the user, they mean well. In spirit, I completely agree. But in practice, it usually goes dreadfully wrong.

I’ll never forget my first job out of college. I was an Account Coordinator at an advertising agency. One of my jobs was to write up meeting notes. As an English minor I took a bit of pride in my writing skills. So it was a great shock to get back my first attempt with a river of red marks on it.

What had I done wrong?

I wasn’t using the right style. Meeting notes aren’t literature or an essay. I didn’t have to find a different word for ‘agreed’ because of my dislike of using the same verb more than once (twice at most). No, I was told that my writing was too ‘flowery’ and that I needed to aim for clarity and brevity.

I’m not saying that writing for the web is like writing meeting notes. But I am saying that writing for the web is different!

So when you tell someone to write for the user, they usually write the wrong way. They write thinking the user is going to be gorging themselves on every word, giving the content their full attention. They think the user will appreciate the two paragraph humorous digression from the main topic. They’ll want to write like David Mitchell or Margaret Atwood. (Or maybe that’s just me.)

Robots Don’t Understand Irony

Dave Eggers Irony Rant

To my knowledge, there is no double entendre database, nor an irony subroutine or a witticism filter in the algorithm. Do they have a place in your writing? Sure. But sparingly. Not just because search engines won’t understand but because, like it or not, a lot of people won’t get it either.

Not everyone will get the inside joke … like why I used the image of this particular novel above.

Write for Search Engines

Make sure they know exactly what you’re writing about. Stay focused. Break your content up into shorter paragraphs and use big descriptive titles. Avoid pronouns and don’t assume they understand what you just said in a previous paragraph. Keep it simple and give them sign posts.

Write for people the right way. Write for a search engine.
