You Are Browsing The SEO Category

Google Preemptive Penalties

April 01 2011 // Humor + SEO // 2 Comments

Starting this month Google will begin to use a version of the Bush Doctrine to fight web spam. Google will preemptively penalize sites.

Internet Bush Doctrine

Google Bush Doctrine

The main tenet of the Bush Doctrine was the idea of preemptive war. Google has decided to adopt this philosophy in dealing with the rampant manipulation of trust and authority via the link graph. Instead of reacting to increases in paid links, splog networks and other schemes, Google is going on the offensive and will penalize sites preemptively.

Perhaps this is a reaction to the revelations about J.C. Penney, Forbes and Overstock, as well as the surveys and polls that indicate that most sites engage in black hat techniques and that paid links are still viable.

Reconsideration Requests

It seems as if an analysis of reconsideration requests helped lead Google to this new policy. A source on the Google web spam team says:

We learn a lot from reconsideration requests. In that environment, sites are willing to admit to and stop bad behavior. Analyzing the profile of these sites before and after has been of growing interest to the team.

Sure enough, the text surrounding reconsideration requests makes it clear that coming clean is important.

Google Reconsideration Request

Admission and corrective action is required to get out of Google’s dog house.

Preemptive Google Penalties

Preemptive penalties will force sites to divulge and cease black hat techniques. Why? Because you’re simply not going to know what Google does and doesn’t know. If you are not forthcoming (if you hold something back) and Google finds out, it will make it even tougher to get out of the dog house.

Are you feeling lucky punk?

Do you feel lucky, punk? Well … do ya?

Penalty Selection and Length

It remains to be seen how Google will select sites for preemptive penalties. Is it random or will it be initiated by members on the web spam team? Will all sites be eligible for preemptive penalties, or will some be whitelisted?

The length of the preemptive penalty is also unknown. Will it be lifted if the offending site doesn’t file a reconsideration request or is reconsideration required? It will be interesting to see if anyone simply tries to ride out the penalty without engaging Google directly.

And how long will Google pursue this strategy? One would hope that the data gleaned from these preemptive penalties might (quickly) help Google refine their detection efforts, allowing them to scrap this policy.

What do you make of Google’s Bush Doctrine and how will you handle a preemptive penalty?

Google +1 Analysis

March 31 2011 // SEO + Social Media // 12 Comments

Yesterday Google launched Google +1. After firing up Haircut 100 and reading Danny Sullivan’s review I sat down and gave Google +1 a try myself. Here’s what I learned and why I think Google +1 is the start of a war for social data and your attention.

Google +1

What is Google +1?

Google +1 delivers a new set of social search results based on explicit recommendations from your social graph. Right now, those recommendations are made by clicking the +1 button next to a search result – both organic and paid.

In the future, users will be able to recommend content from sites that present the +1 button. At that point, Google +1 will behave much like a Facebook Like.

All +1s are aggregated and presented on a +1’s tab on your Google profile, which begins to look like a social networking platform.

Google +1 User Experience

Google +1 is a heavy feature on the page, with the icon appearing next to every result on a SERP. Do we really need more to look at? The +1 icon (and instant preview icon) will animate and ‘light up’ as you mouse over any part of that search result.

Google +1 User Experience

These features have nothing to do with each other yet they are presented to the user by the same action. I understand these are ‘actions’ a user can take on a search result but I’m not sure that’s intuitive.

Will the mainstream user even grok the +1 nomenclature? I do, but will the mom in suburban Omaha? Facebook was brilliant in using a term and icon that was instantly recognizable.

I also find the +1 mechanism strange in search results. Am I supposed to search, visit a site and return to the SERP to +1 that result? If search is about discovery, how would I make a judgement on a search result before visiting that site? Am I supposed to search for a whole bunch of stuff I already like so I can +1 those results?

If Google wants to put +1 in search results I think it makes more sense to model it on their block sites feature, which only activates after I’ve visited a site and returned to the search results. Alternatively, Google could use web history and only present a +1 option for those pages I have visited.

In the end, it’ll make a lot more sense when there are Google +1 buttons on websites and the search results reflect the +1 count from that page.

Google +1 Social Data

We all know that the link graph is a mess and that Google is seeking other ways to measure trust and authority. I don’t think the web of links will die out altogether but it certainly needs to be augmented with the web of people.

Google +1 Social Data

I don’t think Google wants to rely on Twitter and Facebook for such an important and emerging part of understanding web behavior. Twitter has been civil but the frog boiling of their developer platform is ominous. Facebook has been outright hostile and recently took steps to hide even more web content from Google.

Social signals matter and Google +1 gives Google their own explicit source of social data.

Google +1 allows Google to mine both your personal social graph (your friends who have +1’d items) and the +1 graph (the popularity of an item based on all +1 activity).

One has to believe that Google profiles will become some sort of social nexus where further meta information can be attached to +1s. Today it’s a simple list of +1 pages. Tomorrow I might be able to comment on your +1s or Yo Dawg your +1 by giving it a +1. Social proof, curation and the wisdom (or not) of crowds could rise in importance.

As Google begins to look at the +1 data, they may be able to find individuals who have more influence. Not just overall influence but influence for a specific topic.

This means that over time Google could weight a +1 from one individual on a sliding scale. A +1 from yours truly on an SEO site might carry a lot of weight, while a +1 from me on a knitting site might carry little weight.

By mapping topical influence, Google may be able to avoid +1 gaming.
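To make that concrete, here is a minimal sketch of what topic-weighted +1 scoring might look like. This is pure speculation on my part, not anything Google has described; the users, topics and influence scores are all invented.

```python
# Pure speculation: scale each +1 by the voter's influence on the page's topic.
influence = {
    "aj":  {"seo": 0.9, "knitting": 0.05},
    "pat": {"seo": 0.1, "knitting": 0.8},
}

def plus_one_weight(user: str, topic: str) -> float:
    """Weight a raw +1 by the user's topical influence (default: near zero)."""
    return influence.get(user, {}).get(topic, 0.01)

def page_score(votes: list[tuple[str, str]]) -> float:
    """Sum topic-weighted +1s; votes are (user, page_topic) pairs."""
    return sum(plus_one_weight(user, topic) for user, topic in votes)

# A +1 from an SEO voice on an SEO page counts; the same person's
# +1 on a knitting page barely registers.
print(page_score([("aj", "seo")]))       # 0.9
print(page_score([("aj", "knitting")]))  # 0.05
```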

Google +1 Personalization

What does this mean for search and SEO specifically? Google +1 may change the nature of search personalization. The difference between a standard result and a personalized result could get bigger and that would be an interesting SEO development.

Right now, Google’s personalization doesn’t disrupt traditional SEO. Google (and many SEOs) talk about it a lot, and Google has increased the number of queries it personalizes, but the personalization is still rather subtle in nature. A result might move up and down a place or two, but not from the top of page 1 to the middle of page 2.

Google +1 might change that. Search results could behave more like Facebook’s EdgeRank, which radically changes what each individual sees and experiences.
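For reference, the publicly described shape of EdgeRank is a sum of affinity × edge weight × time decay per interaction. Here is a toy version with made-up numbers; beyond that general form, nothing about Facebook’s actual implementation is confirmed.

```python
# Toy EdgeRank-style score: affinity * edge weight * time decay, with
# invented numbers. Only the general form was ever publicly described.
EDGE_WEIGHTS = {"comment": 4.0, "like": 1.0, "click": 0.5}

def edge_score(affinity: float, edge_type: str, age_hours: float) -> float:
    decay = 0.5 ** (age_hours / 24.0)  # halve the contribution every day
    return affinity * EDGE_WEIGHTS[edge_type] * decay

def item_score(edges: list[tuple[float, str, float]]) -> float:
    """Score a feed item from (affinity, edge_type, age_hours) tuples."""
    return sum(edge_score(a, t, age) for a, t, age in edges)

# A fresh comment from a close friend beats a pile of week-old likes
# from weak ties, which is why every person's feed looks different.
print(item_score([(0.9, "comment", 1.0)]))     # ~3.5
print(item_score([(0.1, "like", 168.0)] * 5))  # ~0.004
```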

Google +1 Attention

Google +1 makes Google less reliant on competitors for social data and should help them improve search results. But is that really Google’s only goal?

In order to +1 things, you first need a public Google profile. This helps people see who recommended that tasty recipe or great campsite. When you create a profile, it’s visible to anyone and connections with your email address can easily find it.

Your +1’s are stored in a new tab on your Google profile. You can show your +1’s tab to the world, or keep it private and just use it to personally manage the ever-expanding record of things you love around the web.

Google profiles could enter the war for user attention, currently being won (handily) by Facebook. People spend enormous amounts of time on Facebook, and that’s dangerous since it’s only a matter of time before Facebook becomes a real search competitor.

We’ve seen Google redesign Google profiles and got a glimpse into future social connections like FourSquare and Github. Google profiles contain rich feeds of information from Buzz, PicasaWeb and +1. Google has also paid close attention to privacy, learning from their mistake with Buzz. It all adds up to making Google profiles a true destination for social information.

Google could claim some level of success if Google +1 slows the adoption or ubiquity of Facebook’s Like button. Fracturing Facebook’s hold on attention seems like the end game.

Google +1 and Circles

Take a spin over to your Google Dashboard and you’ll find the (no longer) mythical Circles feature.

Google Social Circle and Content

Click on View social circle and you get an idea of how far along Google is on their social product.

Google Social Circle

Google has constructed their own social graph and is now building features and user-facing tools on top of this expanding data structure. I’d argue that +1 is a data feed to further support a still emerging social networking product.

Will Google +1 work?

Google does not have a good track record when it comes to social, including SearchWiki, Stars, Wave and even Buzz.

The current search interface for creating and viewing +1s seems clunky and the entire visual field seems both too saturated and nebulous at the same time. But I’ll reserve judgement until +1 buttons show up on websites. It’s then that mainstream users might better understand the functionality.

From a search quality perspective, I believe results could get better, but it’s not a fait accompli. The integrity of a user’s social graph will be paramount. How large is that social graph? Does it contain strong ties or weak ties? Which nodes in a social graph are more meaningful or influential? (e.g. – a +1 from your best friend might mean more than a +1 from someone you met once at a conference.)

Facebook is ahead of the game here with their EdgeRank algorithm. Google will need to further develop their own to make personalized search successful.

If Google really wants to take this to the next level, make a +1 bookmarklet (or bundle it with Toolbar or Chrome for wider use.) That way I can +1 sites and pages that don’t have the button on their site. This would expand the reach of social search beyond just those sites savvy enough to implement the buttons.

At the end of the day I think Google needs +1 to work, if for no other reason than to have an independent source of social data. The war for social data and your attention may have begun in earnest.

Link Gardening

March 25 2011 // SEO // 2 Comments

I’ve been thinking a lot lately about link building. The term has always made me uneasy, in part because it feels manufactured.

My mind conjures up an image of an assembly line where link after link rolls down a drab conveyor belt. It feels wrong because links are supposed to be organic. Way back when, a link was a real sign of trust and authority. People linked to another site because they found it useful or interesting.

PageRank and SEO changed how people thought about links. Suddenly people linked because of SEO. The motive was no longer pure. It wasn’t about real trust and authority, but was to achieve a measure of trust and authority.

By measuring the link graph Google has forever changed it. Google has a Heisenberg problem.

Link Building

Link Building

Is link building a good thing? Too often it seems to be about getting the most links on the highest-ranked pages with specific anchor text.

Practitioners are like foremen on a construction site. They mix together forum links, blog links, social bookmarking links, reciprocal links, directory links, widget links, paid links and content engineered to obtain links. We’ll have your McMansion banged out in no time flat!

I’d argue that much of that work is short-sighted and bound to fail. Because links are really about establishing yourself in a neighborhood.

That means getting to know your neighbors and gaining their trust. Build that McMansion in a neighborhood full of one story Eichlers and you’re probably off to a bad start. You can overcome that, but not if you avoid the neighborhood block party or PTA. And you’ll get more out of coaching in the local little league than simply sponsoring the team.

There just aren’t any shortcuts to getting to know people and creating relationships.

Link Gardening

I think the better analogy for links might be gardening.

Link Gardening

Prep the Soil

You have to prep the soil first and get your garden in order. Does the soil have the necessary nutrients? That means you have a site that contains valuable content. You probably want to set up a drip system too, which means ensuring your content can be easily shared.

Plant the Seeds

Things don’t just magically start to grow. You need to plant seeds and put down roots. And not just any seeds but the right ones for your garden, depending on climate, soil type and amount of sun exposure. This means you’re reaching out and finding people in your neighborhood. You’re reading and commenting on their blogs. You’re following and engaging them on Twitter and Facebook. Email them and introduce yourself.

Sure, you might be able to pick up some cacti on the cheap but will they even grow in your garden? You want links from people in your neighborhood, from relevant sites, not from some random off-topic blog. A handful of relevant links are going to be more powerful than a hundred random ones.

Tend the Garden

At first you’re not going to see a whole lot. That’s the truth. Gardening takes time and constant attention. You can’t just plant a few seeds and walk away thinking you’ll come back to a beautiful garden. You’ll need to water it, throw down some fertilizer, be on the lookout for bugs and do some weeding.

When those first shoots start to come up the work gets even harder. Plants need love and so do those links. Thank people for the link and make sure you’re aerating the soil by continuing to produce great content.

You may have to do some pruning or pull weeds to ensure there’s room for the plants you really want. That means turning down the inevitable link requests, guarding against comment spam and watching for links from bad neighborhoods.

Keep at it and over time – over months and years – your garden will be the envy of the neighborhood. People will come to you for advice and links.

Hiring a Gardener

Of course there are plenty of link builders who behave a lot more like link gardeners. They understand that it’s about community and relationships, and that those things can’t be manufactured.

Gorgeous Flowers

But make no mistake, some folks have a green thumb! They’re master gardeners. You can hire someone to do this for you, but it won’t be cheap. And when someone asks about that variety of plant in the corner, you might be stumped. If you have the time and resources, you might get more satisfaction out of doing it yourself.

Whether you hire a master gardener or do it yourself, I hope you decide to grow your links instead of build them.

Google Rich Social Snippets

March 21 2011 // SEO // 1 Comment

I am seeing a new type of snippet on Google SERPs for certain social site profiles.

Google Rich Social Snippets

This rich social snippet includes a profile image as well as the most recent update from that individual.

I am seeing the new snippet for Twitter and FriendFeed profiles. I have not been able to produce a rich social snippet for Facebook or LinkedIn profiles.

In addition, it looks like the snippet might have a real-time trigger and is only displayed if that person has participated on that platform within the last hour.

Twitter Rich Social Snippet

Google Rich Social Snippets

FriendFeed Rich Social Snippet

FriendFeed Rich Social Snippet

What do you think of rich social snippets? Is this new or have you seen it before?

Google Personalized Search

March 21 2011 // SEO + Social Media + Technology // Comments Off

Google recently launched a new feature that allows users to personalize their search results by blocking certain domains. What impact will this have and what does it mean for the future of search?

The Smiths

Artificial Intelligence

A recent New York Post article by Peter Norvig discussed advances in artificial intelligence. Instead of creating HAL, the current philosophy is to allow both human and computer to concentrate on what they do best.

A good example is the web search engine, which uses A.I. (and other technology) to sort through billions of web pages to give you the most relevant pages for your query. It does this far better and faster than any human could manage. But the search engine still relies on the human to make the final judgment: which link to click on, and how to interpret the resulting page.

The partnership between human and machine is stronger than either one alone. As Wernher von Braun said when he was asked what sort of computer should be put onboard in future space missions, “Man is the best computer we can put aboard a spacecraft, and the only one that can be mass produced with unskilled labor.” There is no need to replace humans; rather, we should think of what tools will make them more productive.

I like where this might be leading and absolutely love the idea of personalized results. Let me shape my own search results!

Human Computer Information Retrieval

I’ve been reading a lot about HCIR lately. It’s a fascinating area of research that could truly change how we search. Implemented the right way, search would become very personal and very powerful.

The challenge seems to be creating effective human computer refinement interfaces. Or, more specifically, interfaces that produce active refinement, not passive refinement.

At present, Google uses a lot of passive refinement to personalize results. They look at an individual’s search and web history, track click-through rate and pogosticking on SERPs and add a layer of geolocation.
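As a rough illustration of those passive signals, here is how a long click versus short click classifier and a pogosticking check might look. The thresholds and log format are invented for the example; Google’s real signals are unpublished.

```python
from dataclasses import dataclass

@dataclass
class Click:
    query: str
    url: str
    dwell_seconds: float

LONG_CLICK_SECONDS = 30.0  # invented threshold

def is_long_click(click: Click) -> bool:
    return click.dwell_seconds >= LONG_CLICK_SECONDS

def is_pogosticking(session: list[Click]) -> bool:
    """Two or more quick bounces across different results for one query
    suggest the user kept returning to the SERP unsatisfied."""
    shorts = [c for c in session if not is_long_click(c)]
    return len({c.url for c in shorts}) >= 2

session = [
    Click("school bullying", "example.com/a", 5),
    Click("school bullying", "example.com/b", 8),
    Click("school bullying", "example.com/c", 190),
]
print(is_pogosticking(session))  # True: two bounces before a long click
```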

Getting users to actively participate has been a problem for Google.

Jerry Maguire

A Brief History of Google Personalization

Google launched personalized search in June of 2005 and expanded their efforts in February of 2007. But the first major foray into soliciting active refinement was in November of 2008 with the launch of SearchWiki.

This new feature is an example of how search is becoming increasingly dynamic, giving people tools that make search even more useful to them in their daily lives.

The problem was that no one really used SearchWiki. In the end it was simply too complicated and couldn’t compete with other elements on the page, including the rising prominence of universal search results and additional Onebox presentations.

In December of 2009 Google expanded the reach of personalized search.

What we’re doing today is expanding Personalized Search so that we can provide it to signed-out users as well. This addition enables us to customize search results for you based upon 180 days of search activity linked to an anonymous cookie in your browser.

This didn’t go down so well with a number of privacy folks. However, I believe it showed that Google felt personalized search did benefit users. They also probably wanted to expand their data set.

In March of 2010 SearchWiki was retired with the launch of Stars.

With stars, we’ve created a lightweight and flexible way for people to mark and rediscover web content.

Stars wasn’t really about personalizing results. It presented relevant bookmarks at the top of your search results. Google clearly learned that the interaction design for SearchWiki wasn’t working. The Stars interaction design was far easier, but the feature benefits weren’t compelling enough.

A year later, Stars was replaced with blocked sites.

We’re adding this feature because we believe giving you control over the results you find will provide an even more personalized and enjoyable experience on Google.

Actually, I’m not sure what this feature is called. Are we blocking sites or hiding sites? The lack of product marketing surrounding this feature makes me think it was rushed into production.

In addition, the interaction design of the feature is essentially the same as FriendFeed’s hide functionality. Perhaps that’s why the messaging is so confused.

Cribbing the FriendFeed hide feature isn’t a bad thing – it’s simple, elegant and powerful. In fact, I hope Google adopts the extended feature set and allows results from a blocked site to be surfaced if it is recommended by someone in my social graph.

Can Google Engage Users?

I wish Google had launched the block feature more aggressively and before any large scale algorithmic changes. The staging of these developments points to a lack of confidence in engaging users to refine search results.

Google hasn’t solved the active engagement problem. Other Google products that rely on active engagement have also failed to dazzle, including Google Wave and Google Buzz.

I worry that this shortcoming may cause Google to focus on leveraging engagement rather than working on ways to increase the breadth and depth of engagement.

In addition, while we’re not currently using the domains people block as a signal in ranking, we’ll look at the data and see whether it would be useful as we continue to evaluate and improve our search results in the future.

This may simply be a way to reserve the right to use the data in the future. And, in general, I don’t have a problem with using the data as long as it’s used in moderation.

Curated data can help augment the algorithm. Yet, it is a slippery slope. The influence of others shouldn’t have a dramatic effect on my search results and certainly should not lead to sites being removed from results altogether.

That’s not personalization, that’s censorship.

SERPs are not Snowflakes

All of Google’s search personalization has been relatively subtle and innocuous. Rank is still meaningful despite claims by Chicken Little SEOs. I’m not sure what reports they’re looking at, but the variation in rank on terms due to personalization is still low.

SERPs are not Snowflakes

Even when personalization is applied, it is rarely a game changer. You’ll see small movement within the rankings, but not wild changes. I can still track and trend average rank, even with personalization becoming more commonplace. Given the amount of bucket testing Google is doing I can’t even say that the observed differences can be attributed solely to personalization.

I don’t use rankings as a way to steer my SEO efforts, but to think rank is no longer useful as a measurement device is wrong. Yet, personalization still has the potential to be disruptive.

The Future of Search Personalization

Google needs to increase the level of active human interaction with search results. They need our help to take search to the next level. Yet, most of what I hear lately is about Google trying to predict search behavior. Have they given up on us? I hope not.

Gary Marchionini, a leader in the HCIR field, puts forth a number of goals for HCIR systems. Among them are a few that I think bear repeating.

Systems should increase user responsibility as well as control; that is, information systems require human intellectual effort, and good effort is rewarded.

Systems should be engaging and fun to use.

The idea that the process should be engaging, fun to use and that good effort is rewarded sounds a lot like game mechanics. Imagine if Google could get people to engage search results on the same level as they engage with World of Warcraft!

World of Google

Might a percentage complete device, popularized by LinkedIn, increase engagement? Maybe, like StackOverflow, certain search features are only available (or unlocked) once a user has invested time and effort? Game mechanics not only increase engagement but help introduce, educate and train users on that product or system.

Gamification of search is just one way you could try to tackle the active engagement problem. There are plenty of other avenues available.

Personalization and SEO

I used the cover artwork from The Smiths’ last studio album at the beginning of this post. I thought ‘Strangeways, Here We Come’ was an apt description for the potential future of personalized search. However, a popular track from this album may be more meaningful.

Stop me if you think you’ve heard this one before.

SEO is not dead, nor will it die as a result of personalization. The industry will continue to evolve and grow. Personalization will only hasten the integration of numerous other related fields (UX and CRO among others) into SEO.

The block site feature is a step in the right direction because it allows control and refinement of the search experience transparently without impacting others. It could be the start of a revolution in search. Yet … I have heard this one before.

Let’s hope Google has another album left in them.

Facebook Comments and SEO

March 16 2011 // SEO + Social Media + Technology // 27 Comments

Facebook Comments could be the most disruptive feature released by Facebook. Why? Comments are one of the largest sources of meta content on the web. Our conversations provide a valuable feedback mechanism, giving greater context to both users and to search engines.

The Walled Garden

Using Firebug you can quickly locate Facebook Comments and determine how they’re being rendered. Facebook Comments are served in an iframe.

Facebook Comments Delivered in iFrame

This means that the comments are not going to be attributed to that page or site nor seen by search engines. In short, Facebook Comments reside in the walled garden. All your comments are belong to Facebook.

This differs from implementations like Disqus or IntenseDebate where the comments are ‘on the page’ or ‘in-line’. One of the easier ways to understand this is to grab comment text from each platform and search for it on Google. Remember to put the entire text in quotes so you’re searching for that exact comment phrase.
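You can run the same kind of check programmatically: fetch the raw HTML of a page and look for the comment text. Comments served in an iframe will not appear in the host page’s source, which is exactly what a crawler that doesn’t resolve iframes sees. A minimal sketch, with placeholder URL and comment text:

```python
import urllib.request

def comment_in_source(page_url: str, comment_text: str) -> bool:
    """True if the comment is printed into the page HTML a crawler fetches."""
    html = urllib.request.urlopen(page_url).read().decode("utf-8", "ignore")
    return comment_text in html

# Inline comments (printed into the page) return True; comments served
# only inside a Facebook iframe return False, since the iframe body
# never appears in the host page's source.
print(comment_in_source("http://example.com/post", "my exact comment text"))
```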

Disqus Comments

Here’s a comment I made at Search Engine Roundtable via Disqus.

Comment on Disqus

Here’s a search for that comment on Google.

Disqus Comment SERP

Sure enough you can find my comment directly at Search Engine Roundtable or at FriendFeed, where I import my Disqus comments.

Facebook Comments

Here’s a comment made via Facebook Comments on TechCrunch.

Comment made via Facebook Comments

Here’s a search for this comment on Google.

Facebook Comments SERP

In this instance you can’t find this comment via search (even on Bing). The comment doesn’t exist outside of Facebook’s walled garden. It doesn’t resolve back to TechCrunch.

I thought of an edge case where Facebook Comments might show up on FriendFeed (via Facebook), but my test indicates they do not.

Comments and SEO

Search engines won’t see Facebook Comments. That is a big deal. Comments reflect the user syntax. They capture how people are really talking about a topic or product. Comments help search engines to create keyword clusters and deliver long-tail searches. Comments may signal that the content is still fresh, important and popular. All that goes by the wayside.

It’s no secret that search engines crave text. Depriving Google of this valuable source of text is an aggressive move by Facebook.

Is this on purpose? I have to believe it is. I can’t know for sure but it’s curious that my Quora question has gone unanswered by Facebook, even when I’ve asked a specific Facebook Engineer to answer.

[Update] Ray C. did wind up answering my question and provided some examples of how Facebook comments could be made visible to search engines. (Thank you.) Essentially you grab the comments via the API and display them inline behind the comment box, similar to using a noscript tag. It’s nice that they have this capability, but most will simply use the default version without question or won’t apply this hack due to lack of technical expertise or time.
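For those curious, the workaround might look something like this: pull the comments for a URL from the Graph API and print them into the markup behind the JavaScript comment box. Treat the exact endpoint and response shape as assumptions based on the Graph API of the time, not a verified recipe.

```python
import json
import urllib.parse
import urllib.request

def fetch_fb_comments(page_url: str) -> list:
    """Grab comments for a URL; endpoint and response shape are assumptions."""
    api = ("https://graph.facebook.com/comments/?ids="
           + urllib.parse.quote(page_url, safe=""))
    data = json.load(urllib.request.urlopen(api))
    return data.get(page_url, {}).get("data", [])

def render_inline(comments: list) -> str:
    """Emit plain HTML to sit behind the JS comment box, noscript-style."""
    items = "".join("<li>%s: %s</li>" % (c["from"]["name"], c["message"])
                    for c in comments)
    return "<noscript><ul>%s</ul></noscript>" % items

print(render_inline(fetch_fb_comments("http://example.com/post")))
```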

In addition, many have since noted that Google has started indexing Facebook comments. Problem solved right? Wrong! Google has always reserved the right to associate iframe content with a URL when it felt it was important. It just rarely did so. The truth of the matter is Google is still only indexing a small fraction of Facebook comments overall. So don’t count on Google indexing your Facebook comments.

Comment Spam

Comment Spam

Comment spam is a huge problem. You know this if you’ve managed a blog for any amount of time. Google’s implementation of nofollow didn’t do much to stop this practice. So Facebook Comments is appealing to many since the forced identity will curtail most, if not all, of the comment spam.

This also means that the meta content for sites using Facebook Comments may be more pristine. This should be an advantage when Facebook does any type of Natural Language Processing on this data. A cleaner data set can’t hurt.

Article Sentiment

Extending this idea, you begin to realize that Facebook could have a real leg up on determining the sentiment of an article or blog post. Others might be able to parse Tweets or other indicators, but Facebook would have access to a large amount of proprietary content to mine page level and domain level sentiment.

Comment Reputation

Facebook can improve on sentiment by looking at comment reputation. Here’s where it gets exciting and scary all at the same time. Facebook can map people and their comments to Open Graph objects. It sounds a bit mundane but I think it’s a huge playground.

Suddenly, Facebook could know who carries a high reputation on certain types of content. Where did you comment? How many replies did you receive? What was the sentiment of those replies? What was the reputation for those who replied to you? How many Likes did you receive? How many times have you commented on the same Open Graph object as someone else?

You might be highly influential when commenting on technology but not at all when commenting on sports.

The amount of analysis that could be performed at the intersection of people, comments and objects is … amazing. Facebook knows who is saying what as well as when and where they’re saying it.

PeopleRank

PeopleRank

Facebook Comments could go a long way in helping Facebook create a PeopleRank algorithm that would help them better rank pages for their users. If I haven’t said it recently, Facebook’s Open Graph is just another version of Google’s Search Index.

In this instance, Facebook seems to be doing everything it can to develop an alternate way of ranking the web’s content while preventing Google from doing so. (Or am I projecting my own paranoia on the situation?)

PeopleRank could replace PageRank as the dominant way to organize content.
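PeopleRank is my own speculative label, but the flavor is easy to sketch: a PageRank-style power iteration over a bipartite graph where people lend reputation to the pages they engage with and earn it back from those pages. Toy data throughout; nothing here reflects an actual Facebook system.

```python
# Speculative sketch only: power iteration over a people <-> pages graph.
engages = {  # person -> pages they commented on or liked (toy data)
    "alice": ["p1", "p2"],
    "bob":   ["p2"],
    "carol": ["p2", "p3"],
}

people = {u: 1.0 for u in engages}
pages = {p: 1.0 for targets in engages.values() for p in targets}

for _ in range(20):
    # Pages score by the reputation of the people engaging with them.
    for p in pages:
        pages[p] = sum(people[u] / len(engages[u])
                       for u in engages if p in engages[u])
    # People earn reputation back from the pages they engage with.
    for u in people:
        people[u] = sum(pages[p] for p in engages[u]) / len(engages[u])
    # Normalize so scores stay comparable between iterations.
    total = sum(pages.values())
    pages = {p: s / total for p, s in pages.items()}

# p2 wins: it has the most, and the best-connected, engagement.
print(sorted(pages.items(), key=lambda kv: -kv[1]))
```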

Traffic Channel Disruption

The traffic implications of Facebook Comments are substantial. By removing this content from the web, Facebook could reduce the ability of Google and Bing to send traffic to these sites. The long tail would get a lot shorter if Facebook Comments were widely adopted as is.

We’ve seen some anecdotal evidence that referring traffic from Facebook has increased after implementing Facebook Comments. That makes sense, particularly in the short-term.

The question is whether this is additive or a zero-sum game. In the long-run, would implementing Facebook Comments provide more traffic despite the potential loss in search engine traffic via fewer long-tail visits?

For publishers, the answer might be yes. For retailers, the answer might be no. That has a lot to do with the difference between informational and transactional search.

Even posing the question shows how disruptive Facebook Comments could be if it is widely adopted. It could be the true start of a major shift in website traffic channel mix.

SEO and UX

March 08 2011 // SEO + Web Design // 7 Comments

Search Engine Optimization (SEO) and User Experience (UX) are not at odds with each other. Done correctly, SEO and UX should be complementary.

chocolate and peanut butter

Here’s why SEO and UX are like chocolate and peanut butter.

Intent

SEO is, when you get down to it, about meeting query intent. This goes well beyond the traditional breakdown between informational, transactional and navigational search.

Keyword research is performed not just to identify the keywords and modifiers with the largest search volume, but to understand the syntax and intent of users in that vertical. We’re looking for patterns and want to understand how and why people are searching on specific terms. Maybe you’d prefer to call them user stories?

For instance, why might someone search for a product manual? Is it to get specifications for that product or because they’re having a problem with the product? SEO seeks to understand intent to best satisfy that query.

Satisfaction

Can't Get No Satisfaction

Google is intensely interested in measuring user satisfaction. They measure pogosticking behavior, track long clicks versus short clicks and in some instances can analyze click-stream behavior. No, Google is not peeking at your Google Analytics data. They have other ways of obtaining this information.

The result is that an SEO will not want pages with an unnaturally high bounce rate or sessions with a very low time on site. We might not talk about delighting the user (I’ve heard just about enough of that) but we care about user satisfaction.

Really, it goes way beyond the metrics above. A savvy SEO knows that user satisfaction leads to more word-of-mouth, more social mentions and more links.

Readability

I’ve been very critical about how people write and format content for the web. It’s not just about having the right content, it’s about making it accessible and easy to read.

We’ll want to see proper font hierarchy, though many might talk about it as header optimization. Our obsession about keyword frequency is rooted in the knowledge that people crave consistency and repetition as a way to understand content.

SEO wants a person visiting a page to instantly understand what it is about. Take my five foot web design philosophy as an example, or try the five second test and similar tools. We know that what is retained by a user is likely what will be retained by a search engine.

Conversion

Traffic does nothing in and of itself. Don’t hire an SEO if they’re simply concerned about driving traffic. There’s no need to get saucer eyes about that big pool of traffic you could optimize for but that would actually do nothing for your business. (If that were the case we’d simply use ‘boobs’ as a modifier for all terms.)

I suppose if you’re running an ad supported model it matters, but at the end of the day the best traffic is traffic that converts. You register a new user or you make a sale. When you do this it means you likely have a better way to connect with these people in the future.

SEO is, largely, an acquisition channel. A rising rate of repeat visits through natural search should make an SEO uneasy. Poor conversion might point to low satisfaction, to not matching query intent or to not being relevant.

Relevance

No Kitchen Sink Design Please

Google wants to return pages that are most relevant to a query.

Yet, too often sites want to throw the kitchen sink at someone when they land on a page. If I search and find your site about programmable coffee makers, you should present associated links and content about coffee makers, coffee and maybe coffee cups. Don’t put content and links to lawn mowers, refrigerators and sofas on the same page.

SEO wants focus! We want to create topic neighborhoods. It may come out as discussions about anchor text, the number of links on a page, cross linking strategies and references to page rank, but we’re really talking about relevance.

Everything they want, nothing they don’t.

Navigation

We care about how users get from one point to another. We’re mapping the information architecture (IA) of a site. We think about how many links are really necessary on the page. We’re thinking about breadcrumbs. We’ll have an opinion on drop down and mega menus. (I generally don’t like them.)

We’re analyzing how easy it is to get from the home page to any other page on the site. That’s important for users as well as search engines.

We want navigation to reinforce and enhance relevance.

Social

Today, SEO is also about being social. The deteriorating link graph is augmented by social authority. Whether this is straight up brand mentions or links, both primary (acknowledged by Google and Bing to be a ranking factor) and secondary (the links generated as a result of social chatter), an SEO is going to ask how the content or product is going to be shared and distributed.

Trust and authority are earned through social evangelism.

SEO and UX

Reese's Peanut Butter Cups

Is what I describe that different than UX? Do any of these things sound like they’d be bad for your business?

Instead, I challenge organizations to think of SEO not as a necessary evil, not as something you trade-off against better user experience, but instead look at SEO as an ally to creating better user experience.

Farmer Update About Sites Not Content

February 27 2011 // SEO // 25 Comments

The recent algorithm change by Google, dubbed by Danny Sullivan as the Farmer Update, was “designed to reduce rankings for low-quality sites …”

The emphasis is mine. I think that emphasis is important because the Farmer Update was not a content level algorithm change. Google did not suddenly get better at analyzing the actual quality of content. Instead they changed the algorithm so that the sites they felt were “low-value add for users” or “sites that are just not very useful” were demoted.

Manual Versus Algorithmic

I’ve been asked a number of times whether this was done manually. The simple answer is no. If it were done manually it would have been a lot more precise. Google did not, for example, take a list of the top 1000 low-quality sites and demote them. That would have been akin to a simple penalty. If they had done this you wouldn’t find sites like Digital Inspiration getting caught in the crossfire.

The complex answer is sort of. Algorithms are built by people. The results generated reflect the types of data included, the weight given to each type of data and how that data is processed. People at Google decide what signals make up the algorithm, what weight each signal should carry and how those signals are processed. Changing any of these elements is a type of human or manual intervention.

Until the Google algorithm becomes self-aware it remains the opinion of those who make up the search quality team.

Dialing For Domains

Turning Algorithm Signals

I absolutely believe Google when they say they didn’t manually target specific sites. Yet, they clearly had an idea of the sites they didn’t want to see at the top of the rankings.

The way to do this would be to analyze those sites and see what signals they had in common. You build a profile from those sites and try to reverse-engineer why they’re ranking so well. Then you look to see what combination of signals could be turned up or down to impact those sites.

Of course, the problem is that by turning the dials to demote these sites you also impact a whole lot of other sites in the process. The sites Google sought to demote do not have a unique fingerprint within the algorithm.

The result is a fair amount of collateral damage.
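Here is a toy model of that dial-turning and its side effects. The signals, weights and sites are all invented; the point is simply that demoting a shared fingerprint drags down lookalike sites too.

```python
# Invented signals, weights and sites; only the mechanics matter here.
SIGNALS = ["ad_density", "template_pages", "inbound_links"]

sites = {
    "content_farm_a": [0.90, 0.95, 0.5],
    "content_farm_b": [0.85, 0.90, 0.4],
    "education_site": [0.70, 0.90, 0.7],  # legit, but shares the fingerprint
}

def score(features: list, weights: dict) -> float:
    return sum(f * weights[s] for f, s in zip(features, SIGNALS))

before = {"ad_density": 0.2, "template_pages": 0.3, "inbound_links": 0.5}
after = dict(before, template_pages=-0.4)  # dial a shared signal way down

for name, feats in sites.items():
    print(name, round(score(feats, before), 2), "->",
          round(score(feats, after), 2))
# The farms drop, but so does the innocent site: collateral damage.
```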

Addition By Subtraction

Addition by Subtraction

The other thing to note about the technique Google used was the idea that search quality would improve if these sites were demoted. Again, it’s not that other content has been deemed better. This was not an update that sought to identify and promote better content. Instead, Google seems to believe that by demoting sites with these qualities, search quality would improve.

But did it really?

You can take a peek at what the results looked like before by visiting a handful of specific IP addresses instead of using the Google domain to perform your search. So, what about a mom who is looking for information on school bullying?

Here’s what she’ll see now.

Google School Bullying Query Result

Wikipedia is the top result for school bullying. Really? That’s the best we have on the subject? Next up is a 20/20 piece from ABC. It’s not a bad piece, but if I’m a mom looking into this serious subject, do I appreciate, among other things, the three links to Charlie Sheen articles?

What was the top result before the Google Farmer Update?

Google School Bullying Query Results Before Farmer Update

The Bullying Information Center from Education.com was the top result. That’s right, Education.com was one of the larger sites caught up in this change according to the expanded Sistrix data. Is the user (that mom) better served?

This is the tip of the iceberg. In fact it was the first one I tried, by simply using SEMRush to get a list of top terms for Education.com.

Throwing The Baby Out With The Bath Water

The last problem with the site technique is its inability to identify the diamonds in the rough. A content site composed of thousands of independent authors may have a varying degree of content quality. Applying a site wide demotion applies the same standard to all content regardless of individual quality.

The Google Farmer Update treats great content the same way as lousy content.

Shouldn’t we want the best content, regardless of source?

The Google Opinion

I’ve been vocal about how I think it’s presumptuous to believe that Google understands our collective definition of quality. Even in the announcement I find their definition of high-quality interesting.

… sites with original content and information such as research, in-depth reports, thoughtful analysis and so on.

First, is this really what took the place of those ‘low-quality’ sites? As I’ve shown that’s not always the case.

But more to the point, do we all want research and in-depth reports? In the age of 140 characters and TL;DR, where time is of the essence and attention is scarce, is the everyman looking for these things?

And what makes an analysis thoughtful, or thoughtful enough? Is my analysis here thoughtful? I hope so, but I’m sure some people won’t agree.

Don’t Hate The Player, Hate The Game

Don't Hate The Player, Hate The Game

Let me tell you a secret: I personally agree with what Google is trying to accomplish. There are sites I absolutely detest, that if I were king of the Internet would be removed from all search results. I wish people wouldn’t fall over themselves for the cliche, but effective, ‘list’ blog post. I want people to truly read, word-for-word, instead of scanning, and to not treat a 1,000 word blog post (like this) as a chore.

I wish more people took the time (particularly in this industry) to download a research paper on information retrieval and actually read it. But guess what? That’s not how it works. Maybe that’s how I, and Google, want it to work. But that’s not reality. You know it and I know it.

So while I personally get where Google is coming from, I can’t endorse or support it.

From Library To Bookstore

The reason why I have been such a fan of Google is that they were close to neutral. The data would lead them to the right result. They were the library, allowing users to query and pick out what they wanted from a collection of relevant material. Google was the virtual catalog, an online Dewey Decimal System.

Today they seem more like an independent bookstore, putting David Mitchell, Philip Roth and Margaret Atwood out front while burying Stuart Woods and Danielle Steel in the stacks. Would you like an expensive ad latte with that?

Different Is Not Better

Did search quality get better? I tend to think not. It’s certainly different, but let’s not confuse those two concepts.

By subjectively targeting a class of sites Google inadvertently demoted good sites along with bad sites, good content along with bad content.

We know 6 minus 3 doesn’t equal 10.

Google Doesn’t Trust Us

February 25 2011 // Rant + SEO // 10 Comments

Yesterday Google rolled out an algorithm change “designed to reduce rankings for low-quality sites—sites which are low-value add for users, copy content from other websites or sites that are just not very useful.”

Google’s share of the search market must have been suffering, right? Wrong. comScore puts Google at 63.0% in January 2009, 65.4% in January 2010 and 65.6% in January 2011. People were not defecting.

Not only that but Google is the leader in search engine customer satisfaction according to the American Customer Satisfaction Index. So why the change?

Google Means Well

I believe Google thinks they’re making things better. I don’t see a Machiavellian scheme behind every Google action. I like these guys. Meet any of the people at Google and you realize they’ve drunk deep from the search quality kool-aid. They are true believers! On top of that, they’re usually amiable and generous with their time.

Kool Aid

But … the road to hell is paved with good intentions.

Google Got Bullied

What’s shocking is how Google got pressured into making this change. Vivek Wadhwa, Paul Kedrosky, Jeff Atwood, Michael Arrington, Rich Skrenta and others played Google. A bunch of upper-class, highly-educated technophiles convinced Google that search quality was in jeopardy. Was search quality really an issue or was this a matter of taste?

lunatic fringe

A reminder, you can please some of the people all of the time and all of the people some of the time but you can’t please all of the people all of the time.

The Echo Chamber

A good marketer knows that they are not the target market. If you’re reading this, you are not the primary search user. You might be a power user, but you are in the minority my friend.

Perhaps there is more under the hood, but from where I sit Google chose qualitative feedback over quantitative feedback. The problem? That qualitative feedback was biased. The Silicon Valley echo chamber flexed its muscle and Google acquiesced.

Red Herring

What is disappointing is that Google decided to tackle the subjective (content quality) instead of the objective (link fraud). Do we truly think that JC Penney, Forbes and Overstock are outliers? The answer is an unquestionable no.

What’s a bigger threat to search quality? The blatant and rampant manipulation of trust and authority via link fraud or the creation of content (of varying quality) to meet query intent?

What Changed

A staggering 11.8% of queries were impacted by this algorithm change. I’m curious about how Google effected this change.

Did they re-weight current signals or create new signals? Google acknowledges that data from the Personal Blocklist Chrome extension was not used. That doesn’t mean other new signals or data weren’t used. But even if Google did introduce other new signals, to impact 11.8% of the queries it seems reasonable to believe that current signals were also re-weighted.

That assumption and hours of SERP review lead me to the following conjecture.

  • Trusted TLDs (org, gov, edu) were given more weight
  • Exact Match Keyword Domains were given more weight
  • Forums were given more weight
  • On-Site text was given more weight

The last presents itself in an odd way. Sites that look like they were last touched in 2003 are ranking well. It’s as if Google sought a ‘no style’ version of the web. This also includes a number of long form blogs. Sadly, many of these same sites are bloated with AdSense. Now, AdSense is everywhere so … that’s to be expected. But the position of the ad units on many of these sites is completely against any UX standard.

This is a very simplistic and blunt analysis. I’m sure others will tease out other differences and we’ll never know for sure what changed. But what it tells me is that Google changed quantitative measures to meet a pre-determined qualitative goal.

The Real Story

Google passed judgment on the quality and value of sites in what seems like a very subjective manner. How exactly did these sites and specific pages rank so well in the past? What suddenly changed? Did the pogosticking rate creep up? Did internal satisfaction metrics of the ‘reasonable surfer’ change? I’m not hearing any of that. I’m hearing subjective terms like ‘quality’, ‘value’ and ‘useful’ being thrown around.

Google is setting their own perceived metric of value in conflict with other signals, metrics and feedback. The message? Google doesn’t trust us to know any better. It’s not about what we want. It’s about what Google thinks we should want.

Skeptical Cat

The idea that Google altered current signals to effect a perceived content quality metric should terrify you.

It’s all very well and good when those changes don’t impact you. You guffaw at Mahalo’s demise. But what happens when they come for you? What happens when you’re suddenly the target? How will you feel when your content is called into question?

The Future of Facebook Search?

February 22 2011 // SEO // 5 Comments

Facebook continues to test and improve their own search results. Yet, are we too focused on how Facebook is tackling traditional search?

Lately I’ve been thinking about another way Facebook could implement search, transforming the news feed into a search result.

More Like This

What if Facebook added a simple More Like This link to certain news feed items?

Future of Facebook Search

Clicking on the More Like This link would return a news feed with related content. In this instance, it would return Open Graph pages related to Samsung and HDTVs.

A filtered news feed becomes the search result.

The news feed would be a mix of related pages Liked by those in your own social graph, popular pages from the Open Graph in general and, potentially, sponsored stories.

I think the More Like This verbiage conveys relevancy and also capitalizes on the Like terminology. And presenting the results in a news feed hopefully reduces friction and eases people from one mental context to another.

I’d probably use a compact display that presented the number of Likes and Comments for each item. Perhaps comments are exposed just for those in your own social graph with the remaining comments available on demand?

Really, the design elements are all things Facebook could quickly test and iterate on. Getting a More Like This feature to work is the difficult part.

Content Analysis

Implementing a More Like This feature relies on a number of assumptions. The largest of these assumptions is whether Facebook can identify the content of a news feed item. My example might be difficult because it’s a simple status update without a link that has Open Graph data already attached to it.

Yet, Facebook has developed content filtering for pages, recently updating their capability to auto detect potential spam posts and provide keyword level blacklists.

More importantly, Facebook has tested semantically related content. While none of these point to true natural language processing, there seems to be a fair amount of effort going into bridging the divide between text and Open Graph objects.
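As a crude sketch of the matching problem, here is a token-overlap approach that maps a status update to the closest Open Graph object. A real system would need entity extraction, synonyms and much more; the objects and descriptions are invented.

```python
import re

og_objects = {  # invented Open Graph objects and their descriptions
    "Samsung 55-inch LED HDTV": "samsung led hdtv 55 inch television 1080p",
    "Creme Brulee Recipe": "creme brulee recipe dessert custard vanilla",
    "Liverpool FC": "liverpool football club premier league anfield",
}

def tokens(text: str) -> set:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def similarity(a: str, b: str) -> float:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)  # Jaccard overlap

status = "Just picked up a new Samsung HDTV and the picture is amazing"

best = max(og_objects, key=lambda name: similarity(status, og_objects[name]))
print(best)  # the Samsung HDTV object wins on shared tokens
```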

Moving Intent

Why is this interesting? I believe a More Like This feature would change or move user intent. Search has traditionally been about intent harvesting. Users come to Google with an intent. (“I want to find a creme brulee recipe.”) At that point it’s a bit like shooting fish in a barrel.

Creme Brulee

Why did I want to find that creme brulee recipe? What created that intent?

Did I see one on Tastespotting? Did someone start a favorite dessert thread on Facebook? Did my wife order one last night? Or maybe I saw a check-in at Whole Foods which got me thinking about their creme brulee? Any of these things, or a combination of them, could have been responsible for intent generation.

AdSense is largely looking to bridge the divide between intent generation and intent harvesting. It hopes to match ads to content. While it’s gotten better, the Google Search Network performs far better than the Google Display Network. Similarly, Facebook Ads tries to bridge the same divide through interest targeting.

It’s the difference between active and passive, between explicit and implicit.

Assisted Search

A More Like This feature creates an interaction – an activity. The user is raising their hand and requesting more information about that content or topic. It might not be a traditional search – it may not translate into intent harvesting – but it’s certainly much further down the spectrum.

Let’s be clear, this isn’t nearly as explicit as someone typing in a search query. I’d think of More Like This clicks as assisted searches. They signal some greater level of interest and engagement. How much is difficult to tell, though you could learn a lot through the Like and click-stream behavior from these More Like This results.

Facebook More Like This Example

Clicking More Like This in this instance would return a collection of pages about football, specifically about Liverpool football. It could also contain an item for Liverpool FC jerseys or a link to a game plan ticket package. Facebook could learn, just as Google does, what mix of results provides the best experience.

Of course, a lot of this depends on semantics.

Semantic Enhancement

Semantics could make the More Like This feature quite powerful. If Facebook can identify the topic (or object), semantics could help surface a host of related content and expand the advertising targets for each assisted search.

Let’s take my creme brulee example. Semantics would help you find recipes but also restaurants with the best creme brulee, as well as retailers who carry a creme brulee torch.

This is all predicated on an object based search system rather than a keyword based system. This is important when thinking about monetization. A keyword based system delivers specificity. An advertiser can bid on the term ‘buy creme brulee torch’ and be confident they’re matching the query intent.

An object based system may not deliver that level of detail. But it seems the likely first step if Facebook were going to implement a More Like This feature, allowing advertisers to bid against More Like This results by object.

Facebook SEO would also become critical if something akin to More Like This were implemented and became popular.

Search Assumptions

Facebook already has the social component that traditional search engines like Bing and Google are furiously trying to integrate into their own search results. So while I and many others are focused on the amount of volume and revenue produced through the Facebook search bar, the number of total searches could soar through a different entry point into search.

Are we laboring under the false assumption that Facebook must implement search the same way as Google?
