Google SEO Communication

June 02 2011 // Marketing + SEO // 6 Comments

Google has a love-hate relationship with the SEO community. They view many SEO agencies, consultants and services as part of the problem – parasites that seek to exploit and game their algorithm. No doubt, many fall into this category.

NIN Pretty Hate Machine CD Cover

Unfortunately, Google’s lack of transparency contributes to the problem, spawning a host of poor theories and misguided practices. In addition, the changing nature of the algorithm creates a powerful variant of bit rot – outdated information and myths that stubbornly persist.

In response, Google has worked (perhaps reluctantly) to improve communication with the SEO community. They send employees to search conferences, write blogs, create videos, maintain a forum, provide informational tools and have a presence on social media platforms (Twitter) and sites (Hacker News).

The vast majority of these efforts are undertaken by one person: Matt Cutts.

Last month Google increased their communication efforts, dedicating a blog to search (it’s about time!) and doing a live 90-minute Q&A session via YouTube. I’m encouraged by these new developments, but Google still doesn’t have a solid share of voice within the SEO community, and when it does speak, it is often viewed with suspicion.

Here are three ways Google could improve SEO relations.

Google Search Summit

Invite select members (perhaps 50) of the SEO community to the Google campus for a search summit with Google engineers. This is very different from a conference where the day-to-day mechanics of the SEO industry are discussed.

Instead, I propose a real exchange of ideas on the nature and problems of search. It could even have a lean component where groups are challenged to propose a new way to deal with a specific search problem.

There are a number of smart folks in the SEO community who could contribute positively to discussions on search quality or web spam. Even if Google doesn’t believe this, understanding how the SEO community perceives certain stances, guidelines and practices would be valuable.

At a minimum, the dialog would provide additional context behind search guidelines and algorithmic efforts. For Google, this means the attendees become agents of ‘truth’. By allowing the SEO community to truly engage and learn, they can help transmit Google’s message. I’m not talking about a Kool Aid conversion but instead building a greater degree of trust through knowledge transfer and personal relationships.

Attendance would require some modicum of discretion and a certain level of knowledge or interest in information retrieval, human computer interaction, natural language processing and machine learning.

Even if I didn’t get an invite (though I’d want one), I think it’s a good idea for Google and the SEO community.

Google Change Log

The SEO community is intensely curious about when and what changes are made to search, whether they be algorithmic or design-oriented. Some amount of transparency here would go a long way. Would it really hurt to let the SEO community know that a certain type of bucket test was in the field?

We’re already seeing most of the UX tests, with blogs cranking out screenshots of the latest SERP oddity they’ve encountered. So why not publish a changelog, using FriendFeed as a model?

FriendFeed Change Log

FriendFeed made it clear that the log wasn’t comprehensive, but it did provide a level of transparency and insight into pain points and personality. The latter came through even more because each entry linked to the responsible user’s FriendFeed account.

Imagine a Google changelog where the user is linked to a Google Profile. God forbid we learn a little bit about the search quality engineers.

I understand that there are certain changes that cannot be shared. But opening the kimono just a little would go a long way.

LOLMatts

Matt Cutts is willing to interact at length at conferences and jump into comment threads (in a single bound). He gets a bit of help from folks like Maile Ohye and John Mueller, but he’s essentially a solo act.

If Google isn’t going to allow (or encourage) more engineers to interact with stakeholders (yeah, I have a business background) then you have to amplify the limited amount of Matt we have at our disposal.

What better way than to create a Matt Cutts meme? LOLMatts!

Matt Cutts Meme on Page Sculpting

Yes, this is tongue in cheek, but my point is to do some marketing.

Matt Cutts Meme about Cloaking

Make the messages pithy and viral.

Matt Cutts Meme about Meta Keywords

Break through the clutter and keep it simple.

Matt Cutts Meme about Paid Links

Make it easier for people to pass along important information. I’ve just created four LOLMatts that cover page sculpting, cloaking, meta keywords and paid links. Of course this can go wrong in a multitude of ways and be used for evil. But the idea is to think of ways to amplify the message.

Develop some interesting infographics. Heck, Danny Sullivan even got you started. Get busy creating some presentations (you could do worse than to use Rand as a model) and upload them to SlideShare. Or create an eBook and let people pay for it with a Tweet.

Let’s see some marketing innovation.

TL;DR

Google’s rocky relationship with the SEO community could be improved through real interaction and engagement, an increase in transparency (both technical and human) and marketing techniques that would amplify their message.

The SEO community and Google would benefit from these efforts.

Yahoo Email Hacked

May 23 2011 // Rant + Technology // 445 Comments

(IMPORTANT: Before I get to my story, if your Yahoo! email has been hacked, I recommend that you immediately change your password, update your security questions and ensure your Yahoo! Mobile and Y! Messenger are both up-to-date. You should also visit Yahoo! Email Abuse Help and use this process if you are unable to log in to your Yahoo! account. Also, make sure to read the comments on this post since there is a tremendous amount of good information there as well.)

(UPDATE 12/13/11: Yahoo has introduced second sign-in verification as an added security measure. It will require that you add a mobile phone number and verify it via a text message. Here’s the direct link to start using second sign-in verification.)

It happened just before we arrived at the San Francisco Zoo. We were at a red light on Sloat Boulevard when my phone started to vibrate.

Buzz. Buzz. Buzz. Buzz. Buzz. Buzz. Buzz. Buzz. Buzz. Buzz. Buzz. Buzz. Buzz.

Had the rapture come a day late? No. I was getting undeliverable messages. Lots of them. My Yahoo email had been hacked!

admiral akbar star wars its a trap spoof

Here are the two important lessons I learned as a result.

I Have Good Friends

I didn’t want our day at the Zoo ruined, me staring into my phone resetting passwords and figuring out what happened. So I put the problem on the back burner and proceeded to have a fun family day.

But I did take time to quickly tap out a response to people who replied to the spam coming from my hijacked account. Why? Because they took the time and effort to give me a heads up that I had a problem. These were good people. Good friends.

The thing is, I’d gotten a number of these same emails lately from other hacked Yahoo accounts. I figured these people knew they’d been compromised and I didn’t need to respond. With the shoe on the other foot, I realized those emails were comforting even though I was well aware of the problem.

I’ll shoot off an email the next time I get a hacked email from someone.

Yahoo Email Security Failed

The odds are that I will get another one of those emails because I learned just how easy Yahoo makes it for hackers.

Upon getting home I went about securing my account. On a lark, I checked Yahoo’s ‘View your recent login activity’ link.

yahoo recent login activity

Sure enough, at 10:03 AM my account was accessed from Romania. This obvious login anomaly didn’t set off any alarms? Shouldn’t my security questions have been presented in this scenario? I have never logged in from Romania before.

I’ve never logged in from outside the US. Yahoo knows this. In fact, Yahoo knows quite a bit about my location.

yahoo location history

My locations put me in three states: California, New York and Pennsylvania. I also have location history turned on, so it’s not just my own manually saved locations (some of which are ancient), but Yahoo’s automated location technology keeping track of me.

Do you see Romania in this list? I don’t.

Why is Yahoo making it this easy for spammers to hijack accounts? Make them work a little bit! At a minimum, make them spoof their location.

Yahoo should have noted this anomaly and used my security questions to validate identity. I still would have had to change my password (which wasn’t that bad) but I would have avoided those embarrassing emails.

A simple rule set could have been applied here where users are asked to validate identity if the login (even a successful one) is outside of a 500 mile radius of any prior location.
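
Here’s a minimal sketch of that rule, assuming a stored history of (latitude, longitude) pairs. The function names, coordinates and the 500-mile threshold are illustrative, not anything Yahoo actually runs:

import math

EARTH_RADIUS_MILES = 3959

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in miles.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))

def login_needs_challenge(login_location, prior_locations, radius_miles=500):
    # Challenge the login if it falls outside the radius of every prior location.
    return all(haversine_miles(*login_location, *prior) > radius_miles
               for prior in prior_locations)

# Example: history in California, New York and Pennsylvania; login from Romania.
history = [(37.77, -122.42), (40.71, -74.01), (39.95, -75.17)]
print(login_needs_challenge((44.43, 26.10), history))  # True -> ask security questions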

I’ve had a Yahoo account for over 10 years without a problem, even as I moved my business accounts over to Gmail.

Yesterday I thanked those friends who had my back. Unfortunately, Yahoo wasn’t one of them.

SEO Freeloaders

May 12 2011 // SEO // 8 Comments

This is not a ‘SEO is Dead’ post. Let me make that clear from the beginning. But SEO is going to get tougher, not because of the Panda update or anything else Google may implement but because search volume growth is decelerating.

SEO Drafting

The SEO industry has had the wind at its back as search volume soared month after month and year after year. Some might say it was pretty tough not to fall into success.

That’s not to say there wasn’t a lot of good SEO going on. But if you were posting 25% yearly SEO growth, were you really being effective? Shouldn’t SEO growth be normalized based on search volume trends?

Search Volume Trends

Search Volume Trends 2004 to 2011

This graph measures explicit monthly US searches from December of 2004 to April 2011 using a mix of comScore qSearch and Nielsen//NetRatings MegaView Search reports. In that time the number of monthly searches has risen from 3.3 billion to 16.9 billion.
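
For context, a back-of-the-envelope compound annual growth rate from just those two endpoints (my own arithmetic, not from the reports) comes out near 29%:

# 3.3 billion monthly searches (Dec 2004) to 16.9 billion (Apr 2011),
# roughly 6 years and 4 months apart.
years = 6 + 4 / 12
cagr = (16.9 / 3.3) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~29.4% compound annual growth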

Search Volume Growth

To some the trend might look rosy. But look closer. Using December search volume as a benchmark, the year over year (YoY) growth in search volume is decelerating.

YoY Search Volume Growth

The YoY growth in 2011 could be in the single digits. Of course you could drill down into specific category (even keyword) search growth, but I believe this is a macro-level trend based on demographics.

Search Adoption

When I debated the definition of search quality, I mapped out daily search usage against the innovation curve. In May of 2010, 79% of American adults were online and 87% of those online used a search engine to find information.

If you do the math (87% of the 79% who are online, or 0.79 × 0.87 ≈ 69%), you find that approximately 165 million American adults now use search.

Search Adoption Table

That puts us a little more than halfway through the Late Majority, and that was a full year ago.

SEO Shakeout

Recently the SEO industry has grappled with the idea of standards or certifications and differed on ‘outing’ SEO companies who violate search engine guidelines. The industry is maturing, but I wonder if we’re missing the bigger picture.

That gale force tailwind we once had is now a gentle breeze. Decelerating search volume growth will squeeze mediocre SEO out of our industry.

It will push us all to up our game, to evolve and specialize. The free ride is nearly over; it’s time to put up or shut up.

Translating Panda Questions

May 08 2011 // SEO // 2 Comments

On Friday, Google released a list of questions to help guide publishers who have been impacted by the Panda update.

Because Google can’t (or won’t) give specifics about their algorithm, we’re always left to read between the lines, trying to decipher the true meaning behind the words. Statements by Matt Cutts are given the type of scrutiny Wall Street gives those by Ben Bernanke.

Speculation is entertaining, but is it productive? Google seems to encourage it, even within this recent blog post.

These are the kinds of questions we ask ourselves as we write algorithms that attempt to assess site quality. Think of it as our take at encoding what we think our users want.

So perhaps there is value (beyond entertainment) in trying to translate and decode the recent Panda questions.

Panda Questions Translation

Matt Cutts to English Translation

Would you trust the information presented in this article?

The web is still about trust and authority. The fact that this is the first question makes me believe it’s a reference to Google’s normal calculation of PageRank using the (rickety) link graph.

Is this article written by an expert or enthusiast who knows the topic well, or is it more shallow in nature?

Is Google looking at the byline of articles and the relationships between people and content? Again, the order of this question makes me think this is a reference to the declining nature of the link graph and the rising influence of the people graph.

Does the site have duplicate, overlapping, or redundant articles on the same or similar topics with slightly different keyword variations?

This reveals a potential internal duplicate content score and over-optimization signal in which normal keyword clustering is thwarted by (too) similar on-site content. It may also be referred to as the eHow signal.

Would you be comfortable giving your credit card information to this site?

Outside of qualitative measures, Google might be looking for the presence or prominence of a privacy policy.

Does this article have spelling, stylistic, or factual errors?

Google Best Guess at Eiffel Tower Height

We already know that Google applies a reading level to content. But maybe they also extract and run statements through a fact-checking database. So, stating that the Eiffel Tower is 500 meters tall (instead of 324) might be a negative signal.
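
To make the speculation concrete, here’s a toy version of that idea. The reference table and the 5% tolerance are entirely made up for illustration:

# Toy fact check against a tiny reference table (illustrative only).
FACTS = {("eiffel tower", "height_m"): 324}

def contradicts_known_fact(entity, attribute, claimed_value):
    # Flag claims that differ from the reference value by more than 5%.
    actual = FACTS.get((entity, attribute))
    return actual is not None and abs(claimed_value - actual) / actual > 0.05

print(contradicts_known_fact("eiffel tower", "height_m", 500))  # True -> suspect claim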

Are the topics driven by genuine interests of readers of the site, or does the site generate content by attempting to guess what might rank well in search engines?

This sounds like a mechanism to find sites that have no internal topical relevance. In particular, it feels like a signal designed to identify splogs.

Does the article provide original content or information, original reporting, original research, or original analysis?

Nice example of keyword density here! But it certainly gets the point across, doesn’t it? Google isn’t green when it comes to content recycling. Google wants original content.

Does the page provide substantial value when compared to other pages in search results?

This type of comparative relevance may be measured, over time, through pogosticking and rank-normalized CTR.
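
One plausible formulation of rank-normalized CTR (my own sketch; the expected-CTR numbers are invented for illustration) divides a result’s observed click-through rate by the rate you’d expect at its position:

# Hypothetical expected CTR by SERP position (illustrative numbers only).
EXPECTED_CTR = {1: 0.35, 2: 0.17, 3: 0.11, 4: 0.08, 5: 0.06}

def rank_normalized_ctr(clicks, impressions, position):
    # Values above 1.0 suggest a result outperforming its rank.
    observed = clicks / impressions
    return observed / EXPECTED_CTR[position]

print(rank_normalized_ctr(90, 1000, 3))  # 0.09 / 0.11 ~= 0.82, underperforming its rank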

How much quality control is done on content?

Misspelled Sign

I think this is another reference to spelling and grammar. Google is proud (and should be) of their Did You Mean? spelling correction. I can’t imagine they wouldn’t want to apply it in other ways. As for grammar, I wonder if they dislike dangling prepositions?

Does the article describe both sides of a story?

I believe this is a Made for Amazon signal that tries to identify sites where the only goal is to generate clicks on affiliate links. I wonder if they’ve been able to develop a statistical model through machine learning that identifies overly one-sided content?

Is the site a recognized authority on its topic?

This seems like a clear reference to the idea of a site being a hub for information within a specific topic.

Is the content mass-produced by or outsourced to a large number of creators, or spread across a large network of sites, so that individual pages or sites don’t get as much attention or care?

Not much between-the-line reading necessary on this one. Google doesn’t like content farms. This might as well just reference Mahalo and Demand Media.

Was the article edited well, or does it appear sloppy or hastily produced?

Once again, more emphasis on attention to detail within the content. Brush up on those writing and editing skills!

For a health related query, would you trust information from this site?

The qualifier of health makes me think this is a promotional signal, seeking to identify sites that are promoting some supplement or herb with outrageous claims of wellness. I’d guess machine learning on the content coupled with an increased need for citations (links) from .gov, .org or .edu sites could produce a decent model.

Would you recognize this site as an authoritative source when mentioned by name?

Brands matter. You can’t get much more transparent.

Does this article provide a complete or comprehensive description of the topic?

Oddly, the first thing that jumps to mind is article length. Does size really matter?

Does this article contain insightful analysis or interesting information that is beyond obvious?

‘Beyond obvious’ is an interesting turn of phrase. I’m not sure what to make of it unless they’re somehow targeting content like ‘How To Boil Water’.

It may also refer to ‘articles’ that are essentially a rehash of another article. You’ve seen them: the kind where large portions of another article are excerpted, surrounded by a short introductory sentence.

Is this the sort of page you’d want to bookmark, share with a friend, or recommend?

Social Signals

This clearly refers to social bookmarking, Tweets, Facebook Likes and Google +1s and the social signals that should help better (or at least more popular) content rise to the top. These social gestures are the modern-day equivalent of a link.

Does this article have an excessive amount of ads that distract from or interfere with the main content?

There is evidence (and a good deal of chatter) that Google is actually rendering pages and can distinguish ads and chrome (masthead, navigation, etc.) from actual content. If true, Google could create a content-to-ad ratio.

I’d also guess that this ratio is applied most often based on what is visible to the majority of users. How much real content is visible when you view your pages through Google’s Browser Size tool? You should know.
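
As a thought experiment (my own sketch; the element labels and pixel areas are placeholders, not anything Google exposes), such a ratio could be computed from rendered element areas:

def content_to_ad_ratio(elements):
    # elements: (kind, pixel_area) pairs where kind is 'content', 'ad' or 'chrome'.
    content = sum(area for kind, area in elements if kind == "content")
    ads = sum(area for kind, area in elements if kind == "ad")
    return content / ads if ads else float("inf")

page = [("chrome", 120_000), ("content", 480_000), ("ad", 160_000)]
print(content_to_ad_ratio(page))  # 3.0 pixels of content per pixel of ads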

Would you expect to see this article in a printed magazine, encyclopedia or book?

This anachronistic question is about trust. I simply find it interesting that Google still believes that these old mediums convey more trust than their online counterparts.

Are the articles short, unsubstantial, or otherwise lacking in helpful specifics?

Another question around article length and ‘shallow’ content. I wonder if there is some sort of word-diversity metric that could help identify articles lacking substance and specifics.
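
A crude stand-in for such a metric (my own illustration, not a known Panda signal) is the type-token ratio: distinct words divided by total words, where thin, repetitive copy scores low:

def type_token_ratio(text):
    # Distinct words / total words; repetitive, keyword-stuffed copy scores low.
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

thin = "best cheap widgets buy cheap widgets cheap widgets online"
rich = "our teardown compares build quality, battery life and repairability"
print(type_token_ratio(thin))  # ~0.56
print(type_token_ratio(rich))  # 1.0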

Are the pages produced with great care and attention to detail vs. less attention to detail?

The verbiage of ‘pages produced’ makes me think this is about code and not content. We’ve heard that code validation isn’t really a signal, but that’s different from gross errors in the mark-up that translate into a bad user experience.

Would users complain when they see pages from this site?

Blocked Sites

This is obviously a reference to the Chrome Personal Blocklist extension and new Blocked Sites functionality. Both features seemed like reactions to pressure from people like Vivek Wadhwa, Paul Kedrosky, Jeff Atwood, Michael Arrington and Rich Skrenta.

That this question is last in this list makes it seem like it was a late addition, and might lend some credence to the idea that these spam initiatives were spurred by the public attention brought by the Internati.

Panda Questions Analysis

Taken together it’s interesting to note the number of questions that seem to be consumed with grammar, spelling and attention to detail. Yet, if Google had really gotten better at identifying quality in this way, wouldn’t it have been better to apply it on the URL level and not the domain level? (I have a few ideas why this didn’t happen, but I’ll share that in another post.)

Overall, the questions point to the shifting nature of how Google measures trust and authority as well as a clear concern about critical thinking and the written word. In light of the recent changes at Google, is this evidence that Google is more concerned with returning knowledge rather than simple results?

WordPress Duplicate Content

April 27 2011 // Rant + SEO + Technology // 23 Comments

In February Aaron Bradley sent me an email to let me know that I had a duplicate content problem on this blog. He had just uncovered and rectified this issue on his own blog and was kind enough to give me a heads up.

Comment Pagination

The problem comes in the way that WordPress handles comment pagination. The default setting essentially creates a duplicate comment page.

Here’s what it looks like in the wild. Two pages with the same exact content.

http://blog.wolframalpha.com/2011/04/18/new-age-pyramids-enhance-population-data/comment-page-1/

http://blog.wolframalpha.com/2011/04/18/new-age-pyramids-enhance-population-data

That’s not good. Not good at all.

Comment-Page-1 Problem

The comment-page-1 issue offends my own SEO sensibilities, but how big of a problem is it really?

WordPress Spams Google

There are 28 million inurl results for comment-page-1. 28 million!

Do the same inurl search for comment-page-2 and you get about 5 million results. This means that only 5 million of these posts attracted enough comments to create a second paginated comment page. Subtract one from the other and you wind up with 23 million duplicate pages.

The Internet is a huge place so this is probably not a large percentage of total pages but … it’s material in my opinion.

Change Your Discussion Settings

If you’re running a WordPress blog I implore you to do the following.

Go to your WordPress Dashboard and select Settings → Discussion.

How To Fix Comment-Page-1 Problem

If you regularly get a lot of comments (more than 50 in this default scenario) you might want to investigate SEO friendly commenting systems like Disqus, IntenseDebate or LiveFyre.

Unchecking the ‘break comments into pages’ setting will ensure you’re not creating duplicate comment pages moving forward. Prior comment-page-1 URLs did redirect, but seemed to do so using a 302 (yuck). Not satisfied, I sought out a more permanent solution.

Implement an .htaccess RewriteRule

It turns out that this has been a known issue for some time and there’s a nice solution to the comment-page-1 problem in the WordPress Forum courtesy of Douglas Karr. Simply add the following rewrite rule to your .htaccess file.

RewriteRule ^(.*)/comment-page-1/ $1/ [R=301,L]

This puts 301s in place for any comment-page-1 URL. You could probably use this and keep the ‘break comments into pages’ setting on, which would remove duplicate comment-page-1 URLs but preserve comment-page-2 and above.

Personally, I’d rather have the comments all on one page or move to a commenting platform. So I turned the ‘break comments into pages’ setting off and went a step further in my rewrite rule.

RewriteRule ^(.*)/comment-page-.* $1/ [R=301,L]

This puts 301s in place for any comment-page-#. Better safe than sorry.

Don’t Rely on rel=canonical

Many of the comment-page-1 URLs have a rel=canonical in place. However, sometimes it is set up improperly.

Improper Rel=Canonical

Here the rel=canonical actually reinforces the duplicate comment-page-1 URL. I’m not sure if this is a problem with the Meta SEO Pack or simple user error in using that plugin.
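
Reconstructed from the Wolfram Alpha example above, the improper version points the canonical at the paginated URL itself, while a correct one points back at the base post:

<!-- Improper: reinforces the duplicate comment page -->
<link rel="canonical" href="http://blog.wolframalpha.com/2011/04/18/new-age-pyramids-enhance-population-data/comment-page-1/" />

<!-- Proper: points at the base post URL -->
<link rel="canonical" href="http://blog.wolframalpha.com/2011/04/18/new-age-pyramids-enhance-population-data/" />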

Many times the rel=canonical is set up just fine.

Canonical URL from All-In-One SEO Pack

The All in One SEO Pack does have a Canonical URL option. I don’t use that option but I’m guessing it probably addresses this issue. The problem is that rel=canonical doesn’t stick nearly as well as a 301.

Comment-Page-1 in SERP

So even though this post from over three months ago has a rel=canonical, the comment-page-1 URL is still being returned. In fact, there are approximately 110 instances of this on this domain alone.

Comment Page 1 Site Results

Stop Comment-Page-1 Spam

23 million pages and counting. Sure, it would be nice if WordPress would fix this issue, but short of that it’s up to us to stop this. Fix your own blog and tell a friend.

Friends don’t let friends publish duplicate content.

The Fresh Content Myth

April 14 2011 // SEO // 4 Comments

One of the SEO myths that seems to stubbornly persist is the value of fresh content. The problem revolves around the definition of fresh. Google likes new content. That’s very different from refreshed content, which is where many people seem to focus their attention.

Here’s a quick guide to new versus refreshed content, illustrated with cats (and Vladimir Putin).

Google Loves Kittens

Google Loves Kittens

New content is what Google craves. Googlebot will fawn over newly minted content. “Awwwww, so cute!”

This is what Google means when they say they want fresh content. They want kittens! It’s not that they don’t like cats, but they’re all grown up. They’re not as exciting or surprising anymore.

Kittens For Sale!

One of the tricks some people talk about is changing the time stamp on a piece of content, the idea being that changing the date makes the content look new. Now, my SEO philosophy is based on the idea that search engines are like children, but even a five-year-old can tell the difference between a kitten and a cat.

Kitten vs Cat

Saying your cat is really a kitten won’t work. Google knows it’s still a cat.

Renaming Your Cat

Others will change the title tag thinking that, just by making this change, the search engine will treat it like ‘fresh’ content. If you renamed your cat, would it suddenly become a kitten?

One of the problems here is that you can sometimes change the title tag to something better … or worse.

Cat vs Putin

If your title tag for this content was Cat and you change it to Russian Blue you’ll probably do better. If you then change the title tag from Russian Blue to Blue Russian you probably won’t. At that point Google may think your content is about a very cold individual from Russia.

Renaming your content doesn’t transform your cat into a kitten. However, changing your cat’s name may have an impact on relevance one way or the other.

Dressing Your Cat

Basement Cat Sweater

New comments or reviews make my content fresh, right? Wrong! Dressing your content up in a new outfit does not turn it into new content. Once again, you might get a false positive, because the additional text may add something to the content (or subtract from it, due to pagination).

Putting a sweater on your cat might make it more interesting, but it’s still a cat.

Changing Your Cat

Maybe you come home and your child has painted the cat green. Perhaps the cat comes home one night with a small notch out of its ear. Is that cat now a kitten? No! Small changes to your content do not make it new.

All Cats

Even major changes to content don’t transform a cat into a kitten. Again, you might change the relevance of that content (for good or for bad) but it won’t be new. If the content is completely different (a dog instead of a cat) you probably want to create a new piece of content.

Because Google likes puppies too.

Are Brands Good For Search?

April 11 2011 // eCommerce + SEO // Comments Off

Brands are becoming a greater part of search results. But is that a good thing?

Brands in Search

Brands are the solution, not the problem. Brands are how you sort out the cesspool.

That’s what then-Google CEO Eric Schmidt said in October of 2008. In March of 2009 the Vince update made good on that comment, giving brands an extra boost in search results. And today we talk about the rising prominence of brand signals in Google’s algorithm.

Brands, Comfort and Trust

Many claim that brands increase trust in search results. Users see something familiar and that conveys a level of trust. This might be true (though I think people may be conflating comfort with trust) but, more to the point, is it really what search is about?

Why do we search? Many definitions of search imply the act of locating something otherwise unknown or concealed from us. That certainly doesn’t apply to these brands.

Around The Internet Corner

A recent comment on this blog is what really got me thinking about how brands and search intersect.

If I wanted to buy something, I’d go straight to Amazon. I don’t need the top of half my Google searches to all be stores I can drive down the street to get to.

A number of years ago it was far more difficult to get from one point of the Internet to the other. Connection speeds were slower and tabbed browsing wasn’t as ubiquitous as it is today.

Search supplanted browsers as the fastest way to get from point A to point B. But today, not only are those stores ‘down the street’, they’re also just around the Internet corner. The next site is one ‘open a new tab’ click away.

Search may no longer be the fastest way to get from point A to point B.

Speedy Navigation

Search Speed and Navigation

I think Google has responded to this evolution. Google Instant can be viewed in a very different light if you think about whether search or the browser is the fastest way to get from place to place. Shaving off those seconds is tremendously important in ensuring that users continue to use Google to navigate the Internet.

What about navigation? Navigational searches are on the rise, and we seem to tacitly accept navigational search as a given part of the landscape. But why?

Why are we still using Google to search inventory of known sites and brands? We know how to get to these stores. Well … Google makes it easy, providing more and more pathways to brands and stores.

Google Related Brands and Stores

I think these implementations might also be teaching users that they could simply visit these sites directly. Right now inertia is on Google’s side, but for how much longer?

Better Brand Search

Browsers have a real opportunity to retake control of user navigation. Unfortunately, the human computer interface for browsers is dreadful.

Firefox Search Box

Could the search box dynamically change the search engine based on the query? Right now the user is forced to change this on a per-query basis. And I’d bet selecting other search engines is a low single-digit percentage activity.

Maybe a transactional search brings up the option to search your favorite stores, launching each in a separate tab? It could even be a separate window, creating a self-contained environment for you to shop your favorite stores for that product.

Perhaps as you visit eCommerce sites your browser prompts you to add that site to your personal mall. Then when you’re looking for a product, you simply enter it (in a different and well labeled field) and your personal mall is created.
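
Here’s a toy sketch of that ‘personal mall’ idea; the store list and search URL templates are hypothetical:

import webbrowser
from urllib.parse import quote_plus

# Hypothetical saved stores, each with a search URL template.
MALL = [
    "https://www.amazon.com/s?k={q}",
    "https://www.zappos.com/search?term={q}",
]

def shop_my_mall(query):
    # Open one tab per saved store, searching each for the product.
    for template in MALL:
        webbrowser.open_new_tab(template.format(q=quote_plus(query)))

shop_my_mall("running shoes")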

This is but one off-the-cuff idea! There are so many other ways to tackle this problem that would eliminate the need for traditional search engines.

But who is going to do this? Mozilla has little incentive to innovate in this direction given their lucrative relationship with Google. Chrome? Not unless someone else did it first. That leaves Internet Explorer, which has consistently shown a lack of vision and execution.

Another search engine? I do like what DuckDuckGo is doing, by automatically putting an Amazon search result at the top when it identifies a transactional query. But I’m not sure any upstart has the power to turn the tide without help from a browser.

Brands Hasten Search Demise

Why search Google if it’s just returning the same brands I already know and trust? Especially since I can get to those sites (quickly) without the annoying ads.

By placing more and more brands at the top of search results I feel like Google is hastening this realization. Users may begin to see the results as more comfortable and trusted but not more valuable.

brand directory

Search is currently the infrastructure of the Internet mall. It’s how people ‘walk’ from one store to the other. Homogenized brand results may turn Google into a directory of the Internet mall. You might reference the directory once in a while when you’re stuck, but most of the time you’ll ‘walk’ from store to store on your own instead.

Open Graph Business Intelligence

April 06 2011 // Social Media + Technology // 1 Comment

Facebook’s Open Graph can be used as a valuable business intelligence tool.

Here’s how easy it can be to find out more about the people powering social media on your favorite sites.

How It Works

The Open Graph is populated with meta tags. One of these tags is fb:admins, which is a comma-separated list of Facebook user IDs.

fb:admins open graph tag
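
In the page source it’s a single line of mark-up in the head (the IDs below are invented):

<meta property="fb:admins" content="100000123456789,100000987654321" />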

Here we are on a Time article that is clearly using the Open Graph.

Sample Time.com Article

The fb:admins tag is generally found on the home page (or root) of a site because that’s one of the ways you grant people access to Insights for Websites.

Lint Bookmarklet

You could open up a new tab, go to the Facebook Linter Tool and enter the domain, or you can use my handy bookmarklet that gives you one-click access to Lint that site.

Get Lint Info

Drag the link above to your bookmark bar and then click on it anytime you want to get information about the Open Graph mark-up from that site’s home page.
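
For reference, a bookmarklet along these lines does the same job (a sketch, assuming the Linter accepts the site’s address via a url query parameter):

javascript:window.open('http://developers.facebook.com/tools/lint/?url='+encodeURIComponent('http://'+location.hostname));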

Linter Results

The results will often include a list of Facebook IDs. In this instance there are 8 administrators on the Time domain.

Facebook Lint for Time

Click on each ID to learn as much as that person’s privacy settings will allow. You can find out quite a bit when you do this.

In this instance I’ve identified Time’s Technical Lead, a Senior Program Manager (with a balloon decorating company on the side), a bogus test account (against Facebook rules) and the Program Manager, Developer Relations for … Facebook.

I guess it makes sense that Time would get some special attention from Facebook. Still, it raised my eyebrows to see a Facebook staffer as a Time administrator.

Cat Lee

Cat actually snagged ‘cat’ as her Facebook name (nicely done!) and says her favorite football team is the Eagles. I might be able to strike up a conversation with her about that. Go Eagles!

I’d probably also ask her why a fake test account is being used by Time.

Tester Time on Facebook

That is unless Time really does have a satanic handball enthusiast on staff.

Dig Deeper

Sometimes a site won’t use fb:admins but will authenticate using fb:app_id instead. But that doesn’t mean your sleuthing has come to an end. Click on the App ID number and you’ll usually go to that application.

Time Facebook Application Developer Information

By clicking on Info I’m able to view a list of Developers. Some of these I’ve already seen via fb:admins but two of them are actually new, providing a more robust picture of Time’s social media efforts and resources.

You’ll only be stymied if the site is using fb:page_id to authenticate. That’s generally a dead end for business intelligence.

Open Graph Business Intelligence

I imagine this type of information might be of interest to a wide variety of people, from recruiters to journalists to sales and business development professionals. You could use this technique on its own or collect the names and use LinkedIn and Google to create a more accurate picture of those individuals.

How would you use this information?

There Are No New Ideas, Just New Buzzwords

April 05 2011 // Marketing // Comments Off

“There are no new ideas. There are only new ways of making them felt.” – Audre Lorde

Okay, there might be some new ideas, but very few of them. Far fewer than marketers would have you believe. But that’s their job right? They come up with ways to make you feel different about an old idea.

Buzzwords

The easiest way for marketers to do this is through buzzwords. Oddly, I think marketers are often more susceptible to buzzwords. They create them and in many instances they wind up believing in their own creations. They become certain that the hot new buzzword is an entirely new and groundbreaking idea.

But it’s not. That’s not to say that it isn’t a good idea, it’s just not new. Here are two recent examples.

Social Proof

The way some folks talk about it, you’d think social proof was the love child of Twitter and Facebook. Social proof, persuasion and crowd psychology have been around for a long time, even before Cialdini made it popular.

If you see more people liking something, or someone you trust liking something, you’re more likely to like it too.

social proof

McDonald’s figured this out a long time ago. We’re bombarded with ‘number of satisfied users’ claims. You always remove at least one if not two strips when posting a tear-off flyer. And people have been using these things called testimonials, often from celebrities, for quite a while.

Social proof works; it has offline and it will online too. But let’s not go crazy making it into something new.

Crowdsourcing

I remember going to Hershey Park as a kid and being asked to describe my perfect candy bar. It was just a guy holding a clipboard, scribbling down the ideas of all the kids coming into the amusement park that day.

In 1981 the Chicago White Sox held a uniform design contest. Anyone could enter and the fans could vote on the finalists. To this day, I swear someone stole my design.

crowdsourcing

Clearly new technologies have enabled businesses to collect more information and to collaborate with others on a larger scale, but the idea of canvassing and engaging with your customers is not new.

Beyond Buzzwords

no lemmings

Just because it’s hot and trendy doesn’t automatically mean it’s right for your business. Look beyond the buzz. Break it down into the fundamentals.

Remember, buzzwords are a new way of feeling about an old idea. It’s rarely as complicated (or expensive) as it seems. Heck, you might be doing it already and not even know it.

“The more things change, the more they stay the same.” – Jean-Baptiste Alphonse Karr

Google Preemptive Penalties

April 01 2011 // Humor + SEO // 2 Comments

Starting this month Google will begin to use a version of the Bush Doctrine to fight web spam. Google will preemptively penalize sites.

Internet Bush Doctrine

Google Bush Doctrine

The main tenet of the Bush Doctrine was the idea of preemptive war. Google has decided to adopt this philosophy in dealing with the rampant manipulation of trust and authority via the link graph. Instead of reacting to increases in paid links, splog networks and other schemes, Google is going on the offensive and will penalize sites preemptively.

Perhaps this is a reaction to the revelations about J.C. Penney, Forbes and Overstock, as well as the surveys and polls that indicate that most sites engage in black hat techniques and that paid links are still viable.

Reconsideration Requests

It seems as if an analysis of reconsideration requests helped lead Google to this new policy. A source on the Google web spam team says:

We learn a lot from reconsideration requests. In that environment, sites are willing to admit to and stop bad behavior. Analyzing the profile of these sites before and after has been of growing interest to the team.

Sure enough, the text surrounding reconsideration requests makes it clear that coming clean is important.

Google Reconsideration Request

Admission and corrective action are required to get out of Google’s dog house.

Preemptive Google Penalties

Preemptive penalties will force sites to divulge and cease black hat techniques. Why? Because you’re simply not going to know what Google does and doesn’t know. If you are not forthcoming (if you hold something back) and Google finds out, it will make it even tougher to get out of the dog house.

Are you feeling lucky punk?

Do you feel lucky, punk? Well … do ya?

Penalty Selection and Length

It remains to be seen how Google will select sites for preemptive penalties. Is it random or will it be initiated by members of the web spam team? Will all sites be eligible for preemptive penalties, or will some be whitelisted?

The length of the preemptive penalty is also unknown. Will it be lifted if the offending site doesn’t file a reconsideration request or is reconsideration required? It will be interesting to see if anyone simply tries to ride out the penalty without engaging Google directly.

And how long will Google pursue this strategy? One would hope that the data gleaned from these preemptive penalties might (quickly) help Google refine their detection efforts, allowing them to scrap this policy.

What do you make of Google’s Bush Doctrine and how will you handle a preemptive penalty?
