You Don’t Count Friends

January 18 2011 // Rant + Social Media // 3 Comments

When was the last time you counted your friends … in real life?


My guess is that you have never actually sat down and counted your friends. Maybe when you were 6 you counted your best friends on one hand but you didn’t wake up every morning and recount. Yet online we’re constantly reminded of and trained to tally our friends.

The Prisoner

We’ve become prisoners to social numbers. The numbers on Facebook and Twitter; on Feedburner and Quora. Not only are we held hostage by those numbers, we become them too. We’re number 212 on someone’s list, number 83 on another.

I am not a number! I am a free man!

I love numbers and could literally lose myself in an Excel spreadsheet for a day. But the numbers attached to friends and followers simply seem unnatural and don’t map to any offline behavior.

People are generally not alerted when someone ‘unfriends’ them in real life. What does that even mean? It probably means you grew apart and just don’t talk anymore. No biggie.

But online the drop in friend count is right there in your face. Suddenly you have to explain and account for it. WTF!

Lose The Social Numbers


What if we lost the numbers? Maybe you still need some sort of tally. But could we come up with a word to describe a range of numbers? Something that would feel more real?

  • A Handful
  • Several
  • Some
  • Many
  • A Lot
  • Tons

I know, I know, people will probably still want to get from Several to Some or from A Lot to Tons. But maybe it helps a little? Or maybe we just remove the numbers altogether. Poof.
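If you wanted to prototype the idea, the bucketing itself is the easy part. Here's a minimal sketch in Python; the cutoffs are my own arbitrary assumptions, not anything a network actually uses.

    def fuzzy_count(count):
        """Translate a raw friend or follower count into a fuzzy label."""
        # Cutoff values are invented for illustration; tune to taste.
        if count <= 5:
            return "A Handful"
        elif count <= 15:
            return "Several"
        elif count <= 50:
            return "Some"
        elif count <= 150:
            return "Many"
        elif count <= 1000:
            return "A Lot"
        return "Tons"

    print(fuzzy_count(83))   # Many
    print(fuzzy_count(212))  # A Lot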

Is there an app for that? Like an ad blocker but for social media numbers? Contact me if you want to help build one.

Stop Writing for People

January 17 2011 // SEO // 46 Comments

Stop writing for people. Start writing for search engines.

I’ll wait while you run to get your pitchforks and light your torches. I know it sounds like heresy but I ask you to hold your judgment.

Search Engines Emulate Human Evaluation

Search Engines Want to be Human

The goal of search engine algorithms is to emulate the human evaluation of a site or page. This is not an easy task. In fact, it’s a really difficult task. Think of all the things that you tap into when you evaluate a new website. The amount of analysis that goes on in just a few seconds is astounding.

The thing to remember is that search engines want to be a proxy for human evaluation. They’re trying to be … human. Don’t lose sight of this.

Search Engines Are Not Smart

Doh!

But for all of that effort, search engines aren’t smart. The name of my blog is my opinion of search engines: a search engine is like a blind five year old.

The blind part comes in because they don’t care about how pretty your site is or the gorgeous color palette you’ve selected. Mind you, visual assessment is a factor humans use in evaluating a website, but search engines aren’t able to do this.

Why a five year old? For all of the advances search engines have made, they’re still not ‘reading’. They’re performing text and language analysis. That’s a huge distinction. Really. A Grand Canyon type of distinction.

A search engine would likely fail a basic reading comprehension test.

Knowing this, you need to take steps to make it very clear what that page is about and where the search engine should go next. This helps the search engine and … ultimately helps people too.

First Impressions Matter

For a number of years I ran telemarketing programs. (University fundraising if that makes you feel better about me.) What you find out is that you have only 7 seconds to convince a person to stay on the phone. They better hear something worthwhile fast or else you’ll get the dial tone.

It’s no different online. With high speed connections, tabbed browsing, real-time information and an environment where anyone can publish anything, that first impression is incredibly important. In a few seconds users are determining if that content is authoritative and relevant.

Is it any wonder why tools like FiveSecondTest and Clue have become popular?

People Scan Text

Did you know that the P.S. line is one of (if not the) most read parts in a direct mail solicitation? It is. People naturally gravitate toward it. They’re far more willing to read the P.S. line than any of the body copy.

And we see this behavior online too. Research by Jakob Nielsen shows that most readers scan instead of reading word for word.

People rarely read Web pages word by word; instead, they scan the page, picking out individual words and sentences. In research on how people read websites we found that 79 percent of our test users always scanned any new page they came across; only 16 percent read word-by-word.

Another study showed that even those who ‘look’ at your content are only reading between 18% and 28% of it.

tl;dr

Have you seen this spring up around the web lately? It stands for ‘too long; didn’t read’. It’s used to summarize content into a sentence or two. It can be used at the top of content or at the bottom. At the bottom, it serves as a close cousin to the traditional direct mail P.S. line.

But why exactly are we seeing tl;dr? Could it be that the content we’re writing just isn’t concise enough? That it’s not formatted for readability? It’s your job to make it easy for people to understand and engage with your content. Keep it simple, stupid.

SEO is more than tags and links. Today SEO is also about User Experience (UX).

The Brain Craves Repetition

There’s an old adage in public speaking that only a third of the audience is listening to you at any given time. This means that you have to repeat yourself at least three times to get your point across. A recent Copyblogger post touched on this subject.

The brain can’t pay attention to everything and it doesn’t let everything in. It figures anything that is repeated constantly must be important, so it holds on to that information.

I also believe in a type of visual osmosis. When evaluating a page for the first time, words that are repeated frequently make an impression, whether they’re specifically read or not.

People instinctively want consistency. They want to know that they’re reading the right thing, in the right way, in the right order. They want to group things. That’s one reason why ‘list’ posts are so popular.

Apply Steve Krug’s ‘Don’t Make Me Think’ philosophy to your content. Not just for search engines but for people.

Stop Using Pronouns


Why use that pronoun when you can use the actual noun instead? Sure, you know what you’re talking about, and the reader might too, but using the noun ensures that the reader (who is not nearly as invested in your content) is following along. And our dear friend the search engine is also better served.

Having that keyword noun in your content frequently doesn’t make it worse, it makes it better. When you read it, it may feel bloated. But the majority of your readers are skimming while the minority who are truly reading will simply not see those extra nouns. In fact, they become a bit like sign posts.

Here’s an example from the world of books: dialog! What if you didn’t attribute dialog to a specific person?

“I want Mexican food,” he said.

“No, let’s get Italian food,” he replied.

“Can’t we meet in the middle?” he queried.

How many people are talking? Two? Three? Perhaps one if you’re a Fight Club fan.

Now let’s add the names back in.

“I want Mexican food,” Harry said.

“No, let’s get Italian food,” Ron replied.

“Can’t we meet in the middle?” Tom queried.

Now I know there are three people talking. And as that dialog continues (as dull as it might be) I’ll use those names as sign posts so I know who’s saying what. But will I actually ‘read’ each instance of that name? Probably not. I’ll pick up the cadence of the dialog and essentially become blind to the actual name. Blind until it becomes unclear, and then I’ll seek that name out to clarify exactly who said what.

When people scan they need those sign posts. They need to see that keyword so they can quickly follow along.

Web Writing is Different

When people say you should write for the user, they mean well. In spirit, I completely agree. But in practice, it usually goes dreadfully wrong.

I’ll never forget my first job out of college. I was an Account Coordinator at an advertising agency. One of my jobs was to write up meeting notes. As an English minor I took a bit of pride in my writing skills. So it was a great shock to get back my first attempt with a river of red marks on it.

What had I done wrong?

I wasn’t using the right style. Meeting notes aren’t literature or an essay. I didn’t have to find a different word for ‘agreed’ just to satisfy my dislike of using the same verb more than once (twice at most). No, I was told that my writing was too ‘flowery’ and that I needed to aim for clarity and brevity.

I’m not saying that writing for the web is like writing meeting notes. But I am saying that writing for the web is different!

So when you tell someone to write for the user, they usually write the wrong way. They write thinking the user is going to be gorging themselves on every word, giving the content their full attention. They think the user will appreciate the two paragraph humorous digression from the main topic. They’ll want to write like David Mitchell or Margaret Atwood. (Or maybe that’s just me.)

Robots Don’t Understand Irony

Dave Eggers Irony Rant

To my knowledge, there is no double entendre database, nor an irony subroutine or a witticism filter in the algorithm. Do they have a place in your writing? Sure. But sparingly. Not just because search engines won’t understand but because, like it or not, a lot of people won’t get it either.

Not everyone will get the inside joke … like why I used the image of this particular novel above.

Write for Search Engines

Make sure they know exactly what you’re writing about. Stay focused. Break your content up into shorter paragraphs and use big descriptive titles. Avoid pronouns and don’t assume they understand what you just said in a previous paragraph. Keep it simple and give them sign posts.

Write for people the right way. Write for a search engine.

Find Keyword Modifiers with Google Refine

January 15 2011 // SEO // 8 Comments

Keyword research is a vital component of SEO. Part of that research usually entails finding the most frequent modifiers for a keyword. There are plenty of ways to do this but here’s a new way to do so using Google Refine.

Google Refine

Google Refine came about through the Metaweb acquisition in July of 2010 and is an evolution of Freebase Gridworks. So what is it exactly?

Google Refine is a power tool for working with messy data, cleaning it up, transforming it from one format into another, extending it with web services, and linking it to databases like Freebase.

I’ve been poking at Freebase for years thanks to Chris Eppstein and think that it was one of Google’s smarter acquisitions of late. But I just returned to Google Refine as I embarked on some keyword research.

Root Keywords

Let’s say you have a site that sells boots. Clearly the term ‘boots’ is one of the root (or main) keywords for the site. Finding keyword modifiers can help you match query intent to products and site content. Modifiers can be applied to SEO and PPC campaigns.

There are a number of keyword tools but I’ll use Google in this example.

Boots Keyword Suggestions

There are 794 keyword suggestions and many of them overlap with one another. I could wade through them in Excel and apply some sort of filter or toss them into a Pivot Table but Google Refine actually makes this much easier.

Install Google Refine

You’ll need to download and install Google Refine and then point your browser to http://127.0.0.1:3333/ to get started.

Start a Google Refine Project

Create a Google Refine Project

Browse for and select that downloaded keyword file, type in a Project name and click Create Project.

Google Refine Interface

At this point it’s a lot like having a pre-formatted Google Doc. But that’s where the similarities end.

Apply a Word Facet

Google Refine comes loaded with a massive amount of intelligence. What I’m going to show you is probably the least sophisticated part of Google Refine. Select the Keyword drop down arrow and navigate to Word Facet. (Facet > Customized facets > Word facet)

Apply a Google Refine Word Facet

You’ll quickly get a new pane on the left hand side showing the result of applying this word facet.

Google Refine Word Facet Result

Sorting a Word Facet

Google Refine is telling me that it’s narrowed those 794 rows into 497 choices and ordered them by name. But instead I want to learn about the most frequent modifiers. No problem. Just sort by count.

Sort Word Facet by Count

Just like that I get the most frequent modifiers for the term boots. You still need to apply some smarts to understand why ‘for’ might be listed or how ‘high’ might be used as a modifier. But it’s a super quick way to get an at-a-glance perspective.
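If you prefer to script it, the same word facet is easy to approximate. Here's a minimal Python sketch, assuming you've exported the keyword suggestions to a plain text file called keywords.txt with one phrase per line (the file name and root term are my assumptions, not part of any tool).

    from collections import Counter

    ROOT = "boots"
    counts = Counter()

    # Tally every word that appears across the keyword phrases,
    # skipping the root term itself.
    with open("keywords.txt") as f:
        for line in f:
            for word in line.lower().split():
                if word != ROOT:
                    counts[word] += 1

    # Most frequent modifiers first, like sorting the facet by count.
    for word, count in counts.most_common(20):
        print(count, word)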

Word Drill Down

If you’re having trouble figuring out a specific word you can just click on the word to get a sample of those keyword terms.

Google Refine Word Facet Drill Down

Who knew wide calves were such a problem?

Google Refine and Keyword Research

Google Refine doesn’t replace other SEO tools. Instead it’s just another tool on your tool belt. That said, I have only shown you a fraction of what Google Refine is capable of. In particular, there are some very interesting clustering algorithms that could be applied to keyword research.

I’m just getting started and will keep playing with (aka learning) Google Refine to see just how it might streamline keyword research.

Optimize Your Sitemap Index

January 11 2011 // Analytics + SEO // 20 Comments

Information is power. It’s no different in the world of SEO. So here’s an interesting way to get more information on indexation by optimizing your sitemap index file.

What is a Sitemap Index?

A sitemap index file is simply a group of individual sitemaps, using an XML format similar to a regular sitemap file.

You can provide multiple Sitemap files, but each Sitemap file that you provide must have no more than 50,000 URLs and must be no larger than 10MB (10,485,760 bytes). […] If you want to list more than 50,000 URLs, you must create multiple Sitemap files.

If you do provide multiple Sitemaps, you should then list each Sitemap file in a Sitemap index file.

Most sites begin using a sitemap index file out of necessity when they bump up against the 50,000 URL limit for a sitemap. Don’t tune out if you don’t have that many URLs. You can still use a sitemap index to your benefit.
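For reference, a sitemap index is just a small XML file that points at your other sitemaps. Here's a minimal sketch that writes one with Python's standard library; the sitemap URLs are placeholders, not real files.

    import xml.etree.ElementTree as ET

    NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
    sitemaps = [
        "http://www.example.com/sitemap-1.xml.gz",
        "http://www.example.com/sitemap-2.xml.gz",
    ]

    root = ET.Element("sitemapindex", xmlns=NS)
    for url in sitemaps:
        sitemap = ET.SubElement(root, "sitemap")
        ET.SubElement(sitemap, "loc").text = url

    ET.ElementTree(root).write("sitemap_index.xml",
                               encoding="utf-8", xml_declaration=True)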

Googling a Sitemap Index

I’m going to search for a sitemap index to use as an example. To do so I’m going to use the inurl: and site: operators in conjunction.

Google a Sitemap Index

Best Buy was top of mind since I recently bought a TV there and I have a Reward Zone credit I need to use. The sitemap index wasn’t difficult to find in this case. However, they don’t have to be named as such. So if you’re doing some competitive research you may need to poke around a bit to find the sitemap index and then validate that it’s the correct one.

Opening a Sitemap Index

You can then click on the result and see the individual sitemaps.

Inspect Sitemap Index

Here’s what the sitemap index looks like. A listing of each individual sitemap. In this case there are 15 of them, all sequentially numbered.

Looking at a Sitemap

The sitemaps are compressed using gzip so you’ll need to extract them to look at an individual sitemap. Copy the URL into your browser bar and the rest should take care of itself. Fire up your favorite text program and you’re looking at the individual URLs that comprise that sitemap.

Best Buy Sitemap Example

So within one of these sitemaps I quickly find that there are URLs that go to a TV, a digital camera and a video game. They are all product pages but there doesn’t seem to be any grouping by category. This is standard, but it’s not what I’d call optimized.

Sitemap Index Metrics

Within Google Webmaster Tools you’ll be able to see the number of URLs submitted and the number indexed by sitemap.

Here’s an example (not Best Buy) of sitemap index reporting in Google Webmaster tools.

Sitemap Index Metric Sample

So in the case of the Best Buy sitemap index, they’d be able to drill down and know the indexation rate for each of their 15 sitemaps.

What if you created those sitemaps with a goal in mind?

Sitemap Index Optimization

Instead of using some sequential process and having products from multiple categories in an individual sitemap, what if you created a sitemap specifically for each product type?

  • sitemap.tv.xml
  • sitemap.digital-cameras.xml
  • sitemap.video-games.xml

In the case of video games you might need multiple sitemaps if the URL count exceeds 50,000. No problem.

  • sitemap.video-games-1.xml
  • sitemap.video-games-2.xml

Now, you’d likely have more than 15 sitemaps at this point but the level of detail you suddenly get on indexation is dramatic. You could instantly find that TVs were indexed at a 95% rate while video games were indexed at a 56% rate. This is information you can use and act on.

It doesn’t have to be one dimensional either; you can pack a lot of information into individual sitemaps. For instance, maybe Best Buy would like to know the indexation rate by product type and page type. By that I mean the indexation rate of category pages (lists of products) versus product pages (individual products).

To do so would be relatively straightforward. Just split each product type into separate page type sitemaps.

  • sitemap.tv.category.xml
  • sitemap.tv.product.xml
  • sitemap.digital-camera.category.xml
  • sitemap.digital-camera.product.xml

And so on and so forth. Grab the results from Webmaster Tools and drop them into Excel and in no time you’ll be able to slice and dice the indexation rates to answer the following questions. What’s the indexation rate for category pages versus product pages? What’s the indexation rate by product type?

You can get pretty granular if you want, though you can only pack each sitemap index with 50,000 sitemaps. Then again, you’re not limited to just one sitemap index either!

In addition, you don’t need 50,000 URLs to use a sitemap index. Each sitemap could contain a small number of URLs, so don’t pass on this type of optimization thinking it’s just for big sites.
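Here's a rough sketch of how that split might be automated in Python. The URL patterns and file names are invented for illustration; the real rules would come from your own site architecture.

    from collections import defaultdict
    import xml.etree.ElementTree as ET

    NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
    MAX_URLS = 50000  # per-sitemap limit

    def bucket(url):
        """Invented rule: /site/tv/... is a TV URL, and URLs ending
        in /c are category pages. Adjust to your own URL structure."""
        parts = url.split("/")
        product_type = parts[4] if len(parts) > 4 else "other"
        page_type = "category" if url.endswith("/c") else "product"
        return product_type, page_type

    buckets = defaultdict(list)
    with open("all_urls.txt") as f:
        for url in (line.strip() for line in f if line.strip()):
            buckets[bucket(url)].append(url)

    for (product_type, page_type), urls in buckets.items():
        # Split any bucket larger than 50,000 URLs into numbered files.
        for i in range(0, len(urls), MAX_URLS):
            root = ET.Element("urlset", xmlns=NS)
            for url in urls[i:i + MAX_URLS]:
                ET.SubElement(ET.SubElement(root, "url"), "loc").text = url
            name = "sitemap.%s.%s-%d.xml" % (product_type, page_type, i // MAX_URLS + 1)
            ET.ElementTree(root).write(name, encoding="utf-8", xml_declaration=True)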

Connecting the Dots

Knowing the indexation rate for each ‘type’ of content gives you an interesting view into what Google thinks of specific pages and content. The two other pieces of the puzzle are what happens before (crawl) and after (traffic). Both of these can be solved.

Crawl tracking can be done by mining weblogs for Googlebot (and Bingbot) using the same sitemap criteria. So not only do I know how much bots are crawling each day, I know where they’re crawling. As you make SEO changes, you are then able to see how they impact the crawl and follow it through to indexation.

The last step is mapping it to traffic. This can be done by creating Google Analytics Advanced Segments that match the sitemaps using regular expressions. (RegEx is your friend.) With that in place, you can track changes in the crawl to changes in indexation to changes in traffic. Nirvana!
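Here's a rough sketch of the crawl-tracking half, assuming a standard combined access log format and the same invented URL patterns as the sitemap example above. The same regular expressions could be reused for the Google Analytics Advanced Segments.

    import re
    from collections import Counter

    # One pattern per sitemap 'type'. These paths are examples only.
    SEGMENTS = {
        "tv":             re.compile(r"^/site/tv/"),
        "digital-camera": re.compile(r"^/site/digital-camera/"),
        "video-games":    re.compile(r"^/site/video-games/"),
    }

    crawls = Counter()
    with open("access.log") as f:
        for line in f:
            # Only count hits from the major search engine crawlers.
            if "Googlebot" not in line and "bingbot" not in line:
                continue
            match = re.search(r'"(?:GET|HEAD) (\S+)', line)
            if not match:
                continue
            path = match.group(1)
            for name, pattern in SEGMENTS.items():
                if pattern.search(path):
                    crawls[name] += 1
                    break

    for name, hits in crawls.most_common():
        print(name, hits)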

Go to the Moon

Doing this is often not an easy exercise and may, in fact, require a hard look at site architecture and URL naming conventions. That might not be a bad thing in some cases. And I have implemented this enough times to see the tremendous value it can bring to an organization.

I know I covered a lot of ground so please let me know if you have any questions.

Quora’s Not A Competition (But I’m Winning)

January 03 2011 // Life + Social Media // 2 Comments

It’s a new year and like millions of others I’ve taken stock and made some resolutions.

The Dark Passenger

Perhaps it was in this state of mind that I caught myself turning Quora into a competition. It’s not (or shouldn’t be) and my initial motivations for answering were more altruistic than self-serving. But like some dark passenger (hat tip to Dexter), my competitive nature has emerged. Mind you, there’s nothing wrong with being competitive.

I’ve been criticized for being too self-assured, cocky or condescending. “You seem to think your opinion is always right.” I’ve heard that a number of times. My response is another question. Why would I give an opinion that I didn’t believe in?

While true, I doubt that response helps my case. That’s not to say that I’m never wrong or that I don’t change my mind. I can be persuaded to see another point of view. I enjoy intelligent debate.

That brings me to Adam Lasnik, who started off the new year with two great blog posts. I wholeheartedly agree with his ‘publish first, think later’ criticism. His musings on why and whether we should contribute to sites like Quora got me thinking.

Why are we contributing to Quora? It’s a funny business in a way. Quora’s business is our contributions. It’s the same knock I have against article directories. They make a business on your content, leasing back a small fraction of their trust and authority in the form of backlinks. It’s not a particularly healthy relationship.

Is Quora different?

I still see value in contributing to a community like Quora and Stack Overflow. I don’t think a policy of isolation is the right course of action. Sharing your expertise is good business. But it makes me think about the motivations for contributing. On the face, you want to share your knowledge with someone. They have a question. You have an answer.

But it’s not like someone asking you in person, or via email or any other number of mediums. They’re not just asking you. Instead of getting one answer, they’ll get a number of answers. That can often be good, but it’s then up to the person or community to determine which of those answers is … best. Dress it up as most useful or interesting – people assign judgment to your content.

Keeping Score

Should we be surprised when we get caught up in wanting to have that best answer? It reminded me of a lyric from Love’s Not A Competition (But I’m Winning) by the Kaiser Chiefs.

I’m not sure what’s truly altruistic anymore,
When every good thing that I do is listed and you’re keeping score,

Whoa.

So I’m guarding against this ego-based, game mentality. I don’t want to want to be first to answer a question, nor do I want that dark passenger to push me to contribute more. I’d like to be far more collegial, because this isn’t a zero sum game.

Google is not a Field of Dreams

January 02 2011 // Rant + SEO // 1 Comment

There is a rising tide of advice lately extolling the virtues of creating a valuable site focused on the user, promising that the rest … will simply come.

If you build it they will come

If you think this happens online (or anywhere outside of the movies), you’ll be waiting a long time for Ray Liotta to saunter out of that digital cornfield. And you won’t have traffic beating a path to your door like the closing shot of Field of Dreams.

Field of Dreams SEO

The story sounds great, doesn’t it? Write scintillating content and you’ll get search traffic. Build a useful, interesting site and the Google gods will smile upon you. Maybe this is how it works in some magical Utopian world where it rains marshmallows.

Field of Dreams

It would be great if the sites that were most useful were always ranked appropriately. But spend any time doing SEO (the kind where you’re in the trenches) and you know this is patently not true, nor (sadly) does it seem to be getting much better.

That’s not to say that you shouldn’t build a great site focused on users that delivers tremendous value. That just isn’t enough. You still need SEO to ensure the great site you’ve built gets in front of the right people.

Trash and Treasure

Here’s the hard truth: you may not be appealing to as many people as you think. Your definition of value might not be the definition others use, particularly not the definition Google uses. This is why I find the Pollyanna optimism around Field of Dreams SEO to be so dangerous. Because there is a nugget of truth to the notion.

Writing great content and building a valuable site is a critical part of SEO. But this means different things to different people. Put another way, one person’s trash is another person’s treasure and vice versa. As an example, I may find William Faulkner unreadable, but others may adore his novels.

Waterworld

SEO winds up being director, editor and agent – helping to shape your content and site so it is appealing to the major studios. Sure, maybe you can go the ‘Indie’ route, bucking the establishment and releasing it in small art house movie theaters. But how many times does that really work?

don't ignore seo

Ignore SEO and you’ll wind up with Waterworld instead of Field of Dreams.

2011 Predictions

December 31 2010 // Analytics + Marketing + SEO + Social Media + Technology + Web Design // 3 Comments

Okay, I actually don’t have any precognitive ability but I might as well have some fun while predicting events in 2011. Let’s look into the crystal ball.

2011 Search Internet Technology Predictions

Facebook becomes a search engine

The Open Graph is just another type of index. Instead of crawling the web like Google, Facebook lets users do it for them. Facebook is creating a massive graph of data and at some point they’ll go all Klingon on Google, uncloaking with several Birds of Prey surrounding search. Game on.

Google buys Foursquare

Unless you’ve been under a rock for the last 6 months, it’s clear that Google wants to own local. They’re dedicating a ton of resources to Places and decided that getting citations from others was nice but generating your own reviews would be better. With location based services just catching on with the mainstream, Google will overpay for Foursquare and bring check-ins to the masses.

UX becomes more experiential

Technology (CSS3, Compass, HTML5, jQuery, Flash, AJAX and various noSQL databases to name a few) transforms how users experience the web. Sites that allow users to seamlessly understand applications through interactions will be enormously successful.

Google introduces more SEO tools

Google Webmaster Tools continues to launch tools that will help people understand their search engine optimization efforts. Just like they did with Analytics, Google will work hard in 2011 to commoditize SEO tools.

Identity becomes important

As the traditional link graph becomes increasingly obsolete, Google seeks to leverage social mentions and links. But to do so (in any major way) without opening a whole new front of spam, they’ll work on defining reputation. This will inevitably lead them to identity and the possible acquisition of Rapleaf.

Internet congestion increases

Internet congestion will increase as more and more data is pushed through the pipe. Apps and browser add-ons that attempt to determine the current congestion will become popular and the Internati will embrace this as their version of Greening the web. (Look for a Robert Scoble PSA soon.)

Micropayments battle paywalls

As the appetite for news and digital content continues to swell, a start-up will pitch publications on a micropayment solution (pay per pageview perhaps) as an alternative to subscription paywalls. The start-up may be new or may be one with a large installed user base that hasn’t solved revenue. Or maybe someone like Tynt? I’m crossing my fingers that it’s whoever winds up with Delicious.

Gaming jumps the shark

This is probably more of a hope than a real prediction. I’d love to see people dedicate more time to something (anything!) other than the ‘push-button-receive-pellet’ games. I’m hopeful that people do finally burn out, that the part of the cortex that responds to this type of gratification finally becomes inured to this activity.

Curation is king

The old saw is content is king. But in 2011 curation will be king. Whether it’s something like Fever, my6sense or Blekko, the idea of transforming noise into signal (via algorithm and/or human editing) will be in high demand, as will different ways to present that signal such as Flipboard and Paper.li.

Retargeting wins

What people do will outweigh what people say as retargeting is both more effective for advertisers and more relevant for consumers. Privacy advocates will howl and ally themselves with the government. This action will backfire as the idea of government oversight is more distasteful than that of corporations.

Github becomes self aware

Seriously, have you looked at what is going on at Github? There’s a lot of amazing work being done. So much so that Github will assemble itself Voltron style and become a benevolently self-aware organism that will be our digital sentry protecting us from Skynet.

Quora Button

December 27 2010 // Social Media + Web Design // 8 Comments

I like Quora, so much so that I wanted to add it as another contact option on this blog. But I couldn’t find a Quora button that matched my current buttons. So, I took a crack at making one myself.

Quora Button


Feel free to use it or make a better one. (Just let me know when you do.) In the interim, you should follow me on Quora and explore the growing knowledge community.

Google Split Testing Tool

December 23 2010 // Analytics + SEO // Comments Off on Google Split Testing Tool

In November Matt Cutts asked ‘What would you do if you were CEO of Google?’ He was essentially asking readers for a wish list of big ideas. I submitted a few but actually forgot what would be at the top of my list.

Google Christmas

Google A/B Testing

Google does bucket testing all the time. Bucket testing is just another (funnier) word for split testing or A/B testing.

A/B testing, split testing or bucket testing is a method of marketing testing by which a baseline control sample is compared to a variety of single-variable test samples in order to improve response rates. A classic direct mail tactic, this method has been recently adopted within the interactive space to test tactics such as banner ads, emails and landing pages.

Google provides this functionality through paid search via AdWords. Any reputable PPC marketer knows that copy testing is critical to the success of a paid search campaign.

SERP Split Testing Tool

Why not have split testing for SEO? I want to be able to test different versions of my Title and Meta Description for natural search. Does a call to action in my meta description increase click-through rate (CTR)? Does having my site or brand in my Title really make a difference?

As search marketers we know the value of copy testing. And Google should want this as well. Wouldn’t a higher CTR (without an increase in pogosticking) be an indication of a better user experience? Over time, wouldn’t iterative copy testing result in higher quality SERPs?

Google could even ride shotgun and learn more about user behavior. If you need a new buzz word to get it off the ground, try crowd sourced bucket testing on for size.

This new testing tool could live within Google Webmaster Tools, and Google should be able to limit the number of outside variables by ensuring the test is only served on one data cluster. For extra credit Google could even calculate the statistical significance of the results. Maybe you partner with (or purchase) someone like Optimizely to make it happen.
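To give a sense of what that extra credit might look like, here's a minimal sketch of a two-proportion z-test comparing the click-through rates of two title variants. The impression and click numbers are made up.

    from math import sqrt, erf

    def ctr_significance(clicks_a, impressions_a, clicks_b, impressions_b):
        """Two-proportion z-test: is the CTR difference likely real?"""
        p_a = clicks_a / impressions_a
        p_b = clicks_b / impressions_b
        # Pooled proportion under the null hypothesis of equal CTR.
        p = (clicks_a + clicks_b) / (impressions_a + impressions_b)
        se = sqrt(p * (1 - p) * (1 / impressions_a + 1 / impressions_b))
        z = (p_a - p_b) / se
        # Two-tailed p-value from the normal distribution.
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return p_a, p_b, z, p_value

    # Hypothetical test: Title A vs. Title B on the same query.
    print(ctr_significance(420, 10000, 505, 10000))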

If this tool is on your Christmas list, please Tweet this post.

Social Entropy

December 21 2010 // Social Media // 2 Comments

It’s been frustrating to see social networking take on the properties of game mechanics instead of organic social behavior. That’s why The Real Life Social Network, a presentation and research by Paul Adams, was such a breath of fresh air.

If you haven’t reviewed it already, you should. It’s one of the smartest and most thoughtful investigations into how to translate offline social relationships online.

Dunbar’s Number

Dunbar's Number

One of the concepts Adams touches on is Dunbar’s Number.

Dunbar’s number is a theoretical cognitive limit to the number of people with whom one can maintain stable social relationships.

While there are upper and lower limits, the number generally sits at 150. I’ve argued the importance of Dunbar’s Number a few times and believe that it is still relevant online. Perhaps technology could increase the number slightly, but not by much in my opinion.

All Relationships Are Not Equal

Best Friends

This seemingly mundane statement hasn’t really been reflected in most social networks. People put different values on relationships. Is someone a friend or an acquaintance? Are they a colleague or a mentor? Who’s your BFF?

In fact, relationships could have a different value based on the context. My best bicycling pal might not be the friend I talk to about my personal life. A close friend might not be the person I ping for a discussion about SEO.

This concept is translated into strong ties (real friends) and weak ties (acquaintances and contacts). Most social networks treat tie strength equally. Lists provide some way to divide your social graph and divvy strong from weak, but it’s still a rather blunt tool.

Memory

Memory Is Not Infinite

Adams also references memory in his presentation. Memory is not infinite. I think that’s an astute observation and dovetails into the conversation about information overload.

I’m fascinated by the idea that weak tie information could crowd out the strong. Could having too many weak ties mixed in with the strong prevent us from having real social relationships? Could the quest for more connections actually marginalize the ones that matter?

Are we overwhelming our memory with a tidal wave of social information?

Groups

Your social graph is made up of groups. Similar to the idea that relationships have different values, your relationships fall into groups. They may be about where or how you met that person. These are my high school friends. These are my friends from San Diego. These are the people I worked with at such-and-such job.

Often these groups also reveal interests. You may have a group of friends surrounding a topic. I have some book friends. But my book friends might not be Philadelphia Eagles fans too. (Paul does a much better job of detailing this in his presentation.)

I had an opportunity to chat with Armen Berjikly at Experience Project earlier this year. What I found amazing was how they allowed users to express all facets of their personality. You could join any number of groups without them defining your entire experience on the site.

People are not just one thing.

Social Evolution

Social Evolution

If people are not just one thing, they’re also never the same. People evolve as they gain more life experience.

So, what happens to our groups?

How many of your high school friends do you really keep up with and does that dwindle as you get farther away from that time in your life? Your interests might change. Maybe you moved from Malibu to Omaha, so you’re not into surfing anymore. Will you keep up with all of your surfing buddies? Your childhood best friend may not be a close friend today.

I’ve worked in Fund Raising and Advertising. But I haven’t kept up with most of the people in those industries. I have less and less in common with them over time. Or take a book group. You might enjoy that for a while, but over time it likely disintegrates. The funny thing is, that doesn’t mean I don’t like books or even book groups. I may wind up joining another book group.

It’s what I refer to as social entropy.

Social Entropy

Just A Friend

The process of social entropy is OK! It’s natural. Relationships change (Biz Markie’s unrequited love likely faded.) In fact, it might be necessary so you can grow and forge new relationships. It’s a type of creative destruction. I’m not the same person I was in high school, why would I maintain all of those relationships 20 years later?

If I did try to maintain all of those relationships, I’d quickly exceed Dunbar’s Number. In addition, my social graph would increasingly have more weak ties than strong.

How does this translate online? This year I was also lucky enough to chat with Lyle Fong of Lithium Technologies. Among many other things, he noted the need for groups to splinter or evolve.

If you’ve ever been in an online group you’ve probably experienced this problem. The group probably starts off wonderfully. The signal to noise ratio is excellent. But because of that, more and more people join, and ultimately that reduces the signal to noise ratio. Often a core set of members will flee the group to … start a new one. Or another set of members will flee to start a group with a slightly different topic.

Conversely, limiting group membership can also lead to social entropy. A defined group may begin with a flurry of interactions from many members. But then a few begin to dominate the conversation. Others simply fade into the background as they’re pulled in different directions or lose interest. Suddenly, it’s a very small group which doesn’t provide enough stimulus even for those dominating the conversation.

Right after Paul published his research I reached out to him. Though swamped with requests, he was kind enough to get back to me, confirming social entropy and how groups change. At that time it was thought Paul would lead Google’s new social effort. Yesterday he revealed he’s moving to Facebook.

Social 3.0

Building interfaces which allow for social entropy seems incredibly valuable.

So far, the focus has been on establishing relationships, but what about the natural process of breaking them? There has been some comical editorial about services which would help you ‘break up’ with friends. There can be a lot of emotional freight when you decide to unfriend someone. Feelings hide behind those friend numbers. Should those numbers even be exposed in the first place?

Or maybe there should be a TTL on relationships? Sure, I wanted to check in on that freshman college roommate, but do I then want to know about his daily life from then on?

The 50 friend limit imposed by Path is an interesting concept, forcing people to choose only those with whom they have a strong tie.

In real life people evolve and grow apart. I believe the social network that allows people (and their relationships) to evolve will be most successful.
