

Wednesday, September 13, 2006

The "optimal keyword density" story...Part two...There's no magic number...

So, we've noticed that there are as many opinions regarding the optimal keyword density percentage as there are people talking about it. Some will say 2% is perfect. Others will tell you to shoot for 10%. What's the right number?

There isn't one. Anyone who says they have uncovered the perfect keyword density percentage is wrong. Bold statement? Not really. There are some fundamental reasons why the quest for some magic percentage is doomed.
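(Before getting into those reasons, it helps to pin down what the number being argued about actually measures. Here's a quick sketch of the usual calculation, with a made-up page and phrase; different tools count multi-word phrases slightly differently.)

def keyword_density(text, phrase):
    """Keyword density as most SEO tools compute it: words belonging to the
    phrase as a share of all words on the page, expressed as a percentage.
    (One common convention; tools vary on multi-word phrases.)"""
    words = text.lower().split()
    target = phrase.lower().split()
    if not words:
        return 0.0
    hits = sum(
        1 for i in range(len(words) - len(target) + 1)
        if words[i:i + len(target)] == target
    )
    return 100.0 * hits * len(target) / len(words)

# Made-up example: the phrase "blue widgets" appearing 5 times in a
# 200-word page works out to 5 * 2 / 200 = 5% -- the kind of figure
# the "magic number" debate revolves around.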

First, search engines don't really use keyword density in their algorithms; it's not quite that simple. They rely on term vector analysis, which is a slightly more complicated proposition. I am not a math wizard, but I was able to slowly but surely make my way through some interesting research by information retrieval experts that explains exactly what term vector analysis is and how it works. Local information from individual documents in a set is evaluated, as is more global information derived from the database as a whole. Weight is also given to linkages in order to accurately assess what Dr. E. Garcia referred to as "the degree of connectivity between documents."
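As a rough illustration of what "local plus global" means here, below is a minimal TF-IDF-style sketch. This is a textbook form of term vector weighting, not a claim about any particular engine's algorithm, and the corpus is invented for the example.

import math
from collections import Counter

def tf_idf_weight(term, document, corpus):
    """A textbook term-vector weight: local term frequency in one document,
    scaled by an inverse-document-frequency factor drawn from the whole corpus."""
    tf = Counter(document)[term]                        # local: raw count in this document
    doc_freq = sum(1 for doc in corpus if term in doc)  # global: documents containing the term
    if tf == 0 or doc_freq == 0:
        return 0.0
    idf = math.log(len(corpus) / doc_freq)              # rarer terms earn a larger global factor
    return tf * idf

# Invented three-document corpus: "widgets" shows up in most documents,
# so its global factor stays small no matter how dense it is on one page.
corpus = [
    "widgets are great and these widgets are cheap".split(),
    "widgets can be blue widgets or red widgets".split(),
    "a gardening page that never mentions the term".split(),
]
print(tf_idf_weight("widgets", corpus[1], corpus))  # ~1.2: 3 local hits * log(3/2)

The point of the sketch is only that the same local count produces a different weight depending on the rest of the corpus, which is exactly why a fixed density target can't stand in for a term weight.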

Localized keyword density isn't given much weight, in part, because relying on it opens the door to abuse. That's why search engines evolved away from simpler density calculations after people adopted keyword spamming practices. Local signals are counterbalanced by the global factors considered in the calculation.

Term Vector Theory and Keyword Weights is a great introduction to the actual mathematics underlying information retrieval algorithms. I'm going to skip all of the really juicy math stuff and present a few key statements from the analysis:

[It] is evident that keyword weights are affected by

1. local term counts
2. the sheer volume of documents in the database.

Therefore, the popular notion that term weights are or can be estimated with "keyword density values" is quite misleading.


...and...

Frankly, SEOs/SEMs that spend their time adjusting keyword density values, going after keyword weight tricks or buying the latest "keyword density analyzer" are wasting their time and money.

...finally...

To conclude, keyword density values should not be taken for term weights. As local word ratios, density values are not good discriminators of relevancy.

Basically, keyword density is a gross oversimplification of the process search engines use to assess the relative value of individual pages and sites. It may seem like a reasonable approximation, but in reality it isn't very helpful.

One could argue that the authors of the quoted material are wrong and that the keyword density crowd are on the right track. However, I think that even a cursory examination of the more serious scholarly research on the topic should dissuade us from embracing that belief. Even if it doesn't, a few other factors should compel us to look at keyword density solutions with a very suspicious eye.

Consider, for a moment, the purpose of a search engine. If you get down to the core of search engines, it's all about putting users and information together. Google and the rest can only attract their necessary audiences by providing results that allow users to find the information for which they are looking. When search engines begin to serve up results that aren't relevant to the user, they start to lose market share.

If there were a perfect keyword density that fell somewhere between 1% and 20%, one of the thousands of people who focus their career efforts on getting sites ranked highly would undoubtedly stumble upon it. If you drop the highs and lows off the range of espoused optimal densities, that is not that huge of a range to carefully research and split-test to see what number really works.

So, what would that mean? It would mean that the search engines were heavily reliant upon a publicly determinable factor that could be intentionally manipulated to improve search engine rankings. It would be kicking open the door for those with the "magic number" to shoot up the charts just by making sure their targeted keyword phrase logged in at 4.25%, or whatever.

At that point, the entire concept would be irrelevant, as it would become a norm intentionally created by anyone running a website with an eye cast toward success. If you KNEW that a 4.25% density was truly optimal and had a real effect on search engine rankings, how long would it take you to optimize your content for the "magic number"? You'd be on task right away, tweaking and adjusting to game the search engines.

That's exactly the kind of thing that could lead to a decline in search engine query result relevancy. Suddenly, the world's greatest expert on widgets--someone offering insightful, brilliant commentary on all things widgety--would be leap-frogged by a slew of fifteen-year-old kids with mini-sites running Adsense simply because they hit the 4.25% mark, all other things being equal. That is what the search engines DON'T want. It's also why the idea of a predetermined optimal keyword percentage doesn't make a lot of sense.

Additionally, search engines and their algorithms are not static. They are constantly shifting and changing as their owners look for ways to improve results and stay one step ahead of search engine gamers. Thus, even if there were a perfect percentage, you certainly couldn't count on it remaining optimal as the algorithm underwent its updates. Let's say you don't agree with the analysis of why having a set number doesn't make logical sense. Let's say you don't buy the IR experts' explanation of term vector theory. You should still be suspicious of the idea of a knowable optimal density, because that number could be subject to constant change.

Jill Whalen, from High Rankings, summarizes nicely:

If it weren't so prevalent, and there weren't so many people paying good money for training that teaches crazy things like this, it would actually be pretty funny. The thought of writing copy with a particular keyword density percentage in mind is ludicrous on so many levels.


Whalen goes a step beyond where I do. She actually encourages writers to refuse to write with a predetermined keyword density in mind, even when the client desperately wants material with a 3.5% (or whatever) density. I don't push it that far. As long as I believe their requested target number can be met without sacrificing the overall quality of Content Done Better's work, I will accept the assignment and meet the keyword density request. I might not believe those extra machinations are working, but I will do what it takes to keep a client happy.

So, if there is no optimal keyword density figure, how should one handle content needs? That's the next post in this series... So, stay tuned.

