You could follow random SEO advice and rely on tricks and rumors. Or you could take a rational approach to SEO.
If you throw a rock in any direction in a large city, odds are high that you'll hit a self-proclaimed “SEO guru” in the head. They “know” (without any way to prove it) the perfect keyword densities, link formats, and everything else about SEO. I'm not an SEO guru. But you can approach SEO in a purely rational way:
If I were Google, how would I work?
Google is built by people, and rational people, I assume. So if you think rationally about how search engine rankings should work, you should end up with similar logic. I know it sometimes feels like they make irrational choices (like the “Panda” update that hit many quality sites hard). But I'm sure they at least try to answer the big question rationally: how should webpages be ranked?
Here’s a rational approach to SEO, which should work if the search engines are rational ;)
So, what’s rational SEO? It’s feeding the search engines what a rationally thinking search engine is looking for.
If a Google search meant a human reading every webpage and ranking it against your keyword, the result would be rational. But since Google isn't a person, it can only follow the rules people built into it. Fortunately, those people are rational. And by definition, rational choices can be traced. So, let's trace the rationale behind search engines.
There are three questions a search engine asks when it ranks results:
- What’s the question?
- Which pages are closest to answering that question?
- Which of the results is the most trustworthy?
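The three questions above can be sketched as a toy pipeline. To be clear, this is a minimal illustration of the logic, not how any real search engine works; the pages, trust scores, and matching rule are all made up.

```python
# Toy sketch of the three-stage ranking logic: parse the question,
# find candidate answers, then order them by trustworthiness.
# Everything here is invented for illustration.

def parse_query(query):
    """Stage 1: what's the question? Reduce it to keywords."""
    return set(query.lower().split())

def matching_pages(pages, keywords):
    """Stage 2: which pages come closest to answering it?"""
    return [p for p in pages if keywords & set(p["text"].lower().split())]

def rank(pages, query):
    """Stage 3: order the matches by a (made-up) trust score."""
    hits = matching_pages(pages, parse_query(query))
    return sorted(hits, key=lambda p: p["trust"], reverse=True)

pages = [
    {"url": "a.example", "text": "rational seo advice", "trust": 0.4},
    {"url": "b.example", "text": "seo tricks and rumors", "trust": 0.9},
    {"url": "c.example", "text": "gardening tips", "trust": 0.8},
]
print([p["url"] for p in rank(pages, "SEO advice")])  # → ['b.example', 'a.example']
```

Note how the irrelevant gardening page never makes it past stage two, and the remaining matches are ordered purely by trust. The rest of this article is really about what should feed into that trust score.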
A human could relatively easily interpret what someone is looking for based on the keywords they used. But for a computer program it isn't always so easy. It can only follow rules.
Top 10 rational search engine ranking factors
Here's a top 10 of the most important ranking factors I'd use if I were a search engine. If Google (and the merry band of other search engines) are rational, they've probably come up with similar logic.
10. Outbound links. Links to related, high-quality external sites and sources are a sign of knowledge. They can also help in categorizing the topic of the page.
9. Meta description. This is what you (usually) see below the title on the search results page. If it's custom written, it's probably a fairly accurate representation of the content on the site. I use an SEO plugin to write mine. As an added bonus, the meta description has a huge impact on the click-through rate (it's worth nothing to be the top-ranking result if no one clicks through, and you won't stay at the top if people don't click).
8. Article length. The longest isn’t always the best. But nor is the shortest. It all depends on the question, but a 1000-word article tends to be better researched than a 200-word article. A longer article is also much more likely to answer the person’s question, so they’re less likely to return to the search results to find another link to click.
7. Site statistics. Older sites are usually more reliable, but so is fresher content. So, fresh content on an old site should be the most reliable combination. Other statistics, like Technorati and Alexa rankings, aren't the most accurate, but they do provide some clue of a website's popularity.
6. Surrounding content. What is the site generally about, and who is the author? Lots of related content on the same site or by the same author indicates relevance and trustworthiness.
5. Inbound links. The currency of the internet and the most talked-about SEO factor is inbound links. If others are linking to a page or site, it's a sign of good quality. Where the links come from is the key here. It's like in “real life”; a recommendation by a respected person is worth a lot more than a thousand recommendations by random people. Where exactly the links point is also important. A link to the home page only says, “The site is somehow significant enough to be linked to.” But a deep link (a link to a specific page/article) says, “This specific content is worth linking to.”
4. Keywords in the content. The most obvious factor in SEO. But the exact keywords aren't the only thing that matters. Close matches and related words tell a lot about an article; if there are no related words, the page is probably irrelevant to the search. This doesn't mean you should stuff an article full of the keyword you're targeting. Keyword prevalence is far more meaningful than keyword frequency, which leads me to the next point.
3. Title. Nothing describes a webpage better than its title (and the beginning of the article). A title has to be short, so there's little extra to confuse the search engine. I believe the title is the most important factor in getting to the first pages. But the last two points in this list decide which results get to the very top and which fade away from the first pages.
2. Click through rate. A computer program is always only a computer program, even if it was built by Google. Analyzing what real people do is the best way for a search engine to understand the relevance of its results. So, looking at which results people click should be a major indicator of true relevance.
1. Bounce rate. If you return to the search results page after clicking through to one result, it’s safe to assume that the page didn’t answer your question. The faster you return, the less valuable the page was for you. When you no longer return to search more, your question was most likely answered (not necessarily, but it’s a good sign for a search engine).
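One way to picture how a rational search engine might combine the ten factors above is as a weighted sum, with heavier weights on the factors nearer the top of the countdown. The weights and factor names below are pure guesswork on my part, not anything the search engines have published; the point is only that some signals would count for more than others.

```python
# Illustrative only: a made-up weighted score over the ten factors above.
# The weights roughly follow the countdown (bounce rate heaviest); they are
# invented for this sketch, not real ranking coefficients.

WEIGHTS = {
    "outbound_links": 1,       # 10.
    "meta_description": 2,     # 9.
    "article_length": 3,       # 8.
    "site_statistics": 4,      # 7.
    "surrounding_content": 5,  # 6.
    "inbound_links": 6,        # 5.
    "keywords": 7,             # 4.
    "title": 8,                # 3.
    "click_through_rate": 9,   # 2.
    "low_bounce_rate": 10,     # 1.
}

def page_score(signals):
    """Combine per-factor signals (each in 0.0-1.0) into one ranking score."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

# A page strong on title, clicks, and bounce rate beats one that only
# polished the low-weight factors.
strong = {"title": 1.0, "click_through_rate": 0.8, "low_bounce_rate": 0.9}
weak = {"outbound_links": 1.0, "meta_description": 1.0}
print(page_score(strong) > page_score(weak))  # → True
```

Whatever the real model looks like, the takeaway is the same: optimizing only the bottom of the list (outbound links, meta descriptions) can't compensate for a weak title or results nobody clicks.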
So, those are the most important factors I’d use if I had to build a new search engine. As a bonus I’ve gathered three more SEO tips for you. These are often talked about, but I’m not sure what to think of them.
Top 3 controversial SEO tips
3. Meta keywords. A representative from Google has directly said that Google isn't interested in meta keywords. Why not?! If there are only a couple of meta keywords, they're likely to represent the content accurately. And if there are a hundred meta keywords, that tells you the page is probably spam. I don't know what to think about this. Is Google lying, or is it wasting a perfectly good way to rank webpages?
2. Page source code. Some people claim clean code ranks better than messy code. I'm not an HTML expert, but I find this difficult to believe. A search engine bot can easily scan through the code and pay attention only to the headings, paragraphs, and links. What difference does the mess make to the bot? Page speed certainly makes a difference (because it affects the user experience), though, and page speed is affected by messy code.
1. Commenting on blogs. Anyone can leave a comment on a blog, which says nothing about the quality of the commenter's own content. So why should links in the comment section matter? I find it difficult to believe they make any difference in search engine rankings.
Overall, SEO isn't a traffic generation tactic I recommend often. In most cases, it's too unreliable, in my opinion, to be the best option. Here are the traffic generation methods I recommend most often to my clients.