Avoid 7 Mistakes That Get Your Site Ignored by Search Engines & Your Visitors

A small business owner may not realize that his/her webmaster can ruin the web site, intentionally or not, through simple mistakes that undermine search marketing.

Christine Churchill identified seven common mistakes. In her article below, she explains clearly what they are, why they can trip you up, and how to deal with them.

1. You suffer from relative linking issues
Every webmaster knows it's good insurance to regularly check your site for broken links. However, one of the most common types of broken links is self-imposed and thus preventable. What is it? It sounds woefully easy, but using the right kind of relative links can save your visitors a lot of frustration.

What's a relative link? An example of a relative link is "staff.html"—in an anchor tag this would appear as:

<*a href="http://www.blogger.com/staff.html"> Staff

Many times designers will use a relative link because of the ease of migrating from the design site to the live site. The problem with this simple type of relative link is that it can break as your site grows in complexity and you develop a hierarchical directory structure. The relative link gets its name because it is "relative" to the current directory. For example, a link to "staff.html" on the page /products/index.html resolves to /products/staff.html, not to /staff.html. If you happen to move content to a different directory, you can end up with a 404 "file not found" error because the relative links point to pages that no longer fall under the current directory.

Another option is an absolute link reference that uses the full http address in the domain name. An example is the Search Engine Land staff page (http://www.searchengineland.com/staff.html).

In an anchor tag an absolute link to this page might appear as:

<*a href="http://www.searchengineland.com/staff.html">Staff

A number of sites have switched to absolute links after being scraped or out of fear of page hijacking. The downside of this practice is that many companies use a staging server to test sites before uploading them to the open web. Referencing the actual domain complicates testing while the site is in a development environment. Consequently, many designers prefer to use a relative linking structure.

Here's a typical situation when sites grow. Small businesses often start with small, flat sites. That is to say, every page links from the home page, but goes no deeper. Over time the webmaster creates subdirectories to logically group files that contain, for instance, new product lines. The webmaster cuts and pastes all his/her previous footers and other navigation (that used relative links) into the new subdirectory. However, since the pages do not exist in the new subdirectory, errors begin popping up all over the place. What a mess!

Enter the "absolute relative link," sometimes called the server-relative or domain-relative link. This is the hybrid version of the absolute and relative link. (And no, I didn't make up the name. I read about them after being burned by plain relative links back in the late 1990's. It was a lesson I never forgot!) An absolute relative link includes an initial backslash to tell the server to start from the root directory and follow this path to the page. An example in an anchor tag would be:

<*a href="http://www.blogger.com/products/fun-product.html">Fun Product

The absolute relative (server-relative) link offers a flexible solution that will make your web designer happy because s/he can test the site on a staging server and migrate it to a live server without domain name problems. It also lets him/her use a standard navigation scheme that works even when links are referenced from different subdirectories.

The good news is it is EASY to check your links. There are numerous automated link checkers, including Xenu's Link Sleuth, that can scan your site and report bad links to you. Remember that links are the pathways engines use to crawl your site. You don't want the spiders and bots to crash into a dead end at your main navigation, so use a link checker.
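
If you prefer to script a quick spot check of a single page yourself, here is a minimal sketch in Python, assuming the third-party requests and beautifulsoup4 packages are installed; the URL is a placeholder, and a real checker would crawl the whole site rather than one page:

# Check every link on one page and report any that return an error.
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

def check_links(page_url):
    html = requests.get(page_url, timeout=10).text
    for anchor in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        # urljoin resolves relative links against the current page, as a crawler would.
        target = urljoin(page_url, anchor["href"])
        if not target.startswith("http"):
            continue  # skip mailto:, javascript:, and similar schemes
        try:
            status = requests.head(target, allow_redirects=True, timeout=10).status_code
        except requests.RequestException:
            status = None
        if status is None or status >= 400:
            print("BROKEN (%s): %s" % (status, target))

check_links("http://www.example.com/")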

2. Spider traps are keeping your site from being properly indexed
Your marketing department "ooo'd" and "ahh'd" over the web designer's concepts, so you paid big bucks to have the design coded. Now you have a drop-dead-gorgeous site that spiders can't crawl easily… if at all.

Google's technical guidelines tell us, "...if JavaScript, cookies, session IDs, frames, DHTML, or Flash keep you from seeing all of your site in a text browser, then search engine spiders may have trouble crawling your site." Sadly, most of these problems could have been remedied before launch if the webmaster had known they were issues.

As a small business webmaster with an "un-crawlable" site, you have several options. You can arm yourself with knowledge about spider traps and look for them yourself, or you can bring a search engine optimization (SEO) consultant on board to review the design and techniques used on the site. A knowledgeable optimizer brought on during the design phase can advise you on how to create a beautiful site—and even keep some flash—while ensuring the site is easy to crawl. You don't have to give up glitz to do well in the engines; you just have to be careful how the site is constructed.

A fast way to view your site the way a search engine would is to download the Lynx text web browser and run your site through it. This free, open-source piece of software, originally developed at the University of Kansas, lets you see your page as a search engine might read it: in text format.
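
Once Lynx is installed, a single command dumps the text-only rendering of a page to your terminal (the URL here is a placeholder):

lynx -dump http://www.example.com/

If important content or navigation is missing from the dump, the spiders are probably missing it too.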

Google's Webmaster Guidelines are a great resource for learning more about designing crawler-friendly sites.

3. Previous SEO firms used shady tactics that cast your site in a bad light
Shady tactics come in many forms. They become a problem when you go past visible text and start embedding keywords in the code at every opportunity. Invisible text, excessive keyword stuffing, doorway pages, cloaking practices or any number of other shady tactics can cause the engines to put your site in Search Engine Hell. There are a number of "tricks" that ill-informed or unscrupulous SEOs and webmasters might use in an attempt to coerce the search engines to rank a site higher than it otherwise deserves. The problem with these tactics is they are short-lived. As search engines continue to improve the sophistication of their indexing and ranking algorithms, more black-hat tactics will be detected and dealt with (usually by banning your site from the index).
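
As one hypothetical illustration, hidden text is often nothing more than keyword-stuffed copy pushed out of sight with CSS, along these lines:

<div style="display:none">cheap widgets best widgets discount widgets buy widgets</div>

Visitors never see it, but the engines read it, and they treat it as an attempt at deception.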

Note that while many of these tactics involve manipulating the content in some shady way (cloaking, doorway pages and hidden text being the worst offenders), bad linking tactics can also cause your site to be viewed as shady by the search engines. Ever used a cheap linking company to build links? Well, that's likely what you got: cheap links! While the initial monetary cost was low, now you have to pay the real price. Oftentimes, companies like these link your site to link farms and bad neighborhoods. Or worse yet, you get ninety percent of your links from comment spam.

If you're unsure of your link status, sign up for Webmaster Central and let Google tell you the known links it has indexed. If you see that all your links are the same type (e.g. all are reciprocal or are all from spammy sites), you have a problem.

If you have already been caught by Google and been banned from their index, you should take a hard look at your site to determine what the cause might be. Remove the offending tactic then humbly request re-inclusion.

Want more information on other tactics that can get you in trouble? Check out Google's guidelines.

4. You have canonical domain name problems
Does an engine see "www.mysite.com" and "mysite.com" as separate sites? What are the downsides of ignoring this problem? If you don't redirect one version of the site to the other, Google and other engines will see the two as separate sites. Yep—that means you could have major link-splitting and duplicate content problems.

The most common fix for this issue is to create a 301 permanent redirect from one version to the other. Google's Webmaster Tools also let you specify which version you prefer; this helps with canonical domain issues in Google, but not in other engines.
For instructions on creating a 301 on an Apache server, see the Apache Web Server documentation. IIS redirects are handled differently.
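
As a sketch of the Apache approach, assuming mod_rewrite is enabled and that you prefer the www version of the domain (the domain itself is a placeholder), a .htaccess rule along these lines 301-redirects the bare domain:

RewriteEngine On
RewriteCond %{HTTP_HOST} ^mysite\.com$ [NC]
RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]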

"What about using the DNS CNAME entry," you may ask? It is acceptable to set up a DNS CNAME entry to point alternate names back to the primary name. This would be done as follows:

<*table width="50%" border="0"><*tbody><*tr><*td>mysite.com<*td>A <*td>aaa.bbb.ccc.ddd (IP address direct)<*tr><*td>www.mysite.com<*td>CNAME <*td>mysite.com

This tells everyone (not just browsers and search engine spiders) that www.mysite.com is an alias for the preferred mysite.com.
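
Once the records are in place, you can confirm the alias with a standard DNS lookup tool such as dig (placeholder domain shown); the command should print the canonical name:

dig +short www.mysite.com CNAME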

Is it possible to handle canonical name issues this way? Yes. Does it work with the search engines? Yes, if properly implemented. Is it recommended? No, because it is easy to make implementation errors that result in an infinite loop in DNS resolution. In addition, this involves getting the IT department or the ISP staff involved, which may take longer than implementing a 301 redirection rule.

5. You're haunted by an unreliable server
Going cheap on hosting often equates to unreliability. Look for providers that include telephone support (not just email) and have extended operating hours. You'll also want to check their time zone, especially if you live on the coasts.

Consider setting up an outside service to monitor your server so you know how your ISP is really doing. While most brag about 99.99% uptime, you may find they are stretching the truth. There are several low-cost server-monitoring services out there (Red Alert and Server Check Pro, for example) that can ping your server every 15 minutes and let you know when and if it is down. This type of service can also detect slow-performing servers.
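
To get a rough sense of what such a service does, here is a minimal sketch in Python, assuming the third-party requests package and a placeholder URL; commercial monitors add alerting, multiple checkpoints and response-time history:

# Request the home page every 15 minutes and report failures or slow responses.
import time
import requests

SITE = "http://www.mysite.com/"

while True:
    try:
        response = requests.get(SITE, timeout=30)
        if response.status_code != 200:
            print("WARNING: %s returned %s" % (SITE, response.status_code))
        elif response.elapsed.total_seconds() > 5:
            print("SLOW: %s took %.1f seconds" % (SITE, response.elapsed.total_seconds()))
    except requests.RequestException as error:
        print("DOWN: %s unreachable (%s)" % (SITE, error))
    time.sleep(15 * 60)  # wait 15 minutes between checks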

Some site owners have used a server checker on a hosting company's own site to decide which host to use. Be aware that the performance of the hosting company's site may differ from that of the sites it hosts.

In addition to affecting the customer's experience on the site, a poorly performing server can also have a negative effect on your site's search engine performance. If the site is down or slow in responding, the search engine spider may get tired of waiting and decide to de-list that page.

6. All your web content has been stolen
Your web site is publicly accessible and its content is available electronically. These two facts make it simple for an unscrupulous webmaster to snag a copy of your site's content and clone it on the web. Automated scrapers steal content daily, making it possible for others to use your content for their own gain.

The easiest way to detect when this has happened is to grab a long text snippet (perhaps eight to 10 words or so) from your site and drop it into a search box, placing quotes around it to indicate an exact match search for that phrase. Assuming the content is original and unique, the only page that shows up should be your own. There are also tools like Copyscape.com that can check for violations of your content rights on a regular basis.
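
As a small convenience, here is a hedged Python sketch that wraps a snippet in quotes and builds an exact-match search URL you can open by hand; the snippet text is a placeholder:

# Build an exact-phrase search URL for a snippet of your site's copy.
from urllib.parse import quote_plus

snippet = "eight to ten distinctive words copied from your page"
print('https://www.google.com/search?q=%22' + quote_plus(snippet) + '%22')  # %22 is an encoded quote mark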

If another site has stolen and published your content, Google may think your site contains the duplicate content and the offending site has the original. At Google Webmaster Central Blog, Adam Lasnik wrote a great piece called Deftly Dealing With Duplicate Content.

If you find yourself the victim of stolen content, Google has spelled out the steps for filing a copyright infringement complaint. A great way to prove your ownership of the copy is to use the Wayback Machine. This free tool lets you show how long the content has been on your website.

7. Bogons can eat your web site
No, this isn't the name of a new monster flick. A bogon is traffic from an IP address that is not currently assigned to ANY ISP, and many networks filter such addresses on the assumption that the traffic must be bogus. Since these filter lists are maintained manually and can lag behind current legitimate assignments, search engines may be unable to reach your site through no fault of your own. If you recently moved your hosting to a newly commissioned data center, or if your ISP recently assigned you an IP from a newly commissioned block of addresses, you should see this hosting issues article for the details. This unusual case falls into the fact-is-stranger-than-fiction category, but since we witnessed at least one account of it firsthand, we know it can happen.

Brand Marketers: The Wiki

"If you manage a brand of any importance, it is highly likely that the Wikipedia page pertaining to your brand is in the top search results at all the major search engines...

Brand websites exist to build and reinforce a carefully crafted brand message, create a vivid brand personality, educate consumers about product benefits, build brand affinity, increase consumption and facilitate the creation of a brand community. Wikipedia pages have the potential to screw up the message and muddy the brand image that firms meticulously try to construct online," stated Mr. Sandeep Krishnamurthy.

Part of the message above made me realize that the wiki, one of the more influential social media platforms, can affect brand marketing both positively and negatively. Joining this open community can indirectly support your PR and marketing work, but your brand can also suffer when something unexpected happens. A rumor or bad press that hurts your brand image can spread overnight, because Wikipedia is a unique open community that is free for the public to read, write and edit. That is what we should be aware of.

Here are his suggestions:
  1. Make a list of Wikipedia pages that affect you.
  2. Locate the RSS feed for each page.
  3. Subscribe to it using Google Reader.


First, make a list of every Wikipedia page that affects you. This is non-trivial; if you blow this, the rest will not make much sense. Include the following:
  1. The main page for your brand.
  2. The main page for each sub-brand.
  3. The main page for your competitors.
  4. Specialty pages focusing on your brand (e.g., McDonald's legal issues).
Second, locate the RSS feed for each page. To do this, follow these steps (an example feed URL follows the list):
  1. Click on the History tab of the Wikipedia page you care about.
  2. On the left sidebar, you should see "RSS Atom."
  3. Click on RSS.
  4. Copy and paste the URL into Google Reader.
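
For reference, the history feed follows a predictable URL pattern on MediaWiki sites; as a hypothetical example, for a page titled Widget the RSS link resembles:

http://en.wikipedia.org/w/index.php?title=Widget&action=history&feed=rss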

Third, in Google Reader, create a new folder that includes all the pages you want to follow. Refresh.

How to react to negative info. Before you do anything, remember the following:

  1. Wikipedia pages are fluid and can change many times a day.
  2. Wikipedia does not have a policy about how firms can participate.
  3. Page vandalism is to be expected for short periods of time.

There is no single "webmaster" that you can contact. Make sure you read the detailed policies of Wikipedia. He recommends these two links as starting points: The Five Pillars and List of Policies. If you find something objectionable or inaccurate on Wikipedia, you can do one of the following:

  1. You can participate in the forums related to the page and voice your concern.
  2. You can contact one of the previous editors of the page.

Wikipedia does not condone copyright violations or libel. If you notice serious violations, you can contact Wikipedia at info-en-q@wikimedia.org.

If Wikipedia does not respond to your emails, you can consider a press release or something low key. Do not simply go in and do massive deletions. If you are detected, the negative PR will not be worth it.

I think these approaches can help you reduce negative publicity and strengthen your brand when a rumor spreads on a social content site like Wikipedia.

How to deal with traffic without Google

What if Google's popularity dropped one day and traffic moved to other search engines like Yahoo, MSN, Ask, or even the newer Snap? Have you ever pondered a plan B to deal with this? Here are some suggestions from Paul J. Bruemmer.


Before implementing Plan B, you'll want to review your site objectives and SEO plans to ensure maximum traffic from Yahoo, Microsoft and Ask. Your strategy will vary depending on your site category. In "The Marketer's Common Sense Guide to E-Metrics" (PDF), Bryan Eisenberg and Jim Novo define four types of websites: ecommerce, content, lead-generation and self-service. We might now add social networking sites. The objectives for these sites will vary.

  1. Ecommerce sites want to increase sales while decreasing marketing expenses.
  2. Content sites want to increase readership, visitor interest and site stickiness.
  3. Lead-generation sites want to increase segment leads.
  4. Self-service sites want to increase customer satisfaction while decreasing customer support costs.
  5. Social-networking sites want to increase membership numbers and the amount of member interactivity.

The above objectives require different optimization strategies and different measurement metrics. That said, consider starting by identifying your business goals, performing extensive keyword research and some basic predictive ROI modeling; then you'll be prepared to take a look at executing a search marketing plan without Google.

Next, it is important to understand why businesses use search engines, a.k.a. interactive marketing, within their standard five-year marketing plans. Essentially there are five reasons:

1. Create brand awareness
2. Sell products/services
3. Generate leads
4. Drive traffic
5. Provide information
Based on your business goals, select one of the five above and use the appropriate organic and paid search marketing strategies and tactics at Yahoo, MSN Live Search and Ask.

Know your audience to expand your market

If someone asks, "Tell me about your audience," what is your response? The best responses come from those who have spent time and money researching their audience; the answer shows whether you know your audience well enough before marketing to them. Here is part of the NextStage CEO's article, which tells you how to keep your audience engaged with your website.

You can't market successfully to anybody until you know who they are, what they think, how they think, what they respond to and what they'll respond with. The smaller your target audience, the more you must design specifically for it. Large audiences are easy to design for: just keep it simple!

The first message must be instructions on how to build a receiver. The first message is not from you to your market, it's from your market to you and is: "This is what will get our attention, so this is what has to be in your marketing message."

Analyzing people is interesting. Being able to track visitors' logical processes, cognitive processes, decision styles, memorization methods, emotional cues and other attributes (age, gender, buying styles, best branding strategies, impact ratios, touch factors, education level, income level, etc.) would be really valuable.

Knowing your audience in depth and detail is a required first step. The more richly detailed and complete your knowledge of your audience, the more you can do to build a receiver -- a website, video, print ad, et cetera -- that they will naturally and effortlessly interact with. There are two crucial elements to designing that receiver.

The first -- rich audience personae -- is incredibly straightforward.
The second crucial element is Audience Focused Optimization.

That is "Quantifying and Optimizing the Human Side of Online Marketing." For example, knowing how the audience thinks enabled design modifications that kept visitors engaged and returning through the redesign process. The end result of these efforts was demonstrated by visitor comments and emails.

The simple fact that Jim Sterne and his crew let visitors know ahead of time when a redesign would be online resulted in three major outcomes:

> The day the redesign went live, the site experienced huge spikes in traffic, levels of interest and navigation.

> People emailed that they'd been on the Emetrics Summit site and noted the update announcement.

> Other emails demonstrated that people had returned specifically to discover what had changed since their last visit. (The wording in that last line is intentional. People didn't return to "see," they returned to "discover." They were on the site thinking, evaluating, analyzing, interpreting. In other words, they were engaged.)

These are some generalized suggestions for making simple modifications to existing sites; they are also easily implemented directives for sites still being built.

Critical:

> Use fewer menu items
> Rename remaining menu items so that they are questions that can be answered
> Make the menu system/structure completely consistent from page to page

Important:

> Make the progression of pages tell a story so that one web page logically and thematically leads to the next web page

> Make the menu system either consistently horizontal or consistently vertical (remove top-of-page city menu and replace it with graphic links that already exist within the banner image)

Desirable:

> Create a breadcrumb trail as a navigation aide

Source/References: iMediaConnection