5 Steps to Satisfy Your Website Visitors

Have a look at these five steps, which will help you please both your website readers and the search bots.

First step – Write content that is relevant to your site. If your site is about travel, do not write about car accessories.

Second step – The content you write should be easy to understand. Do not use fancy or obscure words that are hard to comprehend.

Third step – Keep your content under 500 words. Write it concisely and stick to the point you want to communicate.

Fourth step – This step is quite hard because it concerns English grammar. Make sure that your content is grammatically correct, as good grammar will impress most visitors.

Fifth step – Keywords and keyword phrases should not be overused in your content, because too many of them will not only irritate readers but also the search engine spiders, which may conclude that your site is spam. A keyword density of 1-3% is considered good for both content and search engine crawlers. So make sure that your keywords and keyword phrases are used appropriately and tuned well into the content.

What is keyword density?
Keyword density is basically the number of times a keyword or keyword phrase appears in a piece of content divided by the total number of words, expressed as a percentage.

Keep in mind that you should take some time to create a short list of keywords relevant to the subject you are about to write about. Next, try to scatter the keywords or keyword phrases naturally through your content. As a result, not only will actual people love you, but the search engine spiders will, too.
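As a rough illustration, here is a minimal Python sketch of the density formula described above; the sample text and the punctuation handling are my own assumptions, not part of the original tip.

import string

def keyword_density(content, keyword):
    # Split into words and strip surrounding punctuation before matching.
    words = [w.strip(string.punctuation) for w in content.lower().split()]
    if not words:
        return 0.0
    hits = words.count(keyword.lower())
    return 100.0 * hits / len(words)

# Hypothetical example: 3 hits out of 10 words = 30%, far above the 1-3% target.
text = "Cheap travel deals: find travel packages and travel tips here."
print(round(keyword_density(text, "travel"), 1))

Run something like this against a draft and you can see at a glance whether a keyword is tuned in or stuffed in.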

Two Ways to Track Profit from Online Shoppers

Internet marketers would like to know how to track the results of advertising online. You cannot rely on web traffic statistics alone: a reliable survey report says that more than 80% of online shoppers search the web for product information first, make their decision later, and prefer to purchase offline.

How do you tell whether your online advertising investment is cost-effective?

I would like to recommend two ways to help you track your profit.

First, have you ever received a discount coupon while shopping or buying a product? Could it urge you to buy the product at a discounted price next time, or encourage you to try a new product at a special price? Yes or no? I think most people would say 'Yes'.

This is why I suggest you use the 'coupon' idea in your online promotions, through e-mail marketing or even search engine marketing. It is not a printed coupon but a 'printable' coupon. This kind of interactive marketing will help you monetize traffic, establish a traceable action and engage online shoppers who prefer to shop offline.

Because printable coupons are printed on demand by customers, each coupon redemption signals an intention to purchase your product. Plus, if the offer is enticing, it will of course spread to the people they know; their friends will be invited to print the promotional coupon too.

To keep part of the marketing budget under control, set a limit on the number of printable coupons that each individual participant in a promotion can distribute. The second way to follow the results of online advertising is to use a toll-free number.

As for the offline behavior of online consumers, putting a specific contact number in your advertising banner will help you track the efficiency of your ad. The toll-free number is an alternative channel that lets interested viewers contact you free of charge, and every call you receive on that number represents an opportunity to close a sale and evidence that your ad is working.

Try online coupons and a toll-free number to improve your goal conversions from online advertising, and you will profit from online consumers through real-world sales.

Quick Way to SEO Your Site in Less Than 60 Minutes

Matt McGee offers very useful information on optimizing your site, with steps that take less than one hour.

SEO Your Site in Less Than an Hour

A. Visit the home page, www.domain.com.

1. Does it redirect to some other URL? If so, that's bad.
2. Review site navigation:
  • Format — text or image? Image map? Javascript? Drop-downs? Text is best.
  • Page URLs — look at URL structure, path names, file names. How long are URLs? How far away from the root are they? Are they separated by dashes or underscores?
  • Are keywords used appropriately in text links or image alt tags?

3. Review home page content:

  • Adequate and appropriate amount of text?
  • Appropriate keyword usage?
  • Is there a sitemap?
  • Do a "command-A" to find any hidden text.
  • Check PageRank via the SearchStatus plugin for Firefox.

4. View source code:

  • Check meta description (length, keyword usage, relevance).
  • Check meta keywords (relevance, stuffing).
  • Look for anything unusual/spammy (keywords in noscript, H1s in javascript, etc.).
  • If there is javascript or drop-down navigation, make sure it's crawlable.
  • Sometimes cut-and-paste code into Dreamweaver to get a better look at the code-to-page relationship.
B. Analyze robots.txt file. See what’s being blocked and what’s not. Make sure it’s written correctly.

C. Check for www and non-www domains — i.e., canonicalization issues. Only one should resolve; the other should redirect.
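If you prefer to check this from the command line, here is a minimal Python sketch (the domain example.com is a placeholder); http.client does not follow redirects, so the status code and Location header show exactly how each host answers:

import http.client

for host in ("example.com", "www.example.com"):
    conn = http.client.HTTPConnection(host, timeout=10)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    # One host should answer 200; the other should answer 301 with a
    # Location header pointing at the canonical version.
    print(host, resp.status, resp.getheader("Location"))
    conn.close()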

D. Look at the sitemap (if one exists).
  1. Check keyword usage in anchor text.
  2. How many links?
  3. Are all important (category, sub-category, etc.) pages listed?

E. Visit two category/1st-level pages.
Repeat A1, A2, A3, and A4 - this will be quicker since many elements (header, footer, menus) will be the same. In particular, look for unique page text, unique meta tags, and correct use of H1s and H2s to structure content.

Check for appropriate PageRank flow. Also look at how these pages link back to the home page. Is index.html or default.php appended to the link? It shouldn't be.

F. Visit two product/2nd-level pages.
Same steps as E.
Also, if the site sells common products, find 2-3 other sites selling the same exact items and compare product pages. Are all the sites using the same product descriptions? Unique content is best.

G. Do a site:domain.com search in all 3 main engines.
Compare the pages indexed between the three. Is the number of pages indexed unusually high or low based on what you saw in the sitemap and site navigation? This may help identify crawlability issues. Is one engine showing substantially more or fewer pages than the others? Double-check the robots.txt file if needed.

H. Do site:domain.com *** -jdkhfdj search in Google to see supplemental pages.
All sites will have some pages in the supplemental index. Compare this number with overall number of pages indexed. A very high percentage of pages in the supplemental index = not good.

I. Use Aaron’s SEO for Firefox extension to look at link counts in Yahoo and MSN. If not in a rush, do the actual link count searches manually on Yahoo Site Explorer and MSN to confirm.

-----------------------------------------------------------------------------------------------

In addition, I would like to add some nice comments from Paul about SEO:
1) constantly check link:domain.com to see which links rank higher than others, so that you can assess which sites are better for promotion purposes;
2) use the Page Strength tool from http://www.seomoz.org/page-strength;
3) have both the sitemap mentioned above and an XML sitemap; and
4) get (in 1-2 minutes) the Google and Yahoo keycodes.

Keep this as your handy checklist for quickly SEOing your site... :)

What You Don't Know About Your Web Site Can Hurt You

Here is an interesting article written by Christine Churchill, President of KeyRelevance.com, a full-service search engine marketing firm. This article will help you know your web site better.

----------------------------------------------------------------------------------------------

It's tough to be a small business in today's fast-paced world. Small businesses not only have to know their core industry inside out, but now they have the additional burden of being proficient in online marketing. Since many small businesses have limited staff, most people within these companies wear multiple hats, from CEO to webmaster. Unfortunately, this often means the person responsible for the web site knows very little about it. Everything may seem to run flawlessly for a time, but then, when something goes wrong, they are left scrambling for help and at the mercy of their Internet service provider (ISP).

This is the first of two articles designed to raise the awareness of small business owners. Take heed: little details associated with your web site that are ignored can have a negative effect on your site and your business. Some problems affect the business mechanics; others affect your search positioning. The good news is, most are easy to fix once you're aware of them. Being informed of potential challenges provides you with the opportunity to prepare and avoid serious consequences down the road.

If you wear the webmaster hat, you'll want to carefully review the following list for some helpful information and tips. The more you know, the less time and money you'll waste later on.

1. Your domain name is about to expire, and you don't know it
Every domain name has at least three contacts associated with it: administrative, technical and registrant. When the domain name is about to expire, renewal notices are sent multiple times. Unfortunately, in many cases, the person whose email is on the account no longer works at your company. The notices are still sent, but since that email address is no longer valid, they go unread. What happens? Your domain name expires and your site "mysteriously" goes offline.

Your domain name is extremely important and worth protecting. Don't assume everything is fine. Do a WHOIS search and find out the details on your account. You may discover the information is wrong, out of date, or not what you expected. More than one company has been shocked to find they didn't actually own their domain name... and now they have to buy it.

If you are the rightful owner and your ownership lapses, you have a grace period of 30 - 60 days to renew (depending on the registrar). After that time, the domain name becomes available for anyone to purchase. Recapturing your domain name after it has been released and purchased can be an expensive process that often involves lawyers. The secondary domain market is a booming business. The players know the value of established domain names and fully intend to take advantage of them. Avoid this heartache by checking your company's domain name status today and then registering it for a long period.

In an article entitled "How to Protect Your Domain Name," fellow Small Is Beautiful columnist Matt McGee tells a true story of his experience helping a small business owner who lost his domain name.

2. Your robots.txt file has banished search engines from your site
This is one of those invisible problems that can kill your site with regard to rankings. To make matters worse, it can go on for months without anyone knowing there is a problem. I don't want to sound like a doomsayer, but don't assume your company is immune to this problem. We've even seen it happen to widely known and publicly traded businesses with a dedicated staff of IT experts.

There are numerous ways to accidentally alter your robots.txt file. Most often it occurs after a site update when the IT department rolls up files from a staging server to a live server. In these instances, the robots.txt file from the staging server is accidentally included in the upload. (A staging server is a separate server where new or revised web pages are tested prior to uploading to the live server. This server is generally excluded from search engine indexing on purpose to avoid duplicate content issues.)

If your robots.txt excludes your site from being indexed, your site will drop from the engines' databases. You may think you did something wrong that got your site penalized or banned, but it's actually your robots.txt file telling the engines to go away.

How do you tell what's in your robots.txt file? The easiest way to view your robots.txt is to go to a browser and type your domain name followed by a slash then "robots.txt." It will look something like this in the address bar: http://www.mydomainname.com/robots.txt.

If you get a 404-error page, don't panic. The robots.txt file is actually an optional file. It is recommended by most engines but not required.

You have a problem if your robots.txt file says:

User-agent: *
Disallow: /

A robots.txt file that contains the above text is excluding ALL robots - including search engine robots - from indexing the ENTIRE site. If you have certain sections you don't want indexed by the engines (such as an advertising section or your log files), you can selectively disallow them. A robots.txt that disallows the ads and logs directories would be written like this:

User-agent: *
Disallow: /ads
Disallow: /logs

The disallow shown above only keeps the robots from indexing the directories listed. Some webmasters falsely think that if they disallow a directory in the robots.txt file that it protects the area from prying eyes. The robots.txt file only tells robots what to do, not people (and the standard is voluntary so only "polite" robots follow it). If certain files are confidential and you don't want them seen by other people or competitors, they should be password protected.

At SES New York 2007, Danny Sullivan hosted a robots.txt summit where search engine representatives talked about the frequent misuse of the file and how webmasters accidentally excluded their sites from indexing. To learn more about the robots.txt file see http://www.robotstxt.org/.

Here's something good to know: If you are using Google Webmaster Tools, Google will indicate which URLs are being restricted from indexing.
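If you want to automate this check, a short sketch using Python's standard urllib.robotparser module works; www.mydomainname.com is, again, just a placeholder:

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("http://www.mydomainname.com/robots.txt")
rp.read()

# False for "/" means the file is telling all robots to go away.
print("Home page crawlable:", rp.can_fetch("*", "/"))
print("Ads directory crawlable:", rp.can_fetch("*", "/ads/banner.html"))

Scheduling a script like this is a cheap insurance policy against a staging-server robots.txt sneaking onto the live site.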

3. Your site is scaring your customers with expired SSL certificate notices
If you're a small business conducting ecommerce, you're probably familiar with Secure Sockets Layer (SSL) certificates. These certificates enable encryption of sensitive information during online transactions. When the certificate is up to date, the technology protects your web site and lets customers know they can trust you. Sadly, many times the person who originally set up the certificate moves on. Because their email no longer works, the renewal notices fall by the wayside. So you plod along unaware of the lurking danger. Sales plummet and no one can determine why.

Finally, someone notices the "scary security messages" that appear when someone starts the checkout process. If you're lucky, a customer will call and tell you about the problem. If you're smart, you'll have an employee periodically verify that your checkout process and SSL certificate are working properly.

To check your SSL certificate, visit a secure page on your site then double click on the padlock icon in the bottom right corner of your browser. A window will pop up showing the SSL certification details including the expiration date. If the certificate is set to expire in less than 2-3 weeks, you should begin working with your IT department or ISP to get the certificate renewed.
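That same check can be scripted. Here is a minimal sketch using Python's standard ssl module (www.mysite.com is a hypothetical host):

import socket
import ssl
import time

host = "www.mysite.com"
ctx = ssl.create_default_context()
with ctx.wrap_socket(socket.create_connection((host, 443)),
                     server_hostname=host) as s:
    cert = s.getpeercert()

# cert["notAfter"] looks like "Jun  1 12:00:00 2026 GMT".
expires = ssl.cert_time_to_seconds(cert["notAfter"])
days_left = int((expires - time.time()) // 86400)
print(host, "certificate expires in", days_left, "days")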

4. Your content management system (CMS) is limiting your search engine success
Search engine optimizers have a love-hate relationship with CMS. The CMS can make adding content to a site easy for the non-programmer, but oftentimes the system is hostile toward search engines. A CMS that doesn't allow unique titles, META tags, breadcrumbs, unique alt attributes, and other on-page optimization techniques can limit a site's success. For more details, I highly recommend you read an article by my colleague, Stephan Spencer, on search-friendly content management systems.

5. When you changed domain names, your redirects were set up improperly
Google and other search engines will treat various types of redirects differently. To ensure that the current domain inherits all the link equity the old domain has earned, verify that your site utilizes "301 permanent" redirects rather than "302 temporary" redirects. These numbers are codes that your web server sends to browsers and search engine spiders telling them how to handle the web page. If your server tells the search engine spiders that the new location is only temporary, the search engines will ignore the redirection and not transfer the existing link equity to the new site.

To properly implement this, you need to ensure that every page of the old site is properly redirected to the corresponding new page. If the domain name changed but the site architecture did not, then simply redirecting the old domain to the new one is sufficient. If the page URLs changed as part of a larger redesign, ensure that every page on the old site is properly (301) redirected to a page on the new domain.

Lisa Barone over on Bruce Clay's blog wrote an excellent "non-scary" article on how to set up a 301 redirect that is easy for even the non-techie to follow. And Aaron Wall at SEObook has a detailed 301 case study on how well the different engines recognized and followed 301s on his site.
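To verify which status code your old domain actually sends, a few lines of Python are enough (the domain and path below are placeholders); http.client reports the raw status without following the redirect:

import http.client

conn = http.client.HTTPConnection("www.old-domain.com", timeout=10)
conn.request("HEAD", "/old-page.html")
resp = conn.getresponse()
# 301 plus the matching new URL means link equity should transfer;
# 302 means the engines will treat the move as temporary.
print(resp.status, resp.getheader("Location"))
conn.close()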

6. Your site is sharing an IP address with a spamming site
Many small businesses choose to use a virtual or shared hosting service rather than purchasing their own server. This arrangement is usually less expensive than dedicated hosting and meets the needs of the small business. In many cases a virtual hosting arrangement is fine, but keep in mind that the search engines pay attention to who your neighbors are on that shared server.

Some sites have the unfortunate luck of being placed on a server with sites using known spam techniques. Since your site is on the same IP address as the spammy guys, your site may be unjustly penalized by the actions of other sites. In other words, you might be a victim of guilt by association.

Even if you are running your own dedicated server, there is a small chance you'll face a similar issue. Dedicated servers are grouped into something called "Class C IP blocks." Basically, all the IP addresses are the same except for the last number. Frequently, all the sites in these situations are hosted by the same company, so, in essence, while your site might be legit, there may be 253 other sites out there besmirching your site's good name.

If you are concerned about being in a bad hosting environment, ask your ISP for the names of the other sites being hosted on your IP address (in the case of shared hosting, more than one site may be served from the same IP address). Also ask them for the domain names of the other sites that differ only in the last number of their IP address.

7. You've got the overloaded server blues
Does your site take forever to load? If your page file size is reasonable and you have a fast browser connection, the problem may not be with your site, but with the server at the hosting company.

The hosting company may have too many sites hosted on one server. They may also have you on a server with a site that is extremely active and monopolizes the server resources. The overload can result in your server timing out when a request is made from a spider. If this condition is chronic, it could result in the engine thinking your site is down. That could result in your site being dropped from the index.

Another problem with a slow loading site is that it can cost you business. Most web surfers are impatient. If they don't see your site loading within a few seconds, they leave. They don't care what the cause of the slow loading is; they simply move on.

If your ISP provides a service level agreement (SLA) regarding performance, uptime, etc. you are likely OK. Any provider that offers such a guarantee will have implemented procedures that would make triggering those SLA thresholds unlikely. If your site is consistently sluggish, however, request that your site be moved to a new server. Note that this will cause some hiccups because your IP address will change, so make sure this is the actual source of the sluggish performance before requesting the switch. Consider moving to a higher class of service or a dedicated server. If your web site is a core part of your business, pay the marginal costs needed to improve the service.

8. Your site is broken on Firefox
During the "browser wars" of the late 1990s, it was important to check your site under multiple browsers (including browsers for Macs and Unix) because many times a site would "break" or render oddly under different browsers. As Internet Explorer (IE) achieved dominance, many IE-centric web designers thought of browser compatibility as an issue of the past because IE was very forgiving. IE would properly display even sloppily coded sites

With the enthusiastic spread of the Firefox browser, the compatibility issue has reared its head again. Firefox is a more W3C-standards compliant browser so sites that look great under IE sometimes break under Firefox. Pages that use proprietary tags that only work under one browser (usually IE) or pages that contain syntax errors (especially unclosed tags or strange nesting) can cause a web page to render poorly in Firefox as well as Opera or other standards-compliant browsers.

In June 2007 a OneStat study on browser use reported that Firefox commanded a 19.65% share of the US browser market and 12.72% globally. If you're in a high-tech industry, your percentage of visitors using Firefox may be even higher.

In parts of Europe, adoption of the Firefox browser is even higher, especially in Germany where the share is over 26% (that's better than 1 in 4 visitors!). The browser wars get even more interesting when you consider that the most widespread browser in China is Maxthon, a browser of which most Westerners have never heard.

What this means to the small business webmaster is that you can't ignore browser compatibility any more, or you may be giving 20% of your visitors a bad experience.

----------------------------------------------------------------------------------------------

Avoid 7 Mistakes That Make Your Site Ignored by Search Engines & Your Visitors

A small business owner may not know that his/her webmaster can ruin the web site, intentionally or not, with simple mistakes that cause problems for search marketing.

Christine Churchill disclosed the 7 common mistakes. In her article below, she explains clearly what they are and why they can trip you up, and proposes how to deal with them.

1. You suffer from relative linking issues
Every webmaster knows it's good insurance to regularly check your site for broken links. However, one of the most common types of broken links is self-imposed and thus preventable. What is it? It sounds woefully easy, but using the right kind of relative links can save your visitors a lot of frustration.

What's a relative link? An example of a relative link is "staff.html"—in an anchor tag this would appear as:

<a href="staff.html">Staff</a>

Many times designers will use a relative link because of the ease of migrating from the design site to the live site. The problem with this simple type of relative link is that it can break as your site grows in complexity and you develop a hierarchical directory structure. The relative link gets its name because it is "relative" to the current directory. If you happen to move the content to a different directory, you can end up with a 404 "file not found" error because the relative links point to pages that no longer fall under the current directory.

Another option is an absolute link reference that uses the full http address in the domain name. An example is the Search Engine Land staff page (http://www.searchengineland.com/staff.html).

In an anchor tag an absolute link to this page might appear as:

<*a href="http://www.searchengineland.com/staff.html">Staff

A number of sites have gone to the use of absolute links due to being scraped or the fear of hijacking. The downside of this practice is that many companies use a staging server to test sites prior to uploading them to the open web. The reference to an actual domain complicates the testing process when the site is in a developmental environment. Consequently, many designers prefer to use a relative linking structure.

Here's a typical situation when sites grow. Small businesses often start with small, flat sites. That is to say, every page links from the home page, but goes no deeper. Over time the webmaster creates subdirectories to logically group files that contain, for instance, new product lines. The webmaster cuts and pastes all his/her previous footers and other navigation (that used relative links) into the new subdirectory. However, since the pages do not exist in the new subdirectory, errors begin popping up all over the place. What a mess!

Enter the "absolute relative link," sometimes called the server-relative or domain-relative link. This is the hybrid version of the absolute and relative link. (And no, I didn't make up the name. I read about them after being burned by plain relative links back in the late 1990's. It was a lesson I never forgot!) An absolute relative link includes an initial backslash to tell the server to start from the root directory and follow this path to the page. An example in an anchor tag would be:

<*a href="http://www.blogger.com/products/fun-product.html">Fun Product

The absolute relative (server-relative) link offers a flexible solution that will make your web designer happy, because s/he can test the site on a staging server and migrate it to a live server without domain name problems. It also allows him/her to use a standard navigation scheme that works even when the links are referenced from different subdirectories.
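The difference is easy to demonstrate. This little Python sketch (the paths are hypothetical) resolves the same two link styles against a page in the root and a page in a subdirectory, mirroring the breakage described above:

from urllib.parse import urljoin

root_page = "http://www.mysite.com/index.html"
sub_page = "http://www.mysite.com/products/index.html"

# Plain relative link: resolves differently depending on the directory.
print(urljoin(root_page, "staff.html"))  # http://www.mysite.com/staff.html
print(urljoin(sub_page, "staff.html"))   # http://www.mysite.com/products/staff.html (404!)

# Server-relative link: always starts from the root.
print(urljoin(sub_page, "/staff.html"))  # http://www.mysite.com/staff.html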

The good news is that it is EASY to check your links. There are numerous automated link checkers, including Xenu's Link Sleuth, that can scan your site and report bad links to you. Remember that links are the pathways engines use to crawl your site. You don't want the spiders and bots to crash into a dead end at your main navigation, so use a link checker.
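For the curious, a toy version of such a link checker fits in a page of Python; this is a sketch against a hypothetical URL, not a replacement for a real tool:

import http.client
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

base = "http://www.mysite.com/"
page = urllib.request.urlopen(base, timeout=10).read().decode("utf-8", "replace")
collector = LinkCollector()
collector.feed(page)

for link in collector.links:
    url = urljoin(base, link)              # resolves relative links too
    parts = urlparse(url)
    if parts.scheme != "http":
        continue                           # this toy skips mailto:, https:, etc.
    conn = http.client.HTTPConnection(parts.netloc, timeout=10)
    conn.request("HEAD", parts.path or "/")
    print(conn.getresponse().status, url)  # 404s are your broken links
    conn.close()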

2. Spider traps are keeping your site from being properly indexed
Your marketing department "ooo'd" and "ahh'd" over the web designer's concepts so you paid big bucks to have it coded. Now you have a drop-dead-gorgeous site that spiders can't crawl easily… if at all.

Google's technical guidelines tell us, "...if JavaScript, cookies, session IDs, frames, DHTML, or Flash keep you from seeing all of your site in a text browser, then search engine spiders may have trouble crawling your site." Sadly, most of these problems could have been remedied before they were a problem if the webmaster knew they were issues.

As a small business webmaster with an "un-crawlable" site, you have several options. You can arm yourself with knowledge about spider traps and look for them yourself, or you can bring a search engine optimization (SEO) consultant on board to review the design and techniques used on the site. A knowledgeable optimizer brought on during the design phase can advise you on how to create a beautiful site—and even keep some flash—while ensuring the site is easy to crawl. You don't have to give up glitz to do well in the engines; you just have to be careful how the site is constructed.

A fast way to view your site in the way a search engine would is to download the Lynx text web browser and run your site through it. This is a free Open Source piece of software originally developed at the University of Kansas that lets you see your page as a search engine might read it: in text format.

Google's Webmaster Guidelines are a great resource for learning more about designing crawler-friendly sites.

3. Previous SEO firms used shady tactics that cast your site in a bad light
Shady tactics come in many forms. They become a problem when you go past visible text and start embedding keywords in the code at every opportunity. Invisible text, excessive keyword stuffing, doorway pages, cloaking practices or any number of other shady tactics can cause the engines to put your site in Search Engine Hell. There are a number of "tricks" that ill-informed or unscrupulous SEOs and webmasters might use in an attempt to coerce the search engines to rank a site higher than it otherwise deserves. The problem with these tactics is they are short-lived. As search engines continue to improve the sophistication of their indexing and ranking algorithms, more black-hat tactics will be detected and dealt with (usually by banning your site from the index).

Note that while many of these tactics involve manipulating the content in some shady way (cloaking, doorway pages and hidden text being the worst offenders), bad linking tactics can also cause your site to be viewed as shady by the search engines. Ever used a cheap linking company to build links? Well that's likely what you got: cheap links! While the initial monetary cost was low, now you have to pay the real price. Oftentimes, companies like these link your site to link farms and bad neighborhoods. Or worse yet, you get ninety percent of your links from comment spam.

If you're unsure of your link status, sign up for Webmaster Central and let Google tell you the known links it has indexed. If you see that all your links are the same type (e.g. all are reciprocal or are all from spammy sites), you have a problem.

If you have already been caught by Google and been banned from their index, you should take a hard look at your site to determine what the cause might be. Remove the offending tactic then humbly request re-inclusion.

Want more information on other tactics that can get you in trouble? Check out Google's guidelines.

4. You have canonical domain name problems
Does an engine see "www.mysite.com" and "mysite.com" as separate sites? What are the downsides of ignoring this problem? If you don't redirect one version of the site to the other, Google and other engines will see the two as separate sites. Yep—that means you could have major link-splitting and duplicate content problems.

The most common fix to this issue is to create a 301 permanent redirect from one to the other. Also, Google's Webmaster Tools now allow you to specify to Google which version you prefer. This will help with canonical domain issues in Google, but not other engines.
For instructions on creating a 301 on an Apache server, see the Apache Web Server documentation. IIS redirects are handled differently.

"What about using the DNS CNAME entry," you may ask? It is acceptable to set up a DNS CNAME entry to point alternate names back to the primary name. This would be done as follows:

mysite.com        A       aaa.bbb.ccc.ddd   (direct IP address)
www.mysite.com    CNAME   mysite.com

This tells everyone (not just browsers and search engine spiders) that www.mysite.com is an alias for the preferred mysite.com.

Is it possible to handle canonical name issues this way? Yes. Does it work with the search engines? Yes, if properly implemented. Is it recommended? No, because it is easy to make implementation errors that result in an infinite loop on the DNS resolution. In addition, this involves getting the IT department or the ISP staff involved, which may take longer to implement than a 301 redirection rule.

5. You're haunted by an unreliable server
Going cheap on hosting often equates to unreliability. Look for providers that include telephone support (not just email) and have extended operating hours. You'll also want to check their time zone, especially if you live on the coasts.

Consider setting up an outside service to monitor your server so you know how your ISP is really doing. While most brag about 99.99% uptime, you may find they are stretching the truth. There are several low-cost server-monitoring services out there (Red Alert and Server Check Pro, for example) that can ping your server every 15 minutes and let you know when and if it is down. This type of service can also detect slow-performing servers.
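A bare-bones version of such a probe is easy to script yourself and run from cron every 15 minutes; this Python sketch (hypothetical URL and threshold) just records status and response time:

import time
import urllib.request

URL = "http://www.mysite.com/"
start = time.time()
try:
    resp = urllib.request.urlopen(URL, timeout=15)
    elapsed = time.time() - start
    print(time.ctime(), resp.status, f"{elapsed:.2f}s")
    if elapsed > 5:
        print("WARNING: server is responding slowly")
except Exception as exc:
    print(time.ctime(), "DOWN:", exc)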

Some have chosen to use a server checker on the hosting company's site in order to decide which host to use. You should be aware that the performance of the hosting company's site may differ from the sites they host.

In addition to affecting the customer's experience on the site, a poorly performing server can also have a negative effect on your site's search engine performance. If the site is down or slow in responding, the search engine spider may get tired of waiting and decide to de-list that page.

6. All your web content has been stolen
Your web site is publicly accessible and its content is available electronically. These two facts make it simple for an unscrupulous webmaster to snag a copy of your site's content and clone it on the web. Automated scrapers are stealing content daily, making it possible for others to use your content for their own gain.

The easiest way to detect when this has happened is to grab a long text snippet (perhaps eight to 10 words or so) from your site and drop it into a search box, placing quotes around it to indicate an exact match search for that phrase. Assuming the content is original and unique, the only page that shows up should be your own. There are also tools like Copyscape.com that can check for violations of your content rights on a regular basis.
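If you want to make this spot check a habit, a tiny Python helper (hypothetical URL; the crude tag stripping is an assumption for illustration) can pull a ten-word snippet ready to paste into a search box:

import re
import urllib.request

page = urllib.request.urlopen("http://www.mysite.com/article.html",
                              timeout=10).read().decode("utf-8", "replace")
text = re.sub(r"<[^>]+>", " ", page)   # crude tag stripping
words = text.split()
snippet = " ".join(words[50:60])       # ten words from the body
print(f'"{snippet}"')                  # quoted for an exact-match search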

If another site has stolen and published your content, Google may think your site contains the duplicate content and the offending site has the original. At Google Webmaster Central Blog, Adam Lasnik wrote a great piece called Deftly Dealing With Duplicate Content.

If you find yourself the victim of stolen content, Google has spelled out the steps to filing a copyright infringement complaint. A great way to prove your ownership of the copy is to use the Wayback Machine. This free tool will allow you to show how long the content has been on your website.

7. Bogons can eat your web site
No, this isn't the name of a new monster flick. This is a bizarre and unusual situation involving Internet traffic from IP addresses that are not currently assigned to ANY ISP. Since these lists are manually maintained and can lag behind current legitimate assignments, search engines may be unable to access your site through no fault of your own. If you recently changed your hosting to a newly commissioned data center or if your ISP recently assigned you an IP from a newly commissioned block of addresses, you should see this hosting issues article for the details. This unusual case falls into the fact-is-stranger-than-fiction category, but since we witnessed at least one account of it firsthand, we know it can happen.

Brand Marketers: The Wiki

"If you manage a brand of any importance, it is highly likely that the Wikipedia page pertaining to your brand is in the top search results at all the major search engines...

Brand websites exist to build and reinforce a carefully crafted brand message, create a vivid brand personality, educate consumers about product benefits, build brand affinity, increase consumption and facilitate the creation of a brand community. Wikipedia pages have the potential to screw up the message and muddy the brand image that firms meticulously try to construct online," stated Mr. Sandeep Krishnamurthy.

Part of the above message made me realize that the wiki, one of the most potent social media, can affect brand marketing both positively and negatively. Wikipedia can indirectly help your PR and marketing if you join this open community, but your brand can also suffer if some unexpected negative event occurs. That is, a rumor or bad reputation that damages your brand image can spread widely overnight, because Wikipedia is a uniquely open community that is free for the public to read, write and edit. That is what we should be aware of.

Here are some of his suggestions:
  1. Make a list of Wikipedia pages that affect you.
  2. Locate the RSS feed for each page.
  3. Subscribe to it using Google Reader.


First, make a list of every Wikipedia page that affects you. This is non-trivial. If you blow this, the rest will not make much sense. Include the following:

  1. The main page for your brand.
  2. The main page for sub-brands.
  3. The main page for your competitors.
  4. Specialty pages focusing on your brand (e.g., McDonald's legal issues).
Second, locate the RSS feed for each page. To do this, follow these steps:
  1. Click on the History tab of the Wikipedia page you care about.
  2. On the left sidebar, you should see "RSS Atom."
  3. Click on RSS.
  4. Copy and paste the URL into Google Reader.

Third, in Google Reader, create a new folder that includes all the pages you want to follow. Refresh.
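If Google Reader is not your thing, the same watch can be scripted. Here is a sketch using the third-party feedparser package; the page titles are placeholders, and it relies on Wikipedia serving each page's edit history as RSS via index.php?title=PAGE&action=history&feed=rss:

import feedparser  # pip install feedparser

PAGES = ["Example_brand", "Example_brand_legal_issues"]  # hypothetical titles
for title in PAGES:
    url = ("https://en.wikipedia.org/w/index.php"
           f"?title={title}&action=history&feed=rss")
    feed = feedparser.parse(url)
    for entry in feed.entries[:3]:  # three most recent edits
        print(title, "|", entry.get("title"), "|", entry.get("updated"))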

How to react to negative info. Before you do anything, remember the following:

  1. Wikipedia pages are fluid and can change many times a day.
  2. Wikipedia does not have a policy about how firms can participate.
  3. Page vandalism is to be expected for short periods of time.

There is no single "webmaster" that you can contact. Make sure you read the detailed policies of Wikipedia. He recommends these two links as starting points: The Five Pillars and List of Policies. If you find something objectionable or inaccurate on Wikipedia, you can do one of the following:

  1. You can participate in the forums related to the page and voice your concern.
  2. You can contact one of the previous editors of the page.

Wikipedia does not condone copyright violations or libel. If you notice serious violations, you can contact Wikipedia at info-en-q@wikimedia.org.

If Wikipedia does not respond to your emails, you can consider a press release or something low key. Do not simply go in and do massive deletions. If you are detected, the negative PR will not be worth it.

I think these methods can help you reduce a negative image and strengthen your brand if a rumour appears on a social content site like Wikipedia.

How to deal with the traffic without Google

What if Google's popularity dropped one day and the traffic moved to other search engines like Yahoo, MSN, Ask, or even the newer Snap? Have you ever pondered a plan B to deal with this? Here are some suggestions by Paul J. Bruemmer.


Before implementing Plan B, you'll want to review your site objectives and SEO plans to ensure maximum traffic from Yahoo, Microsoft and Ask. Your strategy will vary depending on your site category. In "The Marketer's Common Sense Guide to E-Metrics" (PDF), Bryan Eisenberg and Jim Novo define four types of websites: ecommerce, content, lead-generation and self-service. We might now add social networking sites. The objectives for these sites will vary.

  1. Ecommerce sites want to increase sales while decreasing marketing expenses.
  2. Content sites want to increase readership, visitor interest and site stickiness.
  3. Lead-generation sites want to increase segment leads.
  4. Self-service sites want to increase customer satisfaction while decreasing customer support costs.
  5. Social-networking sites want to increase membership numbers and the amount of member interactivity.

The above objectives require different optimization strategies and different measurement metrics. That said, consider starting by identifying your business goals, performing extensive keyword research and some basic predictive ROI modeling; then you'll be prepared to take a look at executing a search marketing plan without Google.

Next, it is important to understand why businesses use search engines, a.k.a. interactive marketing, within their standard five-year marketing plans. Essentially there are five reasons:

1. Create brand awareness
2. Sell products/services
3. Generate leads
4. Drive traffic
5. Provide information
To accomplish your business goals, select one of the five above and use the appropriate organic and paid search marketing strategies and tactics at Yahoo, MSN Live Search and Ask.

Know your audience to expand your market

If someone asks, "Tell me about your audience," what response does your research support? The best responses come from those who have spent time and money researching their audience. It shows whether you know your audience well enough before marketing to them. Here is part of an article by the NextStage CEO that tells you how to keep your audience engaged with your website.

You can't market successfully to anybody until you know who they are, what they think, how they think, what they respond to and what they'll respond with. The smaller your target audience, the more you must design specifically for it. Large audiences are easy to design for: just keep it simple!

The first message must be instructions on how to build a receiver. The first message is not from you to your market; it's from your market to you, and it says: "This is what will get our attention, so this is what has to be in your marketing message."

Analyzing people is interesting. It would be really great if you could track visitors' logical processes, cognitive processes, decision styles, memorization methods, emotional cues and other traits (age, gender, buying styles, best branding strategies, impact ratios, touch factors, education level, income level, etc.).

Knowing your audience in depth and detail is a required first step. The more richly detailed and complete your knowledge of your audience, the more you can do to build a receiver -- a website, video, print ad, et cetera -- that they will naturally and effortlessly interact with. There are two crucial elements to designing that receiver.

The first -- rich audience personae -- is incredibly straightforward.
The second crucial element is Audience Focused Optimization.

That is "Quantifying and Optimizing the Human Side of Online Marketing." For example, knowing how the audience thinks enabled design modifications that kept visitors engaged and returning through the redesign process. The end result of these efforts was demonstrated by visitor comments and emails.

The simple fact that Jim Sterne and his crew let visitors know ahead of time when a redesign would be online resulted in three major outcomes:

> The day the redesign went live, the site experienced huge spikes in traffic, levels of interest and navigation.

> People emailed that they'd been on the Emetrics Summit site and noted the update announcement.

> Other emails demonstrated that people had returned specifically to discover what had changed since their last visit. (The wording in that last line is intentional. People didn't return to "see," they returned to "discover." They were on the site thinking, evaluating, analyzing, interpreting. In other words, they were engaged.)

These are some generalized suggestions for making simple modifications to existing sites; they are also easily implemented directives for sites in the making.

Critical:

> Use fewer menu items
> Rename remaining menu items so that they are questions that can be answered
> Make the menu system/structure completely consistent from page to page

Important:

> Make the progression of pages tell a story so that one web page logically and thematically leads to the next web page

> Make the menu system either consistently horizontal or consistently vertical (remove top-of-page city menu and replace it with graphic links that already exist within the banner image)

Desirable:

> Create a breadcrumb trail as a navigation aide

Source/References: iMediaConnection

First post

Finally, I managed to install the 3-column theme created by Thur. It is really cool.