A webring in general is a collection of websites from around the Internet joined together in a circular structure. When used to improve search engine rankings, webrings can be considered a search engine optimization technique.
To be a part of the webring, each site has a common navigation bar; it contains links to the previous and next site. By clicking next (or previous) repeatedly, the surfer will eventually reach the site they started at; this is the origin of the term webring. However, the click-through route around the ring is usually supplemented by a central site with links to all member-sites; this prevents the ring from breaking completely if a member site goes offline.
Webrings are usually organized around a specific theme, often educational or social. A webring usually has a moderator who decides which pages to include in the ring. After approval, webmasters add their pages to the ring by 'linking in'; this requires adding the necessary HTML or JavaScript to their site.
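As a rough illustration (the ring hub URL and member ID below are made up), the snippet a member pastes into their page might look something like this:

<div class="webring">
  <a href="http://ring.example.com/prev?id=42">Previous site</a> |
  <a href="http://ring.example.com/">Ring hub</a> |
  <a href="http://ring.example.com/next?id=42">Next site</a>
</div>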
Tuesday, October 16, 2007
Web portal
A web portal is a site that functions as a point of access to information on the World Wide Web. Portals present information from diverse sources in a unified way. Popular portals include MSN, Yahoo, and AOL. Aside from a standard search engine, web portals offer other services such as news, stock prices, infotainment and various other features. Portals provide a way for enterprises to offer a consistent look and feel, with access control and procedures, across multiple applications that would otherwise have been entirely separate.
A personal portal is a site on the World Wide Web that typically provides personalized capabilities to its visitors, providing a pathway to other content. It is designed to use distributed applications and different numbers and types of middleware and hardware to provide services from a number of different sources. In addition, business portals are designed to support collaboration in workplaces. A further business-driven requirement of portals is that the content be able to work on multiple platforms such as personal computers, personal digital assistants (PDAs), and cell phones.
Spamdexing
Spamdexing is any of various methods to manipulate the relevancy or prominence of resources indexed by a search engine, usually in a manner inconsistent with the purpose of the indexing system.[1] It is a form of search engine optimization. Search engines use a variety of algorithms to determine relevancy ranking. Some of these include determining whether the search term appears in the META keywords tag, others whether the search term appears in the body text or URL of a web page. Many search engines check for instances of spamdexing and will remove suspect pages from their indexes.
Spam in blogs
Spam in blogs (also called simply blog spam or comment spam) is a form of spamdexing. It is done by automatically posting random comments or promoting commercial services to blogs, wikis, guestbooks, or other publicly accessible online discussion boards. Any web application that accepts and displays hyperlinks submitted by visitors may be a target.
Adding links that point to the spammer's web site artificially increases the site's search engine ranking. An increased ranking often results in the spammer's commercial site being listed ahead of other sites for certain searches, increasing the number of potential visitors and paying customers.
Scraper site
A scraper site is a website that copies all of its content from other websites using web scraping. No part of a scraper site is original. A search engine is not a scraper site: sites such as Yahoo and Google gather content from other websites and index it so that the index can be searched with keywords. Search engines then display snippets of the original site content, which they have scraped, in response to a search.
Web scraping
Web scraping generically describes any of various means to extract content from a website over HTTP for the purpose of transforming that content into another format suitable for use in another context.
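As a minimal sketch of the idea (the URL is a placeholder and error handling is omitted), a scraper written in PHP might fetch a page over HTTP and pull out its headings for reuse in another format:

<?php
// Fetch a remote page over HTTP and extract its headings (illustrative only).
$html = file_get_contents('http://www.example.com/');
$doc = new DOMDocument();
@$doc->loadHTML($html); // suppress warnings caused by sloppy real-world markup
foreach (array('h1', 'h2', 'h3') as $tag) {
    foreach ($doc->getElementsByTagName($tag) as $heading) {
        echo trim($heading->textContent) . "\n";
    }
}
?>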
Google bomb
A Google bomb (also referred to as a 'link bomb') is Internet slang for a certain kind of attempt to influence the ranking of a given page in results returned by the Google search engine, often with humorous or political intentions.[1] Because of the way that Google's algorithm works, a page will be ranked higher if the sites that link to that page use consistent anchor text. A Google bomb is created if a large number of sites link to the page in this manner. Google bomb is used both as a verb and a noun. The phrase "Google bombing" was introduced to the New Oxford American Dictionary in May 2005.[2] Google bombing is closely related to spamdexing, the practice of deliberately modifying HTML pages to increase the chance of their being placed close to the beginning of search engine results, or to influence the category to which the page is assigned in a misleading or dishonest manner.
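In concrete terms, the anchor text is just the clickable text of an ordinary link; if many sites all published a line like the following (the URL and phrase are hypothetical), Google could begin ranking that page for the phrase:

<a href="http://www.example.com/target-page/">world's best widgets</a>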
The term Googlewashing was coined in 2003 to describe the use of media manipulation to change the perception of a term, or push out competition from search engine results pages (SERPs).[3]
Link doping
Link doping refers to the practice and effects of embedding a large number of gratuitous hyperlinks on a website in exchange for return links. Mainly used when describing weblogs (or blogs), link doping usually implies that a person hyperlinks to sites he or she has never visited in return for a place on the website's blogroll for the sole purpose of inflating the apparent popularity of his or her website. Since the PageRank algorithms of many web directories and search engines rely on the number of hyperlinks to a website to determine its importance or influence, link doping can result in a high placement or ranking for the offending website (see also Google bomb or Google wash).
Link campaign
A link campaign is a form of online marketing and also a method for search engine optimization. A business seeking to increase the number of visitors to its web site can ask its strategic partners, professional organizations, chambers of commerce, suppliers, and customers to add links from their web sites. A link campaign may involve mutual links back and forth between related sites, but it does not have to involve reciprocal links.
Increasing the number of links to a site has two beneficial effects:
* Search engines such as Google judge the importance of a site by the quality, relevance and number of other sites that link to it.
* The additional links result in visitors moving from the linking site to the target site.
Keyword Stuffing
Where to Place Keywords
It is important to stuff your keyword into the title, the headings, the image alt attributes, the hyperlinks on the page, the hyperlinks pointing to the page, and your general keyword-rich text.
How to: Keyword Stuffing
General techniques for keyword stuffing are invisible text and the hidden input tag; back in the day, people would also duplicate tags or repeat the same word over and over in the meta keywords tag (illustrated below).
Some people also repeat keywords so often in the visible page copy that the page reads horribly.
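Purely as an illustration of the tricks named above (not a recommendation, and the phrase is a placeholder), the classic variants look like this:

<meta name="keywords" content="cheap widgets, cheap widgets, cheap widgets, buy cheap widgets">
<input type="hidden" value="cheap widgets cheap widgets cheap widgets">
<p style="color:#ffffff; background-color:#ffffff">cheap widgets cheap widgets cheap widgets</p>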
Why Keyword Stuffing is Bad
If you use a keyword over and over again, the page becomes more and more narrowly targeted until its keyword density is too high. The page may then trip a spam filter or sound goofy to readers; either way, it will not convert.
Search engines such as Yahoo! actively edit their search results. If you are caught keyword stuffing by an editor or competitor, your site might get banned.
The Correct Keyword Density
Generally I do not fret much about keyword density. I make sure to get my keywords into inbound links, the beginning of the page title, the meta description, the page header and most of the subheaders, and the page content.
Just doing the above will give you a more than sufficient keyword density.
For competitive terms, page copy is not heavily weighted by most search engines either.
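If you do want a number, keyword density is simply occurrences of the phrase divided by the total word count. A quick sketch in PHP (the file name and phrase are placeholders):

<?php
// Back-of-the-envelope keyword density: phrase occurrences / total words.
function keyword_density($text, $phrase) {
    $text = strtolower(strip_tags($text));
    $total = str_word_count($text);
    $hits = substr_count($text, strtolower($phrase));
    return $total ? round(100 * $hits / $total, 2) : 0.0;
}
echo keyword_density(file_get_contents('page.html'), 'winter hats') . "%\n";
?>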
Keyword stuffing can be considered either a white hat or a black hat tactic, depending on the context of the technique and the opinion of the person judging it. While a great deal of keyword stuffing is employed to aid spamdexing, which is of little benefit to the user, keyword stuffing in certain circumstances is designed to benefit the user rather than skew results in a deceptive manner. Whether the term carries a pejorative or neutral connotation depends on whether the practice is used to pollute the results with pages of little relevance, or to direct traffic to a relevant page that would otherwise have been de-emphasized due to the search engine's inability to interpret and understand related ideas.
How to Analyze Your Log Files
"What should I look at when analyzing my website traffic?" If you manage a website, this question is likely on your mind. Especially if you've ever spent an afternoon digging through your log files trying to extract useful information.
Use these key stats to better manage and market your e-business site:
1. Number of Unique Visitors
Monitor this number to determine if the activity level on your site is increasing, decreasing or staying the same. Then use this information to measure the impact of your marketing efforts.
Remember to filter:
* Traffic created by internal visitors
* Traffic created by search engine spiders
Need help spotting spiders? Try this resource:
* Search Engine World: Spiders Crawlers and Indexers! [http://www.searchengineworld.com/spiders/]
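As a rough sketch of how these filters can be applied to a raw Apache access log (the log file name, internal addresses and bot names below are assumptions for illustration), in PHP:

<?php
// Count unique visitor IPs in an Apache combined log, skipping internal
// traffic and obvious search engine spiders (both lists are illustrative).
$internal = array('10.0.0.5', '10.0.0.6');
$bots = array('Googlebot', 'Slurp', 'msnbot');
$visitors = array();
foreach (file('access.log') as $line) {
    $ip = strtok($line, ' ');                  // first field of the combined log format
    if (in_array($ip, $internal)) continue;    // skip internal visitors
    foreach ($bots as $bot) {
        if (stripos($line, $bot) !== false) continue 2;   // skip spider hits
    }
    $visitors[$ip] = true;
}
echo count($visitors) . " unique visitors\n";
?>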
2. Home Page Click-Through Rate
Use your Top Entry Page and Single Page Access stats to see how many visitors entering through your Home page never go beyond it. Knowing this number will help you measure the effectiveness of your current Home page - and future changes to it.
You can also use your log files to track and measure the click-through rate of your other top entry and single access pages - providing you with the opportunity to enhance them accordingly.
* Learn how to calculate your click-through rate [http://www.popinteractive.com/webinsights/20020516.asp]
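As a purely hypothetical worked example: if 2,000 visits entered through your Home page last month and 800 of them were single-page accesses, then 1,200 visitors clicked through, for a Home page click-through rate of 1,200 / 2,000 = 60%.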
3. Average Length Per Visit & Page Views Per Visitor
What changes would you make to your site if you knew that your average visitor:
* Spends less than 45 seconds on any single page
* Often only visits a total of three or four pages?
Analyze your stats to determine what the actual usage patterns are for your site, and make decisions about your content accordingly.
Why so fast and so few? Visitors are extremely goal oriented and task driven when they use the Web. They will quickly scan a page to determine if it contains or links to the information they want. If scanning the page does not convince them they're on the right track they will abandon the page -- and frequently the site.
Learn more about average usage habits:
* Nielsen//NetRatings: Global Internet Index Average Usage [http://www.nielsen-netratings.com/news.jsp?section=dat_gi]
4. Least and Most Visited Pages
Monitoring these stats can help you identify usability and navigation problems with your site - and measure the impact of the changes you make to address these problems.
To see how even a small change can increase the traffic to any page, we suggest identifying a key page you would like to drive more visitors to and modifying your Home page to give the link to it more prominence. Then use your stats to monitor the impact.
5. Most Submitted Forms and Scripts
Knowing and understanding your site statistics is great, but conversion is what really counts. Tracking the number of forms submitted over time can help you determine if the work you're doing to increase traffic is actually resulting in more leads and business.
6. Top Referring Sites and URLs
Use these statistics to uncover hidden opportunities for increasing traffic to your site. One way to do this is to identify the top referring sites and see if you can negotiate better placement of the link to your site. This can be quite worthwhile if their audience is the audience you wish to target.
Referring sites may also contribute to your traffic by increasing your popularity rating with search engines. However, before you start investing time in trading links, we recommend you read this article about link popularity - and link importance:
* NetMechanic: Increase Search Engine Rank With Link Popularity [http://www.netmechanic.com/news/vol3/promo_no16.htm]
7. Top Referring Search Engines
If most of the traffic coming to your site from search engines isn't being sent from Yahoo, Google, AOL and MSN, you may need to review your search engine placement. Most e-business sites receive a majority of their search engine traffic from these four because they have the majority of search engine audience share.
* See how the search engines compare:
Search Engine Watch: Nielsen//NetRatings Search Engine Ratings [http://www.searchenginewatch.com/reports/article.php/2156451]
* Learn the essentials of search engine marketing with this guide from Danny Sullivan, editor of Search Engine Watch. [http://www.searchenginewatch.com/webmasters/]
8. Top Search Words and Phrases
Paying attention to the keywords and phrases that are and aren't included in your site statistics can pay off. One of our clients had a mention of "winter hats" on his site but did not realize how many people were searching for "winter hats" until we helped him analyze his statistics. This information resulted in the client modifying his site to better serve and sell to his audience.
In the Spring, we did not see "mesh hats" - one of his important products - in his keyword statistics. As a result, we were able to optimize his site for "mesh hats" right in time for Summer. "Mesh hats" are currently one of his best performing products -- and keywords.
9. Errors
Monitoring your error messages is clearly important as it can help you identify problems in your code or on your server.
Use these references to translate your error codes:
* http://www.cybraryn.com/tools/httpstat.htm [http://www.cybraryn.com/tools/httpstat.htm]
[Editor's Note: This link is no longer available.]
* HTTP Status Codes from MSDN[http://msdn.microsoft.com/library/shared/deeptree/bot/bot.asp?dtcnfg=/library/deeptreeconfig.xml]
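As a quick reference, the status codes you are most likely to see in your logs are 200 (OK), 301 (Moved Permanently), 302 (Found - a temporary redirect), 304 (Not Modified), 403 (Forbidden), 404 (Not Found) and 500 (Internal Server Error).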
10. Browsers and Platforms
Browser compatibility and backward support are ongoing issues for website managers. And, on complex sites, addressing these issues can require a significant investment of time.
Monitoring the browsers and platforms used by your visitors is key to setting and updating your minimum browser requirements.
Use these references to see how the trends on your site compare to others:
* Browser News [http://www.upsdell.com/BrowserNews/stat_trends.htm]
* TheCounter.com [http://www.thecounter.com/stats/2002/July/browser.php]
Site Analysis Resources
References
* Jupiter: U.S. Top 50 Web Properties Unique Visitor Stats [http://jupiterresearch.com/xp/jmm/press/mediaMetrixTop50.xml]
[Editor's Note: This link is no longer available.]
* WebMonkey: Log File Lowdown [http://hotwired.lycos.com/webmonkey/01/24/index4a.html?tw=e-business]
* How To Know When Google Last Indexed Your Page [http://ihelpyouservices.com/forums/showthread.php?s=&threadid=3628]
* SearchEngineWatch.com: Measuring Link Popularity [http://www.searchenginewatch.com/webmasters/article.php/2167951]
* CIO: Monday is Net’s Busiest Day [http://www2.cio.com/metrics/2002/metric377.html]
Popular Analysis Software and Web-based Tools
* WebTrends Log Analyzer (From $499) [http://www.netiq.com/products/log/default.asp]
[Editor's Note: This link and price are no longer available. The current product is now called WebTrends Analytics.]
* Urchin (From $695) [http://www.urchin.com/]
More Affordable Analysis Option:
* MyComputer.com SuperStats (From $99 per year) [http://www.mycomputer.com/index2.html/]
Higher-End Analysis Tools:
* SPSS: NetGenesis [http://www.spss.com/netgenesis/]
* Accrue Software [http://www.accrue.com]
* HitBox Outsourced Web Analytics [http://www.websidestory.com/]
The Lighter Side of the Web
For those online managers looking to combine their dedication to the Web with some summer recreational activities, we suggest a round of Mini Golf. [http://www.people.fas.harvard.edu/%7epyang/flash/miniputt.swf] [Editor's Note: We're sad to report that this diversion is no longer available.]
* Multiple Browsers - Internet Explorer, Opera, Firefox and Safari.
* Different Browser Versions
* Different Computer Platforms - Windows, Mac, and now Linux.
* Screen Size - From 800 x 600 pixels to 1024 x 768.
* HTML Errors - Mistakes that break your page.
* Browser Bugs - Little-known errors cause big problems.
Sample tool results [http://www.netmechanic.com/news/vol4/promo_no11.htm]:
* Load Time: 64.45 seconds
* Height/width problems
* HTML Check & Repair: 0 errors
* Browser Compatibility: 6 problems
* Spell Check: 27 possible errors (Español, Italiano, Português)
Monday, October 15, 2007
Page Hijack: The 302 Exploit, Redirects and Google
302 Exploit: How somebody else's page can appear instead of your page in the search engines.
By Claus Schmidt.
Abstract:
An explanation of the page hijack exploit using 302 server redirects. This exploit allows any webmaster to have his own "virtual pages" rank for terms that pages belonging to another webmaster used to rank for. Successfully employed, this technique will allow the offending webmaster ("the hijacker") to displace the pages of the "target" in the Search Engine Results Pages ("SERPS"), and hence (a) cause search engine traffic to the target website to vanish, and/or (b) further redirect traffic to any other page of choice.
--------------------------------------------------------------------------------
Published here on March 14, 2005.
Copyright: © 2005 Claus Schmidt, clsc.net
Citations (quotes, not full-text copy) are considered fair use if accompanied by author name and a link to this web site or page.
Bug Track:
2006-09-18: Status: I'm very close to declaring this issue fixed. It does not seem like this is a problem with Google any more - if I'm wrong, please notify me; you'll know where to find me. Yahoo has no problems with this. MSN status is unclear.
2006-01-04: Status: Google is attempting a new fix, being tested Q1 2006. Status for this: Unknown.
2005-08-26: Status: STILL NOT FIXED. Although the engineers at Google have recently made new attempts to fix it, it can still not be considered a solved problem.
2005-05-08: Status: STILL NOT FIXED. Google only hides the wrong URLs artificially when you search using a special syntax ("site:www.example.com"). The wrong URLs are still in the database and they still come up for regular searches.
2005-04-19: Some information from Google here from message #108 and onwards
2005-04-18: Good news: It seems Google is fixing this issue right now
2005-03-24: Added "A short description" and a new example search
2005-03-17: Added a brief section about the "meta refresh" variant of the exploit.
2005-03-16: Edited some paragraphs and added extra information for clarity, as requested by a few nice Slashdot readers. Thanks.
2005-03-15: Some minor quick edits, mostly typos.
2005-03-14: I apologize in advance for typos and such - I did not have much time to write all this.
The Google view:
These three pieces are good if you want an opinion from a Google engineer (Matt Cutts). He does not write all that I write below, but it's always nice to hear another perspective:
Url canonicalization,
The inurl operator, and
302 redirects.

Contents:
Disclaimer
What is it?
A short description
Which engines are vulnerable?
Is it deliberate wrong-doing?
What does it look like?
Example (anonymous)
Who can control your pages in the search engines?
The technical part: How it is done
302 and meta refresh - both methods can be used
What you can - and can not - do about it
Precautions against being hijacked
Precautions against becoming a hijacker
Recommended fix
You can help
Disclaimer
This exploit is published here for one reason only: to make the problem understandable and visible to as many people as possible in order to force action to be taken to prevent further abuse of this exploit. As will be shown below, this action can only be taken by the search engines themselves. Neither clsc.net nor Claus Schmidt will encourage, endorse or justify any use of this exploit. On the contrary, I (as well as the firm) strongly oppose any kind of hijacking.
What is it?
A page hijack is a technique exploiting the way search engines interpret certain commands that a web server can send to a visitor. In essence, it allows a hijacking website to replace pages belonging to target websites in the Search Engine Results Pages ("SERPs").
When a visitor searches for a term (say, foo) a hijacking webmaster can replace the pages that appear for this search with pages that (s)he controls. The new pages that the hijacking webmaster inserts into the search engine are "virtual pages", meaning that they don't exist as real pages. Technically speaking they are "server side scripts" and not pages, so the searcher is taken directly from the search engine listings to a script that the hijacker controls. The hijacked pages appear to the searcher as copies of the target pages, but with another web address ("URL") than the target pages.
Once a hijack has taken place, a malicious hijacker can redirect any visitor that clicks on the target page listing to any other page the hijacker chooses to redirect to. If this redirect is hidden from the search engine spiders, the hijack can be sustained for an indefinite period of time.
Possible abuses include: Make "adult" pages appear as e.g. CNN pages in the search engines, set up false bank frontends, false storefronts, etc. All the "usual suspects" that is.
A short description
Regarding the Search Engine Result Pages ("the SERPs"), it's not that the listed text (the "snippets") is wrong. The snippets are the right ones, and so are the page size, the headline, the SERP position, and the Google cache. The only thing that can be observed and identified as wrong in the SERPs is the URL used for the individual result.
This is what happens, in basic terms (see "The technical part: How it is done" for the full story). It's a multi-step process with several possible outcomes, sort of like this:
1. The hijacker manages to get his script listed as the official URL for another webmaster's page.
2. To Googlebot, the script points to the other webmaster's page from now on.
3. Searchers will see the right results in the SERPs, but the wrong URL will be on the hijacked listing.
4. Depending on the number of successful hijacks (or some other measure of "severity" only known to Google), the search engine traffic to the other webmaster dries up and disappears, because all his pages (not just the hijacked one(s)) are now "contaminated" and no longer show up for relevant searches.
5. Optional: The hijacker can choose to redirect the traffic from the SERPs to other places for any visitor other than Googlebot.
6. The offended webmaster can do nothing about this as long as the redirect script(s) point Googlebot to his page(s) (and Google has the script URL(s) indexed).
While step five is optional, the other steps are not. Although optional, it does indeed happen, and it is the worst case, as it can send searchers acting in good faith to misleading or even dangerous pages.
Step five is not the only case, as hijacking (as defined by "hijacking the URL of another web page in the SERPS") is damaging in the other cases as well. Not all of them will be damaging to the searcher, and not all of them will be damaging to all webmasters, but all are part of this hijacking issue. The hijack is established in step one above, regardless of later outcome.
This whole chain of events can be executed either by using a 302 redirect, a meta refresh with a zero second redirect time, or by using both in combination.
Which engines are vulnerable?
Search engines vulnerable to this exploit have been reported to include Google and MSN Search, probably others as well. The Yahoo! search engine is at the time of writing the only major one which has managed to close the hole.
Below, the emphasis will be on Google, as it is by far the largest search engine today in terms of usage - and allegedly also in terms of the number of pages indexed.
Is it deliberate wrong-doing?
I should stress that I am not a lawyer. Further, the search engines affected by this operate on a worldwide scale, and laws tend to differ a lot among countries, especially regarding the Internet.
That said, the answer is: most likely not. This is a flaw on the technical side of the search engines. Some webmasters do of course exploit this flaw, but in almost all cases I've seen it is not a deliberate attempt at hijacking. The hijacker and the target are equally innocent, as this is something that happens "internally" in the search engines, and in almost all cases the hijacker does not even know that (s)he is hijacking another page.
It is important to stress that this is a search engine flaw. It affects innocent and unknowing webmasters as they go about their normal routines, maintaining their pages and links as usual. You do not have to take steps that are in any way outside of the "normal" or "default" in order to either become hijacked or hijack others. On the contrary, page hijacks are accomplished using everyday standard procedures and techniques used by most webmasters.
What does it look like?
The Search Engine Results Pages ("SERPs") will look just like normal results to the searcher when a page hijack has occurred. To a webmaster who knows where one of his pages used to be listed, on the other hand, it will look a little different. The webmaster will be able to identify it because (s)he will see his/her page listed with a URL that does not belong to the site. The URL is the part in green text under listings in Google.
Example (anonymous)
This example is provided purely for illustration. I am not implying anything whatsoever about intent; as I specifically state, in most cases this is 100% unintentional and totally unknown to the hijacker, who becomes one only by accident. It is an error that resides within the search engines, and it is the sole fault of the search engines - not any other entity, be it webmasters, individuals, or companies of any kind. So, I have no reason to believe that what you see here is intentional, and I am in fact suggesting that the implied parties are both totally innocent.
Google search: "BBC News"
Anonymous example from Google SERPs:
BBC NEWS | UK | 'Siesta syndrome' costs UK firms
Healthier food and regular breaks are urged in an effort to stop Britain's
workplace "siesta syndrome".
r.example.tld/foo/rAndoMLettERS - 31k - Cached - Similar pages
Real URL for above page: news.bbc.co.uk/1/hi/uk/4240513.stm
By comparing the green URL with the real URL for the page you will see that they are not the same. The listing, the position in the SERPs, the excerpt from the page ("the snippet"), the headline, the cached result, as well as the document size are those of the real page. The only thing that does not belong to the real page is the URL, which is written in green text, and also linked from the headline.
NEW: This search will reveal more examples when you know what to look for:
Google search: "BBC News | UK |"
Do this: Scroll down and look for listings that look exactly like the real BBC listings, i.e. listings with a headline like this:
BBC News | subject | headline
Check that these listings do not have a BBC URL. Usually the redirect URL will have a question mark in it as well.
It is important to note that the green URL that is listed (as well as the headline link) does not go to a real page. Instead, the link goes straight to a script not controlled by the target page. So, the searcher (thinking (s)he has found relevant information) is sent directly from the search results to a script that is already in place. This script just needs a slight modification to send the searcher (any User-Agent that is not "Googlebot") in any direction the hijacker chooses, including, but not limited to, all kinds of spoofed or malicious pages.
(In the example above - if you manage to identify the real page in spite of attempts to keep it anonymous - the searcher will end up at the right page with the BBC, exactly as expected (and on the right URL as well). So, in that case there is clearly no malicious intent whatsoever, and nothing suspicious going on).
Who can control your pages in the search engines?
This is the essence of it all. In the example above, clearly the BBC controls whatever is displayed on the domain "news.bbc.co.uk", but the BBC normally does not control what is displayed on domains that it does not own. So, a mischievous webmaster controlling the "wrong URL" is free to redirect visitors to any URL of his liking once the hijack has taken place. The searcher clicking on the hijacked result (thinking that (s)he will get a news story on UK firms) might in fact end up with all kinds of completely unrelated "information" and/or offers instead.
As a side-effect, target domains can have so many pages hijacked that the whole domain starts to be flagged as "less valuable" in the search engine. This leads to domain poisoning, whereby all pages on the target domain slip into Google's "supplemental listings" and search engine traffic to the whole domain dries up and vanishes.
And here's the intriguing part: The target (the "hijacked webmaster") has absolutely no methods available to stop this once it has taken place. That's right. Once hijacked, you can not get your pages back. There are no known methods that will work.
The only certain way to get back your pages at this moment seems to be if the hijacker is kind enough to edit his/her script so that it returns a "404 Not Found" status code, and then proceeds to request removal of the script URL from Google. Note that this has to be done for each and every hijack script that points to the target page, and there can be many of them. Even locating these can be very difficult even for an experienced searcher, so it is close to impossible for the average webmaster.
The technical part: How it is done
Here is the full recipe with every step outlined. It's extremely simplified to benefit non-tech readers, and hence not 100% accurate in the finer details, but even though I really have tried to keep it simple you may want to read it twice:
1. Googlebot (the "web spider" that Google uses to harvest pages) visits a page with a redirect script. In this example it is a link that redirects to another page using a click tracker script, but it need not be so. That page is the "hijacking" page, or "offending" page.
2. This click tracker script issues a server response code "302 Found" when the link is clicked. This response code is the important part; it does not need to be caused by a click tracker script. Most webmaster tools use this response code by default, as it is standard in both ASP and PHP (a minimal sketch of such a script follows this list).
3. Googlebot indexes the content and makes a list of the links on the hijacker page (including one or more links that are really a redirect script).
4. All the links on the hijacker page are sent to a database for storage until another Googlebot is ready to spider them. At this point the connection breaks between your site and the hijacker page, so you (as webmaster) can do nothing about the following:
5. Some other Googlebot tries one of these links - this one happens to be the redirect script (Google has thousands of spiders, all are called "Googlebot").
6. It receives a "302 Found" status code and goes "yummy, here's a nice new page for me".
7. It then receives a "Location: www.your-domain.tld" header and hurries to your page to get the content.
8. It heads straight to your page without telling your server on what page it found the link it used to get there (as, obviously, it doesn't know - another Googlebot fetched it).
9. It has the URL of the redirect script (which is the link it was given, not the page that link was on), so now it indexes your content as belonging to that URL.
10. It deliberately chooses to keep the redirect URL, as the redirect script has just told it that the new location (that is: the target URL, or your web page) is just a temporary location for the content. That's what 302 means: temporary location for content.
11. Bingo, a brand new page is created (never mind that it does not exist IRL; to Googlebot it does).
12. Some other Googlebot finds your page at your right URL and indexes it.
13. When both pages arrive at the reception of the "index", they are spotted by the "duplicate filter", as it is discovered that they are identical.
14. The "duplicate filter" doesn't know that one of these pages is not a page but just a link (to a script). It has two URLs and identical content, so this is a piece of cake: let the best page win. The other disappears.
15. Optional: For mischievous webmasters only: for any visitor other than "Googlebot", make the redirect script point to any other page of their choice.
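To make step 2 concrete, here is a minimal sketch of the kind of everyday click-tracker script involved (the file name go.php and the url parameter are made up for illustration). PHP's header() function sends a "302 Found" status by default:

<?php
// go.php - hypothetical click tracker, called as go.php?url=http://www.example.com/page
$target = $_GET['url'];
// ... record the click in a log or database here ...
header('Location: ' . $target);   // PHP sends "302 Found" by default
exit;
?>

When Googlebot requests such a script, the raw answer it receives is essentially a "302 Found" status line plus a "Location:" header pointing at the target page - exactly the sequence described in steps 6 and 7 above.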
Added: There are many theories about how the last two steps (13-14) might work. One is the duplicate theory; another would be that the mass of redirects pointing to the page as being "temporary" exceeds the mass of links declaring the page as "permanent". This one does not explain which URL will win, however. There are other theories, even quite obscure ones - all seem to have problems the duplicate theory does not have. The duplicate theory is the most consistent, rational, and straightforward one I've seen so far, but only the Google engineers know exactly how this works.
Here, "best page" is key. Sometimes the target page will win; sometimes the redirect script will win. Specifically, if the PageRank (an internal Google "page popularity measure") of the target page is lower than the PageRank of the hijacking page, it's most likely that the target page will drop out of the SERPs.
However, examples of high PR pages being hijacked by script links from low PR pages have been observed as well. So, sometimes PR is not critical in order to make a hijack. One might even argue that - as the way Google works is fully automatic - if it is so "sometimes" then it has to be so "all the time". This implies that the examples we see of high PR pages hijacking low PR pages are just coincidence; PR is not the reason the hijack link wins. This, in turn, means that any page is able to hijack any other page, if the target page is not sufficiently protected (see below).
So, essentially, by doing the right thing (interpreting a 302 as per the RFC), the search engine (in the example, Google) allows another webmaster to convince its web spider that your website is nothing but a temporary holding place for content.
Further, this leads to creation of pages in the search engine index that are not real pages. And, if you are the target, you can do nothing about it.
302 and meta refresh - both methods can be used
The method involving a 302 redirect is not the only one that can be used to perform a malicious hijack. Another just as common webmaster tool is also able to hijack a page in the search engine results: The "meta refresh". This is done by inserting the following piece of code in a standard static HTML page:
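A zero-second meta refresh of the kind described takes roughly this form (the destination URL is a placeholder):

<meta http-equiv="refresh" content="0;url=http://www.example.com/other-page.html">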
The effect of this is exactly as with the 302. To be sure, some hijackers have been observed to employ both a 302 redirect and a meta redirect in the 302 response generated by the Apache server. This is not the default Apache setting, as normally the 302 response will include a standard hyperlink in the HTML part of the response (as specified in the RFC).
The casual reader might think "a standard HTML page can't be that dangerous", but that's a false assumption. A server can be configured to treat any kind of file as a script, even if it has a ".html" extension. So, this method has the exact same possibilities for abuse, it's only a little bit more sophisticated.
What you can - and can not - do about it
Note item #4 in the list above: at a very early stage, the connection between your page and the hijacking page simply breaks. This means that you can not put a script on your page that detects whether this is taking place. You can not "tell Googlebot" that your URL is the right URL for your page either.
Here are some common misconceptions. The first thoughts of technically skilled webmasters will be along the lines of "banning" something, i.e. detecting the hijack by means of some kind of script and then performing some kind of action. Let's clear up the misunderstandings first:
You can't ban 302 referrers as such
Why? Because your server will never know that a 302 is used for reaching it. This information is never passed to your server, so you can't instruct your server to react to it.
You can't ban a "go.php?someURL" redirect script
Why? Because your server will never know that a "go.php?someURL" redirect script is used for reaching it. This information is never passed to your server, so you can't instruct your server to react to it.
Even if you could, it would have no effect with Google
Why? Because Googlebot does not carry a referrer with it when it spiders, so you don't know where it's been before it visited you. As already mentioned, Googlebot could have seen a link to your page a lot of places, so it can't "just pick one". Visits by Googlebot have no referrers, so you can't tell Googlebot that one link that points to your site is good while another is bad.
You CAN ban click through from the page holding the 302 script - but it's no good
Yes you can - but this will only hit legitimate traffic, meaning that surfers clicking from the redirect URL will not be able to view your page. It also means that you will have to maintain an ever-increasing list of individual pages linking to your site. For Googlebot (and any other SE spider) those links will still work, as they pass on no referrer. So, if you do this Googlebot will never know it.
You CAN request removal of URLs from Google's index in some cases
This is definitely not for the faint of heart. I will not recommend it, only note that some webmasters seem to have had success with it. If you feel it's not for you, then don't do it. The point here is that you as webmaster could try to get the redirect script deleted from Google.
Google does accept requests for removal, as long as the page you wish to remove has one of these three properties:
It returns a "404 Not Found" status code (or, perhaps even a "410 Gone" status code)
It has this meta tag:
It is disallowed in the "robots.txt" file of the domain it belongs to
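The meta tag referred to in the second point is, presumably, the robots exclusion tag, along these lines:

<meta name="robots" content="noindex">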
Only the first can be influenced by webmasters that do not control the redirect script, and the way to do it will not be appealing to all. Simply, you have to make sure that the target page returns a 404, which means that the target page must be unavailable (with sufficient technical skills you can do this so that it only returns a 404 if there is no referrer). Then you have to request removal of the redirect script URL, i.e. not the URL of the target page. Use extreme caution: If you request that the target page should be removed while it returns a 404 error, then it will be removed from Google's index. You don't want to remove your own page, only the redirect script.
After the request is submitted, Google will spider the URL to examine whether the requirements are met. When Googlebot has seen your pages via the redirect script and has gotten a 404 error, you can put your page back up.
Precautions against being hijacked
I have tracked this and related problems with the search engines literally for years. If there were something that you could easily do to fix it as a webmaster, I would have published it a long time ago. That said, the points listed below will most likely make your pages harder to hijack. I can not and will not promise immunity, though, and I specifically don't want to spread false hopes by promising that these will help you once a hijack has already taken place. On the other hand, once hijacked, you will lose nothing by trying them.
Always redirect your "non-www" domain (example.com) to the www version (www.example.com) - or the other way round (I personally prefer non-www domains, but that's just because it appeals to my personal sense of convenience). The direction is not important. It is important that you do it with a 301 redirect and not a 302, as the 302 is the one leading to duplicate pages. If you use the Apache web server, the way to do this is to insert the following in your root ".htaccess" file:
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.example\.com
RewriteRule (.*) http://www.example.com/$1 [R=301,L]

Or, for www-to-non-www redirection, use this syntax:

RewriteEngine On
RewriteCond %{HTTP_HOST} !^example\.com
RewriteRule (.*) http://example.com/$1 [R=301,L]

Always use absolute internal linking on your web site (i.e. include your full domain name in links that are pointing from one page of your site to another page on your site).
Include a bit of always updated content on your pages (e.g. a timestamp, a random quote, a page counter, or whatever)
Use the meta tag on all your pages
Just like redirecting the non-www version of your domain to the www version, you can make all your pages "confirm their URL artificially" by inserting a 301 redirect from any URL to the exact same URL, and then serve a "200 OK" status code, as usual. This is not trivial, as it will easily throw your server into a loop.
Precautions against becoming a hijacker
Of course you don't want to become a page hijacker by accident. The precautions you can take are:
If you use 302 redirects in any scripts, convert them to 301 redirects instead (see the sketch after this list).
If you don't want to do this or are unable to do it, make sure your redirect scripts are disallowed in your "robots.txt" file (you could also do both).
After putting your redirect script URLs in "robots.txt", request removal of all the script URLs from Google's index - i.e. request removal of all items listed in "robots.txt". Contrary to popular belief, including a URL that is already indexed in "robots.txt" does not remove it from Google's index. It only makes sure that Googlebot does not revisit it. You have to request it removed to get it removed.
If you discover that you are listed as having hijacked a page in Google, make the script in question return a 404 error and then request removal of the script URL from Google's index
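As a sketch of the first two points (the script path is hypothetical), the "robots.txt" entry would be:

User-agent: *
Disallow: /go.php

and, in a PHP redirect script, the 302-to-301 conversion is a one-line change:

header('Location: ' . $target, true, 301);   // explicit 301 instead of the default 302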
Recommended fix
This can not and should not be fixed by webmasters. It is an error that is generated by the search engines, it is only found within the search engines, and hence it must be fixed by the search engines.
The fix I personally recommend is simple: treat cross-domain 302 redirects differently than same-domain 302 redirects. Specifically, treat same-domain 302 redirects exactly as per the RFC, but treat cross-domain 302 redirects just like a normal link.
Meta redirects and other types of redirects should of course be treated the same way: Only according to RFC when it's within one domain - when it's across domains it must be treated like a simple link.
Added: A Slashdot reader made me aware of this:
RFC 2119 (Key words for use in RFCs to Indicate Requirement Levels) defines "SHOULD" as follows:
3. SHOULD This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.
So, if a search engine has a valid reason not to do as the RFC says it SHOULD, it will actually be conforming to the same RFC by not doing it.
By Claus Schmidt.
Abstract:
An explanation of the page hijack exploit using 302 server redirects. This exploit allows any webmaster to have his own "virtual pages" rank for terms that pages belonging to another webmaster used to rank for. Successfully employed, this technique will allow the offending webmaster ("the hijacker") to displace the pages of the "target" in the Search Engine Results Pages ("SERPS"), and hence (a) cause search engine traffic to the target website to vanish, and/or (b) further redirect traffic to any other page of choice.
--------------------------------------------------------------------------------
Published here on March 14, 2005.
Copyright: © 2005 Claus Schmidt, clsc.net
Citations (quotes, not full-text copy) are considered fair use if accompanied by author name and a link to this web site or page.
Bug Track:
2006-09-18: Status: I'm very close to declaring this issue fixed. It does not seem like this is a problem with Google any more - if I'm wrong, please notify me; you'll know where to find me. Yahoo has no problems with this. MSN status is unclear.
2006-01-04: Status: Google is attempting a new fix, being tested Q1 2006. Status for this: Unknown.
2005-08-26: Status: STILL NOT FIXED. Although the engineers at Google have recently made new attempts to fix it, it can still not be considered a solved problem.
2005-05-08: Status: STILL NOT FIXED. Google only hides the wrong URLs artificially when you search using a special syntax ("site:www.example.com"). The wrong URLs are still in the database and they still come up for regular searches.
2005-04-19: Some information from Google here from message #108 and onwards
2005-04-18: Good news: It seems Google is fixing this issue right now
2005-03-24: Added "A short description" and a new example search
2005-03-17: Added a brief section about the "meta refresh" variant of the exploit.
2005-03-16: Edited some paragraphs and added extra information for clarity, as requested by a few nice Slashdot readers. Thanks.
2005-03-15: Some minor quick edits, mostly typos.
2005-03-14: I apologize in advance for typos and such - I did not have much time to write all this.
The Google view:
These three pieces are good if you want an opinion from a Google engineer (Matt Cutts). He does not write all that I write below, but it's always nice to hear another perspective:
Url canonicalization,
The inurl operator, and
302 redirects.
Disclaimer
What is it?
A short description
Which engines are vulnerable?
Is it deliberate wrong-doing?
What does it look like?
Example (anonymous)
Who can control your pages in the search engines?
The technical part: How it is done
302 and meta refresh - both methods can be used
What you can - and can not - do about it
Precautions against being hijacked
Precautions against becoming a hijacker
Recommended fix
You can help
Disclaimer
This exploit is published here for one reason only: To make the problem understandable and visible to as many people as possible in order to force action to be taken to prevent further abuse of this exploit. As will be shown below, this action can only be taken by the search engines themselves. Neither clsc.net nor Claus Schmidt will encourage, endorse or justify any use of this exploit. On the contrary, I (as well as the firm) strongly oppose any kind of hijacking.
What is it?
A page hijack is a technique exploiting the way search engines interpret certain commands that a web server can send to a visitor. In essence, it allows a hijacking website to replace pages belonging to target websites in the Search Engine Results Pages ("SERPs").
When a visitor searches for a term (say, foo) a hijacking webmaster can replace the pages that appear for this search with pages that (s)he controls. The new pages that the hijacking webmaster inserts into the search engine are "virtual pages", meaning that they don't exist as real pages. Technically speaking they are "server side scripts" and not pages, so the searcher is taken directly from the search engine listings to a script that the hijacker controls. The hijacked pages appear to the searcher as copies of the target pages, but with another web address ("URL") than the target pages.
Once a hijack has taken place, a malicious hijacker can redirect any visitor that clicks on the target page listing to any other page the hijacker chooses to redirect to. If this redirect is hidden from the search engine spiders, the hijack can be sustained for an indefinite period of time.
Possible abuses include: Make "adult" pages appear as e.g. CNN pages in the search engines, set up false bank frontends, false storefronts, etc. All the "usual suspects" that is.
A short description
Regarding the Search Engine Result Pages ("the SERPs"), it's not that the listed text (the "snippets") are wrong. The snippets are the right ones, and so is the page size, the headline, the SERP position, and the Google cache. The only thing that can be observed and identified as wrong in the SERPs is the URL used for the individual result.
This is what happens, in basic terms (see "The technical part: How it is done" for the full story). It's a multi-step process with several possible outcomes, sort of like this:
Hijacker manages to get his script listed as the official URL for another webmaster's page.
To Googlebot the script points to the other webmaster's page from now on.
Searchers will see the right results in the SERPs, but the wrong URL will be on the hijacked listing.
Depending on the number of successful hijacks (or some other measure of "severity" known only to Google), the search engine traffic to the other webmaster dries up and disappears, because all his pages (not just the hijacked one(s)) are now "contaminated" and no longer show up for relevant searches.
Optional: The hijacker can choose to redirect the traffic from SERPs to other places for any other visitor than Googlebot.
Offended webmaster can do nothing about this as long as the redirect script(s) points Googlebot to the page(s) of the offended webmaster (and Google has the script URL(s) indexed).
While step five is optional, the other steps are not. Although it is optional it does indeed happen, and this is the worst case as it can direct searchers in good faith to misleading, or even dangerous pages.
Step five is not the only case, as hijacking (as defined by "hijacking the URL of another web page in the SERPS") is damaging in the other cases as well. Not all of them will be damaging to the searcher, and not all of them will be damaging to all webmasters, but all are part of this hijacking issue. The hijack is established in step one above, regardless of later outcome.
This whole chain of events can be executed either by using a 302 redirect, a meta refresh with a zero second redirect time, or by using both in combination.
Which engines are vulnerable?
Search engines vulnerable to this exploit have been reported to include Google and MSN Search, probably others as well. The Yahoo! search engine is at the time of writing the only major one which has managed to close the hole.
Below, the emphasis will be on Google, as it is by far the largest search engine today in terms of usage - and allegedly also in terms of number of pages indexed.
Is it deliberate wrong-doing?
I am not a lawyer, I should stress this. Further, the search engines affected by this operate on a worldwide scale, and laws tend to differ a lot among countries especially regarding the Internet.
That said, the answer is: Most likely not. This is a flaw on the technical side of the search engines. Some webmasters do of course exploit this flaw, but almost all cases I've seen are not a deliberate attempt at hijacking. The hijacker and the target are equally innocent as this is something that happens "internally" in the search engines, and in almost all cases the hijacker does not even know that (s)he is hijacking another page.
It is important to stress that this is a search engine flaw. It affects innocent and unknowing webmasters as they go about their normal routines, maintaining their pages and links as usual. You do not have to take any steps outside of the "normal" or "default" in order to either become hijacked or hijack others. On the contrary, page hijacks are accomplished using everyday standard procedures and techniques used by most webmasters.
What does it look like?
The Search Engine Results Pages ("SERPs") will look just like normal results to the searcher when a page hijack has occurred. On the other hand, to a webmaster that knows where one of his pages used to be listed, it will look a little different. The webmaster will be able to identify it because (s)he will see his/her page listed with an URL that does not belong to the site. The URL is the part in green text under listings in Google.
Example (anonymous)
This example is only provided as an example. I am not implying anything whatsoever about intent, as I specifically state that in most cases this is 100% un-intentional and totally unknown to the hijacker, who becomes one only by accident. It is an error that resides within the search engines, and it is the sole fault of the search engines - not any other entity, be it webmasters, individuals, or companies of any kind. So, I have no reason to believe that what you see here is intentional, and I am in fact suggesting that the implied parties are both totally innocent.
Google search: "BBC News"
Anonymous example from Google SERPs:
BBC NEWS | UK | 'Siesta syndrome' costs UK firms
Healthier food and regular breaks are urged in an effort to stop Britain's
workplace "siesta syndrome".
r.example.tld/foo/rAndoMLettERS - 31k - Cached - Similar pages
Real URL for above page: news.bbc.co.uk/1/hi/uk/4240513.stm
By comparing the green URL with the real URL for the page you will see that they are not the same. The listing, the position in the SERPs, the excerpt from the page ("the snippet"), the headline, the cached result, as well as the document size are those of the real page. The only thing that does not belong to the real page is the URL, which is written in green text, and also linked from the headline.
NEW: This search will reveal more examples when you know what to look for:
Google search: "BBC News | UK |"
Do this: Scroll down and look for listings that look exactly like the real BBC listings, i.e. listings with a headline like this:
BBC News | subject | headline
Check that these listings do not have a BBC URL. Usually the redirect URL will have a question mark in it as well.
It is important to note that the green URL that is listed (as well as the headline link) does not go to a real page. Instead, the link goes straight to a script not controlled by the target page. So, the searcher (thinking (s)he has found relevant information) is sent directly from the search results to a script that is already in place. This script just needs a slight modification to send the searcher (any User-Agent that is not "Googlebot") in any direction the hijacker chooses. Including, but not limited to, all kinds of spoofed or malicious pages.
(In the example above - if you manage to identify the real page in spite of attempts to keep it anonymous - the searcher will end up at the right page with the BBC, exactly as expected (and on the right URL as well). So, in that case there is clearly no malicious intent whatsoever, and nothing suspicious going on).
Who can control your pages in the search engines?
This is the essence of it all. In the example above, clearly the BBC controls whatever is displayed on the domain "news.bbc.co.uk", but BBC normally does not control what is displayed on domains that BBC does not own. So, a mischievous webmaster controlling the "wrong URL" is free to redirect visitors to any URL of his liking once the hijack has taken place. The searcher clicking on the hijacked result (thinking that (s)he will obtain a news story on UK firms) might in fact end up obtaining all kinds of completely unrelated kinds of "information" and/or offers in stead.
As a side-effect, target domains can have so many pages hijacked that the whole domain starts to be flagged as "less valuable" in the search engine. This leads to domain poisoning, whereby all pages on the target domain slip into Google's "supplemental listings" and search engine traffic to the whole domain dries up and vanishes.
And here's the intriguing part: The target (the "hijacked webmaster") has absolutely no methods available to stop this once it has taken place. That's right. Once hijacked, you can not get your pages back. There are no known methods that will work.
The only certain way to get back your pages at this moment seems to be if the hijacker is kind enough to edit his/her script so that it returns a "404 Not Found" status code, and then proceeds to request removal of the script URL from Google. Note that this has to be done for each and every hijack script that points to the target page, and there can be many of them. Even locating these can be very difficult for an experienced searcher, so it's close to impossible for the average webmaster.
The technical part: How it is done
Here is the full recipe with every step outlined. It's extremely simplified to benefit non-tech readers, and hence not 100% accurate in the finer details, but even though I really have tried to keep it simple you may want to read it twice:
Googlebot (the "web spider" that Google uses to harvest pages) visits a page with a redirect script. In this example it is a link that redirects to another page using a click tracker script, but it need not be so. That page is the "hijacking" page, or "offending" page.
This click tracker script issues a server response code "302 Found" when the link is clicked. This response code is the important part; it does not need to be caused by a click tracker script. Most webmaster tools use this response code by default, as it is the standard in both ASP and PHP.
Googlebot indexes the content and makes a list of the links on the hijacker page (including one or more links that are really a redirect script)
All the links on the hijacker page are sent to a database for storage until another Googlebot is ready to spider them. At this point the connection breaks between your site and the hijacker page, so you (as webmaster) can do nothing about the following:
Some other Googlebot tries one of these links - this one happens to be the redirect script (Google has thousands of spiders, all are called "Googlebot")
It receives a "302 Found" status code and goes "yummy, here's a nice new page for me"
It then receives a "Location: www.your-domain.tld" header and hurries to your page to get the content.
It heads straight to your page without telling your server on what page it found the link it used to get there (as, obviously, it doesn't know - another Googlebot fetched it)
It has the URL of the redirect script (which is the link it was given, not the page that link was on), so now it indexes your content as belonging to that URL.
It deliberately chooses to keep the redirect URL, as the redirect script has just told it that the new location (That is: The target URL, or your web page) is just a temporary location for the content. That's what 302 means: Temporary location for content.
Bingo, a brand new page is created (never mind that it does not exist IRL, to Googlebot it does)
Some other Googlebot finds your page at your right URL and indexes it.
When both pages arrive at the reception of the "index" they are spotted by the "duplicate filter" as it is discovered that they are identical.
The "duplicate filter" doesn't know that one of these pages is not a page but just a link (to a script). It has two URLs and identical content, so this is a piece of cake: Let the best page win. The other disappears.
Optional: For mischievous webmasters only: For any other visitor than "Googlebot", make the redirect script point to any other page of their choice.
Added: There are many theories about how the last two steps (13-14) might work. One is the duplicate theory - another would be that the mass of redirects pointing to the page as being "temporary" passes the level of links declaring the page as "permanent". This one does not explain which URL will win, however. There are other theories, even quite obscure ones - all seem to have problems the duplicate theory does not have. The duplicate theory is the most consistent, rational, and straight-forward one I've seen so far, but only the Google engineers know the exact way this works.
Here, "best page" is key. Sometimes the target page will win; sometimes the redirect script will win. Specifically, if the PageRank (an internal Google "page popularity measure") of the target page is lower than the PageRank of the hijacking page, it's most likely that the target page will drop out of the SERPs.
However, examples of high PR pages being hijacked by script links from low PR pages have been observed as well. So, sometimes PR is not critical in order to make a hijack. One might even argue that -- as the way Google works is fully automatic -- if it is so "sometimes" then it has to be so "all the time". This implies that when the higher PR page does win, that is just a coincidence; PR is not the reason the hijack link wins. This, in turn, means that any page is able to hijack any other page, if the target page is not sufficiently protected (see below).
So, essentially, by doing the right thing (interpreting a 302 as per the RFC), the search engine (in the example, Google) allows another webmaster to convince its web spider that your website is nothing but a temporary holding place for content.
Further, this leads to creation of pages in the search engine index that are not real pages. And, if you are the target, you can do nothing about it.
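To make the mechanics a bit more concrete, this is roughly the exchange Googlebot sees when it follows such a redirect link (the URLs here are made up for illustration):

GET /go.php?url=http://www.your-domain.tld/page.html HTTP/1.1
Host: www.some-other-domain.tld
User-Agent: Googlebot/2.1

HTTP/1.1 302 Found
Location: http://www.your-domain.tld/page.html

Googlebot then fetches http://www.your-domain.tld/page.html, but - because the 302 has just told it that your URL is only a temporary location - it files the content under the go.php URL.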
302 and meta refresh - both methods can be used
The method involving a 302 redirect is not the only one that can be used to perform a malicious hijack. Another just as common webmaster tool is also able to hijack a page in the search engine results: The "meta refresh". This is done by inserting the following piece of code in a standard static HTML page:
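<meta http-equiv="refresh" content="0; url=http://www.your-domain.tld/page.html">
(the "0" is the zero second delay; the address shown is only an example - in a hijack it points at the target page)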
The effect of this is exactly as with the 302. To be sure, some hijackers have been observed to employ both a 302 redirect and a meta redirect in the 302 response generated by the Apache server. This is not the default Apache setting, as normally the 302 response will include a standard hyperlink in the HTML part of the response (as specified in the RFC).
The casual reader might think "a standard HTML page can't be that dangerous", but that's a false assumption. A server can be configured to treat any kind of file as a script, even if it has a ".html" extension. So, this method has the exact same possibilities for abuse, it's only a little bit more sophisticated.
What you can - and can not - do about it
Note item #4 in the list above. At a very early stage the connection between your page and the hijacking page simply breaks. This means that you can not put a script on your page that detects whether this is taking place. You can not "tell Googlebot" that your URL is the right URL for your page either.
Here are some common misconceptions. The first thoughts of technically skilled webmasters will be along the lines of "banning" something, i.e. detecting the hijack by means of some kind of script and then performing some kind of action. Let's clear up the misunderstandings first:
You can't ban 302 referrers as such
Why? Because your server will never know that a 302 is used for reaching it. This information is never passed to your server, so you can't instruct your server to react to it.
You can't ban a "go.php?someURL" redirect script
Why? Because your server will never know that a "go.php?someURL" redirect script is used for reaching it. This information is never passed to your server, so you can't instruct your server to react to it.
Even if you could, it would have no effect with Google
Why? Because Googlebot does not carry a referrer with it when it spiders, so you don't know where it's been before it visited you. As already mentioned, Googlebot could have seen a link to your page a lot of places, so it can't "just pick one". Visits by Googlebot have no referrers, so you can't tell Googlebot that one link that points to your site is good while another is bad.
You CAN ban click through from the page holding the 302 script - but it's no good
Yes you can - but this will only hit legitimate traffic, meaning that surfers clicking from the redirect URL will not be able to view your page. It also means that you will have to maintain an ever-increasing list of individual pages linking to your site. For Googlebot (and any other SE spider) those links will still work, as they pass on no referrer. So, if you do this Googlebot will never know it.
You CAN request removal of URLs from Google's index in some cases
This is definitely not for the faint of heart. I will not recommend this, only note that some webmasters seem to have had success with it. If you feel it's not for you, then don't do it. The point here is that you as webmaster could try to get the redirect script deleted from Google.
Google does accept requests for removal, as long as the page you wish to remove has one of these three properties:
It returns a "404 Not Found" status code (or, perhaps even a "410 Gone" status code)
It has this meta tag: <meta name="robots" content="noindex">
It is disallowed in the "robots.txt" file of the domain it belongs to
Only the first can be influenced by webmasters that do not control the redirect script, and the way to do it will not be appealing to all. Simply, you have to make sure that the target page returns a 404, which means that the target page must be unavailable (with sufficient technical skills you can do this so that it only returns a 404 if there is no referrer). Then you have to request removal of the redirect script URL, i.e. not the URL of the target page. Use extreme caution: If you request that the target page should be removed while it returns a 404 error, then it will be removed from Google's index. You don't want to remove your own page, only the redirect script.
After the request is submitted, Google will spider the URL to examine if the requirements are met. When Googlebot has seen your pages via the redirect script and it has gotten a 404 error you can put your page back up.
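For illustration only - and assuming Apache 2.x with mod_rewrite enabled, plus a hijacked page called "hijacked-page.html" (substitute your own file name) - a rough ".htaccess" sketch for returning an error only when there is no referrer could look like this:

RewriteEngine On
# only fire when the request carries no referrer at all (search engine spiders send none)
RewriteCond %{HTTP_REFERER} ^$
# answer with "410 Gone" for this one page; on Apache 2.x you can use [R=404,L] for a 404 instead
RewriteRule ^hijacked-page\.html$ - [G,L]

Remember to take this out again once Googlebot has seen the error and the removal request has gone through.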
Precautions against being hijacked
I have tracked this and related problems with the search engines literally for years. If there was something that you could easily do to fix it as a webmaster, I would have published it a long time ago. That said, the points listed below will most likely make your pages harder to hijack. I can not and will not promise immunity, though, and I specifically don't want to spread false hopes by promising that these will help you once a hijack has already taken place. On the other hand, once hijacked you will lose nothing by trying them.
Always redirect your "non-www" domain (example.com) to the www version (www.example.com) - or the other way round (I personally prefer non-www domains, but that's just because it appeals to my personal sense of convenience). The direction is not important. It is important that you do it with a 301 redirect and not a 302, as the 302 is the one leading to duplicate pages. If you use the Apache web server, the way to do this is to insert the following in your root ".htaccess" file:
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.example\.com
RewriteRule (.*) http://www.example.com/$1 [R=301,L]
Or, for www-to-non-www redirection, use this syntax:
RewriteEngine On
RewriteCond %{HTTP_HOST} !^example\.com
RewriteRule (.*) http://example.com/$1 [R=301,L]
Always use absolute internal linking on your web site (i.e. include your full domain name in links that are pointing from one page of your site to another page on your site)
Include a bit of always updated content on your pages (e.g. a timestamp, a random quote, a page counter, or whatever)
Use the
Just like redirecting the non-www version of your domain to the www version, you can make all your pages "confirm their URL artificially" by inserting a 301 redirect from any URL to the exact same URL, and then serve a "200 OK" status code, as usual. This is not trivial, as it will easily throw your server into a loop.
Precautions against becoming a hijacker
Of course you don't want to become a page hijacker by accident. The precautions you can take are:
If you use 302 redirects in any scripts, convert them to 301 redirects instead (a minimal sketch follows this list)
If you don't want to do this or are unable to do it, make sure your redirect scripts are disallowed in your "robots.txt" file (you could also do both).
After putting your redirect script URLs in "robots.txt", request removal of all the script URLs from Google's index - i.e. request removal of all items listed in "robots.txt". Contrary to popular belief, including a URL that is already indexed in "robots.txt" does not remove it from Google's index. It only makes sure that Googlebot does not revisit it. You have to request it removed to get it removed.
If you discover that you are listed as having hijacked a page in Google, make the script in question return a 404 error and then request removal of the script URL from Google's index
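As a minimal illustration of the first point - a sketch only, written in Python and not tied to any particular webmaster tool; the "/go?url=..." parameter name is made up - a click-tracking redirect that answers with a 301 instead of the usual 302 could look like this:

# Minimal sketch of a click-tracking redirect that issues a permanent
# (301) redirect instead of the default temporary (302) one.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # expects requests of the form /go?url=http://www.example.com/page.html
        target = parse_qs(urlparse(self.path).query).get("url", [None])[0]
        if target is None:
            self.send_error(404)
            return
        self.send_response(301)                  # permanent - not "302 Found"
        self.send_header("Location", target)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), RedirectHandler).serve_forever()

The only line that matters for this issue is the status code: 301 tells the spider that the target URL is the permanent home of the content, so the script URL should not end up indexed in its place.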
Recommended fix
This can not and should not be fixed by webmasters. It is an error that is generated by the search engines, it is only found within the search engines, and hence it must be fixed by the search engines.
The fix I personally recommend is simple: treat cross-domain 302 redirects differently than same-domain 302 redirects. Specifically, treat same-domain 302 redirects exactly as per the RFC, but treat cross-domain 302 redirects just like a normal link.
Meta redirects and other types of redirects should of course be treated the same way: Only according to RFC when it's within one domain - when it's across domains it must be treated like a simple link.
Added: A Slashdot reader made me aware of this:
RFC 2119 (Key words for use in RFCs to Indicate Requirement Levels) defines "SHOULD" as follows:
3. SHOULD This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.
So, if a search engine has a valid reason not to do as the RFC says it SHOULD, it will actually be conforming to the same RFC by not doing it.
Hacking for SEO?
Hacking stuff for SEO purposes can be fun and easy, but it's something I'm against! Nevertheless I'm going to give an example of blog hackability. You should use this information to protect yourself.
How to hack your comment into Wordpress
A lot of functions in blogs are based on variables passed in the URL string. As long as the moderator is logged in, he is allowed to do various tasks just by clicking on the right link. Wordpress is one of the safer blog scripts, but it has its vulnerabilities. The instructions below show how the right commands can be passed to auto-approve a comment in someone else's blog.
Find a blog that uses Wordpress where you would want a comment the moderator would never allow.
Make a webpage that contains some info, but also a very small iframe. Keep the URL of the iframe empty for now.
On the blog you want to hack find out what the wp-login.php directory is. Most of the time it’s the same directory as the blog itself.
Enter the comment you want to have moderated and don't press submit yet. Look in the page source for the id of the last comment: your comment will get the next value (in this case 7+1=8). Also note the post id (in this case 10).
Now edit the webpage with the iframe and set the iframe target to:
http://(blog directory)/wp-admin/post.php?action=mailapprovecomment&p=10&comment=8
Submit the first comment. Then, on another post on the blog, make a comment with an enticing reason to visit the URL of your webpage with the hidden iframe. You can also include the link to your webpage in the first comment without making a second one.
You’re done! Now the following should happen.
The moderator logs in to his control panel and starts moderating his comments.
He sees your comment with the link and visits your page. Unknowingly he also visits his own URL through the iframe and approves the comment you want added.
Even if he finds out, he would only be confused, because he could have accidentally clicked the link himself. Cover your tracks by removing the iframe and you're done.
Instead of points 4 and 5 you can also have the owner of the blog make a comment without realising it.
4. Look at the source code of the comment form and find the action="..." attribute. Copy that URL to your clipboard. Then look for the comment_post_ID.
5. Make a new page and enter the following:
Place this code in a page that you request as your iframe.
As you see hacking can be easy. Use the force wisely and don’t give in to the dark side! ;)
When should I use cloaking?
There are many good and bad ways to use cloaking. Google says there are uses it approves of, and it even says a good way to identify Googlebot is to do a reverse DNS lookup on the visiting IP and check that it is in the googlebot.com domain.
When should you cloak?
Cloaking javascript and CSS
Cloaking to get your competitors links
First some basic questions answered:
What is Cloaking?
Cloaking is showing a search engine different content than you show normal visitors.
What does Google consider acceptable reasons to cloak?
When you have a website that is hard for a crawler to index, you can make an alternative version with the same content available.
Or when you have a password-protected area you want Google to index, you can let it in without a password. Just add "noarchive" to your robots meta tag, otherwise people can simply read the content in the Google cache.
There are more reasons, but the rule is: “If it improves the searchbot crawlability of the same content a user sees”.
Why should I cloak?
Normally you shouldn’t! Make good sites with good code and both users and Google should see the same. If your code is unfriendly for search engines, rewrite the code and don’t cloak.
There are situations where you want to draw a user's attention to one place and Google's attention to another. Normally I'd use images to draw a user's attention to my "call to action" and headers to draw Google's attention to the most important text. So in most situations you don't need cloaking for this.
If you’re a good boy, don’t use cloaking! If you’re a bad boy (or girl), do it!
But are there good spammy tactics?
Of course there are many fun ways to use cloaking to your advantage. Most of the time you want both linkable content and search engine friendly content. This can become a compromise, and you should always want the best of both. Here are some fun tactics with cloaking.
I hope Matt Cutts doesn't read this post, because there are ways to detect the following tactics algorithmically; they just aren't detected yet. My blog is too small to be noticed by Matt Cutts (the Google spamcop), but if he does notice it: "Matt, please leave a comment!"
Javascript and Stylesheet tactics
With external javascript and stylesheets you can change the appearance and visibility of your content. Google sees this and devalues hidden content.
I tried to disallow my .JS and .CSS files from being indexed in the robots.txt, but Google disobeys this and still reads them. Then I tried IP cloaking (detecting whether an IP belonged to a search engine and showing it different javascript and stylesheet information) and it worked (a sketch of such a check follows below).
You can hide a block of content, you can show an H1 header as normal inline text, you can hide links and much more, all by the use of javascript or css. Currently the detection of cloaking isn't very sophisticated, and algorithmically not many spammers are caught. When you get caught cloaking it is mostly because someone ratted on you to Google. If you just cloak the .JS and .CSS files, people don't see any difference between the search engine cache and the normal file, because the cached version still uses the user's version of the external files.
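For what it's worth, here is a rough sketch (Python, purely an illustration - the IP at the bottom is just an example) of the double check Google itself suggests for recognising Googlebot: a reverse DNS lookup on the visiting IP, followed by a forward lookup to confirm it:

import socket

def is_googlebot(ip):
    # Reverse-resolve the IP, check the host name, then forward-resolve
    # that host name and make sure it points back to the same IP.
    try:
        host = socket.gethostbyaddr(ip)[0]             # reverse DNS lookup
    except socket.herror:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return ip in socket.gethostbyname_ex(host)[2]  # forward confirmation
    except socket.gaierror:
        return False

print(is_googlebot("66.249.66.1"))  # result depends on live DNS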
Cloaking for linkbuilding
This way is more dangerous because people can detect it more easily and tell Google about it. But many people ask me: “How do I get links from my competitors to my commercial website?” Here is a possible answer to this question!
Make a non-commercial website about a subject your competitors might be willing to link to and point them to it. Don’t let it have any link to you or your commercial activities.
For instance: Results of the top 5 SEO companies competition website.
Email: Congratulations you made 1st place!
Result: They will proudly link to it.
When they start linking, you want to divert the linklove to your commercial activities without losing the links.
Either cloak and place good links to your commercial website or use a 301-redirect to divert all linkpoints.
The 301 redirect shows no cache in Google, thus no trackback to where it is going.
Competitors just don’t see it show up in Google. From their IP the domain shows the normal site, so no reason not to link to it.
The links will remain and could even grow, but the love is diverted!
Cloaking
Cloaking is a black hat search engine optimization (SEO) technique in which the content presented to the search engine spider is different from that presented to the user's browser. This is done by delivering content based on the IP address or the User-Agent HTTP header of the user requesting the page. When a user is identified as a search engine spider, a server-side script delivers a different version of the web page, one that contains content not present on the visible page. The purpose of cloaking is to deceive search engines so they display the page when it would not otherwise be displayed.
The only legitimate uses for cloaking used to be for delivering content to users that search engines couldn't parse, like Adobe Flash. As of 2006, better methods of accessibility, including progressive enhancement, are available, so cloaking is no longer necessary. Cloaking is often used as a spamdexing technique, to try to trick search engines into giving the relevant site a higher ranking; it can also be used to trick search engine users into visiting a site based on the search engine description, where the site turns out to have substantially different, or even pornographic, content. For this reason, major search engines consider cloaking for deception to be a violation of their guidelines, and therefore they delist sites when deceptive cloaking is reported.[1][2][3][4][5]
Cloaking is a form of the doorway page technique.
A similar technique is also used on the Open Directory Project web directory, for example look at Spreto (The Open Directory Project). It differs in several ways from search engine cloaking:
It is intended to fool human editors, rather than computer search engine spiders.
The decision to cloak or not is often based upon the HTTP referrer, the user agent or the visitor's IP; but more advanced techniques can also be based upon analysis of the client's behaviour after a few page requests: the raw quantity, the ordering of, and latency between subsequent HTTP requests sent to a website's pages, plus the presence of a check for the robots.txt file, are some of the parameters in which search engine spiders differ heavily from natural user behaviour. The referrer tells the URL of the page on which a user clicked a link to get to the page. Some cloakers will give the fake page to anyone who comes from a web directory website, since directory editors will usually examine sites by clicking on links that appear on a directory web page. Other cloakers give the fake page to everyone except those coming from a major search engine; this makes it harder to detect cloaking, while not costing them many visitors, since most people find websites by using a search engine.
Quick SEO Checklist
1 The title of a post should always go before the name of your site.
2 Always use permalinks. Wordpress has this functionality built-in so make sure you use it.
3 Your theme should be using heading tags (h1 , h2, h3) for post headings. It doesn’t hurt to double check.
4 Keywords in title, body, and anchor text.
5 Link to reputable sites.
6 Don’t go overboard with outbound links.
7 Add relevant keywords and descriptions to metatags.
8 Clean up dead links.
9 Link to other posts on your site as often as possible.
10 Use alt and title attributes when inserting an image.
11 Limit duplicate content.
12 Make sure your pages can be validated. Search engines like clean, well coded, and easy to navigate sites.
13 Generate a sitemap for search engines.
14 Use robots.txt to prevent search engine crawlers from indexing certain areas of your site - like your admin directory (see the example below).
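For example, a minimal robots.txt that keeps well-behaved crawlers out of a Wordpress admin area (adjust the paths to your own setup) looks like this:

User-agent: *
Disallow: /wp-admin/
Disallow: /wp-login.php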
SEO Checklist
Keywords
Keyword in title tag (+3)
Keyword in URL (+3)
Keyword density in document (+3)
Keyword in H1 and H2 headings (+3)
Keyword in the beginning of document (+2)
Keyword in ALT tags (+2)
Keyword in Meta tags (+1)
Keyword stuffing (-3)
Links
Anchor text of inbound links (+3)
Origin of inbound links (+3)
Links from similar sites (+3)
Links from .edu and .gov sites (+3)
Anchor text of internal links (+2)
Many outgoing links (-1)
Outbound links to bad neighbors (-3)
Cross-linking (-3)
Meta Tags
Description Meta Tag (+1)
Keywords Meta Tag (+1)
Refresh Meta Tag (-1)
Content
Unique content (+3)
Frequent updates (+3)
Age of content (+2)
Poor coding or design (-2)
Invisible text (-3)
Doorway pages (-3)
Duplicate content (-3)
Other factors
Site accessibility (+3)
Sitemap (+2)
Site size (+2)
Site age (+2)
Top-level domain (+1)
URL length (0)
Hosting downtime (-1)
Flash (-2)
Misused Redirects (-3)
Quick RSS SEO Tips
1. Subscribe to your own feed and claim it on blog engine Technorati
2. Focus your feed with a keyword theme
3. Use keywords in the title tag; keep it under 100 characters
4. Most feed readers display feeds alphabetically, title accordingly
5. Write description tags as if for a directory; keep them under 500 characters
6. Use full paths on links and unique URLs for each item
7. Provide email updates for the non-techies
8. Offer an HTML version of your feed
9. For branding, add logo and images to your feed
Now, let's add some tips from Stephan Spencer and continue with the numbering:
10. Full text, not summaries
11. 20 or MORE items (not just 10)
12. Multiple feeds (by category, latest comments, comments by post)
13. Keyword-rich item [title]
14. Your brand name in the item [title]
15. Your most important keyword in the site [title] container
16. Compelling site [description]
17. Don't put tracking codes into the URLs (e.g. &source=rss)
18. An RSS feed that contains enclosures (i.e. podcasts) can get into additional RSS directories & engines
And to round this off, a summary of my own tips [part 2 here] for using RSS to drive traffic to your site:
19. Get your RSS content (proactively) syndicated on other relevant websites [just the headlines and summaries of course]
20. Submit your RSS feeds to all the RSS search engines and directories
21. Use RSS to add relevant third-party content [again, just headlines and summaries] to your website to gain additional SE weight for your keywords
22. Use RSS to deliver all of your frequently updated content, not just for your latest blog posts
23. Whenever the content in your feed changes, ping the most important search engines and directories [yes, you don't need a blog for this] - a minimal example follows
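As an illustration of tip 23, a minimal ping in Python - assuming the common weblogUpdates XML-RPC interface and the Ping-O-Matic relay at rpc.pingomatic.com; substitute your own site details and whichever services you actually use - could look like this:

import xmlrpc.client

# Announce that the site/feed has new content; Ping-O-Matic relays the
# ping to a number of search engines and directories.
server = xmlrpc.client.ServerProxy("http://rpc.pingomatic.com/")
result = server.weblogUpdates.ping("Your Site Name", "http://www.example.com/")
print(result)  # typically something like {'flerror': False, 'message': '...'}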
SEVEN TIPS WHEN WORKING WITH HTML
Want to get a top listing for your local Internet webpage? Here are seven changes that you can implement today that can make a big difference on your search engine rankings. Some of you may have the working knowledge to make these HTML changes, others of you may need a little help. These tips will help make your website search engine friendly.
I live in Ventura County in Southern California. I've spent many hours looking at hundreds of local websites for Real Estate, Chiropractic, and Dentist clients. It is very important for them to show up on search engines for local search terms. I'm going to use my local Chiropractor listings as examples in this HTML tutorial. I looked up websites using Keywords like, "Chiropractor in Ventura County," "Chiropractor in Thousand Oaks," and "Chiropractor offices in Ventura County." I want you to know that with a few minor adjustments to your HTML you can gain a distinctive advantage over other local websites that could mean more business to you in the coming months. Who isn't interested in some new business in today's economy?
Let's examine what top ranked Chiropractor sites are doing to gain Keyword relevancy on search engines. You can look at your competition's HTML tags by going to your browser of choice, clicking "View" in the upper left hand corner of your browser (Internet Explorer or Netscape Navigator), and then scrolling down to the word "Source". It will open another browser window with the HTML language for you to look at.
1. USE YOUR KEYWORDS IN YOUR TITLE TAG
Several search engines use Keywords in the Title tag as part of their algorithms. Algorithms are rules that search engines use to calculate a ranking of a website. Each engine uses different algorithms. Using a Title tag like:
<title>John Q. Smith Chiropractor</title>
is wasted text unless you think that people are going to be using the name (John Q. Smith) as a search term. If you want to attract visitors who are looking for a "Chiropractor in Ventura County", you may want to use the Keyword phrase "Chiropractor in Ventura County" in your Title tag.
Here is how you may want to put Keywords in your Title tag.
Chiropractor in Ventura County, Thousand Oaks, and Westlake California, Chiropractor office in Ventura County.
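Wrapped in actual HTML, the wasted-text version and the keyword-rich version of that Title tag might look something like this (a sketch using the wording above):

<!-- wasted text: only the name -->
<title>John Q. Smith Chiropractor</title>
<!-- keyword-rich version -->
<title>Chiropractor in Ventura County, Thousand Oaks, and Westlake California, Chiropractor office in Ventura County</title>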
It doesn't matter what order you place the tags in the HEAD area, although some experts recommend that you include the TITLE tag first on the page, before listing any other tags.
2. USE KEYWORDS IN YOUR META TAGS
There's a continuing debate about whether to separate each Keyword in the Meta tag by a comma, to group related words (i.e., phrases) with commas, or to list all the words in one long string, separating each word only by a space.
Which method is better? The most common method is separating each word or phrase by a comma. However, many experts contend that the search engines ignore the commas. Their thought is that by eliminating them, you can include more words in the tag. My position is that it won't likely affect your rankings either way.
Caution: Be careful not to repeat the same Keyword more than two or three times in the Meta tag. NEVER repeat the same word twice in a row or you may trigger a search engine's "spam filter." Never include Keywords that do not apply to the content of your page.
Here's what your Meta tag may look like for "Chiropractor in Ventura County".
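A sketch of a meta Keywords tag that follows this advice; the phrases are illustrative and should match your own page content:

<meta name="keywords" content="Chiropractor in Ventura County, Chiropractor in Thousand Oaks, Westlake Village Chiropractor, chiropractic care, back pain relief">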
3. USE KEYWORDS IN YOUR META DESCRIPTION TAGS
Each engine that supports the Meta description tag will shrink it down to 150 to 350 characters depending on the engine. Therefore, include the best portion of your description in the first 150 characters, but go ahead and add additional text to fill it out to about 350 characters.
Here's what your Meta Description tag may look like for "Chiropractor in Ventura County".
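A sketch of a meta Description tag for the same page; the wording is illustrative, with the strongest phrases kept inside the first 150 characters as recommended above:

<meta name="description" content="Chiropractor in Ventura County providing gentle, effective chiropractic care in Thousand Oaks and Westlake Village. Our office focuses on you and your personal needs, from back and neck pain to ongoing wellness care.">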
4. USE KEYWORDS IN THE BODY OF YOUR TEXT
Many engines look at your first paragraph for Keyword relevance. By carefully wording the text in your first paragraph, you can add to your Keyword relevancy. My suggestion is not to use your Keywords more than five times on any page in the body of your text.
Here is an example of using Keywords in the body of your text. Keywords = " Chiropractor in Ventura County."
When you're looking for a Chiropractor in Ventura County, you want someone who is an expert in chiropractic care in Ventura County. We are here to help. Located in the Thousand Oaks/Westlake Village area of Ventura County, our Chiropractor office is here to focus on you and your personal needs.
Some engines will also check the last paragraph of the body of your text. One local Chiropractor uses this text at the bottom of his homepage to gain added Keyword relevancy for his website.
"ABC Chiropractic proudly serves the following areas of Ventura County: Conejo Valley, Agoura Hills, Calabasas, Camarillo, Lake Sherwood, Moorpark, Newbury Park, North Ranch, Simi Valley, Thousand Oaks, and Westlake Village."
TIP: Pages heavy with text in a small font size may not get listed. Avoid using font size lower than 2 as the dominant size for your body copy.
5. USE KEYWORDS IN YOUR IMAGE/ALT TAGS
Some engines will actually look at your Image and Alt tags as part of their Algorithms. One way to take advantage of this is by renaming your Image files and Alt tags with your Keywords.
Here is an example.
You have an image named, "image1.jpg"
Change the name to "chiropractor1.jpg" or "venturacounty.jpg"
Here is an example of how your Image tag may look in HTML.
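A minimal sketch, using the illustrative file name from above (the width and height values are made up):

<img src="chiropractor1.jpg" width="200" height="150">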

An Alt tag is the text that many browsers display in a small pop-up as the mouse rolls over an image or a hyperlink.
Example: "Find more information about Chiropractors in Ventura County, Thousand Oaks and Westlake Village California."
Here is an example of how your Alt tag may look in HTML.
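One way the markup might look, reusing the example wording above as the alt attribute (the file name and dimensions are illustrative):

<img src="chiropractor1.jpg" alt="Find more information about Chiropractors in Ventura County, Thousand Oaks and Westlake Village California" width="200" height="150">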
6. USE KEYWORDS IN YOUR HYPERLINKS
Some engines will actually look at the words in your Hyperlinks for added relevancy. If you have several pages on your website, consider renaming them with your Keywords. One local top ranking Realtor uses this technique to gain a top ranking on Google.
Each of the Hyperlinks on the home page is packed with Keywords and Keyword phrases. Here are several examples for John Q. Smith of ABC Realty:
TESTIMONIALS: For John Q. Smith of ABC Realty from satisfied home buyers and home sellers in Ventura County, Thousand Oaks and Westlake Village.
WELCOME: From John Q. Smith of ABC Realty in Ventura County, Thousand Oaks and Westlake Village.
Featured homes in Ventura County, Thousand Oaks and Westlake Village. Take a look at Thousand Oaks, Westlake Village and Moorpark, California homes for sale.
ABOUT: John Q. Smith from ABC Realty serving Thousand Oaks, Westlake Village and Ventura County. Get to know all about your real estate and relocation professional in Ventura County California.
These are just a few examples of clever ways that you can add Keywords to your Hyperlinks to gain extra Keyword relevancy for your website. Consider adding some links about your local city with hyperlinks that use your Keywords, such as the following (one is shown as HTML markup after the list):
Learn all about Ventura County, Thousand Oaks and Westlake Village.
Information and hot links about Ventura County.
My favorite places to visit in Ventura County.
School information for Ventura County, Thousand Oaks, Westlake Village.
Suggestions: feature local businesses, landmarks, points of interest.
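In HTML, one of those keyword-rich links might look something like this (the file name is made up for illustration):

<a href="ventura-county-thousand-oaks.html">Learn all about Ventura County, Thousand Oaks and Westlake Village</a>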
Be creative. Make your links stand out so people will want to click them. The more time they spend on your site, the better chance at gaining a new client.
7. USE H1 TAGS TO BOOST RELEVANCY.
Some search engines will give words found in your "H1" tag a boost in relevancy. The H1 tag is used to specify page or topic headings. In most browsers H1 is an oversized font that looks out of place on a page.
Here is a smart way to gain keyword relevancy by using Cascading Style Sheets (CSS). With CSS, you can specify that the browser display the H1 tag, or any other tag on your page, any way you please. By using CSS you get both a boost in relevancy and better control of your webpage's appearance with this easy step.
In the HEAD area of the page, put the following STYLE lines:
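Assuming a 12 pt Arial or Helvetica font in black text, as described in the next sentence, the STYLE lines might look like this:

<style type="text/css">
h1 {
  font-family: Arial, Helvetica, sans-serif;
  font-size: 12pt;
  color: #000000;
}
</style>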
This will force all H1 tags on the page to use a 12 pt Arial or Helvetica font in black text. This will also allow you to adjust the point size or font to whatever size may look best on your webpage.
Using this CSS rule, you can now use the H1 tag on all your pages to gain extra Keyword relevancy without sacrificing the look of your pages. Many experts agree that a majority of the engines give more weight to Keywords that appear within H1 heading tags. Try it; you'll be amazed at the results.
A QUICK CLOSING COMMENT.
This past summer I spent some time with my family at the Palm Desert Marriott Resort. While playing in the pool with my kids, I noticed that several people kept asking me what I do for a living. After some conversation, I realized that the hotel was hosting a Mike Ferry real estate training event. The number one question I was asked that day was, "How can I get my site to show up on search engines for my local area?" By following the above advice you can help your website's ranking on many search engines.
All of this information will make a difference in the way your website is indexed on search engines that are Keyword driven. I would love to get you started with making these changes. If you need any help, you can e-mail me at david@webpositionadvisor.com
A couple of hours with me may mean a couple of new patients to you. You will need a slightly different strategy on some of the other Search Directories.
--------------------------------------------------------------------------------------------
David Dimas is an off-site Internet manager in Ventura County, California. He is a widely read author of Internet tutorials. His website contains 30 tutorials on search engines, Internet advertising, e-zines, press releases, and other "how to" topics in Internet marketing. He is also an approved instructor at UCSB Extension in Santa Barbara, California, for Internet Advertising 101 and Search Engine 101.
===============================================================
Let WebSite 101 Gain Top Listings For Your Business Web Site!
Search Engine Marketing: Special Reports from Page Zero Unleash Amazing Profits with Google AdWords Select! You advertise your product, service, or cause online. You've decided to pay for targeted traffic on a "pay per click" basis. And now you're considering Google AdWords Select. Great decision. But if you don't use the techniques taught in this special report, you could cost yourself a fortune. Use it right, and you'll clean up.
Over 80 percent of Internet users use Google to find products and services on the World Wide Web. It stands to reason that if you want to generate more qualified sales leads you need to rank on the first or second page of the search engine results pages on Google.
As Google gets more and more sophisticated with each algorithm update, search engine optimisation methods have to change to keep getting good results.
Here is a list of ten important tactics that you can use to improve your Google rankings.
1. Use the right keyword phrase. The most important factor in getting quality web traffic is to optimize your web pages for the correct keyword phrase. Getting the most sales is often a balance between search volume for keywords and level of competition. Use the Overture keyword tool and the Google AdWords tool to research and identify your best keyword phrases. Look for keyword phrases that have high search volume but low competition. It also pays to see what keywords your successful competitors are optimising for.
2. Put your keyword phrase in your title tag. Forget about putting your company name in the title tag (unless it is a valuable part of the keywords); it's a waste of words and will not help you rank in the search engines. The first three or four words in your title tag should be the keyword phrase you are trying to optimize for. Try to limit your total character count to 60 characters.
3. Write a compelling description tag. Title tags are for search engines; description tags are for people. Spend time researching and writing a description tag that compels the reader to click on your listing. Come up with a great offer and use action words. Use shock and awe. Use anything that will get them to click.
4. Use H1 tags. An often neglected search engine optimisation technique is to put your keyword phrase in H1 tags on your page. This tells Google that the following text is about your keyword phrase. Google weights H1 tags nearly as highly as title tags, so this tip alone can drastically improve your Google rankings. Similarly, the bold and strong HTML tags can emphasize a keyword phrase within paragraph text where it may not be appropriate to use H1 or H2 tags. (A combined example of tips 2, 3 and 4 appears after this list.)
5. Get the right keyword ratio. Aim for a keyword density of around 3-5% of the page contents. Try to work your keyword phrase into the page so that it reads naturally. This may take some research and analysis of successful web sites. Remember to use headings and sub-headings that include your keywords.
6. Use alt tags and description tags on images. Keywords used in image file names, alt tags and description tags add to the keyword density of a web page. They don't make a huge difference, but every little bit helps. In some product categories a Google image search may lead web surfers to your web site.
7. Get quality one-way incoming links through article marketing and directory submissions. As far as Google is concerned, link exchanges are just about dead. They can help get your web site spidered and indexed more quickly, but these days they add very little in terms of Google search engine rankings unless the link is on a trusted site and that site has excellent PageRank. Wikipedia and DMOZ are examples of such trusted sites.
A more effective approach is to write interesting and informative articles and submit them to article directories. Make sure that you use the author bio/resource box to maximum advantage by using your keyword phrase in the link anchor text, and by pointing the anchor text to the correct page. Your home page may not be the best choice of page for your selected key phrase. This will also ensure that more pages than just your home page get indexed.
8. Use social bookmarking. If you have something newsworthy, humorous, quirky, unique or shocking to say, submit a link to your web page to sites like digg.com or reddit.com. These up-and-coming web 2.0 powerhouses can create a buzz overnight, driving thousands of interested visitors to your web site.
Note: just because you build it doesn't mean they will come. If it's a boring link, it will quickly get buried by newer and more interesting stories. But what the hell, it's free to submit links to these sites and you never know your luck.
9. Record a podcast. Google loves podcasts ... it just loves them. Get yourself a decent microphone and some podcasting software like Audacity and go for it. Once you have an interesting and tightly edited podcast, put it on your web site and submit it to all the major podcasting directories like iTunes and Podomatic.
10. Sign up for Google AdWords. Although Google will deny it, anecdotal evidence suggests that Google favors AdWords customers over non-AdWords customers. This is especially true for new web sites, which often take months to get spidered and indexed by Google. I have seen brand new sites get spidered, indexed and listed in the search engine results pages in as little as a week.
It doesn't have to cost a lot either. You can set up a low-budget campaign of 50 cents to one dollar a day and still get favorable treatment.
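As a quick sketch of how tips 2, 3 and 4 above fit together on a single page (the keyword phrase, wording and offer are made up for illustration):

<head>
  <title>Ventura County Chiropractor - Gentle Back and Neck Pain Relief</title>
  <meta name="description" content="Looking for a Ventura County chiropractor? Gentle adjustments, same-week appointments and a free first consultation. Call today.">
</head>
<body>
  <h1>Ventura County Chiropractor</h1>
  <p>Our <strong>Ventura County chiropractor</strong> office focuses on gentle, effective care ...</p>
</body>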
Tuesday, 9 October 2007
How Keyword Density, Frequency, Prominence And Proximity Affects Search Engine Rankings
In this article, I explain the difference between keyword density, frequency, prominence and proximity, and how they affect search engine rankings.
Keyword Density
Keyword density refers to the ratio (percentage) of keywords contained within the total number of indexable words within a web page.
The preferred keyword density ratio varies from search engine to search engine. In general, I recommend using a keyword density ratio in the range of 2-8%.
You may like to use this real-time keyword analysis tool to help you optimize a web page's keyword density ratio.
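As a quick worked example (the numbers are made up): a page with 400 indexable words that uses the keyword "chiropractor" 12 times has a keyword density of 12 ÷ 400 = 3%, which sits comfortably inside the 2-8% range.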
Keyword Frequency
Keyword frequency refers to the number of times a keyword or keyword phrase appears within a web page.
The theory is that the more times a keyword or keyword phrase appears within a web page, the more relevance a search engine is likely to give the page for a search with those keywords.
In general, I recommend that you ensure that the most important keyword or keyword phrase is the most frequently used one in a web page.
But be careful not to abuse the system by repeating the same keyword or keyword phrases over and over again.
Now that you all know the importance of keywords, it's time to know what "keyword frequency" does to the benefit, or in most cases the detriment, of a particular site. Sad but painfully true: most of us lose out on this account and end up getting nowhere. So, I will explain it in detail for your benefit.
One of the methods that search engines use to determine relevance for a keyword is by counting keyword frequency. This is not like keyword density where a ratio is taken but rather an absolute count of the number of occurrences of your primary keyword or phrase. There are a couple of rules to remember about counting keyword frequency.
First, since keyword frequency is an absolute count it does not matter how many words are on the page, the total number of occurrences will remain the same. So if the term occurs seven times this number will be the same if you have 100 or 1000 words on your page.
Second, counting keyword frequency helps determine relevance as well as spam. What this means is that if your keyword occurs 70 times in 100 words, your page will most likely be considered spam. If your keyword occurs only once in 100 words, more than likely your page will not be considered relevant and will rank lower in the search results, or not at all.
If your content is well written and targeted toward a specific theme your page will most likely contain the correct number of occurrences of a term. With well thought out content this is natural. In most cases, appropriate content will not require counting keyword frequency to ensure correct distribution.
Keyword Prominence
Keyword prominence refers to how prominent keywords are within a web page.
The general recommendation is to place important keywords at, or near, the start of a web page, sentence, TITLE or META tag.
After going through various aspects of keywords, I would like to finally touch upon the significant role of keyword prominence. This is crucial for the ranking of a web page; it goes a long way toward determining whether the page lands in the upper tier or the lower one. So, read on for better rankings for your sites.
One of the main factors affecting keyword ranking in SEO is keyword prominence. This refers to the location of your chosen keyword in Meta tags and body copy. No webmaster or SEO should underestimate the importance of keyword prominence. Along with selection and density, this is one of the primary considerations for SEO.
Location of keywords in tags and body copy affects the importance of keyword prominence. If the keyword or key phrase occurs at the beginning of a sentence, paragraph, or body copy then more importance is placed on that keyword.
For Meta tags, more weight is placed on the first word or term. The importance of keyword prominence in Meta tags should not be underestimated. The primary term should come at the beginning of the title, description, and keyword tags.
The importance of keyword prominence in body copy is just as significant. When placing keywords in the body of a page they should occur in the beginning, middle, and end of the body copy. These are especially important areas. The search engines will place more weight on the beginning and end, and the keyword should be supported in the middle of the copy.
After you have gone through all these little facts about the keywords, I’m sure that you’re going to benefit a lot. So, just keep visiting this site for more such useful tips on a range of topics.
Keyword Proximity
Keyword proximity refers to the closeness between two or more keywords. In general, the closer the keywords are, the better.
For example:
How Keyword Density Affects Search Engine Rankings
How Keyword Density Affects Rankings In Search Engines
Using the example above, if someone searched for "search engine rankings," a web page containing the first sentence is more likely to rank higher than the second.
The reason is that the keywords are placed closer together. This assumes that everything else is equal, of course.
Importance of Keywords
November 13th, 2007
If you want to promote your business online, it's not enough to increase traffic – you also need to attract site visitors who are likely to become customers. The use of appropriate keywords is one way to achieve that aim, and you can put keywords to work in several ways.
First, you can purchase keywords and then monitor how they perform. Essentially, the major search engines allow you to bid on terms relevant to your business. When a user searches one of these terms, your link will appear higher or lower on the sponsored list, depending on whether you're the highest bidder. Each time a user clicks on your sponsored link, you pay the amount that you have bid.
Of course, the more generic the keyword, the more expensive it will be. But don’t fret – as it turns out, more specific (and less expensive) keywords can be the most valuable for local businesses.
However, this is obviously going to cost you a little, so here is another way to maximize your profits without investing a single penny: regularly check the various search engines for the priority they give to different words. This helps because you will know which kinds of topics are in demand.
A well-known saying among online marketers is that “content is king.” There’s a reason for that. As online search becomes a bigger and more lucrative business, the algorithms used to determine rankings have grown more complicated, and the extent to which content relevance is analyzed has grown. To complicate matters further, each search engine uses its own formula.
So, with the second method you will be able to maximize the amount of traffic to your site by filling it with well-crafted, keyword-rich content. As you might expect, the best overall online marketing strategy includes both paid and organic keyword efforts. You should invest in the promotion of your site through pay-per-click management, while also improving the organic ranking of your site by way of keyword-rich content. It takes a little work, but the payoff can be tremendous.
Sunday, 7 October 2007
Search Engine Success - What is Link Popularity?
If you are like most Internet marketers, the biggest problem you face is bringing potential customers to your site. It does not really matter how great your Web site is, if no one sees it, your online business will not be successful.
Perhaps you have tried many different marketing techniques, but have not seen an appreciable increase in traffic. You probably also submitted your site to the various search engines, but have you considered how the search engines truly work and how their users would actually find you?
This is a rather complex topic. The top search engines use complicated algorithms and criteria to rank Web sites in their searches. However, one such measure used by the leading search engines is link popularity.
So, what exactly is link popularity? Put quite simply, link popularity describes the quantity and quality of other Web sites that link back to your Web site. It serves as a basic measure of a given site’s popularity. Generally speaking, the higher your Web site’s link popularity, the higher its ranking among the search engines. In addition, they usually assign higher scores to higher quality links.
Before getting into exactly how to increase your site’s link popularity, first I need to talk about how search engines work. To locate information stored on the millions of Web pages, search engines employ software robots, called spiders, to build lists of key words and phrases found on the Internet. The process spiders use to build these lists is called web crawling. Each search engine works a little differently, but the initial process is generally the same. They usually start with the most heavily trafficked servers and most popular Web pages. The spider then follows every link found on the sites. The information is then indexed so that search engine users may retrieve it later.
What can I do to increase my Web site’s link popularity?
1. Reciprocal links
One way to increase the number of links to your site is to develop relationships with other webmasters and arrange to simply "trade" links with them. Contact the owners of businesses that complement your own and offer to put their link on your site if they will put your link on theirs. Be sure to explain how a link to your site will benefit their customers. Send the link to them as HTML code, as in the sketch after this list, and never send your link as an image unless they will also include a text description with it. Not every webmaster you contact will agree to link to your site, and you may have to approach many webmasters before you see any significant results.
2. Link exchange services
Another way to generate links is to register with a link exchange service. When you register with a link exchange service, they add your link to the links page that other members place on their Web sites. Most of the link exchange services are free, but you should be careful that the links are relevant to your own site content. Otherwise, you may find your site being dropped by the big search engines, since they have been known to ban sites using “link farms” because they artificially increase a link’s popularity.
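Going back to reciprocal links, the HTML snippet you send a partner might look something like this (the URL and wording are illustrative); it gives them a ready-made text link with a description rather than an image:

<a href="http://www.example.com/">Example Widgets - handmade widgets, widget repair tips and free how-to guides</a>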
Thursday, 4 October 2007
Web site Promotion Experts
Glossary of Terms
In Internet marketing, a lot of strange-sounding terms like "viral marketing" get thrown around. Here are our definitions of a few of the words we use, to help you understand what we are talking about:
Affiliate Programs - Affiliate programs enable affiliates to leverage their traffic and customer base in order to profit from e-commerce while merchants benefit from increased exposure and sales.
Algorithm - In the context of search engines, it is the mathematical programming system used to determine which web pages are displayed in search results.
Co-branding – This is a system that provides your website's content with the look of your partner’s website creating a seamless transition for the visitor.
Directory - A directory is a web site that focuses on listing web sites by individual topics; it is a quasi table of contents. A search engine lists pages, where a Directory (such as Looksmart or The Open Directory Project) lists websites.
E-mail Campaign - These campaigns contain appealing content concerning your product, and are targeted at a specific market.
Hits - A request for a file on a web server. Most often these are graphic files and documents.
Keyword - A singular word or phrase that is typed into a search engine search query. Keyword mainly refers to popular words which relate to any one website. For example, a web site about real estate could focus on keywords such as House, or phrases such as Home for Sale.
Link Exchange - When two websites swap links to point at each other.
Link Popularity - A count of the number of links pointing (inbound links) at a website. Many search engines now count linkage in their algorithms.
META Tags - Author generated source code that is placed in the header section of an HTML document. Current popular meta tags that can affect search engine rankings are keywords and description. The meta KEYWORDS tag is used to group a series of words that relate to a website. These tags can be used by search engines to classify pages for searches. The meta DESCRIPTION is used to describe the document. The meta description is then displayed in search engine results.
Off-line Promotion – This refers to the marketing and promotion of your site in such traditional manners as networking, print advertising, media, event sponsorship, and merchandising.
Paid Placement - A paid placement search engine charges websites on a per visitor basis.
Qualified Traffic – Visitors who are specifically seeking websites with content such as yours.
Referral Program – Referring a customer to your website in a manner outside the realm of the Internet.
Return on Investment - In relation to search engine advertising, it often refers to sales per lead.
Robot - A program that automatically does "some action" without user intervention. In the context of search engines, it usually refers to a program that mimics a browser to download web pages automatically. A spider is a type of robot. See also: Spiders.
Search Engine - A program designed to search a database. In the context of the Internet this refers to a web site that contains a database of information from other websites.
Search Engine Submission - A service that will automatically submit your pages or website to many search engines at once.
Site Optimization – This is the act of creating a page that is specifically intended to rank well on search engines. Basic optimization includes making sure that your META tags are narrowly defined for your site, your robots.txt file is in order (see the short example after this glossary), your keywords are optimized for your site, and the structure of your pages meets the various requirements of search engines and spiders.
Spiders - The main program used by search engines to retrieve web pages to include in their database. See also: Robot.
Traffic - A reference to the number of visitors a web site receives.
Unique Visitor - A single individual website visitor. Visitors (or users) can visit multiple pages within a site. Unique users are important because it is an indication of success of a website. If you have high visitor counts, but relatively low page per user counts, that indicates that people are not finding your site attractive enough to sit and read through it. On the other hand, if you have low visitor counts and very high page per user counts, that is an indication your site is providing good information to people and you should do a better job of promotion. High page per user counts indicate good site potential, while low page per user counts indicate you need to rework the site with more content or better displays.
Viral Marketing - Viral marketing is the extremely powerful and unique ability of the Internet to build self-propagating visitor streams, bringing about exponential growth to a company's Web site. This can consist of such things as affiliate programs, co-branding, link exchanges, e-mail campaigns, and off-line promotion.
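A quick note on the robots.txt file mentioned under Site Optimization above: having it "in order" usually just means keeping a small, valid file at the root of your site. A minimal sketch, assuming you want every well-behaved crawler to index everything except one private directory:

User-agent: *
Disallow: /private/

An empty Disallow line would instead tell crawlers that nothing on the site is off limits.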
Q: What is site optimization?
A: Site optimization is the change of a web site's coding and content to match the requirements of a search engine algorithm. This is necessary for the website to be prominently positioned within the results ranking for a given keyword or search phrase.
Q: What is an algorithm?
A: An algorithm is a list of requirements used by search engines to determine the relevancy of a website for a given search phrase or keyword.
Q: Why can’t I use an off-the-shelf program to do my optimization?
A: Search engines are dynamic. They constantly change their algorithm structures and requirements throughout the year, sometimes monthly. It is unlikely that a program written even six months ago will have the most current criteria necessary to get your website listed. Also, depending on the algorithm used, some submittal/optimization programs violate the acceptable use policies of the search engines and can actually get your website banned or permanently dropped from the results index.
Q: How do you analyze and handle changes in search engine ranking algorithms?
A: Our software maintains an ongoing log of changes within the search engines as it relates to their algorithm requirements. These changes are then compared against our database of current optimized pages for your site. If necessary, the content of your pages is re-optimized based upon new algorithm requirements, and your pages are re-submitted to these search engines.
Q: How closely do you work with the search engines?
A: SiteNexus has several partnerships / affiliate agreements in place with the major search engines that allow us to provide paid inclusion and direct input of client content within the search engine indexes.
Q: Why should I submit to search engines and directories?
A: Research has shown that 85% of all website traffic comes from search engines. Having your website listed on all of the major search engines is critical to the success of your business.
Q: How long does it take for the search engines to list my site?
A: It is entirely up to each search engine or directory. A few search engines index a site almost immediately, but some engines can take 6-12 weeks or longer.
Q: What is a meta tag?
A: META tags are used by some search engines to determine the ranking of your site in their search results. Therefore, META tags should be optimized in order to obtain a higher search engine ranking.
Q: Do I need meta tags?
A: Yes. The most important "tag" is the title, followed by the description "tag". If these tags are optimized, there is a much greater chance of customers finding your site on search engines.
Q: Which search engines and directories do you submit to?
A: If you choose our Premium Submission Service, we submit your site to over 30 major search engines and directories. These are the ones that will generate traffic to your site. No junk, just real search engines and directories.
Q: Why isn't my site listed on the search engines?
A: This occurs for a variety of reasons. Many search engine and directory sites require between 6 weeks to 6 months to list a site.
Q: Parts of my site are "under construction". Can I still submit it to the search engines?
A: We recommend that you do not submit sites with "under construction" pages, most search engines and directories will not accept these sites.
Q: What is Paid Placement?
A: Paid placement is a model where the person placing the highest per-click price for a keyword achieves the highest placement or ranking. Users can achieve a high ranking on most major engines within the very same day versus waiting for weeks or months with the regular search engines. You will also receive a guaranteed placement on every keyword you choose without needing to tweak the content of your pages. You'll then keep your ranking until someone outbids you. When this happens, your listing will be pushed down a notch.
Q: What Are the Benefits of Paid Placement?
A: The major benefit of paid placement is that it is highly targeted advertising. You are guaranteed to be near the top of the search results for keywords chosen by you.
Q: What is Viral Marketing?
A: Viral Marketing is the extremely powerful and unique ability of the Internet to build self-propagating visitor streams, bringing about exponential growth to a company's Web site.
Q: What does Viral Marketing consist of?
A: Components of viral marketing can consist of affiliate programs, co-branding, link exchanges, e-mail campaigns, and off-line promotion.
Q: What are the benefits of a Viral Marketing program?
A: The major benefit of a viral marketing program is it's a cost-effective means of reaching new prospective customers or members. This occurs by current customers or members sharing your idea or campaign with other similar people with whom they have a trusting relationship.
Q: I don’t want to keep paying outside companies for web promotion services. Can you train my employees on how to do this?
A: Yes. We can provide customized training to your employees in every area of website promotion. Train at our office, or your place of business.
Q: We have a very unique product that we need to promote to a very specialized customer group. Can you help us to create a marketing plan for our company?
A: Yes. We have helped dozens of companies by creating a unique marketing plan for their individual needs.
In Internet marketing, a lot of strange-sounding terms like "viral marketing" get thrown around. Here are our definitions of a few of the words we use, to help you understand what we are talking about:
Affiliate Programs - Affiliate programs enable affiliates to leverage their traffic and customer base in order to profit from e-commerce while merchants benefit from increased exposure and sales.
Algorithm - In the context of search engines, it is the mathematical programming system used to determine which web pages are displayed in search results.
Co-branding – This is a system that gives your website's content the look of your partner's website, creating a seamless transition for the visitor.
Directory - A directory is a web site that focuses on listing web sites by individual topic; it is a quasi-table of contents. A search engine lists pages, whereas a directory (such as Looksmart or The Open Directory Project) lists websites.
E-mail Campaign - These campaigns contain appealing content concerning your product, and are targeted at a specific market.
Hits - A request for a file on a web server. Most often these are graphic files and documents; note that a single page view can generate several hits.
Keyword - A single word or phrase typed into a search engine query. Keyword mainly refers to the popular words that relate to a particular website. For example, a web site about real estate could focus on keywords such as House, or phrases such as Home for Sale.
Link Exchange - When two websites swap links to point at each other.
Link Popularity - A count of the number of links (inbound links) pointing at a website. Many search engines now factor link popularity into their algorithms.
META Tags - Author-generated source code that is placed in the header section of an HTML document. Current popular meta tags that can affect search engine rankings are keywords and description. The meta KEYWORDS tag is used to group a series of words that relate to a website; these can be used by search engines to classify pages for searches. The meta DESCRIPTION tag is used to describe the document and is often displayed in search engine results.
Off-line Promotion – This refers to the marketing and promotion of your site through such traditional channels as networking, print advertising, media, event sponsorship, and merchandising.
Paid Placement - A paid placement search engine charges websites on a per-visitor (per-click) basis.
Qualified Traffic – Visitors who are specifically seeking websites with content such as yours.
Referral Program – Referring a customer to your website through channels outside the Internet.
Return on Investment - In relation to search engine advertising, it often refers to sales per lead.
Robot - A program that automatically performs some action without user intervention. In the context of search engines, it usually refers to a program that mimics a browser to download web pages automatically. A spider is a type of robot. See also: Spiders.
Search Engine - A program designed to search a database. In the context of the Internet, this refers to a website that maintains a database of information gathered from other websites.
Search Engine Submission - A service that will automatically submit your pages or website to many search engines at once.
Site Optimization – This is the act of creating a page that is specifically intended to rank well on search engines. Basic optimization includes making sure that your META tags are narrowly defined for your site, your robots.txt file is in order (a robots.txt check is sketched after these definitions), your keywords are optimized for your site, and the structure of your pages meets the various requirements of search engines and spiders.
Spiders - The main program used by search engines to retrieve web pages to include in their database. See also: Robot.
Traffic - A reference to the number of visitors a web site receives.
Unique Visitor - A single individual website visitor. Visitors (or users) can view multiple pages within a site. Unique-visitor counts matter because they indicate how well a website is doing. High visitor counts with relatively low pages-per-visitor suggest that people are not finding your site engaging enough to sit and read through it; low visitor counts with very high pages-per-visitor suggest your site is providing good information and you should do a better job of promotion. In short, high pages-per-visitor counts indicate good site potential, while low counts indicate you need to rework the site with more content or better displays (see the log-analysis sketch after these definitions).
Viral Marketing - Viral marketing is the extremely powerful and unique ability of the Internet to build self-propagating visitor streams, bringing about exponential growth to a company's Web site. This can consist of such things as affiliate programs, co-branding, link exchanges, e-mail campaigns, and off-line promotion.
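To make the Hits, Unique Visitor, and pages-per-visitor figures above concrete, here is a minimal Python sketch that tallies them from a simplified, made-up access log. Real server logs carry more fields, and real analytics packages also use cookies or sessions rather than raw IP addresses, so treat this only as an illustration of the arithmetic:

```python
# Minimal sketch: counting hits, unique visitors, and pages per visitor
# from a hypothetical simplified access log where each line is
# "ip_address requested_path" (not a real server's log format).
from collections import defaultdict

sample_log = """\
203.0.113.7 /index.html
203.0.113.7 /style.css
203.0.113.7 /products.html
198.51.100.2 /index.html
203.0.113.9 /index.html
203.0.113.9 /contact.html
"""

hits = 0                          # every requested file counts as a hit
pages_by_visitor = defaultdict(set)

for line in sample_log.splitlines():
    ip, path = line.split()
    hits += 1
    # Only count HTML documents as page views; images, CSS, etc. still add hits.
    if path.endswith(".html"):
        pages_by_visitor[ip].add(path)

unique_visitors = len(pages_by_visitor)
page_views = sum(len(pages) for pages in pages_by_visitor.values())

print(f"hits: {hits}")
print(f"unique visitors: {unique_visitors}")
print(f"pages per visitor: {page_views / unique_visitors:.2f}")
```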
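And since the Robot, Spiders, and Site Optimization entries above all touch on crawling and robots.txt, here is a minimal sketch of the robots.txt check a well-behaved spider performs before downloading a page. The robots.txt content, URLs, and user-agent name are invented for illustration:

```python
# Minimal sketch of the robots.txt check a well-behaved spider performs
# before downloading a page. A real crawler would fetch robots.txt from
# the target site; here it is supplied as a string.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
Disallow: /under-construction/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

for url in ("http://www.example.com/index.html",
            "http://www.example.com/private/notes.html"):
    if rp.can_fetch("MySpider", url):
        # A real robot would now download the page, e.g. with urllib.request,
        # and pass the HTML on to the search engine's indexer.
        print(f"allowed:  {url}")
    else:
        print(f"blocked:  {url}")
```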
Q: What is site optimization?
A: Site optimization is the adjustment of a website's code and content to match the requirements of a search engine's algorithm. This is necessary for the website to be prominently positioned within the results for a given keyword or search phrase.
Q: What is an algorithm?
A: An algorithm is the set of rules a search engine uses to determine the relevancy of a website for a given search phrase or keyword.
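As a toy illustration only (no real engine publishes its algorithm, and the weights below are invented), the idea of scoring a page's relevancy for a keyword can be sketched like this:

```python
# Toy illustration of a relevancy "algorithm": a made-up scoring rule that
# weights where a keyword appears. Real search engine algorithms are far
# more complex (and secret); the weights here are arbitrary.
def relevancy_score(page, keyword):
    kw = keyword.lower()
    score = 0
    if kw in page["title"].lower():
        score += 10                                  # title match weighted most heavily
    score += 2 * page["body"].lower().count(kw)      # body occurrences (frequency)
    if kw in page["url"].lower():
        score += 3                                   # keyword in the URL
    return score

pages = [
    {"url": "http://example.com/homes-for-sale",
     "title": "Homes for Sale", "body": "Browse homes for sale in your area."},
    {"url": "http://example.com/about",
     "title": "About Us", "body": "We are a real estate agency."},
]

# Rank the sample pages for the keyword "homes", highest score first.
for page in sorted(pages, key=lambda p: relevancy_score(p, "homes"), reverse=True):
    print(relevancy_score(page, "homes"), page["url"])
```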
Q: Why can’t I use an off-the-shelf program to do my optimization?
A: Search engines are dynamic. They change their algorithms and requirements throughout the year, sometimes monthly. It is unlikely that a program written even six months ago will target the criteria currently needed to get your website listed. Also, depending on the methods used, some submittal/optimization programs violate the acceptable use policies of the search engines and can actually get your website banned or permanently dropped from the results index.
Q: How do you analyze and handle changes in search engine ranking algorithms?
A: Our software maintains an ongoing log of changes to the search engines' algorithm requirements. These changes are then compared against our database of currently optimized pages for your site. If necessary, the content of your pages is re-optimized based upon the new requirements, and your pages are re-submitted to the search engines.
Q: How closely do you work with the search engines?
A: SiteNexus has several partnerships / affiliate agreements in place with the major search engines that allow us to provide paid inclusion and direct input of client content within the search engine indexes.
Q: Why should I submit to search engines and directories?
A: Research has shown that 85% of all website traffic comes from search engines. Having your website listed on all of the major search engines is critical to the success of your business.
Q: How long does it take for the search engines to list my site?
A: It is entirely up to each search engine or directory. A few search engines index a site almost immediately, but some engines can take 6-12 weeks or longer.
Q: What is a meta tag?
A: A META tag is a snippet of code placed in the header section of a page to describe its content. META tags are used by some search engines to determine the ranking of your site in their search results, so they should be optimized in order to obtain a higher ranking.
Q: Do I need meta tags?
A: Yes. The most important tag is the title (strictly speaking not a META tag, but the most heavily weighted), followed by the description tag. If these tags are optimized, there is a much greater chance of customers finding your site on search engines.
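To see what a search engine's crawler actually finds in your head section, here is a minimal sketch that extracts the title, description, and keywords from a made-up page using Python's standard html.parser:

```python
# Minimal sketch: pulling the title, description, and keywords out of a
# page's <head> with Python's standard html.parser, to check what a
# search engine would see. The HTML below is an invented example page.
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and "name" in attrs:
            self.meta[attrs["name"].lower()] = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

html = """<html><head>
<title>Homes for Sale | Example Realty</title>
<meta name="description" content="Search homes for sale listed by Example Realty.">
<meta name="keywords" content="homes for sale, real estate, house">
</head><body>...</body></html>"""

parser = MetaExtractor()
parser.feed(html)
print("title:      ", parser.title)
print("description:", parser.meta.get("description"))
print("keywords:   ", parser.meta.get("keywords"))
```

Running it prints the three values discussed above: the title, the description, and the keywords list.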
Q: Which search engines and directories do you submit to?
A: If you choose our Premium Submission Service, we submit your site to over 30 major search engines and directories. These are the ones that will generate traffic to your site. No junk, just real search engines and directories.
Q: Why isn't my site listed on the search engines?
A: This occurs for a variety of reasons. Many search engines and directory sites require between 6 weeks and 6 months to list a site.
Q: Parts of my site are "under construction". Can I still submit it to the search engines?
A: We recommend that you do not submit sites with "under construction" pages; most search engines and directories will not accept these sites.
Q: What is Paid Placement?
A: Paid placement is a model where the advertiser bidding the highest per-click price for a keyword achieves the highest placement or ranking. You can achieve a high ranking on most major engines within the very same day, versus waiting weeks or months with the regular search engines. You also receive guaranteed placement on every keyword you choose without needing to tweak the content of your pages. You then keep your ranking until someone outbids you; when this happens, your listing is pushed down a notch.
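The bidding mechanics described above can be sketched in a few lines; the sites and per-click bids below are invented purely for illustration:

```python
# Toy sketch of how a paid-placement auction orders listings: whoever bids
# the highest per-click price for a keyword appears first, and being outbid
# pushes a listing down a notch. All sites and bids here are invented.
bids = {
    "example-realty.com": 0.45,   # dollars per click for the keyword "homes for sale"
    "acme-homes.com":     0.62,
    "cheap-houses.net":   0.30,
}

def ranking(bids):
    # Highest bid first; that advertiser gets the top placement.
    return sorted(bids.items(), key=lambda item: item[1], reverse=True)

for position, (site, bid) in enumerate(ranking(bids), start=1):
    print(f"{position}. {site} (${bid:.2f} per click)")

# A competitor outbidding the current leader takes over the top spot.
bids["example-realty.com"] = 0.70
print(ranking(bids)[0])
```

Raising a bid above the current leader's immediately moves that listing to the top, which is exactly the "outbid" behaviour described in the answer.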
Q: What Are the Benefits of Paid Placement?
A: The major benefit of paid placement is that it is highly targeted advertising. You are guaranteed to be near the top of the search results for keywords chosen by you.
Q: What is Viral Marketing?
A: Viral Marketing is the extremely powerful and unique ability of the Internet to build self-propagating visitor streams, bringing about exponential growth to a company's Web site.
Q: What does Viral Marketing consist of?
A: Components of a viral marketing program include affiliate programs, co-branding, link exchanges, e-mail campaigns, and off-line promotion.
Q: What are the benefits of a Viral Marketing program?
A: The major benefit of a viral marketing program is that it is a cost-effective means of reaching new prospective customers or members. This occurs when current customers or members share your idea or campaign with like-minded people with whom they have a trusting relationship.
Q: I don’t want to keep paying outside companies for web promotion services. Can you train my employees on how to do this?
A: Yes. We can provide customized training to your employees in every area of website promotion. Training can take place at our office or at your place of business.
Q: We have a unique product that we need to promote to a very specialized customer group. Can you help us create a marketing plan for our company?
A: Yes. We have helped dozens of companies by creating unique marketing plans for their individual needs.