Notes, Tips, Questions, & Answers...
aka, topics too short for an article, but too important to leave out!
Question Topics

- Q&A - What's with Google's minus-30 penalty? ...and how can I undo it?
- Q&A - How long does Google take to remove a site after receiving a spam report?
- Q&A - Are links from international versions of .edu sites also good for rankings?
- Q&A - What's the advantage of a Google Sitemap over a regular HTML sitemap?
- Q&A - Are programs that automatically generate articles safe to use on my site?
- Q&A - Is there any way a new site can compete with an old one?
Q&A - What's with Google's minus-30 penalty? ...and how can I undo it?
Answer: You're experiencing a textbook case of a bizarre phenomenon that has cropped up recently in Google: the minus-30 ranking penalty. Sites hit with the -30 penalty often appear in Google just as they did before being penalized—with Sitelinks and indented double listings intact—except that for every keyword they previously held position #1 for, they now rank at exactly #31. At first it seemed like a strange coincidence that pages were being dropped exactly 30 spots, but as more and more sites showed up with this exact penalty, it became clear that Google is doing it intentionally.

Sites experiencing this penalty are typically ones Google would consider "good"—real sites selling actual products or services that provide value to their users and hold top positions for several keywords—but they've committed some infraction in Google's eyes. In some cases they appear to have duplicated too many pages on their site; in others they have too many of the wrong kinds of links. Either way, Google doesn't consider the infraction serious enough for a complete ban or a major penalty. Instead, Google seems to be sending the site a message that its act is looking a bit shady and needs some brightening up.

Avoiding the -30 penalty involves doing the same things you would do to avoid any other Google penalty: play by the rules, avoid duplicating large parts of your site, and strive to get relevant and legitimate incoming links. As for undoing a -30 penalty, that can take a while—these penalties are becoming more and more challenging to get lifted. Regardless, here's a checklist of strategies to speed up getting the penalty lifted:
At least the -30 penalty is less ambiguous than Google's other penalties. This one tells us for sure that a site is being penalized, while the other penalties are less exact, making it hard to determine whether a drop in rankings is the result of a penalty, better optimization by your competition, or Google simply changing its ranking algorithm. With the -30 penalty, Google is sending a direct message: "Yes, you have been penalized, and it's going to stay that way until you purge your site and its link structure of over-optimization tactics."
Q&A - How long does Google take to remove a site after receiving a spam report?
Answer: Google may never actually get around to following through on that spam report. We've seen people submit spam reports on sites that were complete spam, and Google never penalized or banned the sites being reported. You have to remember that Google almost certainly receives a massive number of spam reports every day—some legitimate, others filed by people simply trying to harm their competition. Each has to be evaluated, and it can take quite a while before someone actually follows up on your report. Even then, Google rarely removes a site from its index manually; it prefers to handle the problem automatically by adjusting its ranking and spam-detection algorithms. Most likely their team has reviewed the pages you reported. However, unless the offense is exceptionally bad, or high-profile enough to be embarrassing to Google, they will most likely address it in the engine's algorithm rather than by manually editing their search results.
Q&A - Are links from international versions of .edu sites also good for rankings?
Answer: Absolutely. No one is really sure whether the rankings boost from .edu links comes from Google placing more trust in them because they are exclusive and hard to obtain, or simply from the fact that educational domains tend to be older sites with their own set of really good incoming links. Either way, the results are clear: links from educational domains are great for rankings. In the US, institutionally accredited universities (and similar postsecondary institutions) can use the top-level domain (TLD) .edu. This domain is reserved for use only by schools, so only a very small group of people can get their hands on one. That exclusivity makes it harder to fake links from these domains, and Google does appear to place a greater level of trust in them (here are the rules on who can get a .edu domain). The same applies to educational domains in countries other than the US, although those countries almost always use different TLDs. For example, schools in the UK use .ac.uk.
These country-specific educational domains are also restricted in their respective countries. Some countries, like Australia (.edu.au) and China (.edu.cn), use a variation on .edu, while countries like Germany and Italy just use the country's regular TLD but modify the domain name to indicate that it belongs to a university (uni-erfurt.de and unibo.it, for example). And a few countries, like Canada, don't give universities any kind of special domain at all. Still, even where a school doesn't get its own special TLD, universities tend to have old, high-quality sites with excellent inbound links, and they are superb places to get a link from whenever you can. Here are some good tips on attracting .edu links to your site. There's also the .k12 namespace used by many US primary and secondary schools, which follows the format schoolname.k12.[state].us.
Q&A - What's the advantage of a Google Sitemap over a regular HTML sitemap?
Answer: Your site should use both an onsite HTML sitemap and a Google XML Sitemap. Both are an important part of helping your site get, and stay, indexed by search engines.

The regular HTML sitemap is just a webpage that links to all the other pages on your website. It makes sure that any search engine crawling your site can easily and quickly find and index all of your pages. We cover the HTML sitemap strategy extensively in Chapter Two of our Unfair Advantage book.

The Google XML Sitemap is a format Google designed (and which Yahoo and Microsoft have recently agreed to also use) that allows you to feed a range of information about your site directly to Google, Yahoo, and Microsoft. This information includes not just the location of your pages, but also their importance relative to the other pages on your site, how often those pages should be indexed, and more.

To maximize your site's crawlability, we recommend you take full advantage of both. It's also important to recognize the role that good incoming links play in keeping your site well indexed—links are the number one factor in getting search engines to index all your pages on a regular basis. Furthermore, neither the HTML sitemap nor the Google XML Sitemap plays any role in where your pages will rank. They are simply a means of getting your pages into a search engine's index; where your pages rank depends on your incoming links and other optimization factors.
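To make the XML format concrete, here is a minimal sketch in Python that writes a three-page sitemap.xml following the sitemaps.org protocol. The URLs, dates, change frequencies, and priorities are placeholders you would swap for your own pages:

```python
# Minimal sketch: build a Google XML Sitemap (sitemaps.org protocol) for a
# few pages. All URLs and values below are placeholders.
urls = [
    # (location, last modified, change frequency, relative priority)
    ("http://www.example.com/",             "2007-01-15", "weekly",  "1.0"),
    ("http://www.example.com/products/",    "2007-01-10", "weekly",  "0.8"),
    ("http://www.example.com/contact.html", "2006-11-02", "monthly", "0.3"),
]

entries = []
for loc, lastmod, changefreq, priority in urls:
    entries.append(
        "  <url>\n"
        f"    <loc>{loc}</loc>\n"
        f"    <lastmod>{lastmod}</lastmod>\n"
        f"    <changefreq>{changefreq}</changefreq>\n"
        f"    <priority>{priority}</priority>\n"
        "  </url>"
    )

sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    + "\n".join(entries)
    + "\n</urlset>\n"
)

# Write the file to the web root so the engines can fetch it.
with open("sitemap.xml", "w") as f:
    f.write(sitemap)
```

Once a file like this sits at your site's root, you would point Google at it through their webmaster console (the other engines offer similar submission options).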
Q&A - Are programs that automatically generate articles safe to use on my site?
Answer: The site-building software you're referring to is commonly called autogen software. As the name suggests, it will automatically generate a site in a few minutes. The problem is that such software is commonly used to spam search engines. While there are ways to use this kind of software in compliance with search engine guidelines, it has been used irresponsibly to produce so many spam sites that search engines have developed the ability to recognize footprints—tell-tale signs of an automatically generated site. When detected, such sites are often quickly banished from the search results.

Typically, the software comes with a large database of articles, as well as the ability to add articles from article directories like ezinearticles.com or from your own subscription to a private label rights (PLR) article database. Of course, the problem with most article databases is that they're used by so many other people that you just end up duplicating content already found elsewhere on the web. To help solve that problem, autogen software typically offers the ability to modify the articles, usually by swapping out some words and replacing them with synonyms or other similar words (the toy sketch after this answer shows how crude that kind of swapping usually is). While that can help with the duplicate content problem, you're still faced with the issue that search engines detect footprints left by the software. For instance, autogen sites tend to reveal themselves by using template code common to most autogenerated sites, so you'd be well advised to significantly modify the templates provided by the software to make that footprint harder to detect.

As you're probably seeing, it can be a lot of work to "automatically" create a site—and even then you still end up with something of fairly low quality. That's not to say these programs can't be used to make some pretty significant money if you create a lot of pages and avoid getting banned by search engines. But doing it right requires a lot of expertise and usually falls into the category of unsustainable SEO/SEM that flies in the face of what Google and the other engines are looking for in a relevant and useful search index. If that's your cup of tea, then autogen software may work for you. But if you're aiming to create a site that will do well for the long term, you'll want to focus on more sustainable strategies and avoid autogen software. If you do decide to test it out, be sure you:
As with the directory software we commented on last month: if a program offers to automate much of the site-building or promotion process for you, it's probably the type of software that will get you banned before long. Building a long-term profitable online business means creating a site that customers will find unique and valuable, and marketing it by getting people genuinely interested in what you have to offer. Those processes really can't be automated.
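For what it's worth, here is a toy Python sketch—our own illustration, not code from any particular autogen package—of the word-level synonym swapping these tools rely on. The synonym list is invented for the example; the point is how little of the source article actually changes:

```python
# Toy illustration of naive "synonym swapping". Sentence structure, word
# order, and most of the text survive untouched, which is exactly the kind
# of footprint a duplicate-content filter can still catch.
import re

SYNONYMS = {
    "buy": "purchase",
    "cheap": "inexpensive",
    "great": "excellent",
    "fast": "quick",
}

def naive_spin(text):
    # Replace whole words found in the synonym table; leave everything else alone.
    def swap(match):
        word = match.group(0)
        replacement = SYNONYMS.get(word.lower())
        if replacement is None:
            return word
        # Preserve a leading capital letter, the way simple spinners do.
        return replacement.capitalize() if word[0].isupper() else replacement

    return re.sub(r"[A-Za-z]+", swap, text)

original = "Buy our great widgets today - fast shipping and cheap prices."
print(naive_spin(original))
# -> "Purchase our excellent widgets today - quick shipping and inexpensive prices."
```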
Q&A - Is there any way a new site can compete with an old one?
Answer: The age of a domain is an important ranking factor in Google, and older sites generally rank better. You can outrank an older site if you've got the right trusted links and good onpage SEO, but, all things being equal, old sites tend to do better. Since these sites have been around a long time, Google knows they aren't just part of some spammer's short-term "pump-and-dump" ranking strategy, so they're viewed as more trusted. Beyond Google's preference for old sites, there's also the fact that older sites have had longer to build links. So if your competitor's domain launched in 1996 and yours started in 2004, they've got an 8-year head start on link building and content creation. That's a significant advantage.

The Wayback Machine will give you an idea of when search engines first started indexing pages from a site. One drawback is that it only goes back to 1996, so if a site came online in 1994 the Wayback Machine will still show its birthday as 1996. That doesn't matter much, though, because you're not really concerned with a site's specific birthdate—just with whether it's an old site or a new one. Any site from the '90s is officially old by Internet standards. To put things in perspective, a site launched in 1996 won't have much of an advantage over a site launched in 1998 (other than two additional years to build content and links), but it will often have a significant advantage over a site launched in 2004.

The other problem with the Wayback Machine is that some sites block its crawler, which means those sites won't be listed. In that case you can sometimes use Netcraft.com—for example, here's where you can find the Netcraft birthdate for Google.com. If that doesn't work, you can always go with the date reported in the whois information, such as that provided by domaintools.com. However, whois isn't a very reliable way to get a site's age, since the domain may have been purchased long ago but never put online. For example, we have several domains we bought in 1996 that were never turned into sites. If we were to create a site on them today, search engines would consider their birthday to be 2006, not 1996.
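If you'd rather script the Wayback Machine lookup than browse it by hand, here is a minimal Python sketch. It assumes Archive.org's "availability" API (an endpoint not mentioned in this article), and the domains queried are just examples; treat the result as a rough indicator of age, not an exact birthdate:

```python
# Minimal sketch: estimate a site's age from its earliest Wayback Machine
# capture, using the Archive.org availability API (an assumption on our part).
import json
import urllib.request

def first_capture(domain):
    # Asking for a timestamp far in the past makes the "closest" snapshot
    # the API returns effectively the oldest capture on record.
    url = ("https://archive.org/wayback/available?url={}&timestamp=19900101"
           .format(domain))
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    if not closest:
        return None                     # blocked from the archive, or never crawled
    ts = closest["timestamp"]           # e.g. "19981111184551"
    return "{}-{}-{}".format(ts[0:4], ts[4:6], ts[6:8])

if __name__ == "__main__":
    for site in ("google.com", "archive.org"):
        print(site, "first archived on:", first_capture(site))
```

Remember the caveats above: a missing result may simply mean the site blocks the archive's crawler, and whois dates remain unreliable because a domain can sit unused for years before a site is built on it.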