Advanced SEO For Experts

What On Earth Is An Algorithm?
Each search engine has something called an algorithm: the formula it uses to
evaluate web pages and determine their relevance and value when considering
them for inclusion in its index. A crawler is the robot that browses these
pages on the search engine's behalf.
GOOGLE Algorithm Is Key
Google has comprehensive, highly developed technology, a straightforward
interface and a wide-ranging array of search tools that enable users to easily
access a variety of information online.
Google users can browse the web and find information in various languages,
retrieve maps, stock quotes and news, search for a long-lost friend using the
phonebook listings available on Google for all US cities and basically surf
the 3-billion-odd web pages on the internet!
Google boasts the world's largest archive of Usenet messages, dating all the
way back to 1981. Google's technology can be accessed from any conventional
desktop PC as well as from various wireless platforms such as WAP and i-mode
phones, handheld devices and other such Internet-equipped gadgets.
Page Rank Based On Popularity
The web search technology offered by Google is often the technology of choice
of the world's leading portals and websites. It has also benefited advertisers
with a unique advertising program that brings them revenue without hampering
the web-surfing experience of users.
When you search for a particular keyword or phrase, most search engines return
a list of pages ranked by the number of times the keyword or phrase appears on
the website. Google's web search technology instead combines its proprietary
Page Rank Technology with hypertext-matching analysis, performing many
instantaneous calculations without any human intervention. Google's
architecture is also designed to scale as the internet expands.
Page Rank technology determines an objective measure of a web page's
significance by solving an equation of more than 500 million variables and
3 billion terms. Unlike some other search engines, Google does not simply
count links; it uses the extensive link structure of the web as an
organizational tool. When Page A links to Page B, that link is counted as a
vote for Page B cast by Page A.
Back Links Are Considered Popularity Votes
Essentially, Google calculates the importance of a page by the number of such
'votes' it receives. Not only that, Google also assesses the importance of the
pages that cast the votes. Consequently, pages that are themselves highly
ranked help make the pages they link to more important. One thing to note here
is that Google's technology does not involve human intervention in any way; it
uses the inherent intelligence of the internet and its resources to determine
the ranking and importance of any page.
Hypertext-Matching Analysis
Unlike its conventional counterparts, Google is a hypertext-based search
engine. This means that it analyzes all the content on each web page and
factors in fonts, subdivisions, and the exact positions of all terms on the
page. Not only that, Google also evaluates the content of neighboring web
pages. This policy of not disregarding any subject matter pays off in the end
and enables Google to return results that are closest to user queries.
Google has a very simple 3-step procedure in handling a query submitted in
its search box:
1. When the query is submitted and the Enter key is pressed, the web
server sends the query to the index servers. An index server is exactly what
its name suggests: it holds an index, much like the index of a book, which
shows where in the entire collection the pages containing the queried term
are located.
2. After this, the query proceeds to the doc servers, which actually
retrieve the stored documents. Page descriptions or "snippets" are then
generated to suitably describe each search result.
3. These results are then returned to the user in less than a second!
(Normally.)
Approximately once a month, Google updates its index by recalculating the
Page Rank of each of the web pages it has crawled. The period during the
update is known as the Google dance.
Do You Know The GOOGLE Dance?
The Algorithm Shuffle
Because of the nature of Page Rank, the calculations need to be performed
about 40 times and, because the index is so large, they take several days to
complete. During this period, the search results fluctuate, sometimes minute
by minute. It is because of these fluctuations that the term "Google Dance"
was coined.
The dance usually takes place sometime during the last third of each month.
Google has two other servers that can be used for searching. The search
results on them also change during the monthly update and they are part of
the Google dance.
For the rest of the month, fluctuations sometimes occur in the search results,
but they should not be confused with the actual dance. They are due to
Google's fresh crawl and to what is known as "Everflux".
Google has two other searchable servers apart from www.google.com. They
are www2.google.com and www3.google.com. Most of the time, the results
on all 3 servers are the same, but during the dance, they are different.
For most of the dance, the rankings that can be
seen on www2 and www3 are the new rankings that
will transfer to www when the dance is over. Even
though the calculations are done about 40 times,
the final rankings can be seen from very early on.
This is because, during the first few iterations, the calculated figures
converge close to their final values.
You can see this with the Page Rank Calculator by checking the Data box
and performing some calculations. After the first few iterations, the search
results on www2 and www3 may still change, but only slightly.
During the dance, the results from www2 and www3 will sometimes show on
the www server, but only briefly. Also, new results on www2 and www3 can
disappear for short periods. At the end of the dance, the results on www will
match those on www2 and www3.
GOOGLE Dance Tool
This Google Dance Tool allows you to check your rankings on all three servers
(www, www2 and www3) and on all nine datacenters simultaneously.
The Google Web Directory combines Google Search Technology with the Netscape
Open Directory Project, making it possible to search the Internet organized by
topic. Google displays the pages in order of the rank given to them by its
Page Rank Technology. It not only searches the titles and descriptions of
websites, but searches the entire content of sites within a related category,
which ultimately delivers a comprehensive search to users. Google also has a
fully functional web directory which categorizes all the searches in order.
Submitting your URL to Google
Google is primarily a fully automatic search engine with no human intervention
involved in the search process. It utilizes robots known as 'spiders' to crawl
the web on a regular basis, looking for updates and new websites to include in
the Google index. This robot software follows hyperlinks from site to site.
Google does not require you to submit your URL to its database for inclusion
in the index, as the spiders do this automatically. However, you can submit a
URL manually by going to the Google website and clicking the related link. One
important thing here is that Google does not accept payment of any sort for
site submission or for improving the page rank of your website. Also,
submitting your site through the Google website does not guarantee listing in
the index.
Cloaking
Sometimes, a webmaster might program the server in such a way that it returns
different content to Google than it returns to regular users, often in order
to misrepresent search engine rankings. This practice is referred to as
cloaking, as it conceals the actual website and returns distorted web pages to
the search engines crawling the site. It can mislead users about what they'll
find when they click on a search result. Google strongly disapproves of any
such practice and may ban a website found guilty of cloaking.
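To make the mechanism concrete, here is a conceptual Python sketch (the
handler and page contents are hypothetical) of what cloaking looks like: the
server inspects the visitor's User-Agent header and serves the crawler a
different page than it serves humans. This is exactly the behavior Google
penalizes.

```python
# Hypothetical illustration of cloaking -- do NOT do this on a real site.
def serve_page(user_agent: str) -> str:
    """Return different HTML depending on who is asking."""
    if "Googlebot" in user_agent:
        # Keyword-stuffed content shown only to the crawler
        return "<html>widgets widgets cheap widgets buy widgets</html>"
    # The normal page shown to human visitors
    return "<html>Welcome to our widget store</html>"

print(serve_page("Googlebot/2.1"))
print(serve_page("Mozilla/5.0"))
```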
Google Guidelines
Here are some of the important tips and tricks that can be employed while
dealing with Google.
Do’s
1. A website should have a crystal-clear hierarchy and links, and
should preferably be easy to navigate.
2. A site map helps users find their way around your site; if the
site map has more than 100 links, it is advisable to break it into
several pages to avoid clutter.
3. Come up with essential and precise keywords and make sure that your
website features relevant and informative content.
4. The Google crawler does not recognize text embedded in images, so
when presenting important names, keywords or links, stick with plain
text.
5. The TITLE and ALT tags should be descriptive and accurate and the
website should have no broken links or incorrect HTML.
6. Dynamic pages (URLs containing a '?' character) should be kept
to a minimum, as not every search engine spider can crawl them.
7. The robots.txt file on your web server should be current and should
not block the Googlebot crawler. This file tells crawlers which
directories can or cannot be crawled.
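For illustration, a minimal robots.txt that lets Googlebot crawl everything
except one directory might look like this (the directory name is hypothetical):

```
# Allow Googlebot everywhere except a private directory
User-agent: Googlebot
Disallow: /private/

# All other crawlers: no restrictions
User-agent: *
Disallow:
```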
Don’ts
8. When making a site, do not cheat your users, i.e. the people
who will surf your website. Do not serve them irrelevant content
or present them with fraudulent schemes.
9. Avoid tricks or link schemes designed to increase
your site's ranking.
10. Do not employ hidden texts or hidden links.
11. Google frowns upon websites that use the cloaking technique, so it is
advisable to avoid it.
12. Automated queries should not be sent to Google.
13. Avoid stuffing pages with irrelevant words and content. Also don't
create multiple pages, sub-domains, or domains with significantly
duplicate content.
14. Avoid "doorway" pages created just for search engines, and other
"cookie cutter" approaches such as affiliate programs with hardly any
original content.
In the field of search engine optimization (SEO), writing
a strong homepage that will rank high in the engines
and read well for your site visitors can sometimes
present a challenge, even to seasoned SEO
professionals. Once you have clearly identified your
exact keywords and key phrases, the exact location on
your homepage where you place those carefully researched keywords
will have a dramatic impact on the end results of your homepage optimization.
One thing we keep hearing is that people don't want to change the look, or
especially the wording, of their homepage. Understandably, some of them went
to great lengths and invested a lot of time and/or money to make it the best
it can be. Being the best it can be for your site visitors is one thing. But
is it the best it can be for the search engines, in terms of how your site
will rank?
If you need powerful rankings in the major search engines and at the same
time you want to successfully convert your visitors and prospects into real
buyers, it's important to effectively write your homepage the proper way the
first time! You should always remember that a powerfully optimized
homepage pleases both the search engines and your prospects.
By randomly inserting keywords and key phrases
into your old homepage, you might get good
rankings, but at the same time you might
jeopardize your marketing flow. That is a
mistake nobody would ever want to make with their
homepage.
Keyword Stuffing & Spamming
Important keywords and descriptions should appear in
your visible content and in your Meta tags; choose the
words carefully, position them near the top of the page
and use them with proper frequency. However, it is
very important to exercise moderation here. Keyword
stuffing or spamming is a no-no today: most search
engine algorithms can spot it, bypass the spam, and some may even
penalize it.
Dynamic URLs
Several pages in e-commerce and other functional sites are generated
dynamically and have a '?' or '&' sign in their URLs. These signs separate the
CGI variables. While Google will crawl these pages, many other engines will
not. One inconvenient solution is to develop static equivalents of the dynamic
pages and host them on your site. Another way to avoid such dynamic URLs is to
rewrite them using a syntax that is accepted by the crawler and understood as
equivalent to the dynamic URL by the application server. The Amazon site shows
dynamic URLs in such a syntax. If you are using the Apache web server, you can
use Apache rewrite rules to enable this conversion.
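As a sketch of that approach, an Apache mod_rewrite rule (the paths and script
name below are purely illustrative) can expose a static-looking URL that the
server maps back to the dynamic one:

```apache
# Hypothetical .htaccess fragment: serve /products/42.html from the
# underlying dynamic script /catalog.cgi?item=42
RewriteEngine On
RewriteRule ^products/([0-9]+)\.html$ /catalog.cgi?item=$1 [L]
```

Crawlers then see only the clean, crawlable URL, while the application server
still receives its CGI variables.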
Re-Direct Pages
Sometimes pages have a Meta refresh tag that
redirects any visitor automatically to another page.
Some search engines refuse to index a page that has
a high refresh rate. The meta refresh tag however
does not affect Google.
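For reference, a meta refresh redirect looks like this (the target URL is
illustrative); the number before the semicolon is the delay in seconds, and a
low value is what some engines object to:

```html
<!-- Redirects the visitor to the new page after 0 seconds -->
<meta http-equiv="refresh" content="0; url=https://www.example.com/new-page.html">
```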
Image Maps Without ALT Text
Avoid image maps that lack alt text. Image maps should
have alt text (as also required under the Americans
with Disabilities Act for public websites), and the
home page should not use images alone as links; plain
HTML links should be used instead. This is because
search engines cannot read image links, so the linked
pages may not get crawled.
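A sketch of the fix (file names and coordinates are illustrative): give each
image-map area descriptive alt text, and back the image up with a plain HTML
text link that spiders can follow:

```html
<!-- Image map with alt text on the image and on each clickable area -->
<img src="navbar.gif" alt="Site navigation" usemap="#nav">
<map name="nav">
  <area shape="rect" coords="0,0,80,20" href="products.html" alt="Products">
</map>

<!-- Plain text link duplicating the image link, so crawlers can follow it -->
<a href="products.html">Products</a>
```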
Frames
There are some engines whose spiders won't work with frames on your site.
A web page built using frames is actually a combination of content from
separate "pages" that have been blended into a single page through a
'frameset' instruction page. The frameset page does not have any content or
links that would promote spidering, so it can block the spider's movement.
The workaround is to place a summary of the page content and a relevant
description in the frameset page, along with a link to the home page.
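That workaround can be sketched as follows (page names and text are
illustrative): the summary and link go inside a noframes block on the
frameset page, where spiders that ignore frames can still read them:

```html
<!-- Frameset page with crawlable content in <noframes> -->
<frameset cols="20%,80%">
  <frame src="menu.html">
  <frame src="content.html">
  <noframes>
    <p>Acme Widgets: handmade widgets and accessories.</p>
    <a href="content.html">Enter the site</a>
  </noframes>
</frameset>
```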
Conclusion
If you're looking for some simple things you can do to increase your site's
ranking in the search engines and directories, this section will give you some
hard-hitting yet simple tips that you can put into action right away.
What Should You Do Now?
It is worth cataloging the basic principles to be enforced to increase website
traffic and search engine rankings.
1. Create a site with valuable content, products or services.
2. Place primary and secondary keywords within the first 25 words of
your page content and spread them evenly throughout the document.
3. Research and use the right keywords/phrases to attract your target
customers.
4. Use your keywords in the right fields and references within your web
page. Like Title, META tags, Headers, etc.
5. Keep your site design simple so that your customers can navigate
easily between web pages, find what they want and buy products and
services.
6. Submit your web pages, i.e. every web page and not just the home
page, to the most popular search engines and directory services. Hire
someone to do so if required. Be sure this is a manual submission; do
not engage an automated submission service.
7. Keep track of changes in search engine algorithms and processes, and
modify your web pages accordingly so your search engine ranking
remains high. Use online tools and utilities to keep track of how your
website is doing.
8. Monitor your competitors and the top ranked websites to see what
they are doing right in the way of design, navigation, content,
keywords, etc.
9. Use reports and logs from your web hosting company to see where
your traffic is coming from. Analyze your visitors' locations, their
incoming sources (whether search engines or links from other sites),
and the keywords they used to find you.
10. Make your customers' visits easy and give them plenty of ways to
remember you, in the form of newsletters, free reports, discount
coupons, etc.
11. Demonstrate your industry and product or service expertise by writing
and submitting articles for your website or for article banks so you are
perceived as an expert in your field.