What’s the big deal with Google? What makes it different from Yahoo and Bing? And what kinds of creatures are Naver, Baidu and Yandex?
They’re all search engines.
In fact, that’s what search engines do: they collect information and content from across the web and store it in a database.
According to a November 2018 report by NetMarketShare, 73% of all searches are powered by Google. Bing sits in second place with 7.91%.
A scarily big gap, yes.
So… what really makes one search engine so different from another?
The size of their index;
How they decide which pages are relevant;
Market specialty / content type.
Google dominates the market because of the size of its index and the way it evaluates links between pages – i.e., its algorithm (more on that later).
In short, it has proven itself far and away the best at answering people’s queries with relevant content. It’s now the default way almost everyone in the world – except China – finds things online.
Of course, there are also more specialized search engines. Naver, Baidu and Yandex are examples of search engines that serve specific markets – Korea, China and Russia, respectively.
Fact: YouTube is also a search engine!
How does Google work?
So we know who the main player in the game is; now we need to know how it works.
Here is a simplified version of the entire search process, divided into three parts: crawling, indexing and serving results.
Or, in plain English: finding pages, saving them, and showing them to searchers.
Imagine that you need to explore a foreign land. You start in a small town and drive down the road that connects you to the next city. You take the next road to the next city, and the next road after that. If you drive down all the possible routes from the start, you will eventually find all the cities.
This is how Google works, except that the cities are web pages and the roads connecting them are links.
So: Google starts with a single web page. It finds the links on that page and follows them to discover other web pages. Then it finds all the links on those pages and follows them, and so on. Eventually, it’s able to find almost everything on the web.
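That “follow every road from every city” process can be sketched as a breadth-first traversal. Here is a minimal, hypothetical illustration using a toy in-memory web (the page names and links are made up for the example; a real crawler would fetch URLs over HTTP):

```python
from collections import deque

# A toy "web": each page maps to the pages it links to.
# Page names and links here are hypothetical, for illustration only.
TOY_WEB = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["post-1", "post-2"],
    "post-1": ["blog", "about"],
    "post-2": ["blog"],
}

def crawl(start_page):
    """Breadth-first crawl: follow every link once, like driving every road."""
    seen = {start_page}
    queue = deque([start_page])
    discovered = []
    while queue:
        page = queue.popleft()
        discovered.append(page)
        for link in TOY_WEB.get(page, []):
            if link not in seen:  # skip pages we've already visited
                seen.add(link)
                queue.append(link)
    return discovered

print(crawl("home"))  # every page reachable from "home"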
And how does Google do this?
It uses a computer program known as a crawler to “crawl,” or find, pages and links.
SIDENOTE. Crawlers are sometimes called “spiders” – because they, too, crawl the web.
In SEO, we want to do everything we can to make the spider’s job easier. This, in turn, makes it easier for our web pages to be found.
After finding pages on the web, spiders then extract the data from them and store, or “index,” that data in Google’s database, ready to be displayed in search results.
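A very rough sketch of what “indexing” means in practice is an inverted index: a mapping from each word to the pages that contain it, so pages can be looked up quickly later. The URLs and page text below are hypothetical:

```python
# Minimal sketch of "indexing": store each page's words in an inverted
# index so they can be looked up later. Page text is hypothetical.
pages = {
    "https://example.com/a": "seo basics and backlinks",
    "https://example.com/b": "keyword research basics",
}

index = {}  # word -> set of URLs containing it
for url, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

print(index["basics"])  # both URLs contain "basics"
```

Google’s real index stores far more than raw words, but the lookup idea is the same.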
Here’s the fun part: spiders don’t actually view web pages the way people do.
Here’s how you and I see the Ahrefs blog:
Here’s how Googlebot (Google’s web crawler/spider) sees the Ahrefs blog:
[screenshot: the Ahrefs blog as Googlebot sees it]
These are the types of data spiders collect and store: the page’s creation date, its title and meta description, keywords, links to and from the page, and other information specific to that search engine’s algorithm.
Take a look – think of a website you like, and plug it into this page to see what it looks like to a spider. Very different, huh?
In SEO, we want to ensure that the data Google indexes after crawling our pages is as accurate as possible. This makes it much easier for our pages to appear in the search results where we want them.
When you submit a search query to Google, it searches its database for the web pages most relevant to your query and displays them as search results. This relevance is determined by its algorithm.
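At its simplest, serving a query means looking up each query word in a pre-built index and ranking the matching pages. This toy sketch (index contents are hypothetical, and the scoring is a crude stand-in for a real ranking algorithm) shows the idea:

```python
# Sketch of serving a query: look up each query word in a pre-built
# inverted index and rank pages by how many words they match.
# The index contents below are hypothetical.
index = {
    "seo": {"/guide", "/tips"},
    "backlinks": {"/guide"},
}

def search(query):
    scores = {}
    for word in query.lower().split():
        for page in index.get(word, set()):
            scores[page] = scores.get(page, 0) + 1
    # highest-scoring pages first
    return sorted(scores, key=scores.get, reverse=True)

print(search("seo backlinks"))  # "/guide" matches both words, so it ranks first
```

Google’s actual algorithm weighs hundreds of signals, not just word matches – which is where backlinks come in.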
Unfortunately, no one really knows how Google’s algorithm prioritizes results… with the exception of one thing universally agreed upon:
The number of quality backlinks to the target page.
Google places great emphasis on backlinks as a signal of authority and importance.
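Here is a deliberately crude illustration of backlinks as an authority signal: just count how many other pages link to each page. (The link graph is hypothetical, and Google’s real approach – PageRank and its successors – also weighs *who* is linking, not just how many.)

```python
# Toy illustration of backlinks as an authority signal: count how many
# other pages link to each page. The link graph is hypothetical.
links = {
    "a.com": ["c.com"],
    "b.com": ["c.com", "a.com"],
    "c.com": [],
}

backlink_count = {page: 0 for page in links}
for source, targets in links.items():
    for target in targets:
        backlink_count[target] += 1

# c.com has the most backlinks, so it carries the most "authority" here
print(max(backlink_count, key=backlink_count.get))  # c.com
```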
Even if you’re just starting out, there are many ways to get ahead of the game when it comes to backlinks. We’ll cover this later.