In the previous module, we discussed the central characteristic of search engines that makes them different from directories.
Search engine data is compiled by computer programs called robots or spiders that crawl the Web (and, in some search services, other areas of the Internet as well) for documents, index them, and then store the results in a database.
Robots are also called spiders or crawlers.
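The crawl-then-index sequence described above can be sketched in a few lines. This is a minimal illustration, not how any real robot is implemented: it uses a small in-memory dictionary of pages (`PAGES`) in place of real HTTP fetches, and the URLs and page contents are invented for the example.

```python
from html.parser import HTMLParser

class LinkAndTextExtractor(HTMLParser):
    """Collects the text and outgoing links from one HTML page."""
    def __init__(self):
        super().__init__()
        self.text_parts = []
        self.links = []

    def handle_data(self, data):
        self.text_parts.append(data)

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A tiny in-memory "Web" standing in for real HTTP fetches (hypothetical pages).
PAGES = {
    "page1.html": '<p>Search engines index the Web.</p><a href="page2.html">next</a>',
    "page2.html": '<p>Robots crawl documents and index them.</p>',
}

def crawl(start_url):
    """Follow links breadth-first, storing each page's words in a database."""
    database = {}          # url -> set of words (the "index")
    frontier = [start_url]
    seen = set()
    while frontier:
        url = frontier.pop(0)
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        parser = LinkAndTextExtractor()
        parser.feed(PAGES[url])
        words = "".join(parser.text_parts).lower().split()
        database[url] = set(w.strip(".,") for w in words)
        frontier.extend(parser.links)
    return database

db = crawl("page1.html")
```

A real robot would fetch pages over HTTP, respect robots.txt, and store far richer data, but the loop is the same: fetch a document, extract its words and links, record the words in the database, and queue the links for later visits.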
Most people use the terms Web index, search engine, and search service interchangeably to refer to a site or service that allows you to define a
search query that will retrieve specific information online. In 2018, there are four primary search engines:
Google, Bing, Yahoo, and DuckDuckGo. Several search engines from the dotcom era are no longer in use.
When people refer to sites such as AltaVista or Excite as search engines, they are not exactly correct.
These sites are actually commercial services that provide you with an interface and a search engine (the software that actually searches the database) with which to search a database of Web documents (or portions of Web documents).
Each commercial service has its own search engine software and indexing robot.
The combination of a robot-generated database and a search engine is also referred to as a Web index.
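The division of labor between the robot-generated database and the search engine can be made concrete with a sketch. The database and document names below are invented for the example; the search engine part is a simple inverted index with AND semantics, one common way such software works, not the method of any particular commercial service.

```python
# Robot-generated "database": each document's words, keyed by URL (hypothetical).
DATABASE = {
    "a.html": {"web", "search", "engine"},
    "b.html": {"web", "directory"},
    "c.html": {"search", "query", "engine"},
}

def build_inverted_index(database):
    """Map each word to the set of documents containing it."""
    index = {}
    for url, words in database.items():
        for word in words:
            index.setdefault(word, set()).add(url)
    return index

def search(index, query):
    """Return documents containing every word of the query (AND semantics)."""
    results = None
    for term in query.lower().split():
        docs = index.get(term, set())
        results = docs if results is None else results & docs
    return sorted(results or [])

idx = build_inverted_index(DATABASE)
print(search(idx, "search engine"))   # prints ['a.html', 'c.html']
```

The robot's job ends once the database exists; the search engine's job is only to match queries against it. That is why each commercial service can pair its own robot with its own searching software.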
Although it may seem that a search engine will always overpower a directory through the sheer size of its automated database, there are a few things
about individual search engines that you should know: the percentage
of all Web documents that each one indexes, the overlap between search engine services, and how they deal with synonyms and homonyms.
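The synonym and homonym problems can be seen in miniature with naive keyword matching. The documents below are invented for the example; the point is only that exact word matching misses synonyms and cannot tell homonyms apart.

```python
# Two documents about the same topic using different words (synonyms),
# and two unrelated documents sharing a homonym (hypothetical texts).
DOCS = {
    "doc1": "buying a used car",
    "doc2": "buying a used automobile",
    "doc3": "sitting on the river bank",
    "doc4": "opening a bank account",
}

def keyword_match(docs, term):
    """Naive exact-word matching, as a plain keyword index does."""
    return sorted(url for url, text in docs.items() if term in text.split())

print(keyword_match(DOCS, "car"))    # finds doc1 but misses doc2, a synonym
print(keyword_match(DOCS, "bank"))   # returns doc3 and doc4, two different senses
```

A directory compiled by humans can group doc1 and doc2 under one topic and file the two "bank" documents separately, which is exactly where directories can still outperform automated indexes.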