WIKKIPEDIA is a misspelling of WIKIPEDIA
Currently, Wikipedia has tens of thousands of contributors, though much of the content that readers see is produced by a relatively small group: perhaps about 4,200 users, or 0.1% of the total. According to one 2007 research estimate based on words read, these users have been responsible for about 44% of regularly-read content, and this concentration is increasing.
Wikipedia is a peer-directed project to create a family of online encyclopedias in every major language. Wikipedia went "live" on January 15, 2001, and grew exponentially in its first four to five years. It is the world's largest encyclopedia project and one of the most popular sites on the Internet. The English-language Wikipedia is the world's largest single wiki and now contains more than 2.7 million individual articles.
Wikipedia is a multilingual, Web-based, free-content encyclopedia project. The name "Wikipedia" is a portmanteau (a combination of portions of two words and their meanings) of the words wiki (a type of collaborative Web site) and encyclopedia.
Wikipedia's articles provide links to guide the user to related pages with additional information.
Wikipedia refers to two of its pivotal features in its slogan, "the free encyclopedia that anyone can edit." Indeed, virtually any person on the Internet may create or edit a Wikipedia article, thanks to the use of wiki software. Contributors may edit Wikipedia anonymously or register user accounts.
A Web search engine is a tool designed to search for information on the World Wide Web. The search results are usually presented in a list and are commonly called hits. The information may consist of web pages, images, and other types of files.
Some search engines also mine data available in newsgroups, databases, or open directories. Unlike Web directories, which are maintained by human editors, search engines operate algorithmically or use a mixture of algorithmic and human input.
Web search engines work by storing information about many web pages, which they retrieve from the Web itself. These pages are retrieved by a Web crawler (sometimes also known as a spider), an automated browser that follows every link it sees. Site owners can exclude pages from crawling with a robots.txt file. The contents of each page are then analyzed to determine how it should be indexed (for example, words are extracted from the titles, headings, or special fields called meta tags). Data about web pages are stored in an index database for use in later queries.
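As a rough illustration of that pipeline, here is a minimal Python sketch of a crawler that checks robots.txt before fetching, extracts words from titles, headings, and meta tags, and stores them in an in-memory inverted index. The function names, the page limit, and the single-index design are assumptions made for this example, not any real engine's implementation.

```python
# A minimal sketch of the crawl-and-index pipeline described above.
# All names and limits here are illustrative, not any real engine's code.
import urllib.robotparser
from urllib.request import urlopen
from urllib.parse import urljoin, urlparse
from html.parser import HTMLParser
from collections import defaultdict

class PageParser(HTMLParser):
    """Collects outgoing links plus words from the title, headings, and meta tags."""
    def __init__(self):
        super().__init__()
        self.links, self.words = [], []
        self._capture = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("href"):
            self.links.append(attrs["href"])
        elif tag in ("title", "h1", "h2", "h3"):
            self._capture = True          # index words from titles and headings
        elif tag == "meta" and attrs.get("name") in ("keywords", "description"):
            self.words.extend((attrs.get("content") or "").lower().split())

    def handle_endtag(self, tag):
        if tag in ("title", "h1", "h2", "h3"):
            self._capture = False

    def handle_data(self, data):
        if self._capture:
            self.words.extend(data.lower().split())

def crawl(seed, max_pages=10):
    """Follow links from a seed URL, honoring robots.txt, and build an inverted index."""
    index = defaultdict(set)              # the "index database": word -> set of URLs
    frontier, seen, fetched = [seed], {seed}, 0
    while frontier and fetched < max_pages:
        url = frontier.pop(0)
        # Honor robots.txt exclusions (a real crawler would cache this per host).
        robots = urllib.robotparser.RobotFileParser()
        robots.set_url(urljoin(url, "/robots.txt"))
        try:
            robots.read()
            if not robots.can_fetch("*", url):
                continue
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="replace")
        except OSError:
            continue                      # unreachable page; skip it
        fetched += 1
        parser = PageParser()
        parser.feed(html)
        for word in parser.words:
            index[word].add(url)
        for link in parser.links:
            absolute = urljoin(url, link) # follow every link the crawler sees
            if absolute not in seen and urlparse(absolute).scheme in ("http", "https"):
                seen.add(absolute)
                frontier.append(absolute)
    return index
```

A production crawler would add politeness delays, per-host robots.txt caching, and a persistent index store, but the breadth-first frontier, the robots.txt check, and the word-to-URL index above are the core of what the paragraph describes.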
Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages, whereas others, such as AltaVista, store every word of every page they find. The cached page always holds the actual search text, since it is the version that was actually indexed, so it can be very useful when the current page has been updated and the search terms no longer appear in it. This problem might be considered a mild form of linkrot, and Google's handling of it increases usability by satisfying user expectations that the search terms will be on the returned webpage.
This satisfies the principle of least astonishment, since the user normally expects the search terms to appear on the returned pages. The increased search relevance makes these cached pages very useful, not least because they may contain data that is no longer available elsewhere.
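To make that fallback behavior concrete, here is a small hypothetical Python sketch of how a results page might prefer the live page but serve the cached copy when the terms no longer appear. The `fetch_live` callable and the `cache` mapping are assumptions for illustration, not how Google actually serves its cache.

```python
def serve_result(url, terms, fetch_live, cache):
    """Prefer the live page, but fall back to the cached copy that was indexed.

    fetch_live: callable returning the current page text, or None on failure.
    cache: mapping of URL -> page text captured at index time (assumed helper).
    """
    live = fetch_live(url)
    if live is not None and all(t.lower() in live.lower() for t in terms):
        return live, "live"
    # The cached copy is the text that was indexed, so the search terms
    # are guaranteed to appear in it even if the live page has changed.
    return cache[url], "cached"
```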
This site is about WIKIPEDIA AND SEARCH ENGINES
(Wikkipedia.ca is not associated in any way with wikipedia.org)
Sources: Public sources
Website owner: www.crediblewebsites.com