The internet has become an integral part of our lives. As society's need for information grows, the internet has risen to fill the role of information provider: it serves as a platform to share and publish information from anywhere in the world. This level of access is one reason an unprecedented amount of hypertext and hypermedia has accumulated on the web. The resulting problem is the difficulty of filtering out unwanted content and finding the information one actually needs.
More about Web Browser
The web browser is a software application installed on the user's computer to acquire, interpret and present information from the World Wide Web. The first web browser, called WorldWideWeb (later renamed Nexus), was developed in 1990 by Sir Tim Berners-Lee, the inventor of the World Wide Web. However, it was the Mosaic browser (whose team later founded Netscape), developed by Marc Andreessen, that revolutionized browsers by making them more user friendly.
The basic operation of a web browser is as follows. A web resource is located using a specific identifier called the Uniform Resource Locator (URL). The first part of the URL, the scheme, determines how the rest of the URL will be interpreted; it usually names the protocol the browser uses to access the resource, such as HTTP, HTTPS or FTP. Once the document is retrieved from the server, a browser component called the layout engine interprets the HTML markup and displays the interactive hypertext or hypermedia document. Browsers may offer additional features, such as Flash video and Java applets, through plug-ins, enabling content to be viewed even when it is not hypertext.
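The split described above, a scheme followed by the location of the resource, can be seen with Python's standard `urllib.parse` module. This is only an illustrative sketch (the example URL is made up), but it shows the pieces a browser extracts before it knows which protocol to speak:

```python
from urllib.parse import urlsplit

# The scheme (first part of the URL) tells the browser which
# protocol to use; the remaining parts locate the resource.
parts = urlsplit("https://www.example.com/docs/page.html?q=browsers")

print(parts.scheme)   # protocol to use, e.g. "https"
print(parts.netloc)   # host to connect to
print(parts.path)     # path of the resource on that host
```

Only after resolving the scheme and host does the browser fetch the document and hand the returned HTML to its layout engine for rendering.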
More about Search Engine
A search engine is a web application used to search for and locate information or resources on the World Wide Web. As the web grew, indexing its contents in an easily accessible manner became more and more difficult; the web search engine emerged as the solution to this problem.
A web search engine operates in three steps: web crawling, indexing and searching. Web crawling is the process of collecting the information and data available on the World Wide Web. It is normally done by automated software called a web crawler (also known as a spider), a program that retrieves information from each web page and follows the related links automatically. The retrieved information is indexed and stored in databases for later queries. Crawlers retrieve and index information about the contents of each page, such as words from the text, the URLs of its hyperlinks and special fields in the page called meta tags.
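The crawl-and-index step can be sketched with Python's standard `html.parser` module. This is a simplified model, not a real crawler: the page content and the `http://example.com` URL are made-up stand-ins, and real crawlers also fetch pages over the network, respect robots.txt, and handle far messier HTML. It shows only the core idea of extracting words to index and links to follow:

```python
from html.parser import HTMLParser

class PageParser(HTMLParser):
    """Collects hyperlinks and visible words from one HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []   # URLs the crawler would visit next
        self.words = []   # words to store in the index

    def handle_starttag(self, tag, attrs):
        if tag == "a":                      # hyperlink element
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

    def handle_data(self, data):
        self.words.extend(data.lower().split())

page = '<html><body><h1>Web crawlers</h1><a href="/spiders">spiders</a></body></html>'
parser = PageParser()
parser.feed(page)

# Index step: map each extracted word back to the page it came from.
index = {word: "http://example.com" for word in parser.words}
print(parser.links)   # links to crawl next
print(sorted(index))  # words now searchable in the index
```

A real crawler repeats this loop for every link it discovers, which is how the index grows to cover the web.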
When a search query for a particular detail or page is submitted through a web browser, the search engine retrieves the related information from its indexed databases and displays the results as a list of related web resources in the browser.
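The search step amounts to looking the query words up in the stored index. A minimal sketch, using a toy inverted index (the words and page names below are invented for illustration) where each word maps to the set of pages containing it:

```python
# Toy inverted index: word -> set of pages that contain it.
index = {
    "browser": {"pageA", "pageB"},
    "engine":  {"pageB", "pageC"},
}

def search(query):
    """Return pages that contain every word in the query (AND semantics)."""
    results = None
    for word in query.lower().split():
        pages = index.get(word, set())
        results = pages if results is None else results & pages
    return sorted(results or [])

print(search("browser engine"))  # pages containing both query words
```

Real engines also rank the matching pages by relevance before returning them, but the lookup itself works against precomputed indexes rather than the live web, which is why results come back in a fraction of a second.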
Browser and Search Engine
• The web browser is an application installed on the user's computer, while a search engine is a web application operating on a server connected to the internet.
• The web browser is an application to retrieve and display information from the internet, while a search engine is an application to locate information on the web.