Sunday, October 25, 2009

FCP answers

1. A local area network (LAN) is a computer network covering a small physical area, such as a home, an office, or a small group of
buildings such as a school or an airport. LANs may have connections with other LANs via leased lines, leased services, or by tunneling across the Internet using virtual
private network technologies. Depending on how the connections are established and secured, and the distance involved, a
network may instead be classified as a metropolitan area network (MAN) or a wide area network (WAN).
2. A Web browser is a software application for retrieving, presenting, and traversing information resources on the World Wide Web.
An information resource is identified by a Uniform Resource Identifier (URI) and may be a web page, image, video, or other piece
of content.[1] Hyperlinks present in resources enable users to easily navigate their browsers to related resources.
Although browsers are primarily intended to access the World Wide Web, they can also be used to access information provided by Web
servers in private networks or files in file systems.
3. HTML, which stands for HyperText Markup Language, is the predominant markup language for web pages. It provides a means to
create structured documents by denoting structural semantics for text such as headings, paragraphs, and lists, as well as for
links, quotes, and other items. It allows images and objects to be embedded and can be used to create interactive forms.
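To make the idea of structural semantics concrete, here is a minimal sketch using Python's standard html.parser to pull headings and link targets out of a hypothetical snippet of HTML:

```python
from html.parser import HTMLParser

class OutlineParser(HTMLParser):
    """Collects heading text and link targets from an HTML document."""
    def __init__(self):
        super().__init__()
        self.headings, self.links = [], []
        self._in_heading = False

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self._in_heading = True          # heading text follows
        elif tag == "a":
            # collect the href attribute of each hyperlink
            self.links.extend(v for k, v in attrs if k == "href")

    def handle_endtag(self, tag):
        if tag in ("h1", "h2", "h3"):
            self._in_heading = False

    def handle_data(self, data):
        if self._in_heading:
            self.headings.append(data)

p = OutlineParser()
p.feed('<h1>Welcome</h1><p>See <a href="/about.html">about</a>.</p>')
print(p.headings, p.links)  # ['Welcome'] ['/about.html']
```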
4.In computing, a Uniform Resource Locator (URL) is a subset of the Uniform Resource Identifier (URI) that specifies where an
identified resource is available and the mechanism for retrieving it. In popular usage and in many technical documents and verbal
discussions it is often incorrectly used as a synonym for URI.[1] In popular language, a URL is also referred to as a Web address.
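The two things a URL specifies, the retrieval mechanism and the location, can be seen by splitting an example address (the host and path here are illustrative) with Python's standard urllib.parse:

```python
from urllib.parse import urlparse

# An illustrative URL, broken into its components.
parts = urlparse("http://www.example.com:80/path/page.html?q=1#top")
print(parts.scheme)   # 'http'  -- the mechanism for retrieving the resource
print(parts.netloc)   # 'www.example.com:80'  -- where the resource is available
print(parts.path)     # '/path/page.html'  -- which resource on that host
```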
5. Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative, hypermedia information
systems.[1] Its use for retrieving inter-linked resources, called hypertext documents, led to the establishment of the World Wide
Web in 1990 by English physicist Tim Berners-Lee. There are two major versions: HTTP/1.0, which uses a separate connection for every
document, and HTTP/1.1, which can reuse the same connection to download, for instance, images for the page just served. Hence
HTTP/1.1 may be faster, since setting up a new connection for each document takes time.
The standards development of HTTP has been coordinated by the World Wide Web Consortium and the Internet Engineering Task Force
(IETF), culminating in the publication of a series of Requests for Comments (RFCs), most notably RFC 2616 (June 1999), which
defines HTTP/1.1, the version of HTTP in common use.
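As a sketch of what travels over the wire, here is a minimal HTTP/1.1 request (the host name is illustrative). Note the Host header, which HTTP/1.1 makes mandatory, and the persistent connection that distinguishes it from HTTP/1.0:

```python
# A minimal HTTP/1.1 GET request as it appears on the wire.
# The Host header is mandatory in 1.1, and connections are persistent
# by default, so a follow-up request (e.g. for an image on the page)
# can reuse the same TCP connection.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: keep-alive\r\n"
    "\r\n"
)

# Under HTTP/1.0 the server closes the connection after each response
# unless the client negotiates otherwise:
request_http10 = "GET /index.html HTTP/1.0\r\n\r\n"

print(request)
```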
6. File Transfer Protocol (FTP) is a standard network protocol used to exchange and manipulate files over a TCP/IP based network,
such as the Internet. FTP is built on a client-server architecture and utilizes separate control and data connections between the
client and server applications. Client applications were originally interactive command-line tools with a standardized command
syntax, but graphical user interfaces have been developed for all desktop operating systems in use today. FTP is also often used
as an application component to automatically transfer files for program internal functions. FTP can be used with user-based
password authentication or with anonymous user access.
7. A firewall is a part of a computer system or network that is designed to block unauthorized access while permitting authorized
communications. It is a device or set of devices configured to permit, deny, encrypt, decrypt, or proxy all (in and out) computer
traffic between different security domains based upon a set of rules and other criteria.
Firewalls can be implemented in either hardware or software, or a combination of both. Firewalls are frequently used to prevent
unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering
or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified
security criteria. There are several types of firewall techniques:
1. Packet filter: inspects each packet passing through the network and accepts or rejects it based on user-defined rules. Although difficult to configure, packet filtering is fairly effective and mostly transparent to its users; it is, however, susceptible to IP spoofing.
2. Application gateway: applies security mechanisms to specific applications, such as FTP and Telnet servers. This is very effective, but can impose a performance degradation.
3. Circuit-level gateway: applies security mechanisms when a TCP or UDP connection is established. Once the connection has been made, packets can flow between the hosts without further checking.
4. Proxy server: intercepts all messages entering and leaving the network. The proxy server effectively hides the true network addresses.
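As an illustration of the packet-filter technique above, here is a toy first-match rule table in Python. The addresses, ports, and default-deny policy are assumptions for the sketch, not a real firewall configuration:

```python
from ipaddress import ip_address, ip_network

# Toy packet filter: each rule matches on (source prefix, destination
# port), the first matching rule decides, and the default is to deny.
RULES = [  # (action, source network, destination port; None = any port)
    ("deny",  ip_network("10.0.0.0/8"), None),  # drop spoofed private sources
    ("allow", ip_network("0.0.0.0/0"),  80),    # permit web traffic
    ("allow", ip_network("0.0.0.0/0"),  443),   # permit HTTPS
]

def filter_packet(src, dport):
    for action, net, port in RULES:
        if ip_address(src) in net and port in (None, dport):
            return action
    return "deny"  # default policy: anything unmatched is blocked

print(filter_packet("203.0.113.5", 80))   # allow
print(filter_packet("10.1.2.3", 80))      # deny (spoofed private source)
print(filter_packet("203.0.113.5", 23))   # deny (telnet not permitted)
```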
8. An Internet Protocol (IP) address is a numerical label that is assigned to devices participating in a computer network utilizing
the Internet Protocol for communication between its nodes.[1] An IP address serves two principal functions in networking: host
identification and location addressing. The role of the IP address has also been characterized as follows: "A name indicates what
we seek. An address indicates where it is. A route indicates how to get there."[2]
The Internet Protocol also has the task of routing data packets between networks, and IP addresses specify the locations of the
source and destination nodes in the topology of the routing system. For this purpose, some of the bits in an IP address are used
to designate a subnetwork. The number of these bits is indicated in CIDR notation, appended to the IP address, e.g.,
208.77.188.166/24.
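Using the article's own example, Python's standard ipaddress module shows what the /24 suffix designates:

```python
import ipaddress

# The address 208.77.188.166/24 from the text: the /24 says the first
# 24 bits designate the subnetwork, leaving 8 bits for hosts.
iface = ipaddress.ip_interface("208.77.188.166/24")
print(iface.network)                 # 208.77.188.0/24
print(iface.network.num_addresses)   # 256 addresses in the subnet
```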
9. The Domain Name System (DNS) is a hierarchical naming system for computers, services, or any resource connected to the Internet
or a private network. It associates various information with domain names assigned to each of the participants. Most importantly,
it translates domain names meaningful to humans into the numerical (binary) identifiers associated with networking equipment for
the purpose of locating and addressing these devices worldwide. An often used analogy to explain the Domain Name System is that it
serves as the "phone book" for the Internet by translating human-friendly computer hostnames into IP addresses. The Domain Name
System makes it possible to assign domain names to groups of Internet users in a meaningful way, independent of each user's
physical location.
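The "phone book" walk can be sketched with hypothetical zone data in Python. The labels of a name are resolved right to left, one delegation per zone; the zone contents and the address record here are purely illustrative, not live DNS data:

```python
# Toy zone data: the root delegates "com", the "com" zone delegates
# "example.com", and that zone holds the final address record.
ZONES = {
    "": {"com": ""},                          # root: delegation only
    "com": {"example": ""},                   # "com": delegation only
    "example.com": {"www": "93.184.216.34"},  # authoritative record
}

def resolve(name):
    """Follow one delegation per label, rightmost label first."""
    zone = ""
    for label in reversed(name.split(".")):
        record = ZONES[zone][label]
        if record:                 # an address record: resolution done
            return record
        # otherwise it is a delegation: descend into the child zone
        zone = label if not zone else f"{label}.{zone}"
    raise KeyError(name)

print(resolve("www.example.com"))  # 93.184.216.34
```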
10. Electronic commerce, commonly known as e-commerce or eCommerce, consists of the buying and selling of
products or services over electronic systems such as the Internet and other computer networks. The amount of trade conducted
electronically has grown extraordinarily with widespread Internet usage. A wide variety of commerce is conducted in this way, spurring
and drawing on innovations in electronic funds transfer, supply chain management, Internet marketing, online transaction
processing, electronic data interchange (EDI), inventory management systems, and automated data collection systems. Modern
electronic commerce typically uses the World Wide Web at least at some point in the transaction's lifecycle, although it can
encompass a wider range of technologies such as e-mail as well.
11. Electronic mail, often abbreviated as email or e-mail, is a method of exchanging digital messages, designed primarily
for human use. E-mail systems are based on a store-and-forward model in which e-mail computer server systems accept, forward,
deliver and store messages on behalf of users, who only need to connect to the e-mail infrastructure, typically an e-mail server,
with a network-enabled device (e.g., a personal computer) for the duration of message submission or retrieval. Originally, e-mail
was always transmitted directly from one user's device to another's; nowadays this is rarely the case.
An electronic mail message consists of two components, the message header, and the message body, which is the email's content. The
message header contains control information, including, minimally, an originator's email address and one or more recipient
addresses. Usually additional information is added, such as a subject header field. The foundation for today's global Internet e-
mail service was created in the early ARPANET, and standards for encoding messages were proposed as early as
1973 (for example, RFC 561). An e-mail sent in the early 1970s looked very similar to one sent on the Internet today. Conversion from the
ARPANET to the Internet in the early 1980s produced the core of the current service.
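The header/body split described above can be seen with Python's standard email module (the addresses and content are made up for the example):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"    # originator address (required)
msg["To"] = "bob@example.com"        # one or more recipient addresses
msg["Subject"] = "Meeting notes"     # a common additional header field
msg.set_content("See you at noon.")  # the message body: the email's content

# as_string() renders the header block, a blank line, then the body,
# which is the on-the-wire layout of an Internet mail message.
print(msg.as_string())
```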
12. The World Wide Web is a system of interlinked hypertext documents accessed via the Internet. With a web browser, one can view
Web pages that may contain text, images, videos, and other multimedia and navigate between them using hyperlinks. Using concepts
from earlier hypertext systems, English physicist Tim Berners-Lee, now the Director of the World Wide Web Consortium, wrote a
proposal in March 1989 for what would eventually become the World Wide Web.[1] He was later joined by Belgian computer scientist
Robert Cailliau while both were working at CERN in Geneva, Switzerland. In 1990, they proposed using "HyperText [...] to link and
access information of various kinds as a web of nodes in which the user can browse at will",[2] and released that web in
December.[3]



13. A modem (from modulator-demodulator) is a device that modulates an analog carrier signal to encode digital information, and also
demodulates such a carrier signal to decode the transmitted information. The goal is to produce a signal that can be transmitted
easily and decoded to reproduce the original digital data. Modems can be used over any means of transmitting analog signals, from
light-emitting diodes to radio.
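A toy modulator/demodulator in Python illustrates the idea: each bit is encoded as a burst of carrier at one of two frequencies (frequency-shift keying), and the receiver recovers the bit by checking which carrier the burst correlates with. All parameters here are illustrative, not those of any real modem standard:

```python
import math

RATE, SAMPLES_PER_BIT = 8000, 80   # sample rate and burst length
F0, F1 = 1000.0, 2000.0            # carrier frequencies for bit 0 / bit 1

def modulate(bits):
    """Encode each bit as a burst of sine samples at F0 or F1."""
    samples = []
    for bit in bits:
        f = F1 if bit else F0
        samples += [math.sin(2 * math.pi * f * n / RATE)
                    for n in range(SAMPLES_PER_BIT)]
    return samples

def demodulate(samples):
    """Recover bits by correlating each burst against both carriers."""
    bits = []
    for i in range(0, len(samples), SAMPLES_PER_BIT):
        chunk = samples[i:i + SAMPLES_PER_BIT]
        def power(f):
            return abs(sum(s * math.sin(2 * math.pi * f * n / RATE)
                           for n, s in enumerate(chunk)))
        bits.append(1 if power(F1) > power(F0) else 0)
    return bits

data = [1, 0, 1, 1, 0]
print(demodulate(modulate(data)))  # [1, 0, 1, 1, 0]
```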
14. A website is a collection of related web pages, images, videos or other digital assets that are addressed with a common domain
name or IP address in an Internet Protocol-based network. A web site is hosted on at least one web server, accessible via a
network such as the Internet or a private local area network.
A web page is a document, typically written in plain text interspersed with formatting instructions of Hypertext Markup Language
(HTML, XHTML). A web page may incorporate elements from other websites with suitable markup anchors.
Web pages are accessed and transported with the Hypertext Transfer Protocol (HTTP), which may optionally employ encryption (HTTP
Secure, HTTPS) to provide security and privacy for the user of the web page content. The user's application, often a web browser,
renders the page content according to its HTML markup instructions onto a display terminal.
All publicly accessible websites collectively constitute the World Wide Web.
15. A web search engine is a tool designed to search for information on the World Wide Web. The search results are usually
presented in a list and are commonly called hits. The information may consist of web pages, images, and other types of
files. Some search engines also mine data available in databases or open directories. Unlike Web directories, which are maintained
by human editors, search engines operate algorithmically or are a mixture of algorithmic and human input.
A search engine operates in the following order:
1. Web crawling
2. Indexing
3. Searching
For reference: Web search engines work by storing information about many web pages, which they retrieve from the WWW itself. These pages are
retrieved by a Web crawler (sometimes also known as a spider) — an automated Web browser which follows every link it sees.
Exclusions can be made by the use of robots.txt. The contents of each page are then analyzed to determine how it should be indexed
(for example, words are extracted from the titles, headings, or special fields called meta tags). Data about web pages are stored
in an index database for use in later queries. Some search engines, such as Google, store all or part of the source page (referred
to as a cache) as well as information about the web pages, whereas others, such as AltaVista, store every word of every page they
find. This cached page always holds the actual search text since it is the one that was actually indexed, so it can be very useful
when the content of the current page has been updated and the search terms are no longer in it. This problem might be considered
to be a mild form of linkrot, and Google's handling of it increases usability by satisfying user expectations that the search
terms will be on the returned webpage. This satisfies the principle of least astonishment since the user normally expects the
search terms to be on the returned pages. Increased search relevance makes these cached pages very useful, even beyond the fact
that they may contain data that may no longer be available elsewhere.
When a user enters a query into a search engine (typically by using key words), the engine examines its index and provides a
listing of best-matching web pages according to its criteria, usually with a short summary containing the document's title and
sometimes parts of the text. Most search engines support the use of the boolean operators AND, OR and NOT to further specify the
search query. Some search engines provide an advanced feature called proximity search which allows users to define the distance
between keywords.
The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of web
pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most
search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are
the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change
over time as Internet usage changes and new techniques evolve.
Most Web search engines are commercial ventures supported by advertising revenue and, as a result, some employ the practice of
allowing advertisers to pay money to have their listings ranked higher in search results. Those search engines which do not accept
money for their search engine results make money by running search related ads alongside the regular search engine results. The
search engines make money every time someone clicks on one of these ads.
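The crawl, index, search pipeline listed above can be sketched as a toy in-memory inverted index in Python. The "pages" here are hypothetical strings rather than documents fetched by a real crawler:

```python
# A toy "web": URL -> page text (hypothetical pages, not fetched over HTTP).
PAGES = {
    "/a.html": "the quick brown fox",
    "/b.html": "the lazy dog",
    "/c.html": "quick dog tricks",
}

def build_index(pages):
    """Crawl each page and record, per word, which pages contain it."""
    index = {}
    for url, text in pages.items():
        for word in text.split():
            index.setdefault(word, set()).add(url)
    return index

def search(index, query):
    """AND semantics: return pages containing every query word."""
    hits = [index.get(w, set()) for w in query.split()]
    return sorted(set.intersection(*hits)) if hits else []

index = build_index(PAGES)
print(search(index, "quick dog"))  # ['/c.html']
```

A real engine adds ranking on top of this: the candidate set is ordered by relevance signals rather than returned alphabetically.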
16. Hypertext Transfer Protocol Secure (HTTPS) is a combination of the Hypertext Transfer Protocol with the SSL/TLS protocol to
provide encryption and secure identification of the server. HTTPS connections are often used for payment transactions on the World
Wide Web and for sensitive transactions in corporate information systems.
The main idea of HTTPS is to create a secure channel over an insecure network. This ensures reasonable protection from
eavesdroppers and man-in-the-middle attacks, provided that adequate cipher suites are used and that the server certificate is
verified and trusted.
The trust inherent in HTTPS is based on major certificate authorities which come pre-installed in browser software (this is
equivalent to saying "I trust certificate authority (e.g. VeriSign/Microsoft/etc.) to tell me who I should trust"). Therefore an
HTTPS connection to a website can be trusted if and only if all of the following are true:
1. The user trusts the certificate authority to vouch only for legitimate websites without misleading names.
2. The website provides a valid certificate (an invalid certificate shows a warning in most browsers), which means it was signed by a trusted authority.
3. The certificate correctly identifies the website (e.g. visiting https://somesite and receiving a certificate for "Somesite Inc." and not "Shomesite Inc."; see #2).
4. Either the intervening hops on the Internet are trustworthy, or the user trusts that the protocol's encryption layer (TLS or SSL) is unbreakable by an eavesdropper.
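In practice, conditions 1-3 are what a TLS client library enforces by default. For example, Python's standard ssl module ships a default context that requires a valid certificate chained to a trusted authority and matching the hostname:

```python
import ssl

# The default client context verifies the server certificate against
# the platform's trusted CA store and checks that the certificate's
# name matches the host being contacted. Disabling either check
# re-opens the door to man-in-the-middle attacks.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                    # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```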
