On our site all software is free, and we do not post programs that are under proprietary rights.
We do not post any cracks or serial numbers; free software only.

Friday, March 26, 2010

Microsoft Word on your mobile

Of course, this program hardly needs an explanation: it lets you open, run, and edit Word files on your mobile.
It can even open the newer Word 2007 format on your phone.




more details


http://downloadinghost.blogspot.com

Q-FileHide




A program to hide pictures, videos, and other files on your mobile.
From now on, no one will see your files.


click here



Deep Freeze



The program's function may puzzle those hearing about it for the first time: its job is to keep the device exactly as it was when you installed the program.
In other words, once this software is running you can install any programs you like, delete what you want, and change what you want, but after a restart the machine returns to its original state: everything you added is removed and everything you deleted is restored.
This makes it important for the owners of Internet cafés and school computers who want to protect their machines, as well as for parents whose children use the home computer.

get file

http://downloadinghost.blogspot.com/



UniversalViewer


How often do you download a file, decompress it, find you cannot open it, go searching through thousands of programs for one that will, and in the end leave the file sitting idle and tell yourself you have wasted your time? This program opens almost any file: media files, Flash, photos, and documents, even Office files such as Word documents, without needing Office installed at all. The installed program takes up only about 4 MB, and on top of that it is free. It is well worth trying and practically indispensable.

download


http://downloadinghost.blogspot.com/









Easy GIF Animator 5.02


An easy and powerful program for creating animated GIF images, logos, and icons, with added visual effects, image overlays, and artistic touch-ups. It is made specifically for producing animated GIFs. You can try out the program to see its wonderful features.








Network control





This program lets you see everyone who is on the same network as you, whether LAN or WAN, see what each of them is doing, and control their machines as well.






Sunday, March 21, 2010

Central processing unit


The Central Processing Unit (CPU) or the processor is the portion of a computer system that carries out the instructions of a computer program, and is the primary element carrying out the computer's functions. This term has been in use in the computer industry at least since the early 1960s. The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation remains much the same.

Early CPUs were custom-designed as a part of a larger, sometimes one-of-a-kind, computer. However, this costly method of designing custom CPUs for a particular application has largely given way to the development of mass-produced processors that are made for one or many purposes. This standardization trend generally began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of these digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in everything from automobiles to cell phones and children's toys.

Computers such as the ENIAC had to be physically rewired in order to perform different tasks; these machines are "fixed-program computers." Since the term "CPU" is generally defined as a software (computer program) execution device, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer.

The idea of a stored-program computer was already present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so the machine could be finished sooner. On June 30, 1945, before ENIAC was even completed, mathematician John von Neumann distributed the paper entitled "First Draft of a Report on the EDVAC." It outlined the design of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions (or operations) of various types. These instructions could be combined to create useful programs for the EDVAC to run. Significantly, the programs written for EDVAC were stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program, or software, that EDVAC ran could be changed simply by changing the contents of the computer's memory.
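
To make the stored-program idea concrete, here is a minimal sketch in Python of an interpreter whose program lives in the same memory it manipulates, so "reprogramming" means rewriting memory rather than rewiring hardware. The three opcodes and the instruction format are invented for illustration; this is not any historical machine's actual instruction set.

    # Toy stored-program machine: program and data share one memory list.
    # Instruction format: (opcode, operand). The opcodes are hypothetical.
    def run(memory, pc=0):
        acc = 0                      # accumulator register
        while True:
            op, arg = memory[pc]
            if op == "LOAD":         # acc = value at address arg
                acc = memory[arg]
            elif op == "ADD":        # acc += value at address arg
                acc += memory[arg]
            elif op == "STORE":      # write acc to address arg
                memory[arg] = acc
            elif op == "HALT":
                return memory
            pc += 1                  # fetch the next instruction

    # Changing the program is just changing the contents of memory.
    memory = [("LOAD", 5), ("ADD", 6), ("STORE", 7), ("HALT", 0), None, 2, 3, 0]
    print(run(memory)[7])            # prints 5 (2 + 3)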

While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both. Most modern CPUs are primarily von Neumann in design, but elements of the Harvard architecture are commonly seen as well.

As a digital device, a CPU is limited to a set of discrete states, and requires some kind of switching elements to differentiate between and change states. Prior to commercial development of the transistor, electrical relays and vacuum tubes (thermionic valves) were commonly used as switching elements. Although these had distinct speed advantages over earlier, purely mechanical designs, they were unreliable for various reasons. For example, building direct current sequential logic circuits out of relays requires additional hardware to cope with the problem of contact bounce. While vacuum tubes do not suffer from contact bounce, they must heat up before becoming fully operational, and they eventually cease to function due to slow contamination of their cathodes that occurs in the course of normal operation. If a tube's vacuum seal leaks, as sometimes happens, cathode contamination is accelerated. Usually, when a tube failed, the CPU would have to be diagnosed to locate the failed component so it could be replaced. Therefore, early electronic (vacuum tube based) computers were generally faster but less reliable than electromechanical (relay based) computers.

Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the (slower, but earlier) Harvard Mark I failed very rarely. In the end, tube based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs (see below for a discussion of clock rate). Clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time, limited largely by the speed of the switching devices they were built with.

Hardware


Hardware is a general term for the physical artifacts of a technology. It may also mean the physical components of a computer system, in the form of computer hardware.

Hardware historically meant the metal parts and fittings that were used to make wooden products stronger, more functional, longer lasting and easier to fabricate or assemble.

Modern hardware stores typically sell equipment such as keys, locks, hinges, latches, corners, handles, wire, chains, plumbing supplies, tools, utensils, cutlery and machine parts, especially when they are made of metal.

Network science


Network science is a new and emerging scientific discipline that examines the interconnections among diverse physical or engineered networks, information networks, biological networks, cognitive and semantic networks, and social networks. This field of science seeks to discover common principles, algorithms and tools that govern network behavior. The National Research Council defines Network Science as "the study of network representations of physical, biological, and social phenomena leading to predictive models of these phenomena."
The study of networks has emerged in diverse disciplines as a means of analyzing complex relational data. The earliest known paper in this field is the famous Seven Bridges of Königsberg written by Leonhard Euler in 1736. Euler's mathematical description of vertices and edges was the foundation of graph theory, a branch of mathematics that studies the properties of pairwise relations in a network structure. The field of graph theory continued to develop and found applications in chemistry (Sylvester, 1878).
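
As a small illustration of Euler's vertex-and-edge view, the sketch below represents the seven Königsberg bridges as a multigraph in Python and checks Euler's criterion that a walk crossing every edge exactly once requires zero or two vertices of odd degree. The labels A-D for the four land masses are arbitrary names chosen for convenience.

    from collections import Counter

    # The seven bridges of Königsberg as an edge list (multigraph).
    bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
               ("A", "D"), ("B", "D"), ("C", "D")]

    degree = Counter()
    for u, v in bridges:          # each bridge adds 1 to both endpoints
        degree[u] += 1
        degree[v] += 1

    odd = [v for v, d in degree.items() if d % 2 == 1]
    print(dict(degree))           # A:5, B:3, C:3, D:3
    # Euler: an edge-traversing walk exists only with 0 or 2 odd-degree vertices.
    print("walk possible:", len(odd) in (0, 2))   # False: all four are odd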

In the 1930s Jacob Moreno, a psychologist in the Gestalt tradition, arrived in the United States. He developed the sociogram and presented it to the public in April 1933 at a convention of medical scholars. Moreno claimed that "before the advent of sociometry no one knew what the interpersonal structure of a group 'precisely' looked like" (Moreno, 1953). The sociogram was a representation of the social structure of a group of elementary school students. The boys were friends of boys and the girls were friends of girls, with the exception of one boy who said he liked a single girl. The feeling was not reciprocated. This network representation of social structure was found so intriguing that it was printed in The New York Times (April 3, 1933, page 17). The sociogram has found many applications and has grown into the field of social network analysis.

Probabilistic theory in network science developed as an off-shoot of graph theory with Paul Erdős and Alfréd Rényi's eight famous papers on random graphs. For social networks the exponential random graph model or p* graph is a notational framework used to represent the probability space of a tie occurring in a social network. An alternate approach to network probability structures is the network probability matrix, which models the probability of edges occurring in a network, based on the historic presence or absence of the edge in a sample of networks.
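
For a flavour of these probabilistic models, here is a minimal sketch of sampling a G(n, p) random graph in the Erdős-Rényi spirit, where each possible edge appears independently with probability p. This is only the simplest random-graph model, not the exponential random graph (p*) or network probability matrix frameworks themselves; the parameters n=100, p=0.05 are arbitrary.

    import random
    from itertools import combinations

    def gnp(n, p, seed=None):
        """Sample an Erdos-Renyi G(n, p) graph as a set of edges."""
        rng = random.Random(seed)
        return {(u, v) for u, v in combinations(range(n), 2) if rng.random() < p}

    g = gnp(100, 0.05, seed=1)
    # Expected number of edges is p * n*(n-1)/2 = 0.05 * 4950 = 247.5.
    print(len(g))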

In 1998, David Krackhardt and Kathleen Carley introduced the idea of a meta-network with the PCANS Model. They suggest that "all organizations are structured along these three domains, Individuals, Tasks, and Resources." Their paper introduced the concept that networks occur across multiple domains and that they are interrelated. This field has grown into another sub-discipline of network science called dynamic network analysis.

More recently other network science efforts have focused on mathematically describing different network topologies. Duncan Watts reconciled empirical data on networks with mathematical representation, describing the small-world network. Albert-László Barabási and Réka Albert developed the scale-free network, a loosely defined network topology that contains hub vertices with many connections, which grow in a way that maintains a constant ratio between the connections of the hubs and those of all other nodes. Although many networks, such as the internet, appear to maintain this aspect, other networks have long-tailed distributions of nodes that only approximate scale-free ratios.

Today, network science is an exciting and growing field. Scientists from many diverse fields are working together. Network science holds the promise of increasing collaboration across disciplines, by sharing data, algorithms, and software tools.

File sharing


File sharing is the practice of distributing or providing access to digitally stored information, such as computer programs, multi-media (audio, video), documents, or electronic books. It may be implemented in a variety of storage, transmission, and distribution models. Common methods are manual sharing using removable media, centralized computer file server installations on computer networks, World Wide Web-based hyperlinked documents, and the use of distributed peer-to-peer (P2P) networking.

File sharing is not in and of itself illegal. However, the increasing popularity of the mp3 music format in the late 1990s led to the release and growth of Napster and other software that aided the sharing of electronic files. This in practice led to a huge growth in illegal file sharing: the sharing of copyright protected files without permission.

Although the original Napster service was shut down by court order, it paved the way for decentralized peer-to-peer file sharing networks such as Gnutella, Gnutella2, eDonkey2000, the now-defunct Kazaa network, and BitTorrent.

Many file sharing networks and services, accused of facilitating illegal file sharing, have been shut down due to litigation by groups such as the RIAA and MPAA. During the early 2000s, the fight against copyright infringement expanded into lawsuits against individual users of file sharing software.

The economic impact of illegal file sharing on media industries is disputed. Some studies conclude that unauthorized downloading of movies, music and software is unequivocally damaging the economy, while other studies suggest file sharing is not the primary cause of declines in sales. Illegal file sharing remains widespread, with mixed public opinion about the morality of the practice.

Types of file sharing

Peer to peer networks

Some of the most popular options for file sharing on the Internet are peer-to-peer networks, such as Gnutella, Gnutella2, LimeWire, and the eDonkey network.

Users can use software that connects to a peer-to-peer network to search for shared files on the computers of other users (i.e., peers) connected to the network. Files of interest can then be downloaded directly from other users on the network. Typically, large files are broken down into smaller chunks, which may be obtained from multiple peers and then reassembled by the downloader. This is done while the peer is simultaneously uploading the chunks it already has to other peers.
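
The chunking idea is easy to see in miniature. The sketch below is plain Python with no real P2P protocol: the two "peers" are just in-memory dictionaries, and the file content is a made-up byte string. It splits the data into fixed-size pieces, fetches each piece from whichever peer has it, and reassembles and verifies the result.

    import hashlib

    CHUNK = 4  # bytes per chunk; real systems use pieces of e.g. 256 KiB

    data = b"hello peer-to-peer world"
    checksum = hashlib.sha256(data).hexdigest()
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

    # Two simulated peers, each holding only some of the pieces.
    peer_a = {i: c for i, c in enumerate(chunks) if i % 2 == 0}
    peer_b = {i: c for i, c in enumerate(chunks) if i % 2 == 1}

    # The downloader asks whichever peer has each missing piece.
    received = {}
    for index in range(len(chunks)):
        source = peer_a if index in peer_a else peer_b
        received[index] = source[index]

    rebuilt = b"".join(received[i] for i in range(len(chunks)))
    print(hashlib.sha256(rebuilt).hexdigest() == checksum)  # True
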
File hosting services

File hosting services are a simple alternative to peer-to-peer software. These are sometimes used together with Internet collaboration tools such as email, forums, blogs, or any other medium in which links to direct downloads from file hosting services can be embedded. These sites typically host files so that others can download them.



Sharing


Sharing is the joint use of a resource or space. In its narrow sense, it refers to joint or alternating use of an inherently finite good, such as a common pasture or a shared residence. It is also the process of dividing and distributing. Apart from obvious instances, which we can observe in human activity, we can also find many examples of this happening naturally in nature. When an organism takes in nutrition or oxygen for instance, its internal organs are designed to divide and distribute the energy taken in, to supply parts of its body that need it. Flowers divide and distribute their seeds. In a broader sense, it can also include the free granting of use rights to a good that is capable of being treated as a nonrival good, such as information. Still more loosely, “sharing” can actually mean giving something as an outright gift: for example, to “share” one's food really means to give some of it as a gift.

Video game


A video game is an electronic game that involves interaction with a user interface to generate visual feedback on a video device. The word video in video game traditionally referred to a raster display device. However, with the popular use of the term "video game", it now implies any type of display device. The electronic systems used to play video games are known as platforms; examples of these are personal computers and video game consoles. These platforms range from large mainframe computers to small handheld devices. Specialized video games such as arcade games, while previously common, have gradually declined in use.

The input device used to manipulate video games is called a game controller, and varies across platforms. For example, a dedicated console controller might consist of only a button and a joystick. Another may feature a dozen buttons and one or more joysticks. Early personal computer games often needed a keyboard for gameplay, or more commonly, required the user to buy a separate joystick with at least one button. Many modern computer games allow, or even require, the player to use a keyboard and mouse simultaneously.

Video games typically also use other ways of providing interaction and information to the player. Audio is almost universal, using sound reproduction devices, such as speakers and headphones. Other feedback may come via haptic peripherals, such as vibration or force feedback, with vibration sometimes used to simulate force feedback.

History



Early games used interactive electronic devices with various display formats. The earliest example is from 1947—a "Cathode ray tube Amusement Device" was filed for a patent on January 25, 1947 by Thomas T. Goldsmith Jr. and Estle Ray Mann, and issued on December 14, 1948 as U.S. Patent 2455992.
Inspired by radar display tech, it consisted of an analog device that allowed a user to control a vector-drawn dot on the screen to simulate a missile being fired at targets, which were drawings fixed to the screen.

Other early examples include:

* The NIMROD computer at the 1951 Festival of Britain
* OXO a tic-tac-toe Computer game by Alexander S. Douglas for the EDSAC in 1952
* Tennis for Two, an interactive game engineered by William Higinbotham in 1958
* Spacewar!, written by MIT students Martin Graetz, Steve Russell, and Wayne Wiitanen on a DEC PDP-1 computer in 1961.

Each game used different means of display: NIMROD used a panel of lights to play the game of Nim, OXO used a graphical display to play tic-tac-toe, Tennis for Two used an oscilloscope to display a side view of a tennis court, and Spacewar! used the DEC PDP-1's vector display to have two spaceships battle each other.

In 1971, Computer Space, created by Nolan Bushnell and Ted Dabney, was the first commercially-sold, coin-operated video game. It used a black-and-white television for its display, and the computer system was made of 74 series TTL chips. The game was featured in the 1973 science fiction film Soylent Green. Computer Space was followed in 1972 by the Magnavox Odyssey, the first home console. Modeled after a late 1960s prototype console developed by Ralph H. Baer called the "Brown Box", it also used a standard television. These were followed by two versions of Atari's Pong; an arcade version in 1972 and a home version in 1975.[10] The commercial success of Pong led numerous other companies to develop Pong clones and their own systems, spawning the video game industry.

Computer software


Computer software, or just software, is a general term primarily used for digitally stored data such as computer programs and other kinds of information read and written by computers. Today, this includes data that has not traditionally been associated with computers, such as film, tapes and records. The term was coined in order to contrast with the old term hardware (meaning physical devices); in contrast to hardware, software is intangible, meaning it "cannot be touched". Software is also sometimes used in a more narrow sense, meaning application software only.

Examples:

* Application software, such as word processors, which performs productive tasks for users.
* Firmware, which is software programmed resident to electrically programmable memory devices on board mainboards or other types of integrated hardware carriers.
* Middleware, which controls and co-ordinates distributed systems.
* System software such as operating systems, which govern computing resources and provide convenience for users.
* Software testing is a domain independent of development and programming. Software testing consists of various methods to test and declare a software product fit before it can be launched for use by either an individual or a group.
* Testware, which is an umbrella term for all utilities and application software that are used in combination to test a software package but do not necessarily contribute to operational purposes. As such, testware is not a standing configuration but merely a working environment for application software or subsets thereof.
* Video games (except the hardware part)
* Websites

Hosts file


The hosts file is a computer file used in an operating system to map hostnames to IP addresses. This method is one of several system facilities to address network nodes on a computer network. On some operating systems, the host file content is used preferentially over other methods, such as the Domain Name System (DNS), but many systems implement name service switches to provide customization. Unlike DNS, the hosts file is under the control of the local computer's administrator.
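
The format itself is tiny: one IP address followed by one or more hostnames per line, with "#" starting a comment. Below is a minimal sketch in Python that parses lines in that style; the sample entries and names are invented for illustration, using addresses reserved for documentation.

    # Minimal parser for hosts-file style lines (sample content is invented).
    sample = """
    127.0.0.1   localhost
    192.0.2.10  intranet intranet.example.com   # documentation address
    # 203.0.113.5  disabled-host
    """

    hosts = {}
    for line in sample.splitlines():
        line = line.split("#", 1)[0].strip()     # drop comments and whitespace
        if not line:
            continue
        address, *names = line.split()
        for name in names:
            hosts[name] = address                # each name maps to one address

    print(hosts.get("intranet.example.com"))     # 192.0.2.10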

The ARPANET, the predecessor of the Internet, had no distributed host name database, such as the modern Domain Name System for retrieving a host's network node address. Each network node maintained its own map of the network nodes as needed and assigned them names that were memorable to the system's users. There was no method for ensuring that all references to a given node on a network were using the same name, nor was there a way to read some other system's hosts file to automatically obtain a copy.

The small size of the ARPANET made the use of hosts files practical. Network nodes typically had one address and could have many names. As individual TCP/IP computer networks gained popularity, however, the maintenance of the hosts file became a larger burden on system administrators as networks and network nodes were being added to the system with increasing frequency.

Standardization efforts, such as the format specification of the file HOSTS.TXT in RFC 952, and distribution protocols, e.g., the hostname server described in RFC 953, helped with these problems, but the centralized and monolithic nature of host files eventually necessitated the creation of the distributed Domain Name System.


Image


Images may be two-dimensional, such as a photograph or screen display, as well as three-dimensional, such as a statue. They may be captured by optical devices, such as cameras, mirrors, lenses, telescopes and microscopes, and by natural objects and phenomena, such as the human eye or water surfaces.

The word image is also used in the broader sense of any two-dimensional figure such as a map, a graph, a pie chart, or an abstract painting. In this wider sense, images can also be rendered manually, such as by drawing, painting, carving, rendered automatically by printing or computer graphics technology, or developed by a combination of methods, especially in a pseudo-photograph.

A volatile image is one that exists only for a short period of time. This may be a reflection of an object by a mirror, a projection of a camera obscura, or a scene displayed on a cathode ray tube. A fixed image, also called a hard copy, is one that has been recorded on a material object, such as paper or textile by photography or digital processes.

A mental image exists in an individual's mind: something one remembers or imagines. The subject of an image need not be real; it may be an abstract concept, such as a graph, function, or "imaginary" entity. For example, Sigmund Freud claimed to have dreamt purely in aural-images of dialogues. The development of synthetic acoustic technologies and the creation of sound art have led to a consideration of the possibilities of a sound-image made up of irreducible phonic substance beyond linguistic or musicological analysis.

A still image is a single static image, as distinguished from a moving image (see below). This phrase is used in photography, visual media and the computer industry to emphasize that one is not talking about movies, or in very precise or pedantic technical writing such as a standard.

A film still is a photograph taken on the set of a movie or television program during production, used for promotional purposes.

Geography


Java lies between Sumatra to the west and Bali to the east. Borneo lies to the north and Christmas Island to the south. It is the world's 13th largest island.

Java is almost entirely of volcanic origin; it contains no fewer than thirty-eight mountains forming an east-west spine which have at one time or another been active volcanoes. The highest volcano in Java is Mount Semeru (3,676 m). The most active volcano in Java and also in Indonesia is Mount Merapi (2,968 m). See Volcanoes of Java. Further mountains and highlands help to split the interior into a series of relatively isolated regions suitable for wet-rice cultivation; the rice lands of Java are among the richest in the world. Java was the first place where Indonesian coffee was grown, starting in 1699. Today, Coffea arabica is grown on the Ijen Plateau by small-holders and larger plantations.

The area of Java is approximately 139,000 km2. The island's longest river is the 600 km long Bengawan Solo River. The river rises from its source in central Java at the Lawu volcano, then flows north and eastwards to its mouth in the Java Sea near the city of Surabaya. The island is administratively divided into four provinces (Banten, West Java, Central Java, and East Java), one special region (Yogyakarta), and one special capital district (Jakarta).

Java


Java (Indonesian: Jawa) is an island of Indonesia and the site of its capital city, Jakarta. Once the center of powerful Hindu-Buddhist kingdoms, Islamic sultanates, and the core of the colonial Dutch East Indies, Java now plays a dominant role in the economic and political life of Indonesia. Home to a population of 130 million in 2006[1], it is the most populous island in the world, ahead of Honshū, the main island of Japan. Java is also one of the most densely populated regions on Earth.

Formed mostly as the result of volcanic events, Java is the 13th largest island in the world and the fifth largest island in Indonesia. A chain of volcanic mountains forms an east-west spine along the island. It has three main languages, though Javanese is dominant and is the native language of 60 million people in Indonesia, most of whom live on Java. Most residents are bilingual, with Indonesian as their second language. While the majority of Javanese are Muslim, Java has a diverse mixture of religious beliefs and cultures.


Technology


The complex communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. While the hardware can often be used to support other software systems, it is the design and the rigorous standardization process of the software architecture that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been delegated to the Internet Engineering Task Force (IETF). The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. Resulting discussions and final standards are published in a series of publications, each called a Request for Comments (RFC), freely available on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices (BCP) when implementing Internet technologies.

The Internet Standards describe a framework known as the Internet Protocol Suite. This is a model architecture that divides methods into a layered system of protocols (RFC 1122, RFC 1123). The layers correspond to the environment or scope in which their services operate. At the top is the Application Layer, the space for the application-specific networking methods used in software applications, e.g., a web browser program. Below this top layer, the Transport Layer connects applications on different hosts via the network (e.g., client-server model) with appropriate data exchange methods. Underlying these layers are the core networking technologies, consisting of two layers. The Internet Layer enables computers to identify and locate each other via Internet Protocol (IP) addresses, and allows them to connect to one another via intermediate (transit) networks. Lastly, at the bottom of the architecture is a software layer, the Link Layer, that provides connectivity between hosts on the same local network link, such as a local area network (LAN) or a dial-up connection. The model, also known as TCP/IP, is designed to be independent of the underlying hardware, which the model therefore does not concern itself with in any detail. Other models have been developed, such as the Open Systems Interconnection (OSI) model, but they are not compatible in the details of description or implementation; nevertheless, many similarities exist, and the TCP/IP protocols are usually included in the discussion of OSI networking.
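
The layering is visible even in a few lines of application code. In the sketch below (Python 3.8+ standard library, fetching the page of the reserved example.com domain), the application layer is the HTTP text written by hand, the transport layer is the TCP stream socket, and the internet and link layers are handled entirely by the operating system.

    import socket

    # Application layer: a minimal HTTP/1.1 request, written by hand.
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n\r\n"
    )

    # Transport layer: a TCP (SOCK_STREAM) connection; IP and below are the OS's job.
    with socket.create_connection(("example.com", 80), timeout=10) as conn:
        conn.sendall(request.encode("ascii"))
        reply = b""
        while chunk := conn.recv(4096):
            reply += chunk

    print(reply.split(b"\r\n", 1)[0].decode())   # e.g. "HTTP/1.1 200 OK"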

The most prominent component of the Internet model is the Internet Protocol (IP) which provides addressing systems (IP addresses) for computers on the Internet. IP enables internetworking and essentially establishes the Internet itself. IP Version 4 (IPv4) is the initial version used on the first generation of today's Internet and is still in dominant use. It was designed to address up to ~4.3 billion (10^9) Internet hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which is estimated to enter its final stage in approximately 2011. A new protocol version, IPv6, was developed in the mid-1990s; it provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 is currently in the commercial deployment phase around the world, and Internet address registries (RIRs) have begun to urge all resource managers to plan rapid adoption and conversion.

IPv6 is not interoperable with IPv4. It essentially establishes a "parallel" version of the Internet not directly accessible with IPv4 software. This means software upgrades or translator facilities are necessary for every networking device that needs to communicate on the IPv6 Internet. Most modern computer operating systems are already converted to operate with both versions of the Internet Protocol. Network infrastructures, however, are still lagging in this development. Aside from the complex physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is defined by its interconnections and routing policies.
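
The difference in address space is easy to show with Python's standard ipaddress module. The two addresses below are documentation/example addresses, not real hosts, and the snippet is only an illustration of the notation and sizes, not of routing behaviour.

    import ipaddress

    v4 = ipaddress.ip_address("192.0.2.1")       # IPv4 documentation address
    v6 = ipaddress.ip_address("2001:db8::1")     # IPv6 documentation address

    print(v4.version, v6.version)                # 4 6
    print(2 ** 32)                               # ~4.3 billion possible IPv4 addresses
    print(2 ** 128)                              # ~3.4e38 possible IPv6 addresses

    # An IPv4 address can be written inside IPv6 notation, but the two
    # protocols are still not interoperable on the wire.
    print(ipaddress.ip_address("::ffff:192.0.2.1").ipv4_mapped)   # 192.0.2.1
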
Structure

The Internet structure and its usage characteristics have been studied extensively. It has been determined that both the Internet IP routing structure and hypertext links of the World Wide Web are examples of scale-free networks. Similar to the way the commercial Internet providers connect via Internet exchange points, research networks tend to interconnect into large subnetworks such as GEANT, GLORIAD, Internet2 (successor of the Abilene Network), and the UK's national research and education network JANET. These in turn are built around smaller networks (see also the list of academic computer network organizations).

Many computer scientists describe the Internet as a "prime example of a large-scale, highly engineered, yet highly complex system". The Internet is extremely heterogeneous; for instance, data transfer rates and physical characteristics of connections vary widely. The Internet exhibits "emergent phenomena" that depend on its large-scale organization. For example, data transfer rates exhibit temporal self-similarity. The principles of the routing and addressing methods for traffic in the Internet reach back to their origins in the 1960s, when the eventual scale and popularity of the network could not be anticipated. Thus, the possibility of developing alternative structures is investigated.

Internet


The Internet is a global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP) to serve billions of users worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks of local to global scope that are linked by a broad array of electronic and optical networking technologies. The Internet carries a vast array of information resources and services, most notably the inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to support electronic mail.

Most traditional communications media, such as telephone and television services, are reshaped or redefined using the technologies of the Internet, giving rise to services such as Voice over Internet Protocol (VoIP) and IPTV. Newspaper publishing has been reshaped into Web sites, blogging, and web feeds. The Internet has enabled or accelerated the creation of new forms of human interactions through instant messaging, Internet forums, and social networking sites.

The origins of the Internet reach back to the 1960s when the United States funded research projects of its military agencies to build robust, fault-tolerant and distributed computer networks. This research and a period of civilian funding of a new U.S. backbone by the National Science Foundation spawned worldwide participation in the development of new networking technologies and led to the commercialization of an international network in the mid 1990s, and resulted in the following popularization of countless applications in virtually every aspect of modern human life. As of 2009, an estimated quarter of Earth's population uses the services of the Internet.

The Internet has no centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own standards. Only the overreaching definitions of the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System, are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely-affiliated international participants that anyone may associate with by contributing technical expertise.

Uploading and downloading


In computer networks, to download means to receive data to a local system from a remote system, or to initiate such a data transfer. Examples of a remote system from which a download might be performed include a webserver, FTP server, email server, or other similar systems. A download can mean either any file that is offered for downloading or that has been downloaded, or the process of receiving such a file.

The inverse operation, uploading, can refer to the sending of data from a local system to a remote system such as a server or another client with the intent that the remote system should store a copy of the data being transferred, or the initiation of such a process. The words first came into popular usage among computer users with the increased popularity of Bulletin Board Systems (BBSs), facilitated by the widespread distribution and implementation of dial-up access in the 1970s.

The use of the terms uploading and downloading often implies that the data sent or received is to be stored permanently, or at least stored more than temporarily. In contrast, downloading is distinguished from the related concept of streaming, which denotes receiving data that is used nearly immediately as it arrives, while the transmission is still in progress, and which may not be stored long-term; describing a process as downloading implies that the data is only usable once it has been received in its entirety.
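
The distinction shows up directly in code: a download typically writes the whole response to storage before it is used, whereas a streaming consumer acts on each piece as it arrives. Here is a minimal sketch using Python's standard urllib; the URL is simply the reserved example.com page and the local filename is invented.

    import urllib.request

    url = "http://example.com/"

    # Download: store the complete resource locally, then use it afterwards.
    with urllib.request.urlopen(url, timeout=10) as response, open("page.html", "wb") as out:
        out.write(response.read())

    # Streaming-style consumption: act on each chunk as soon as it arrives.
    with urllib.request.urlopen(url, timeout=10) as response:
        while chunk := response.read(1024):
            print("got", len(chunk), "bytes")   # e.g. feed a player or parser here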

Increasingly, websites that offer streaming media or media displayed in-browser, such as YouTube, and which place restrictions on the ability of users to save these materials to their computers after they have been received, say that downloading is not permitted.[1] In this context, "download" implies specifically "receive and save" instead of simply "receive".


Webs (web hosting)


Webs began in 2001 as Freewebs, just after the dot-com bubble, as a start-up launched by Haroon and Zeki Mokhtarzada after their last year at the University of Maryland, College Park. Their goal was to make the Internet accessible to anyone and to protect users' rights to free speech. As of April 2007, the site recorded 18 million unique visitors a month. Webs sites can include blogs, forums, wikis, calendars, guestbooks, webstores, photo galleries, links, web forms, widgets, games, puzzles, videos, and designer templates. The site includes paid features such as removal of on-site advertisements as well as the inclusion of other features.

On November 14, 2008, Freewebs changed their name to Webs, but users' URLs remained in the freewebs.com domain unless they chose to change over.

Security


The Web has become criminals' preferred pathway for spreading malware. Cybercrime carried out on the Web can include identity theft, fraud, espionage and intelligence gathering. Web-based vulnerabilities now outnumber traditional computer security concerns, and as measured by Google, about one in ten web pages may contain malicious code. Most Web-based attacks take place on legitimate websites, and most, as measured by Sophos, are hosted in the United States, China and Russia. The most common of all malware threats is SQL injection attacks against websites. Through HTML and URIs the Web was vulnerable to attacks like cross-site scripting (XSS) that came with the introduction of JavaScript and were exacerbated to some degree by Web 2.0 and Ajax web design that favors the use of scripts. Today by one estimate, 70% of all websites are open to XSS attacks on their users.
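
Both attack classes mentioned above stem from mixing untrusted input into code or markup, and the usual defences are to keep that input inert. Here is a minimal illustration in Python (standard library only; the table, the input strings, and the comment text are all invented) of parameterised queries for SQL and escaping for HTML output.

    import html
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE users (name TEXT)")
    con.execute("INSERT INTO users VALUES ('alice')")

    user_input = "alice' OR '1'='1"          # a classic injection attempt

    # Parameterised query: the input is treated as data, never as SQL.
    rows = con.execute("SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()
    print(rows)                              # [] -- the injection does nothing

    # Escaping untrusted text before placing it in HTML blunts reflected XSS.
    comment = "<script>alert('xss')</script>"
    print("<p>" + html.escape(comment) + "</p>")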

Proposed solutions vary to extremes. Large security vendors like McAfee already design governance and compliance suites to meet post-9/11 regulations, and some, like Finjan, have recommended active real-time inspection of code and all content regardless of its source. Some have argued that for enterprise to see security as a business opportunity rather than a cost center, "ubiquitous, always-on digital rights management" enforced in the infrastructure by a handful of organizations must replace the hundreds of companies that today secure data and networks. Jonathan Zittrain has said users sharing responsibility for computing safety is far preferable to locking down the Internet.

Linking


Over time, many web resources pointed to by hyperlinks disappear, relocate, or are replaced with different content. This makes hyperlinks obsolete, a phenomenon referred to in some circles as link rot, and the hyperlinks affected by it are often called dead links. The ephemeral nature of the Web has prompted many efforts to archive web sites. The Internet Archive, active since 1996, is one of the best-known efforts.
Dynamic updates of web pages

JavaScript is a scripting language that was initially developed in 1995 by Brendan Eich, then of Netscape, for use within web pages. The standardized version is ECMAScript. To overcome some of the limitations of the page-by-page model described above, some web applications also use Ajax (asynchronous JavaScript and XML). JavaScript code delivered with the page can make additional HTTP requests to the server, either in response to user actions such as mouse clicks, or based on elapsed time. The server's responses are used to modify the current page rather than creating a new page with each response. Thus the server only needs to provide limited, incremental information. Since multiple Ajax requests can be handled at the same time, users can interact with a page even while data is being retrieved. Some web applications regularly poll the server to ask if new information is available.
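
In the browser this polling is done in JavaScript, but the pattern itself is language-agnostic. The sketch below only mimics it from a command-line client in Python; the URL is the reserved example.com page, and the interval and number of rounds are arbitrary values chosen for illustration.

    import time
    import urllib.request

    def poll(url, interval=5, rounds=3):
        """Periodically re-fetch a resource and report when it changes."""
        last = None
        for _ in range(rounds):
            with urllib.request.urlopen(url, timeout=10) as response:
                body = response.read()
            if body != last:                  # only act on new information
                print("update:", len(body), "bytes")
                last = body
            time.sleep(interval)              # wait before asking again

    poll("http://example.com/")
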
WWW prefix

Many web addresses begin with www, because of the long-standing practice of naming Internet hosts (servers) according to the services they provide. The hostname for a web server is often www, as it is ftp for an FTP server, and news or nntp for a USENET news server. These host names appear as Domain Name System (DNS) subdomain names, as in www.example.com. The use of such subdomain names is not required by any technical or policy standard; indeed, the first ever web server was called nxoc01.cern.ch,[21] and many web sites exist without a www subdomain prefix, or with some other prefix such as "www2", "secure", etc. These subdomain prefixes have no consequence; they are simply chosen names. Many web servers are set up such that both the domain by itself (e.g., example.com) and the www subdomain (e.g., www.example.com) refer to the same site; others require one form or the other, or they may map to different web sites.

When a single word is typed into the address bar and the return key is pressed, some web browsers automatically try adding "www." to the beginning of it and possibly ".com", ".org" and ".net" at the end. For example, typing 'apple' may resolve to http://www.apple.com/ and 'openoffice' to http://www.openoffice.org. This feature was beginning to be included in early versions of Mozilla Firefox (when it still had the working title 'Firebird') in early 2003.[22] It is reported that Microsoft was granted a US patent for the same idea in 2008, but only with regard to mobile devices.

The 'http://' or 'https://' part of web addresses does have meaning: these refer to Hypertext Transfer Protocol and to HTTP Secure and so define the communication protocol that will be used to request and receive the page and all its images and other resources. The HTTP network protocol is fundamental to the way the World Wide Web works, and the encryption involved in HTTPS adds an essential layer if confidential information such as passwords or bank details are to be exchanged over the public internet. Web browsers often prepend this 'scheme' part to URLs too, if it is omitted. Despite this, Berners-Lee himself has admitted that the two 'forward slashes' (//) were in fact initially unnecessary[24]. In overview, RFC 2396 defined web URLs to have the following form: <scheme>://<host><path>?<query>#<fragment>. Here <host> is, for example, the web server (like www.example.com), and <path> identifies the web page. The web server processes the <query>, which can be data sent via a form, e.g., terms sent to a search engine, and the returned page depends on it. Finally, <fragment> is not sent to the web server; it identifies the portion of the page which the browser shows first.
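
Python's standard urllib.parse splits a URL into exactly these parts, which makes the structure easy to see. The URL below is an invented example on the reserved example.com domain.

    from urllib.parse import urlsplit

    parts = urlsplit("https://www.example.com/wiki/page?search=cpu#history")
    print(parts.scheme)     # https
    print(parts.netloc)     # www.example.com  (the host)
    print(parts.path)       # /wiki/page
    print(parts.query)      # search=cpu       (sent to the server)
    print(parts.fragment)   # history          (kept by the browser, not sent)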

In English, www is pronounced by individually pronouncing the name of each character (double-u double-u double-u). Although some technical users pronounce it dub-dub-dub, this is not widespread. The English writer Douglas Adams once quipped in The Independent on Sunday (1999): "The World Wide Web is the only thing I know of whose shortened form takes three times longer to say than what it's short for," with Stephen Fry later pronouncing it in his "Podgrammes" series of podcasts as "wuh wuh wuh." In Mandarin Chinese, World Wide Web is commonly translated via a phono-semantic matching to wàn wéi wǎng (万维网), which satisfies www and literally means "myriad dimensional net", a translation that very appropriately reflects the design concept and proliferation of the World Wide Web. Tim Berners-Lee's web-space states that World Wide Web is officially spelled as three separate words, each capitalized, with no intervening hyphens.

History of the World Wide Web


In March 1989, Sir Tim Berners-Lee wrote a proposal that referenced ENQUIRE, a database and software project he had built in 1980, and described a more elaborate information management system.

With help from Robert Cailliau, he published a more formal proposal (on November 12, 1990) to build a "Hypertext project" called "WorldWideWeb" (one word, also "W3") as a "web" of "hypertext documents" to be viewed by "browsers", using a client-server architecture. This proposal estimated that a read-only web would be developed within three months and that it would take six months to achieve, "the creation of new links and new material by readers, [so that] authorship becomes universal" as well as "the automatic notification of a reader when new material of interest to him/her has become available". See Web 2.0 and RSS/Atom, which have taken a little longer to mature.

The proposal had been modeled after the Dynatext SGML reader, by Electronic Book Technology, a spin-off from the Institute for Research in Information and Scholarship at Brown University. The Dynatext system, licensed by CERN, was technically advanced and was a key player in the extension of SGML ISO 8879:1986 to Hypermedia within HyTime, but it was considered too expensive and had an inappropriate licensing policy for use in the general high energy physics community, namely a fee for each document and each document alteration.
A NeXT Computer was used by Berners-Lee as the world's first web server and also to write the first web browser, WorldWideWeb, in 1990. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web:[6] the first web browser (which was a web editor as well), the first web server, and the first web pages[7] which described the project itself. On August 6, 1991, he posted a short summary of the World Wide Web project on the alt.hypertext newsgroup. This date also marked the debut of the Web as a publicly available service on the Internet. The first server outside Europe was set up at SLAC in December 1992. The crucial underlying concept of hypertext originated with older projects from the 1960s, such as the Hypertext Editing System (HES) at Brown University (developed by, among others, Ted Nelson and Andries van Dam), Ted Nelson's Project Xanadu, and Douglas Engelbart's oN-Line System (NLS). Both Nelson and Engelbart were in turn inspired by Vannevar Bush's microfilm-based "memex," which was described in the 1945 essay "As We May Think".

Berners-Lee's breakthrough was to marry hypertext to the Internet. In his book Weaving The Web, he explains that he had repeatedly suggested that a marriage between the two technologies was possible to members of both technical communities, but when no one took up his invitation, he finally tackled the project himself. In the process, he developed a system of globally unique identifiers for resources on the Web and elsewhere: the Universal Document Identifier (UDI) later known as Uniform Resource Locator (URL) and Uniform Resource Identifier (URI); and the publishing language HyperText Markup Language (HTML); and the Hypertext Transfer Protocol (HTTP).

The World Wide Web had a number of differences from other hypertext systems that were then available. The Web required only unidirectional links rather than bidirectional ones. This made it possible for someone to link to another resource without action by the owner of that resource. It also significantly reduced the difficulty of implementing web servers and browsers (in comparison to earlier systems), but in turn presented the chronic problem of link rot. Unlike predecessors such as HyperCard, the World Wide Web was non-proprietary, making it possible to develop servers and clients independently and to add extensions without licensing restrictions. On April 30, 1993, CERN announced that the World Wide Web would be free to anyone, with no fees due. Coming two months after the announcement that the Gopher protocol was no longer free to use, this produced a rapid shift away from Gopher and towards the Web. An early popular web browser was ViolaWWW, which was based upon HyperCard.

Scholars generally agree that a turning point for the World Wide Web began with the introduction of the Mosaic web browser in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen. Funding for Mosaic came from the U.S. High-Performance Computing and Communications Initiative, a funding program initiated by the High Performance Computing and Communication Act of 1991, one of several computing developments initiated by U.S. Senator Al Gore. Prior to the release of Mosaic, graphics were not commonly mixed with text in web pages, and the Web's popularity was less than older protocols in use over the Internet, such as Gopher and Wide Area Information Servers (WAIS). Mosaic's graphical user interface allowed the Web to become, by far, the most popular Internet protocol.

The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he left the European Organization for Nuclear Research (CERN) in October, 1994. It was founded at the Massachusetts Institute of Technology Laboratory for Computer Science (MIT/LCS) with support from the Defense Advanced Research Projects Agency (DARPA)—which had pioneered the Internet—and the European Commission. By the end of 1994, while the total number of websites was still minute compared to present standards, quite a number of notable websites were already active, many of which are the precursors or inspiration for today's most popular services.

Connected by the existing Internet, other websites were created around the world, adding international standards for domain names and HTML. Since then, Berners-Lee has played an active role in guiding the development of web standards (such as the markup languages in which web pages are composed), and in recent years has advocated his vision of a Semantic Web. The World Wide Web enabled the spread of information over the Internet through an easy-to-use and flexible format. It thus played an important role in popularizing use of the Internet. Although the two terms are sometimes conflated in popular use, World Wide Web is not synonymous with Internet. The Web is an application built on top of the Internet.

World Wide Web


The Web" and "WWW" redirect here. For other uses, see WWW (disambiguation).
The Web's historic logo designed by Robert Cailliau

The World Wide Web, abbreviated as WWW and commonly known as the Web, is a system of interlinked hypertext documents contained on the Internet. With a web browser, one can view web pages that may contain text, images, videos, and other multimedia and navigate between them by using hyperlinks. Using concepts from earlier hypertext systems, British engineer and computer scientist Sir Tim Berners-Lee, now the Director of the World Wide Web Consortium, wrote a proposal in March 1989 for what would eventually become the World Wide Web. He was later joined by Belgian computer scientist Robert Cailliau while both were working at CERN in Geneva, Switzerland. In 1990, they proposed using "HyperText [...] to link and access information of various kinds as a web of nodes in which the user can browse at will", and released that web in December.

"The World-Wide Web (W3) was developed to be a pool of human knowledge, which would allow collaborators in remote sites to share their ideas and all aspects of a common project." If two projects are independently created, rather than have a central figure make the changes, the two bodies of information could form into one cohesive piece of work.