Universal Plug and Play

Universal Plug and Play (UPnP) is a set of networking protocols promulgated by the UPnP Forum. The goals of UPnP are to allow devices to connect seamlessly and to simplify the implementation of networks in the home (data sharing, communications, and entertainment) and in corporate environments for simplified installation of computer components. UPnP achieves this by defining and publishing UPnP device control protocols (DCPs) built upon open, Internet-based communication standards.

The term UPnP is derived from plug and play, a technology for dynamically attaching devices directly to a computer, although UPnP is not directly related to the earlier plug-and-play technology. UPnP devices are "plug-and-play" in that when connected to a network they automatically announce their network address and supported device and service types, enabling clients that recognize those types to immediately begin using the device.

Overview

The UPnP architecture allows peer-to-peer networking of personal computers (PCs), networked home appliances, consumer electronics (CE) devices and wireless devices. It is a distributed, open architecture protocol based on established standards such as TCP/IP, UDP, HTTP, XML, and SOAP.

The UPnP architecture supports zero-configuration networking. A UPnP-compatible device from any vendor can dynamically join a network, obtain an IP address, announce its name, convey its capabilities upon request, and learn about the presence and capabilities of other devices. DHCP and DNS servers are optional and are only used if they are available on the network. Devices can leave the network automatically without leaving any unwanted state information behind.

UPnP was published as a 73-part international standard, ISO/IEC 29341, in December, 2008.

Other UPnP features include:

Media and device independence
UPnP technology can run on many media that support IP, including Ethernet, FireWire, IR (IrDA), home wiring (G.hn) and RF (Bluetooth, Wi-Fi). No special device driver support is necessary; common protocols are used instead.
UI Control
UPnP architecture enables devices to present a user interface through a web browser.
Operating system and programming language independence
Any operating system and any programming language can be used to build UPnP products. UPnP does not specify or constrain the design of an API for applications running on control points; OS vendors may create APIs that suit their customers' needs.
Programmatic control
UPnP architecture also enables conventional application programmatic control.
Extensibility
Each UPnP product can have device-specific services layered on top of the basic architecture. In addition to combining services defined by UPnP Forum in various ways, vendors can define their own device and service types, and can extend standard devices and services with vendor-defined actions, state variables, data structure elements, and variable values.

Addressing

The foundation for UPnP networking is IP addressing. Each device must have a Dynamic Host Configuration Protocol (DHCP) client and search for a DHCP server when the device is first connected to the network. If no DHCP server is available, that is, the network is unmanaged, the device must assign itself an address. The process by which a UPnP device assigns itself an address is known within the UPnP Device Architecture as "AutoIP". In UPnP Device Architecture Version 1.0, AutoIP is defined within the specification itself; in UPnP Device Architecture Version 1.1, AutoIP references IETF RFC 3927. If, during the DHCP transaction, the device obtains a domain name, for example through a DNS server or via DNS forwarding, the device should use that name in subsequent network operations; otherwise, the device should use its IP address.
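As a rough illustration of the AutoIP step, the sketch below (Python, standard library only) picks a candidate link-local address in the 169.254.0.0/16 range in the manner RFC 3927 describes; the ARP probing and conflict handling that a real device must perform are only noted in comments.

    # Illustrative sketch of AutoIP/link-local address selection (RFC 3927).
    # A real implementation would ARP-probe the candidate and retry on conflict.
    import random

    def choose_link_local(seed=None):
        rng = random.Random(seed)
        # RFC 3927 reserves 169.254.0.x and 169.254.255.x, so pick from
        # 169.254.1.0 through 169.254.254.255.
        return f"169.254.{rng.randint(1, 254)}.{rng.randint(0, 255)}"

    candidate = choose_link_local(seed="00:11:22:33:44:55")  # e.g. seeded from the MAC address
    print("trying", candidate)  # then ARP-probe; if another host answers, pick again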

Discovery

Given an IP address, the first step in UPnP networking is Discovery. The UPnP discovery protocol, defined in Section 1 of the UPnP Device Architecture, is known as the Simple Service Discovery Protocol (SSDP). When a device is added to the network, SSDP allows that device to advertise its services to control points on the network. Similarly, when a control point is added to the network, SSDP allows that control point to search for devices of interest on the network. The fundamental exchange in both cases is a discovery message containing a few, essential specifics about the device or one of its services, for example, its type, identifier, and a pointer to more detailed information.
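The discovery exchange can be sketched with a plain UDP socket. The multicast address 239.255.255.250, port 1900, and the M-SEARCH header layout below follow SSDP; the three-second timeout and the ssdp:all search target are arbitrary choices for this illustration.

    # Minimal SSDP M-SEARCH sketch: multicast a search request, print responses.
    import socket

    MSEARCH = (
        "M-SEARCH * HTTP/1.1\r\n"
        "HOST: 239.255.255.250:1900\r\n"
        'MAN: "ssdp:discover"\r\n'
        "MX: 3\r\n"                 # devices may delay their response up to MX seconds
        "ST: ssdp:all\r\n"          # search target: all devices and services
        "\r\n"
    )

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3.0)
    sock.sendto(MSEARCH.encode("ascii"), ("239.255.255.250", 1900))

    try:
        while True:
            data, addr = sock.recvfrom(65507)
            # Each response is an HTTP-over-UDP message whose LOCATION header
            # points at the device description document (see "Description" below).
            print(addr, data.decode("utf-8", errors="replace").splitlines()[0])
    except socket.timeout:
        pass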

Description

After a control point has discovered a device, the control point still knows very little about the device. For the control point to learn more about the device and its capabilities, or to interact with the device, the control point must retrieve the device's description from the URL provided by the device in the discovery message. The UPnP description for a device is expressed in XML and includes vendor-specific manufacturer information such as the model name and number, serial number, manufacturer name, URLs to vendor-specific web sites, etc. The description also includes a list of any embedded devices or services, as well as URLs for control, eventing, and presentation. For each service, the description includes a list of the commands, or actions, to which the service responds, and parameters, or arguments, for each action. The description for a service also includes a list of variables; these variables model the state of the service at run time, and are described in terms of their data type, range, and event characteristics.
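A minimal sketch of the description step, assuming a hypothetical LOCATION URL taken from a discovery response; it fetches the XML document and reads a few elements defined by the UPnP device schema.

    # Fetch and parse a device description document (sketch).
    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    LOCATION_URL = "http://192.168.1.1:5000/rootDesc.xml"  # placeholder from discovery
    NS = {"d": "urn:schemas-upnp-org:device-1-0"}

    with urlopen(LOCATION_URL) as resp:
        root = ET.fromstring(resp.read())

    device = root.find("d:device", NS)
    print("friendlyName:", device.findtext("d:friendlyName", namespaces=NS))
    print("manufacturer:", device.findtext("d:manufacturer", namespaces=NS))

    # Each <service> entry lists the URLs used in the Control and Eventing steps below.
    for svc in device.findall(".//d:service", NS):
        print(svc.findtext("d:serviceType", namespaces=NS),
              "control:", svc.findtext("d:controlURL", namespaces=NS),
              "events:", svc.findtext("d:eventSubURL", namespaces=NS))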

Control

Having retrieved a description of the device, the control point can send actions to a device's service. To do this, a control point sends a suitable control message to the control URL for the service (provided in the device description). Control messages are also expressed in XML using the Simple Object Access Protocol (SOAP). Much like function calls, the service returns any action-specific values in response to the control message. The effects of the action, if any, are modeled by changes in the variables that describe the run-time state of the service.
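Control messages are ordinary SOAP envelopes POSTed to the service's control URL with a SOAPACTION header. The sketch below shows the shape of such a request; the control URL is a placeholder, and GetExternalIPAddress on a WANIPConnection service is used purely as a familiar example action.

    # Sketch of a UPnP control request: SOAP envelope + SOAPACTION header.
    from urllib.request import Request, urlopen

    CONTROL_URL = "http://192.168.1.1:5000/ctl/IPConn"            # hypothetical
    SERVICE_TYPE = "urn:schemas-upnp-org:service:WANIPConnection:1"
    ACTION = "GetExternalIPAddress"

    body = (
        '<?xml version="1.0"?>'
        '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" '
        's:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">'
        "<s:Body>"
        f'<u:{ACTION} xmlns:u="{SERVICE_TYPE}"/>'
        "</s:Body></s:Envelope>"
    )

    req = Request(CONTROL_URL, data=body.encode("utf-8"), headers={
        "Content-Type": 'text/xml; charset="utf-8"',
        "SOAPACTION": f'"{SERVICE_TYPE}#{ACTION}"',
    })
    with urlopen(req) as resp:
        print(resp.read().decode("utf-8"))  # SOAP response carrying any action-specific values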

Event notification

The next step in UPnP networking is event notification, or "eventing". The event notification protocol defined in the UPnP Device Architecture is known as GENA, an acronym for "General Event Notification Architecture". A UPnP description for a service includes a list of actions the service responds to and a list of variables that model the state of the service at run time. The service publishes updates when these variables change, and a control point may subscribe to receive this information. The service publishes updates by sending event messages. Event messages contain the names of one or more state variables and the current value of those variables. These messages are also expressed in XML. A special initial event message is sent when a control point first subscribes; this event message contains the names and values for all evented variables and allows the subscriber to initialize its model of the state of the service. To support scenarios with multiple control points, eventing is designed to keep all control points equally informed about the effects of any action. Therefore, all subscribers are sent all event messages, subscribers receive event messages for all "evented" variables that have changed, and event messages are sent no matter why the state variable changed (either in response to a requested action or because the state the service is modeling changed).
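A subscription is established with an HTTP SUBSCRIBE request to the service's event URL; the device then delivers NOTIFY messages (the initial one carrying all evented variables) to the callback address. The host, path, and callback below are placeholders for this sketch.

    # Sketch of a GENA subscription request.
    import http.client

    EVENT_HOST = "192.168.1.1:5000"          # hypothetical device address
    EVENT_SUB_PATH = "/evt/IPConn"           # eventSubURL from the description document
    CALLBACK = "http://192.168.1.50:8080/"   # where the control point listens for NOTIFYs

    conn = http.client.HTTPConnection(EVENT_HOST)
    conn.request("SUBSCRIBE", EVENT_SUB_PATH, headers={
        "HOST": EVENT_HOST,
        "CALLBACK": f"<{CALLBACK}>",
        "NT": "upnp:event",
        "TIMEOUT": "Second-1800",
    })
    resp = conn.getresponse()
    # The SID header identifies the subscription; event messages arrive as NOTIFY
    # requests at the CALLBACK URL until the subscription expires or is cancelled.
    print(resp.status, resp.getheader("SID"), resp.getheader("TIMEOUT"))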

Presentation

The final step in UPnP networking is presentation. If a device has a URL for presentation, then the control point can retrieve a page from this URL, load the page into a web browser, and depending on the capabilities of the page, allow a user to control the device and/or view device status. The degree to which each of these can be accomplished depends on the specific capabilities of the presentation page and device.

UPnP AV standards

UPnP AV stands for UPnP Audio and Video. On 12 July 2006 the UPnP Forum announced the release of version 2 of the UPnP Audio and Video specifications (UPnP AV v2), with new MediaServer version 2.0 and MediaRenderer version 2.0 classes. These enhancements are created by adding capabilities to the UPnP AV MediaServer and MediaRenderer device classes that allow a higher level of interoperability between MediaServers and MediaRenderers from different manufacturers. Some of the early devices complying with these standards were marketed by Philips under the Streamium brand name.

The UPnP AV standards have been referenced in specifications published by other organizations, including the Digital Living Network Alliance Networked Device Interoperability Guidelines, International Electrotechnical Commission IEC 62481-1, and the Cable Television Laboratories OpenCable Home Networking Protocol.

UPnP AV components

  • UPnP MediaServer DCP - which is the UPnP-server (a 'master' device) that shares media library information and streams media data (like audio/video/picture files) to UPnP-clients on the network.
  • UPnP MediaServer ControlPoint - which is the UPnP-client (a 'slave' device) that can auto-detect UPnP-servers on the network to browse and stream media/data-files from them.
  • UPnP MediaRenderer DCP - which is a 'slave' device that can render (play) content.
  • UPnP RenderingControl DCP - which controls MediaRenderer settings (volume, brightness, RGB, sharpness, and more).
  • UPnP Remote User Interface (RUI) client/server - which sends/receives control commands between the UPnP-client and UPnP-server over the network (like record, schedule, play, pause, stop, etc.).
    • Web4CE (CEA-2014) for UPnP Remote UI - the CEA-2014 standard designed by the Consumer Electronics Association's R7 Home Network Committee: a web-based protocol and framework for remote user interfaces on UPnP networks and the Internet. This standard allows a UPnP-capable home network device to provide its interface (display and control options) as a web page to be displayed on any other device connected to the home network. In other words, a browser-based communications method can be used to control CE devices on a UPnP home network, using Ethernet and a special version of HTML called CE-HTML (Web4CE).
  • QoS (Quality of Service) - an important (but not mandatory) service function for use with UPnP AV (Audio and Video). QoS refers to control mechanisms that can provide different priority to different users or data flows, or guarantee a certain level of performance to a data flow in accordance with requests from the application program. Since UPnP AV is mostly used to deliver streaming media, often near-real-time or real-time audio/video data, it is critical that the data be delivered within a specific time or the stream is interrupted. Quality of Service guarantees are especially important if the network capacity is limited, for example on public networks like the Internet.
  • Quality of Service for UPnP consists of Sink Device (client-side/front-end) and Source Device (server-side/back-end) service functions, with classes such as: Traffic Class, which indicates the kind of traffic in the traffic stream (for example, audio or video); Traffic Identifier (TID), which identifies data packets as belonging to a unique traffic stream; Traffic Specification (TSPEC), which contains a set of parameters that define the characteristics of the traffic stream (for example, operating requirements and scheduling); and Traffic Stream (TS), which is a unidirectional flow of data that originates at a source device and terminates at one or more sink device(s). A rough sketch of these classes follows this list.
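Purely as an illustration of how these pieces relate, the sketch below models them as plain Python dataclasses; the field names paraphrase the descriptions above and are not the normative parameter names from the UPnP QoS specifications.

    # Illustrative data-structure sketch only, not the UPnP QoS schema.
    from dataclasses import dataclass

    @dataclass
    class TrafficSpecification:          # TSPEC: characteristics of the traffic stream
        traffic_class: str               # e.g. "audio" or "video"
        traffic_id: int                  # TID: tags packets of one traffic stream
        mean_data_rate_kbps: int         # example operating requirement
        max_burst_bytes: int             # example scheduling-related parameter

    @dataclass
    class TrafficStream:                 # TS: unidirectional flow from source to sink(s)
        source_device: str
        sink_devices: list[str]
        tspec: TrafficSpecification

    ts = TrafficStream("media-server", ["living-room-tv"],
                       TrafficSpecification("video", 7, 8000, 65536))
    print(ts)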

NAT traversal

One solution for Network Address Translation (NAT) traversal, called the Internet Gateway Device (IGD) Protocol, is implemented via UPnP. Many routers and firewalls expose themselves as Internet Gateway Devices, allowing any local UPnP controller to perform a variety of actions, including retrieving the external IP address of the device, enumerating existing port mappings, and adding and removing port mappings. By adding a port mapping, a UPnP controller behind the IGD can enable traversal of the IGD from an external address to an internal client.
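A hedged sketch of the port-mapping action just mentioned: an AddPortMapping request against a router's WANIPConnection service. The control URL and internal client address are placeholders; the argument names are those used by the IGD WANIPConnection service.

    # Sketch: ask an IGD to forward external TCP port 8080 to a LAN host.
    from urllib.request import Request, urlopen

    CONTROL_URL = "http://192.168.1.1:5000/ctl/IPConn"   # hypothetical IGD control URL
    SERVICE = "urn:schemas-upnp-org:service:WANIPConnection:1"

    args = {
        "NewRemoteHost": "",
        "NewExternalPort": "8080",
        "NewProtocol": "TCP",
        "NewInternalPort": "8080",
        "NewInternalClient": "192.168.1.50",   # LAN host to expose
        "NewEnabled": "1",
        "NewPortMappingDescription": "example mapping",
        "NewLeaseDuration": "3600",
    }
    arg_xml = "".join(f"<{k}>{v}</{k}>" for k, v in args.items())
    body = (
        '<?xml version="1.0"?>'
        '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" '
        's:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"><s:Body>'
        f'<u:AddPortMapping xmlns:u="{SERVICE}">{arg_xml}</u:AddPortMapping>'
        "</s:Body></s:Envelope>"
    )
    req = Request(CONTROL_URL, data=body.encode("utf-8"), headers={
        "Content-Type": 'text/xml; charset="utf-8"',
        "SOAPACTION": f'"{SERVICE}#AddPortMapping"',
    })
    with urlopen(req) as resp:
        print(resp.status)   # 200 means the mapping was created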

Problems with UPnP

Lack of Default Authentication

The UPnP protocol, by default, does not implement any authentication, so UPnP device implementations must implement their own authentication mechanisms or implement the Device Security Service. There also exists a non-standard solution called UPnP-UP (Universal Plug and Play - User Profile), which proposes an extension to allow user authentication and authorization mechanisms for UPnP devices and applications.

Unfortunately, many UPnP device implementations lack authentication mechanisms, and by default assume local systems and their users are completely trustworthy.

Most notably, routers and firewalls running the UPnP IGD protocol are vulnerable to attack, since the framers of the IGD implementation omitted a standard authentication method. For example, Adobe Flash programs are capable of generating a specific type of HTTP request, which allows a router implementing the UPnP IGD protocol to be controlled by a malicious web site when someone with a UPnP-enabled router simply visits that web site. The following changes can be made silently by code embedded in an Adobe Flash object hosted on a malicious website:

  • Port forward internal services (ports) to the router's external-facing side (i.e. expose computers behind the firewall to the Internet).
  • Port forward the router's web administration interface to the external-facing side.
  • Port forwarding to any external server located on the Internet, effectively allowing an attacker to attack an Internet host via the router, while hiding their IP address.
  • Change DNS server settings so that when victims believe they are visiting a particular site (such as an on-line bank), they are redirected to a malicious website instead.
  • Change the DNS server settings so that when a victim receives any software updates (from a source that isn't properly verified via some other mechanism, such as checking that a digital certificate has been signed by a trusted source), they download malicious code instead.
  • Change administrative credentials to the router/firewall.
  • Change PPP settings.
  • Change IP settings for all interfaces.
  • Change WiFi settings.
  • Terminate connections.

This only applies to the "firewall-hole-punching" feature of UPnP; it does not apply when the IGD does not support UPnP or UPnP has been disabled on the IGD. Also, not all routers can have such things as DNS server settings altered by UPnP, because much of the specification (including LAN Host Configuration) is optional for UPnP-enabled routers.

Other Issues

  • UPnP uses HTTP over UDP (known as HTTPU and HTTPMU for unicast and multicast), even though this is not standardized and is specified only in an Internet-Draft that expired in 2001.
  • UPnP does not have a lightweight authentication protocol, while the available security protocols are complex. As a result, some UPnP devices ship with UPnP turned off by default as a security measure.

Future developments

UPnP continues to be actively developed. In fall 2008, the UPnP Forum ratified the successor to UPnP 1.0, UPnP 1.1.

The DPWS standard was a candidate successor for UPnP, but UPnP 1.1 was selected by the Forum.

The UPnP InternetGatewayDevice's WANIPConnection service has a competing solution known as NAT-PMP, an IETF draft introduced by Apple Inc. in 2005. However, NAT-PMP is focused only on NAT traversal. UPnP InternetGatewayDevice is currently evolving to version 2, for which preliminary content has been published.



Plug and play

In computing, plug and play is a term used to describe the characteristic of a computer bus, or device specification, which facilitates the discovery of a hardware component in a system without the need for physical device configuration, or user intervention in resolving resource conflicts.

Plug and play refers both to the traditional boot-time assignment of device resources and driver identification, and to hotplug systems such as USB and FireWire.

History of Device Configuration

In the beginnings of computing technology, the hardware logic was just a collection of building blocks, and the relationships between them had to be completely redesigned to accommodate different calculating operations. These changes were usually done by connecting some wires between modules and disconnecting others. The very earliest of mechanical computing devices such as the IBM punchcard accounting, tabulating and interpreting machines were programmed entirely in this manner, by the use of a quick-swap control panel wired to route signals between configuration sockets.

As general-purpose computing devices developed, these connections and disconnections were instead used to specify locations in the system address space where an expansion device should appear, in order for the device to be accessible by the central processing unit. If two or more of the same device were installed in one computer, it would be necessary to assign the second device to a separate, non-overlapping region of the system address space so that both could be accessible at the same time.

Some early microcomputing devices such as the Apple II required the end-user to physically cut some wires and solder others together to make these configuration changes. The changes were intended to be mostly permanent for the life of the hardware.

Over time the need developed for more frequent changes and for easier changes to be made by unskilled computer users. Rather than cutting and soldering connections, the header and jumper was developed. The header consists of two or more vertical pins arranged in an evenly-spaced grid. The jumper is a small conductive strip of metal clipped across the header pins. The conductive jumper strip is commonly encased in a plastic shell to help prevent electrical shorting between adjacent jumpers.

Jumpers have the unfortunate property of being easy to misplace if not needed, and are difficult to grasp in order to remove them from headers. To help make these changes easier, the DIP switch was developed, also known as a dual in-line package switch. The DIP switch has small rocker or sliding switches enclosed in a plastic shell and usually numbered for easy reference. DIP switches usually come in units of four or eight switches; longer rows of switches can be made by combining two or more units. DIP switches are particularly useful where a long string of jumpers would be closely packed together or where four or more jumpers would be used in combination to configure one device function. DIP switches also have a particular advantage for configuration settings which are likely to be changed more frequently than once every few years. (Because of the inconvenience of setting them, jumpers are typically used for settings that are not expected to need to be changed unless the device is removed from one computer and installed in another, an infrequent occurrence for internal devices in consumer desktop PCs.)

As computing devices spread further out into the general population, there was ever greater pressure to automate this configuration process. One of the first major industry efforts towards self-configuration was made by IBM with the creation of their Personal System/2 line of computers using the Micro Channel Architecture (MCA). This took a giant leap forward, as expansion devices had absolutely no jumpers or DIP switches.

However, IBM's first attempt at self-configuration had a few major problems. In an attempt to simplify device setup, every piece of hardware was issued with a disk containing a special file used to auto-configure the hardware to work with the computer. (If the device required one or more drivers for specific operating systems, they were usually included on the same disk.) Without this disk the hardware would be completely useless and the computer would not boot at all until the unconfigured device was removed.

MCA also suffered for being a proprietary technology. Unlike their previous PC bus design, the AT bus, IBM did not publicly release specifications for MCA and actively pursued patents to block third parties from selling unlicensed implementations of it, and the developing PC Clone market did not want to pay royalties to IBM in order to use this new technology. The PC clone makers instead developed EISA, an extension to the existing old non-PnP AT bus standard, which they also further standardized and renamed ISA (to avoid IBM's "AT" trademark). With few vendors other than IBM supporting it with computers or cards, MCA eventually failed in the marketplace. Most vendors of PC-compatibles stayed largely with ISA and manual configuration, while EISA offered the same type of auto-configuration featured in MCA. (EISA cards required a configuration file as well.)

In time, many ISA cards incorporated, through proprietary and varied techniques, hardware to self-configure or to provide for software configuration; often the card came with a configuration program on disk that could automatically set the software-configurable (but not itself self-configuring) hardware. Some cards had both jumpers and software-configuration, with some settings controlled by each; this compromise reduced the number of jumpers that had to be set, while avoiding great expense for certain settings, e.g. nonvolatile registers for a base address setting. The problems of required jumpers continued on but slowly diminished as more and more devices, both ISA and other types, included extra self-configuration hardware. However, these efforts still did not solve the problem of making sure the end-user has the appropriate software driver for the hardware.



Internet Protocol Suite

The Internet Protocol Suite (commonly known as TCP/IP) is the set of communications protocols used for the Internet and other similar networks. It is named from two of the most important protocols in it: the Transmission Control Protocol (TCP) and the Internet Protocol (IP), which were the first two networking protocols defined in this standard. Today's IP networking represents a synthesis of several developments that began to evolve in the 1960s and 1970s, namely the Internet and LANs, which emerged in the mid- to late 1980s, together with the advent of the World Wide Web in the early 1990s.

The Internet Protocol Suite, like many protocol suites, may be viewed as a set of layers. Each layer solves a set of problems involving the transmission of data, and provides a well-defined service to the upper layer protocols based on using services from some lower layers. Upper layers are logically closer to the user and deal with more abstract data, relying on lower layer protocols to translate data into forms that can eventually be physically transmitted.

The TCP/IP model consists of four layers. From lowest to highest, these are the link layer, the internet layer, the transport layer, and the application layer.

History

The Internet Protocol Suite resulted from research and development conducted by the Defense Advanced Research Projects Agency (DARPA) in the early 1970s. After initiating the pioneering ARPANET in 1969, DARPA started work on a number of other data transmission technologies. In 1972, Robert E. Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf, the developer of the existing ARPANET network control program (NCP) protocol, joined Kahn to work on open-architecture interconnection models with the goal of designing the next protocol generation for the ARPANET.

By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, where the differences between network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible. Cerf credits Hubert Zimmerman and Louis Pouzin, designer of the CYCLADES network, with important influences on this design.

The design of the network included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. Using a simple design, it became possible to connect almost any network to the ARPANET, irrespective of their local characteristics, thereby solving Kahn's initial problem. One popular saying has it that TCP/IP, the eventual product of Cerf and Kahn's work, will run over "two tin cans and a string."

A computer called a router is provided with an interface to each network, and forwards packets back and forth between them. Requirements for routers are defined in (Request for Comments 1812).

The idea was worked out in more detailed form by Cerf's networking research group at Stanford in the 1973-74 period, resulting in the first TCP specification. (The early networking work at Xerox PARC, which produced the PARC Universal Packet protocol suite, much of which existed around the same period of time, was also a significant technical influence; people moved between the two.)

DARPA then contracted with BBN Technologies, Stanford University, and University College London to develop operational versions of the protocol on different hardware platforms. Four versions were developed: TCP v1, TCP v2, a split into TCP v3 and IP v3 in the spring of 1978, and then stability with TCP/IP v4 — the standard protocol still in use on the Internet today.

In 1975, a two-network TCP/IP communications test was performed between Stanford and University College London (UCL). In November, 1977, a three-network TCP/IP test was conducted between sites in the US, UK, and Norway. Several other TCP/IP prototypes were developed at multiple research centres between 1978 and 1983. The migration of the ARPANET to TCP/IP was officially completed on January 1, 1983, when the new protocols were permanently activated.

In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer networking. In 1985, the Internet Architecture Board held a three day workshop on TCP/IP for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use.

Layers in the Internet Protocol Suite

The concept of layers

The TCP/IP suite uses encapsulation to provide abstraction of protocols and services. Such encapsulation usually is aligned with the division of the protocol suite into layers of general functionality. In general, an application (the highest level of the model) uses a set of protocols to send its data down the layers, being further encapsulated at each level.

This may be illustrated by an example network scenario, in which two Internet host computers communicate across local network boundaries constituted by their internetworking gateways (routers).

[Figure: TCP/IP stack operating on two hosts connected via two routers, showing the corresponding layers used at each hop.]
[Figure: Encapsulation of application data descending through the protocol stack.]

The functional groups of protocols and methods are the application layer, the transport layer, the internet layer, and the link layer. This model was not intended to be a rigid reference model into which new protocols have to fit in order to be accepted as a standard.

Different authors have interpreted the RFCs differently regarding the question whether the Link Layer (and the TCP/IP model) covers physical layer issues, or if a hardware layer is assumed below the Link Layer. Some authors have tried to use other names for the Link Layer, such as network interface layer, in order to avoid confusion with the data link layer of the seven-layer OSI model. Others have attempted to map the Internet Protocol model onto the OSI Model. The mapping often results in a model with five layers, where the Link Layer is split into a Data Link Layer on top of a Physical Layer. In literature with a bottom-up approach to Internet communication, in which hardware issues are emphasized, those are often discussed in terms of Physical Layer and Data Link Layer.

The Internet Layer is usually directly mapped into the OSI Model's network layer, a more general concept of network functionality. The Transport Layer of the TCP/IP model, sometimes also described as the host-to-host layer, is mapped to OSI Layer 4 (Transport Layer), sometimes also including aspects of OSI Layer 5 functionality. OSI's application layer, presentation layer, and the remaining functionality of the Session Layer are collapsed into TCP/IP's Application Layer. The argument is that these OSI layers do not usually exist as separate processes and protocols in Internet applications.

However, the Internet protocol stack has never been altered by the Internet Engineering Task Force from the four layers defined in RFC 1122. The IETF makes no effort to follow the OSI model although RFCs sometimes refer to it. The IETF has repeatedly stated that Internet protocol and architecture development is not intended to be OSI-compliant.

RFC 3439, addressing Internet architecture, contains a section entitled "Layering Considered Harmful".

Implementations

Most operating systems in use today, including all consumer-targeted systems, include a TCP/IP implementation.

Unique implementations include Lightweight TCP/IP (lwIP), an open source stack designed for embedded systems, and KA9Q NOS, a stack and associated protocols for amateur packet radio systems and personal computers connected via serial lines.


What is Broadband

In telecommunication

Broadband in telecommunication refers to a signaling method that includes or handles a relatively wide range (or band) of frequencies, which may be divided into channels or frequency bins. Broadband is always a relative term, understood according to its context. The wider the bandwidth, the greater the information-carrying capacity. In radio, for example, a very narrow-band signal will carry Morse code; a broader band will carry speech; a still broader band is required to carry music without losing the high audio frequencies required for realistic sound reproduction. An antenna described as "normal" may be capable of receiving a certain range of channels; one described as "broadband" will receive more channels. In data communications, an analog modem will transmit a bandwidth of 56 kilobits per second (kbit/s) over a telephone line; over the same telephone line, a bandwidth of several megabits per second can be handled by ADSL, which is described as broadband (relative to a modem over a telephone line, although much less than can be achieved over a fiber optic circuit).

In data communications

Broadband in data can refer to broadband networks or broadband Internet and may have the same meaning as above, so that data transmission over a fiber optic cable would be referred to as broadband as compared to a telephone modem operating at 56,000 bits per second. However, a worldwide standard for what level of bandwidth and network speed actually constitutes broadband has not been determined.

However, broadband in data communication is frequently used in a more technical sense to refer to data transmission where multiple pieces of data are sent simultaneously to increase the effective rate of transmission, regardless of the data signaling rate. In network engineering this term is used for methods where two or more signals share a medium. Broadband Internet access, often shortened to just broadband, is high-data-rate Internet access, typically contrasted with dial-up access using a 56k modem.

Dial-up modems are limited to a bitrate of less than 56 kbit/s (kilobits per second) and require the full use of a telephone line—whereas broadband technologies supply more than double this rate and generally without disrupting telephone use.

In DSL

The various forms of DSL services are broadband in the sense that digital information is sent over a high-bandwidth channel (located above the baseband voice channel on a single pair of wires).

In Ethernet

A baseband transmission sends one type of signal using a medium's full bandwidth, as in 100BASE-T Ethernet. Ethernet, however, is the common interface to broadband modems such as DSL data links, and has a high data rate itself, so is sometimes referred to as broadband. Ethernet provided over cable modem is a common alternative to DSL.

In power-line communication

Power lines have also been used for various types of data communication. Although some systems for remote control are based on narrowband signaling, modern high-speed systems use broadband signaling to achieve very high data rates. One example is the ITU-T G.hn standard, which provides a way to create a high-speed (up to 1 Gbit/s) LAN using existing home wiring (including power lines, but also phone lines and coaxial cable).

In video

Broadband in analog video distribution is traditionally used to refer to systems such as cable television, where the individual channels are modulated on carriers at fixed frequencies. In this context, baseband is the term's antonym, referring to a single channel of analog video, typically in composite form with an audio subcarrier. The act of demodulating converts broadband video to baseband video.

However, broadband video in the context of streaming internet video has come to mean video files that have bitrates high enough to require broadband internet access in order to view them.

Broadband video is also sometimes used to describe IPTV video on demand.


Internet Popularity

A CEA study in 2006 found that dial-up Internet access is on a notable decline in the U.S. In 2000, dial-up Internet connections accounted for 74% of all U.S. residential Internet connections. The US demographic pattern for dial-up modem users per capita has been more or less mirrored in Canada and Australia for the past 20 years.

Dial-up modem use in the US had dropped to 60% by 2003, and in 2006 stood at 36%. Voiceband modems were once the most popular means of internet access in the U.S., but with the advent of new ways of accessing the Internet, the traditional 56K modem is losing popularity.


Voice Modem

Voice modems are regular modems that are capable of recording or playing audio over the telephone line. They are used for telephony applications. See Voice modem command set for more details on voice modems. This type of modem can be used as FXO card for Private branch exchange systems (compare V.92).


Deep-space Telecommunications

Many modern modems have their origin in deep space telecommunication systems of the 1960s.

Differences between deep space telecom modems and landline modems

  • digital modulation formats that have high doppler immunity are typically used
  • waveform complexity tends to be low, typically binary phase shift keying
  • error correction varies mission to mission, but is typically much stronger than most landline modems


Home networking

Although the name modem is seldom used in this case, modems are also used for high-speed home networking applications, especially those using existing home wiring. One example is the ITU-T G.hn standard, which provides a high-speed (up to 1 Gbit/s) local area network using existing home wiring (power lines, phone lines and coaxial cables). G.hn devices use OFDM to modulate a digital signal for transmission over the wire.


Broadband

ADSL modems, a more recent development, are not limited to the telephone's voiceband audio frequencies. Some ADSL modems use coded orthogonal frequency division modulation (DMT).

Cable Modems use a range of frequencies originally intended to carry RF television channels. Multiple cable modems attached to a single cable can use the same frequency band, using a low-level media access protocol to allow them to work together within the same channel. Typically, 'up' and 'down' signals are kept separate using frequency division multiple access.

New types of broadband modems are beginning to appear, such as two-way satellite and power line modems.

Broadband modems should still be classed as modems, since they use complex waveforms to carry digital data. They are more advanced devices than traditional dial-up modems as they are capable of modulating/demodulating hundreds of channels simultaneously.

Many broadband modems include the functions of a router (with Ethernet and Wi-Fi ports) and other features such as DHCP, NAT and firewall capabilities.

When broadband technology was introduced, networking and routers were unfamiliar to consumers. However, many people knew what a modem was as most internet access was through dial-up. Due to this familiarity, companies started selling broadband modems using the familiar term modem rather than vaguer ones like adapter or transceiver.

Many broadband modems must be configured in bridge mode before they can use a router.


Mobile Modems and Routers

Modems which use mobile phone lines (UMTS, HSDPA, EVDO, WiMAX) are known as cellular modems. Cellular modems can be embedded inside a laptop or appliance, or they can be external to it. External cellular modems are datacards and cellular routers. The datacard is a PC Card or ExpressCard which slides into a PCMCIA/PC Card/ExpressCard slot on a computer. The most famous brand of cellular modem datacards is the AirCard made by Sierra Wireless. (Many people just refer to all makes and models as AirCards, when in fact this is a trademarked brand name.) Nowadays, there are USB cellular modems as well that use a USB port on the laptop instead of a PC Card or ExpressCard slot. A cellular router may or may not have an external datacard (AirCard) that slides into it. Most cellular routers do allow such datacards or USB modems, except for the WAAV, Inc. CM3 mobile broadband cellular routers. Cellular routers may not be modems per se, but they contain modems or allow modems to be slid into them. The difference between a cellular router and a cellular modem is that a cellular router normally allows multiple people to connect to it (since it can route, or support multipoint-to-multipoint connections), while the modem is made for one connection.

Most GSM cellular modems come with an integrated SIM card holder (e.g., the Sierra 881). The CDMA (EVDO) versions do not use SIM cards, but use an Electronic Serial Number instead.

The cost of using a cellular modem varies from country to country. Some carriers implement flat rate plans for unlimited data transfers. Some have caps (or maximum limits) on the amount of data that can be transferred per month. Other countries have plans that charge a fixed rate per data transferred—per megabyte or even kilobyte of data downloaded; this tends to add up quickly in today's content-filled world, which is why many people are pushing for flat data rates.

Cellular modems using the faster data rates of the newest technologies (UMTS, HSDPA, EVDO, WiMAX) are also considered broadband cellular modems and compete with the other broadband modems described below.


WiFi and WiMax

Wireless data modems are used in the Wi-Fi and WiMAX standards, operating at microwave frequencies.

Wi-Fi is principally used in laptop or notebook computers for Internet connections (via a wireless access point) and the wireless application protocol (WAP).


Radio Modems

Direct broadcast satellite, Wi-Fi, and mobile phones all use modems to communicate, as do most other wireless services today. Modern telecommunications and data networks also make extensive use of radio modems where long-distance data links are required. Such systems are an important part of the PSTN, and are also in common use for high-speed computer network links to outlying areas where fibre is not economical.

Even where a cable is installed, it is often possible to get better performance or make other parts of the system simpler by using radio frequencies and modulation techniques through a cable. Coaxial cable has a very large bandwidth; however, signal attenuation becomes a major problem at high data rates if a digital signal is used. By using a modem, a much larger amount of digital data can be transmitted through a single piece of wire. Digital cable television and cable Internet services use radio frequency modems to provide the increasing bandwidth needs of modern households. Using a modem also allows for frequency-division multiple access to be used, making full-duplex digital communication with many users possible using a single wire.

Wireless modems come in a variety of types, bandwidths, and speeds. Wireless modems are often referred to as transparent or smart. They transmit information that is modulated onto a carrier frequency to allow many wireless communication links to work simultaneously on different frequencies.

Transparent modems operate in a manner similar to their phone line modem cousins. Typically, they were half duplex, meaning that they could not send and receive data at the same time. Typically transparent modems are polled in a round robin manner to collect small amounts of data from scattered locations that do not have easy access to wired infrastructure. Transparent modems are most commonly used by utility companies for data collection.

Smart modems come with a media access controller inside which prevents random data from colliding and resends data that is not correctly received. Smart modems typically require more bandwidth than transparent modems, and typically achieve higher data rates. The IEEE 802.11 standard defines a short range modulation scheme that is used on a large scale throughout the world.


List of Dial-up Speeds

Note that the values given are maximum values, and actual values may be slower under certain conditions (for example, noisy phone lines). For a complete list, see the companion article, list of device bandwidths. Please note that baud means symbols per second; a quick worked check follows the table.

Connection                               Bitrate (kbit/s)
110 baud Bell 101 modem                  0.1
300 baud                                 0.3
1200 bps (600 baud)                      1.2
2400 bps (600 baud)                      2.4
2400 bps (1,200 baud)                    2.4
4800 bps (1,600 baud)                    4.8
9600 bps (2,400 baud)                    9.6
14.4 kbps (2,400 baud)                   14.4
28.8 kbps (3,200 baud)                   28.8
33.6 kbps (3,429 baud)                   33.6
56 kbps (8,000/3,429 baud)               56.0/33.6
56 kbps (8,000/8,000 baud)               56.0/48.0
Bonding modem (two 56k modems)           112.0/96.0
Hardware compression (variable)          56.0-220.0
Hardware compression (variable)          56.0-320.0
Server-side web compression (variable)   100.0-1,000.0
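As a quick check of the note that baud means symbols per second, the snippet below multiplies a few baud rates from the table by bits-per-symbol values inferred from the table rows themselves (bit rate divided by baud rate) to recover the quoted bit rates.

    # bit rate = baud rate (symbols/s) * bits per symbol; values taken from the table above
    examples = [
        ("2400 bit/s row",  600,  4),    # 600 baud x 4 bits/symbol
        ("9600 bit/s row",  2400, 4),    # 2,400 baud x 4 bits/symbol
        ("33.6 kbit/s row", 3429, 9.8),  # ~3,429 baud, just under 10 bits/symbol
    ]
    for name, baud, bits_per_symbol in examples:
        print(f"{name}: {baud} baud x {bits_per_symbol} bits/symbol = "
              f"{baud * bits_per_symbol:.0f} bit/s")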


Compression by the ISP

As telephone-based 56k modems began losing popularity, some Internet service providers such as NetZero and Juno started using pre-compression to increase throughput and maintain their customer base. As an example, the Netscape ISP uses a compression program that squeezes images, text, and other objects at the server, just prior to sending them across the phone line. The server-side compression operates much more efficiently than the on-the-fly compression of V.44-enabled modems. Typically website text is compacted to 4%, thus increasing effective throughput to approximately 1,300 kbit/s. The accelerator also pre-compresses Flash executables and images to approximately 30% and 12%, respectively.

The drawback of this approach is a loss in quality, where the graphics become heavily compacted and smeared, but the speed is dramatically improved such that web pages load in less than 5 seconds, and the user can manually choose to view the uncompressed images at any time. The ISPs employing this approach advertise it as "DSL speeds over regular phone lines" or simply "high speed dial-up".


Using Compression to Exceed 56k

Today's V.42, V.42bis and V.44 standards allow the modem to transmit data faster than its basic rate would imply. For instance, a 53.3 kbit/s connection with V.44 can transmit up to 53.3 × 6 ≈ 320 kbit/s using pure text. However, the compression ratio tends to vary due to noise on the line, or due to the transfer of already-compressed files (ZIP files, JPEG images, MP3 audio, MPEG video). At some points the modem will be sending compressed files at approximately 50 kbit/s, uncompressed files at 160 kbit/s, and pure text at 320 kbit/s, or any value in between.
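A back-of-the-envelope version of those figures, assuming a 53.3 kbit/s link and treating the quoted throughputs as effective compression ratios of roughly 1x, 3x and 6x:

    # Effective throughput = link rate * compression ratio (illustrative ratios only)
    link_kbps = 53.3
    ratios = {
        "already-compressed files (ZIP/JPEG/MP3)": 1.0,   # roughly incompressible
        "uncompressed files": 3.0,                        # ~160 kbit/s effective
        "pure text": 6.0,                                 # ~320 kbit/s effective
    }
    for traffic, ratio in ratios.items():
        print(f"{traffic}: ~{link_kbps * ratio:.0f} kbit/s effective")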

In such situations a small amount of memory in the modem, a buffer, is used to hold the data while it is being compressed and sent across the phone line, but in order to prevent overflow of the buffer, it sometimes becomes necessary to tell the computer to pause the datastream. This is accomplished through hardware flow control using extra lines on the modem–computer connection. The computer is then set to supply the modem at some higher rate, such as 320 kbit/s, and the modem will tell the computer when to start or stop sending data.


PCM and Digital Lines

In the late 1990s Rockwell and U.S. Robotics introduced new technology based upon the digital transmission used in modern telephony networks. The standard digital transmission in modern networks is 64 kbit/s, but some networks use a part of the bandwidth for remote office signaling (e.g. to hang up the phone), limiting the effective rate to 56 kbit/s per DS0 channel. This new technology was adopted into the ITU standard V.90 and is common in modern computers. The 56 kbit/s rate is only possible from the central office to the user site (downlink), and in the United States, government regulation limits the maximum power output, resulting in a maximum rate of 53.3 kbit/s. The uplink (from the user to the central office) still uses V.34 technology at 33.6 kbit/s.

Later, in V.92, the digital PCM technique was applied to increase the upload speed to a maximum of 48 kbit/s, but at the expense of download rates. For example, a 48 kbit/s upstream rate would reduce the downstream to as low as 40 kbit/s, due to echo on the telephone line. To avoid this problem, V.92 modems offer the option to turn off the digital upstream and instead use a 33.6 kbit/s analog connection, in order to maintain a high digital downstream of 50 kbit/s or higher. V.92 also adds two other features. The first is the ability for users who have call waiting to put their dial-up Internet connection on hold for extended periods of time while they answer a call. The second feature is the ability to quickly connect to one's ISP. This is achieved by remembering the analog and digital characteristics of the telephone line, and using this saved information to reconnect at a fast pace.


DSVD (Digital Simultaneous Voice and Data)

The V.61 standard introduced Analog Simultaneous Voice and Data (ASVD). This technology allowed users of V.61 modems to engage in point-to-point voice conversations with each other while their respective modems communicated.

In 1995, the first DSVD (Digital Simultaneous Voice and Data) modems became available to consumers, and the standard was ratified as V.70 by the International Telecommunication Union (ITU) in 1996.

Two DSVD modems can establish a completely digital link between each other over standard phone lines. Sometimes referred to as "the poor man's ISDN", and employing a similar technology, V.70-compatible modems allow for a maximum speed of 33.6 kbit/s between peers. By using a majority of the bandwidth for data and reserving part for voice transmission, DSVD modems allow users to pick up a telephone handset interfaced with the modem and initiate a call to the other peer.

One practical use for this technology was realized by early two-player video gamers, who could hold voice communication with each other while in-game over the PSTN.

Advocates of DSVD envisioned whiteboard sharing and other practical applications for the standard. However, with the advent of cheaper 56 kbit/s analog modems intended for Internet connectivity, peer-to-peer data transmission over the PSTN quickly became irrelevant. Also, the standard was never expanded to allow for the making or receiving of arbitrary phone calls while the modem was in use, due to the cost of infrastructure upgrades to telcos, and the advent of ISDN and DSL technologies which effectively accomplished the same goal.

Today, Multi-Tech is the only known company to continue to support a V.70-compatible modem. While their device also offers V.92 at 56 kbit/s, it remains significantly more expensive than comparable modems without V.70 support.


V.34/28.8k and 33.6k

Any interest in these systems was destroyed during the lengthy introduction of the 28,800 bit/s V.34 standard. While waiting, several companies decided to release hardware and introduced modems they referred to as V.FAST. In order to guarantee compatibility with V.34 modems once the standard was ratified (1994), the manufacturers were forced to use more flexible parts, generally a DSP and microcontroller, as opposed to purpose-designed ASIC modem chips.

Today, the ITU standard V.34 represents the culmination of the joint efforts. It employs the most powerful coding techniques, including channel encoding and shape encoding. From the mere 4 bits per symbol (9.6 kbit/s), the new standards used the functional equivalent of 6 to 10 bits per symbol, plus increasing baud rates from 2,400 to 3,429, to create 14.4, 28.8, and 33.6 kbit/s modems. This rate is near the theoretical Shannon limit. The Shannon capacity of a narrowband line is C = B log2(1 + Pu/Pn), where B is the bandwidth and Pu/Pn is the signal-to-noise ratio. Narrowband phone lines have a bandwidth from 300 to 3,100 Hz, so using Pu/Pn = 10,000, the capacity is approximately 35 kbit/s.
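A quick numerical check of that formula, using the stated 300-3,100 Hz band and a signal-to-noise ratio of 10,000:

    # Shannon capacity C = B * log2(1 + S/N) for a narrowband telephone channel
    import math

    bandwidth_hz = 3100 - 300           # ~2,800 Hz of usable band
    snr = 10_000                        # linear signal-to-noise ratio (40 dB)
    capacity_bps = bandwidth_hz * math.log2(1 + snr)
    # About 37 kbit/s with these exact numbers; the article rounds to roughly 35 kbit/s.
    print(f"Shannon capacity ~ {capacity_bps / 1000:.1f} kbit/s")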

Without the discovery and eventual application of trellis modulation, maximum telephone rates would have been limited to 3,429 baud * 4 bit/symbol == approximately 14 kbit/s using traditional QAM.


Breaking the Barrier

In 1980, Gottfried Ungerboeck from IBM Zurich Research Laboratory applied powerful channel coding techniques to search for new ways to increase the speed of modems. His results were astonishing but only conveyed to a few colleagues. Finally in 1982, he agreed to publish what is now a landmark paper in the theory of information coding. By applying powerful parity check coding to the bits in each symbol, and mapping the encoded bits into a two-dimensional diamond pattern, Ungerboeck showed that it was possible to increase the speed by a factor of two with the same error rate. The new technique was called mapping by set partitions (now known as trellis modulation).

Error-correcting codes, which encode code words (sets of bits) in such a way that they are far from each other so that, in case of error, they are still closest to the original word (and not confused with another), can be thought of as analogous to sphere packing or packing pennies on a surface: the farther apart two bit sequences are from one another, the easier it is to correct minor errors.

The industry was galvanized into new research and development. More powerful coding techniques were developed, commercial firms rolled out new product lines, and the standards organizations rapidly adopted the new technology. The tipping point occurred with the introduction of the SupraFAXModem 14400 in 1991. Rockwell had introduced a new chipset supporting not only V.32 and MNP, but the newer 14,400 bit/s V.32bis and the higher-compression V.42bis as well, and even included 9,600 bit/s fax capability. Supra, then known primarily for their hard drive systems, used this chipset to build a low-priced 14,400 bit/s modem which cost the same as a 2,400 bit/s modem from a year or two earlier (about US$300). The product was a runaway best-seller, and it was months before the company could keep up with demand.

V.32bis was so successful that the older high-speed standards had little to recommend them. USR fought back with a 16,800 bit/s version of HST, while AT&T introduced a one-off 19,200 bit/s method they referred to as V.32ter (also known as V.32 terbo or tertiary), but neither non-standard modem sold well.


Compression and Error Correction

Operation at these speeds pushed the limits of the phone lines, resulting in high error rates. This led to the introduction of error correction systems built into the modems, made most famous by Microcom's MNP systems. A string of MNP standards came out in the 1980s, each increasing the effective data rate by minimizing overhead, from about 75% theoretical maximum in MNP 1 to 95% in MNP 4. The new method called MNP 5 took this a step further, adding data compression to the system, thereby increasing the data rate above the modem's rating. Generally the user could expect an MNP 5 modem to transfer at about 130% of the normal data rate of the modem. Details of MNP were later released and became popular on a series of 2,400 bit/s modems, and ultimately led to the development of the V.42 and V.42bis ITU standards. V.42 and V.42bis were not compatible with MNP but were similar in concept: error correction and compression.

Another common feature of these high-speed modems was the concept of fallback, or speed hunting, allowing them to talk to less-capable modems. During the call initiation the modem would play a series of signals into the line and wait for the remote modem to respond to them. They would start at high speeds and progressively get slower and slower until they heard an answer. Thus, two USR modems would be able to connect at 9,600 bit/s, but, when a user with a 2,400-bit/s modem called in, the USR would fallback to the common 2,400-bit/s speed. This would also happen if a V.32 modem and a HST modem were connected. Because they used a different standard at 9,600 bit/s, they would fall back to their highest commonly supported standard at 2,400 bit/s. The same applies to V.32bis and 14,400 bit/s HST modem, which would still be able to communicate with each other at only 2,400 bit/s.


Echo Cancellation

Echo cancellation was the next major advance in modem design. Local telephone lines use the same wires to send and receive, which results in a small amount of the outgoing signal bouncing back. This signal can confuse the modem: is the signal it is receiving a data transmission from the remote modem, or its own transmission bouncing back? This was why earlier modems split the signal frequencies into answer and originate; each modem simply didn't listen to its own transmitting frequencies. Even with improvements to the phone system allowing higher speeds, this splitting of available phone signal bandwidth still imposed a half-speed limit on modems.

Echo cancellation got around this problem. Measuring the echo delays and magnitudes allowed the modem to tell if the received signal was from itself or the remote modem, and create an equal and opposite signal to cancel its own. Modems were then able to send over the whole frequency spectrum in both directions at the same time, leading to the development of 4,800 and 9,600 bit/s modems.
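The idea can be sketched with a toy adaptive filter: estimate the echo of the locally transmitted signal and subtract it from what comes back, leaving the remote modem's signal. The LMS filter below is a generic illustration with made-up numbers, not the algorithm any particular modem used.

    # Toy LMS echo canceller: learn the local echo path and subtract the echo estimate.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    tx = rng.choice([-1.0, 1.0], size=n)             # our own transmitted symbols
    remote = 0.5 * rng.choice([-1.0, 1.0], size=n)   # signal from the far-end modem
    echo_path = np.array([0.6, 0.3, 0.1])            # hypothetical local echo response
    echo = np.convolve(tx, echo_path)[:n]
    received = remote + echo                         # what the line hands back to us

    taps = np.zeros(8)                               # adaptive echo-path estimate
    mu = 0.01                                        # LMS step size
    cleaned = np.zeros(n)
    for i in range(len(taps), n):
        recent_tx = tx[i - len(taps) + 1:i + 1][::-1]
        echo_est = taps @ recent_tx                  # predicted echo of our own signal
        err = received[i] - echo_est                 # residual = remote signal + mismatch
        taps += mu * err * recent_tx                 # adapt toward the true echo path
        cleaned[i] = err

    print("echo power before:", np.mean((received - remote) ** 2).round(3))
    print("echo power after: ", np.mean((cleaned[1000:] - remote[1000:]) ** 2).round(4))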

Increases in speed have used increasingly complicated communications theory. 1,200 and 2,400 bit/s modems used the phase-shift keying (PSK) concept. This could transmit two or three bits per symbol. The next major advance encoded four bits into a combination of amplitude and phase, known as quadrature amplitude modulation (QAM). Best visualized as a constellation diagram, the bits are mapped onto points on a graph with the x (real) and y (quadrature) coordinates transmitted over a single carrier.

The new V.27ter and V.32 standards were able to transmit 4 bits per symbol, at a rate of 1,200 or 2,400 baud, giving an effective bit rate of 4,800 or 9,600 bit/s. The carrier frequency was 1,650 Hz. For many years, most engineers considered this rate to be the limit of data communications over telephone networks.
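The bit-to-symbol mapping can be illustrated with a toy 16-QAM constellation, a common textbook layout rather than the specific mapping of any V-series standard: four bits choose one of sixteen amplitude/phase points, and the receiver picks the nearest point to recover them.

    # Toy 16-QAM mapper/demapper: 4 bits per symbol as amplitude/phase points.
    import itertools

    levels = [-3, -1, 1, 3]
    constellation = {}
    for bits, (i, q) in zip(itertools.product([0, 1], repeat=4),
                            itertools.product(levels, levels)):
        constellation[bits] = complex(i, q)       # x = in-phase, y = quadrature

    def modulate(bitstream):
        # group the bit stream into 4-bit symbols and look up each constellation point
        return [constellation[tuple(bitstream[k:k + 4])]
                for k in range(0, len(bitstream), 4)]

    def demodulate(symbols):
        out = []
        for s in symbols:
            # pick the nearest constellation point (minimum Euclidean distance)
            bits = min(constellation, key=lambda b: abs(constellation[b] - s))
            out.extend(bits)
        return out

    data = [1, 0, 1, 1, 0, 0, 1, 0]
    print(demodulate(modulate(data)) == data)     # True: 8 bits sent as just 2 symbols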


Increasing Speed (again)

Many other standards were also introduced for special purposes, commonly using a high-speed channel for receiving, and a lower-speed channel for sending. One typical example was used in the French Minitel system, in which the user's terminals spent the majority of their time receiving information. The modem in the Minitel terminal thus operated at 1,200 bit/s for reception, and 75 bit/s for sending commands back to the servers.

Three U.S. companies became famous for high-speed versions of the same concept. Telebit introduced its Trailblazer modem in 1984, which used a large number of 36 bit/s channels to send data one-way at rates up to 18,432 bit/s. A single additional channel in the reverse direction allowed the two modems to communicate how much data was waiting at either end of the link, and the modems could change direction on the fly. The Trailblazer modems also supported a feature that allowed them to spoof the UUCP g protocol, commonly used on Unix systems to send e-mail, and thereby speed UUCP up by a tremendous amount. Trailblazers thus became extremely common on Unix systems, and maintained their dominance in this market well into the 1990s.

U.S. Robotics (USR) introduced a similar system, known as HST, although this supplied only 9,600 bit/s (in early versions at least) and provided for a larger backchannel. Rather than offer spoofing, USR instead created a large market among Fidonet users by offering its modems to BBS sysops at a much lower price, resulting in sales to end users who wanted faster file transfers. Hayes was forced to compete, and introduced its own 9,600 bit/s standard, Express 96 (also known as Ping-Pong), which was generally similar to Telebit's PEP. Hayes, however, offered neither protocol spoofing nor sysop discounts, and its high-speed modems remained rare.


Increasing Speed

The 300 bit/s modems used audio frequency-shift keying to send data. In this system the stream of 1s and 0s in computer data is translated into sounds which can be easily sent on the phone lines. In the Bell 103 system the originating modem sends 0s by playing a 1,070 Hz tone, and 1s at 1,270 Hz, with the answering modem putting its 0s on 2,025 Hz and 1s on 2,225 Hz. These frequencies were chosen carefully; they are in the range that suffers minimum distortion on the phone system, and they are not harmonics of each other.

In the 1,200 bit/s and faster systems, phase-shift keying was used. In this system the two tones for any one side of the connection are sent at similar frequencies as in the 300 bit/s systems, but slightly out of phase. By comparing the phase of the two signals, 1s and 0s could be pulled back out; for instance, if the signals were 90 degrees out of phase, this represented the two digits 1, 0, while at 180 degrees it was 1, 1. In this way each cycle of the signal represents two digits instead of one. 1,200 bit/s modems were, in effect, 600 symbols per second modems (600 baud modems) with 2 bits per symbol.

Voiceband modems generally remained at 300 and 1,200 bit/s (V.21 and V.22) into the mid-1980s. A V.22bis 2,400 bit/s system similar in concept to the 1,200 bit/s Bell 212 signalling was introduced in the U.S., and a slightly different one in Europe. By the late 1980s, most modems could support all of these standards, and 2,400 bit/s operation was becoming common.

For more information on baud rates versus bit rates, see the companion article list of device bandwidths.


Dial-up Modem

A standard modem of today contains two functional parts: an analog section for generating the signals and operating the phone, and a digital section for setup and control. This functionality is actually incorporated into a single chip, but the division remains in theory. In operation the modem can be in one of two modes, data mode in which data is sent to and from the computer over the phone lines, and command mode in which the modem listens to the data from the computer for commands, and carries them out. A typical session consists of powering up the modem (often inside the computer itself) which automatically assumes command mode, then sending it the command for dialing a number. After the connection is established to the remote modem, the modem automatically goes into data mode, and the user can send and receive data. When the user is finished, the escape sequence, "+++" followed by a pause of about a second, is sent to the modem to return it to command mode, and the command ATH to hang up the phone is sent.
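That session can be sketched with the third-party pyserial package; the serial port name, speed, and phone number below are placeholders, and a real session would of course exchange data between dialing and hanging up.

    # Sketch of a dial-up session: command mode, dial, escape sequence, hang up.
    import time
    import serial  # pip install pyserial

    modem = serial.Serial("/dev/ttyS0", 115200, timeout=2)   # placeholder port

    def command(cmd):
        modem.write((cmd + "\r").encode("ascii"))    # Hayes-style commands end with CR
        time.sleep(0.5)
        return modem.read(modem.in_waiting or 64).decode("ascii", errors="replace")

    print(command("ATZ"))            # reset; the modem powers up in command mode
    print(command("ATDT5551234"))    # dial; on CONNECT the modem enters data mode
    time.sleep(30)                   # ... exchange data with the remote end here ...

    time.sleep(1.2)                  # guard time before and after the escape sequence
    modem.write(b"+++")              # escape back to command mode
    time.sleep(1.2)
    print(command("ATH"))            # hang up
    modem.close()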

The commands themselves are typically from the Hayes command set, although that term is somewhat misleading. The original Hayes commands were useful for 300 bit/s operation only, and were then extended for their 1,200 bit/s modems. Faster speeds required new commands, leading to a proliferation of command sets in the early 1990s. Things became considerably more standardized in the second half of the 1990s, when most modems were built from one of a very small number of chipsets. We call this the Hayes command set even today, although it has three or four times the number of commands as the actual standard.
