3G

International Mobile Telecommunications-2000 (IMT-2000), better known as 3G or 3rd Generation, is a family of standards for mobile telecommunications defined by the International Telecommunication Union, which includes GSM EDGE, UMTS, and CDMA2000, as well as DECT and WiMAX. Services include wide-area wireless voice telephony, video calls, and wireless data, all in a mobile environment. Compared to 2G and 2.5G services, 3G allows simultaneous use of speech and data services and higher data rates (up to 14.0 Mbit/s on the downlink and 5.8 Mbit/s on the uplink with HSPA+). Thus, 3G networks enable network operators to offer users a wider range of more advanced services while achieving greater network capacity through improved spectral efficiency.

The ITU defined the third generation (3G) of mobile telephony standards – IMT-2000 – to facilitate growth, increase bandwidth, and support more diverse applications. For example, GSM (currently the most popular cellular phone standard) could deliver not only voice but also circuit-switched data at rates up to 14.4 kbit/s. To support mobile multimedia applications, however, 3G had to deliver packet-switched data with better spectral efficiency, at far greater bandwidths.

History

The first pre-commercial 3G network was launched by NTT DoCoMo in Japan, branded FOMA, in May 2001 on a pre-release of W-CDMA technology. The first commercial launch of 3G was also by NTT DoCoMo in Japan, on 1 October 2001, although it was initially somewhat limited in scope; broader availability was delayed by apparent concerns over reliability. The second network to go commercially live was by SK Telecom in South Korea, on 1xEV-DO technology, in January 2002. By May 2002 the second South Korean 3G network, by KT on EV-DO, had launched, making the Koreans the first to see competition among 3G operators.

The first European pre-commercial network was at the Isle of Man by Manx Telecom, the operator then owned by British Telecom, and the first commercial network in Europe was opened for business by Telenor in December 2001 with no commercial handsets and thus no paying customers. These were both on the W-CDMA technology.

The first commercial United States 3G network was by Monet Mobile Networks, on CDMA2000 1xEV-DO technology, but this network provider later shut down operations. The second 3G network operator in the USA was Verizon Wireless, in October 2003, also on CDMA2000 1xEV-DO. AT&T Mobility also operates a true 3G network, having completed its upgrade to HSUPA.

The first pre-commercial demonstration network in the southern hemisphere was built in Adelaide, South Australia by m.Net Corporation in February 2002, using UMTS on 2100 MHz. This was a demonstration network for the 2002 IT World Congress. The first commercial 3G network was launched by Hutchison Telecommunications, branded as Three, in March 2003.

In December 2007, 190 3G networks were operating in 40 countries and 154 HSDPA networks were operating in 71 countries, according to the Global Mobile Suppliers Association (GSA). In Asia, Europe, Canada and the USA, telecommunication companies operate 3G mobile networks on W-CDMA technology, supported by around 100 terminal designs.

In Europe, mass-market commercial 3G services were introduced starting in March 2003 by 3 (part of Hutchison) in the UK and Italy. The European Union Council suggested that the 3G operators should cover 80% of the European national populations by the end of 2005.

Roll-out of 3G networks was delayed in some countries by the enormous costs of additional spectrum licensing fees. In many countries, 3G networks do not use the same radio frequencies as 2G, so mobile operators must build entirely new networks and license entirely new frequencies; an exception is the United States, where carriers operate 3G service in the same frequencies as other services. The license fees in some European countries were particularly high, bolstered by government auctions of a limited number of licenses, sealed-bid auctions, and initial excitement over 3G's potential. Other delays were due to the expense of upgrading equipment for the new systems.

By June 2007 the 200 millionth 3G subscriber had been connected. Out of 3 billion mobile phone subscriptions worldwide, this is only 6.7%. In the countries where 3G was launched first – Japan and South Korea – 3G penetration is over 70%. In Europe the leading country is Italy, with a third of its subscribers migrated to 3G. Other leading countries by 3G migration include the UK, Austria, Australia and Singapore, at around the 20% migration level. A confusing statistic counts CDMA2000 1xRTT customers as if they were 3G customers; by that definition, the total 3G subscriber base was 475 million as of June 2007, or 15.8% of all subscribers worldwide.

In Canada, Rogers Wireless was the first to implement 3G technology, with HSDPA services in eastern Canada in early 2007. Their subsidiary Fido Solutions offers 3G as well. Because they were the only incumbent carrier (out of three) with UMTS/HSDPA capability, for two years Rogers was the sole provider of the popular Apple iPhone. Realizing they would miss out on roaming revenue from the 2010 Winter Olympics, Bell and Telus formed a joint venture and rolled out a shared HSDPA network using Nokia Siemens technology. Bell launched their 3G wireless lineup on 4 November 2009, and Telus followed suit a day later on 5 November 2009. With these launches, the popular iPhone became available on all three incumbent national carriers.

Mobitel Iraq is the first mobile 3G operator in Iraq. It was launched commercially in February 2007.

China announced in May 2008 that the telecoms sector was being re-organized and three 3G networks would be allocated, so that the largest mobile operator, China Mobile, would retain its GSM customer base. China Unicom would retain its GSM customer base but relinquish its CDMA2000 customer base, and launch 3G on the globally leading WCDMA (UMTS) standard. The CDMA2000 customers of China Unicom would go to China Telecom, which would then launch 3G on the CDMA 1xEV-DO standard. This meant that China would have all three main 3G cellular technology standards in commercial use. Finally, in January 2009, the Ministry of Industry and Information Technology of China awarded licenses for all three standards: TD-SCDMA to China Mobile, WCDMA to China Unicom and CDMA2000 to China Telecom. The launch of 3G occurred on 1 October 2009, to coincide with the 60th Anniversary of the Founding of the People's Republic of China.

In November 2008, Turkey auctioned four IMT-2000/UMTS-standard 3G licenses with 45, 40, 35 and 25 MHz top frequencies. Turkcell won the 45 MHz band with its €358 million offer, followed by Vodafone and Avea leasing the 40 and 35 MHz frequencies respectively for 20 years. The 25 MHz top-frequency license remains to be auctioned.

The first African use of 3G technology was a 3G videocall made in Johannesburg on the Vodacom network in November 2004. The first commercial launch of 3G in Africa was by EMTEL in Mauritius on the W-CDMA standard. In north African Morocco, in late March 2006, a 3G service was provided by the new company Wana.

T-Mobile, a major telecommunication services provider, has recently announced a list of over 120 U.S. cities which will be provided with 3G network coverage in 2009.

In 2008, India entered the 3G mobile arena with the launch of 3G-enabled mobile services by Mahanagar Telephone Nigam Limited (MTNL). MTNL is the first mobile operator in India to launch 3G services.

Features

Data Rates

The ITU has not provided a clear definition of the data rate users can expect from 3G equipment or providers. Thus users sold 3G service may not be able to point to a standard and say that the rates it specifies are not being met. While stating in commentary that "it is expected that IMT-2000 will provide higher transmission rates: a minimum data rate of 2 Mbit/s for stationary or walking users, and 384 kbit/s in a moving vehicle", the ITU does not actually clearly specify minimum or average rates, or which modes of the interfaces qualify as 3G, so various rates are sold as 3G intended to meet customers' expectations of broadband data.

Security

3G networks offer a greater degree of security than their 2G predecessors. By allowing the UE (User Equipment) to authenticate the network it is attaching to, the user can be sure the network is the intended one and not an impersonator. 3G networks use the KASUMI block cipher instead of the older A5/1 stream cipher. However, a number of serious weaknesses in the KASUMI cipher have been identified.
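The mutual authentication described above (the UE verifying the network as well as the reverse) boils down to a challenge-response exchange over a shared secret. The sketch below is an illustrative Python model, not the actual 3GPP AKA algorithm suite: the function `f` merely stands in for the standardized keyed functions (f1–f5), and all names are invented for the example.

```python
import hashlib
import hmac
import os

def f(key: bytes, data: bytes) -> bytes:
    """Stand-in for the keyed derivation functions used in AKA (assumption:
    HMAC-SHA256 here, purely for illustration)."""
    return hmac.new(key, data, hashlib.sha256).digest()

# Shared secret K, provisioned both in the SIM and in the home network.
K = os.urandom(16)

# Network side: pick a random challenge, compute the expected UE response
# and a network authentication token the UE can check.
rand = os.urandom(16)
xres = f(K, b"RES" + rand)       # expected response from the UE
autn = f(K, b"AUTN" + rand)      # token proving the network knows K

# UE side: verify the network first (this is the step 2G lacked),
# then answer the challenge.
assert hmac.compare_digest(autn, f(K, b"AUTN" + rand))  # network is genuine
res = f(K, b"RES" + rand)

# Network side: verify the UE's response against the expectation.
assert hmac.compare_digest(res, xres)
print("mutual authentication succeeded")
```

A fake base station that does not know K cannot produce a valid `autn`, so the UE refuses to attach; this is the impersonation defence the paragraph describes.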

In addition to the 3G network infrastructure security, end-to-end security is offered when application frameworks such as IMS are accessed, although this is not strictly a 3G property.

Applications

3G offers a wide range of applications, made possible mainly by the enhanced data rates resulting from the 2 Mbit/s bandwidth availability. Some of the applications are:

1. Mobile TV - Due to the high data transfer rate offered by 3G, TV can be viewed on mobile phones. For this, the operator has to tie up with a service provider through which the content can be accessed, e.g. Apalya for BSNL (India).

2. Video Conferencing - It is possible to conduct video conferencing using the available network, thanks to the 2 Mbit/s bandwidth.

3. Tele-medicine - This is an extended application of video conferencing, where a remote patient can be attended to by a doctor located far away.

4. Location Based Services - These are services which can be accessed depending on the service provider. They include weather updates, live road traffic views, and vehicle tracking.

5. Video on Demand - Videos can be viewed on demand from a service provider. For providing this service, the service provider should have collaborations with content providers such as Perceptknorigin (in India). This too is possible due to the high buffering speed of the 3G network.

Evolution from 2G

From 2G to 2.5G

The first major step in the evolution to 3G occurred with the introduction of the General Packet Radio Service (GPRS). Cellular services combined with GPRS became known as '2.5G.'

GPRS could provide data rates from 56 kbit/s up to 114 kbit/s. It can be used for services such as Wireless Application Protocol (WAP) access, Multimedia Messaging Service (MMS), and for Internet communication services such as email and World Wide Web access. GPRS data transfer is typically charged per megabyte of traffic transferred, while data communication via traditional circuit switching is billed per minute of connection time, independent of whether the user actually is utilizing the capacity or is in an idle state.
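The billing difference described above is easy to see with a toy calculation. The tariffs below are invented purely for illustration, not real operator prices.

```python
# Hedged comparison of volume-based (GPRS) vs time-based (circuit-switched)
# billing. Both tariff constants are made-up illustrative numbers.
PER_MB = 0.10    # currency units per megabyte transferred (assumed)
PER_MIN = 0.05   # currency units per minute of connection time (assumed)

def gprs_cost(megabytes: float) -> float:
    """Volume-based: pay only for data actually moved."""
    return megabytes * PER_MB

def csd_cost(minutes_connected: float) -> float:
    """Time-based: billed for the whole connection, even while idle."""
    return minutes_connected * PER_MIN

# Checking email: 2 MB transferred over a 60-minute session, mostly idle.
print(gprs_cost(2.0))    # cost of the data moved
print(csd_cost(60.0))    # cost of holding the circuit for an hour
```

The point is not the specific numbers but the shape of the tariff: an idle GPRS session costs nothing extra, while an idle circuit-switched session keeps the meter running.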

From 2.5G to 2.75G (EDGE)

GPRS networks evolved to EDGE networks with the introduction of 8PSK encoding. Enhanced Data rates for GSM Evolution (EDGE), Enhanced GPRS (EGPRS), or IMT Single Carrier (IMT-SC) is a backward-compatible digital mobile phone technology that allows improved data transmission rates, as an extension on top of standard GSM. EDGE was deployed on GSM networks beginning in 2003—initially by Cingular (now AT&T) in the United States.

EDGE is standardized by 3GPP as part of the GSM family, and it is an upgrade that provides a potential three-fold increase in capacity of GSM/GPRS networks. The specification achieves higher data-rates by switching to more sophisticated methods of coding (8PSK), within existing GSM timeslots.
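The "three-fold" figure follows directly from the modulation change: GMSK carries 1 bit per symbol while 8PSK carries 3, with the symbol rate per timeslot unchanged. A quick sanity check:

```python
import math

def bits_per_symbol(constellation_points: int) -> int:
    """Bits encoded by one symbol of an M-ary modulation: log2(M)."""
    return int(math.log2(constellation_points))

gmsk = bits_per_symbol(2)   # GSM/GPRS: GMSK is binary, 1 bit per symbol
psk8 = bits_per_symbol(8)   # EDGE: 8PSK, 3 bits per symbol

# Same symbol rate per timeslot, so the raw-rate gain is the ratio of
# bits per symbol.
print(psk8 / gmsk)
```

Real-world throughput gains are smaller than this raw factor because coding overhead and link adaptation eat into it, but the potential capacity increase per timeslot is exactly this ratio.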

Evolution towards 4G

Both 3GPP and 3GPP2 are currently working on further extensions to 3G standards, named Long Term Evolution and Ultra Mobile Broadband, respectively. Being based on an all-IP network infrastructure and using advanced wireless technologies such as MIMO, these specifications already display features characteristic of IMT-Advanced (4G), the successor of 3G. However, falling short of the bandwidth requirements for 4G (1 Gbit/s for stationary and 100 Mbit/s for mobile operation), these standards are classified as 3.9G or Pre-4G.

3GPP plans to meet the 4G goals with LTE Advanced, whereas Qualcomm has halted development of UMB in favour of the LTE family.

On December 14, 2009 TeliaSonera announced in an official press release that "we are very proud to be the first operator in the world to offer our customers 4G services." With the launch of their network, they initially offer services in Stockholm, Sweden and Oslo, Norway.


Universal Plug and Play

Universal Plug and Play (UPnP) is a set of networking protocols promulgated by the UPnP Forum. The goals of UPnP are to allow devices to connect seamlessly and to simplify the implementation of networks in the home (data sharing, communications, and entertainment) and in corporate environments for simplified installation of computer components. UPnP achieves this by defining and publishing UPnP device control protocols (DCP) built upon open, Internet-based communication standards.

The term UPnP is derived from plug and play, a technology for dynamically attaching devices directly to a computer, although UPnP is not directly related to the earlier plug-and-play technology. UPnP devices are "plug-and-play" in that when connected to a network they automatically announce their network address and supported device and services types, enabling clients that recognize those types to immediately begin using the device.

Overview

The UPnP architecture allows peer-to-peer networking of PCs, networked home appliances, and CE (consumer electronics) wireless devices. It is a distributed, open architecture protocol based on established standards such as TCP/IP, UDP, HTTP, XML, and SOAP.

The UPnP architecture supports zero-configuration networking. A UPnP-compatible device from any vendor can dynamically join a network, obtain an IP address, announce its name, convey its capabilities upon request, and learn about the presence and capabilities of other devices. DHCP and DNS servers are optional and are only used if they are available on the network. Devices can leave the network automatically without leaving any unwanted state information behind.

UPnP was published as a 73-part international standard, ISO/IEC 29341, in December, 2008.

Other UPnP features include:

Media and device independence
UPnP technology can run on many media that support IP, including Ethernet, FireWire, IR (IrDA), home wiring (G.hn) and RF (Bluetooth, Wi-Fi). No special device driver support is necessary; common protocols are used instead.
UI Control
UPnP architecture enables devices to present a user interface through a web browser.
Operating system and programming language independence
Any operating system and any programming language can be used to build UPnP products. UPnP does not specify or constrain the design of an API for applications running on control points; OS vendors may create APIs that suit their customers' needs.
Programmatic control
UPnP architecture also enables conventional application programmatic control.
Extensibility
Each UPnP product can have device-specific services layered on top of the basic architecture. In addition to combining services defined by UPnP Forum in various ways, vendors can define their own device and service types, and can extend standard devices and services with vendor-defined actions, state variables, data structure elements, and variable values.

Addressing

The foundation for UPnP networking is IP addressing. Each device must have a Dynamic Host Configuration Protocol (DHCP) client and search for a DHCP server when the device is first connected to the network. If no DHCP server is available, that is, the network is unmanaged, the device must assign itself an address. The process by which a UPnP device assigns itself an address is known within the UPnP Device Architecture as "AutoIP". In UPnP Device Architecture Version 1.0, AutoIP is defined within the specification itself; in UPnP Device Architecture Version 1.1, AutoIP references IETF RFC 3927. If during the DHCP transaction the device obtains a domain name, for example, through a DNS server or via DNS forwarding, the device should use that name in subsequent network operations; otherwise, the device should use its IP address.
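The DHCP-then-AutoIP fallback described above can be sketched roughly as follows. The helper names are invented for this sketch, and the ARP-based conflict probing that RFC 3927 requires before a device claims an address is omitted for brevity.

```python
import random

def auto_ip_candidate(rng: random.Random) -> str:
    """Pick a candidate link-local address in 169.254.1.0 - 169.254.254.255,
    per RFC 3927 (the first and last /24 of 169.254/16 are reserved)."""
    return f"169.254.{rng.randint(1, 254)}.{rng.randint(0, 255)}"

def configure(dhcp_offer, rng: random.Random) -> str:
    """A device prefers a DHCP-assigned address; only on an unmanaged
    network (no DHCP server, dhcp_offer is None) does it fall back to
    AutoIP self-assignment."""
    if dhcp_offer is not None:
        return dhcp_offer
    return auto_ip_candidate(rng)

rng = random.Random(0)
print(configure("192.168.1.50", rng))   # managed network: use the DHCP lease
print(configure(None, rng))             # unmanaged: self-assign 169.254.x.y
```

A real implementation would also probe the candidate address for conflicts and keep retrying with new random candidates, and would periodically re-check for a DHCP server so it can migrate off the link-local address.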

Discovery

Given an IP address, the first step in UPnP networking is Discovery. The UPnP discovery protocol, defined in Section 1 of the UPnP Device Architecture, is known as the Simple Service Discovery Protocol (SSDP). When a device is added to the network, SSDP allows that device to advertise its services to control points on the network. Similarly, when a control point is added to the network, SSDP allows that control point to search for devices of interest on the network. The fundamental exchange in both cases is a discovery message containing a few, essential specifics about the device or one of its services, for example, its type, identifier, and a pointer to more detailed information.
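The search half of SSDP is an HTTP-over-UDP request multicast to 239.255.255.250:1900. The sketch below only constructs the M-SEARCH datagram for inspection rather than opening a socket; the helper name is invented.

```python
def m_search(search_target: str = "ssdp:all", mx: int = 2) -> bytes:
    """Build an SSDP M-SEARCH datagram as a control point would multicast
    it to 239.255.255.250:1900."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        "HOST: 239.255.255.250:1900",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",             # max seconds a device may delay its reply
        f"ST: {search_target}",  # search target: a device or service type
        "",
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

# Search specifically for Internet Gateway Devices (NAT routers).
msg = m_search("urn:schemas-upnp-org:device:InternetGatewayDevice:1")
print(msg.decode())
```

Devices matching the ST header respond with a unicast HTTP response whose LOCATION header carries the URL of the device description, which is exactly the "pointer to more detailed information" mentioned above.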

Description

After a control point has discovered a device, the control point still knows very little about the device. For the control point to learn more about the device and its capabilities, or to interact with the device, the control point must retrieve the device's description from the URL provided by the device in the discovery message. The UPnP description for a device is expressed in XML and includes vendor-specific, manufacturer information like the model name and number, serial number, manufacturer name, URLs to vendor-specific web sites, etc. The description also includes a list of any embedded devices or services, as well as URLs for control, eventing, and presentation. For each service, the description includes a list of the commands, or actions, to which the service responds, and parameters, or arguments, for each action; the description for a service also includes a list of variables; these variables model the state of the service at run time, and are described in terms of their data type, range, and event characteristics
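To give a rough idea of what a control point does with a fetched description, here is a heavily trimmed, hypothetical device description being parsed with the standard library; real descriptions carry many more required elements, and all names and URLs below are invented.

```python
import xml.etree.ElementTree as ET

# Trimmed, hypothetical UPnP device description (illustrative only).
DESCRIPTION = """<?xml version="1.0"?>
<root xmlns="urn:schemas-upnp-org:device-1-0">
  <device>
    <deviceType>urn:schemas-upnp-org:device:MediaServer:1</deviceType>
    <friendlyName>Living Room Server</friendlyName>
    <manufacturer>ExampleCorp</manufacturer>
    <modelName>MS-100</modelName>
    <serviceList>
      <service>
        <serviceType>urn:schemas-upnp-org:service:ContentDirectory:1</serviceType>
        <controlURL>/cd/control</controlURL>
      </service>
    </serviceList>
  </device>
</root>"""

NS = {"d": "urn:schemas-upnp-org:device-1-0"}
root = ET.fromstring(DESCRIPTION)

# Pull out the vendor information and the control URL for the service.
name = root.findtext("d:device/d:friendlyName", namespaces=NS)
control = root.findtext(
    "d:device/d:serviceList/d:service/d:controlURL", namespaces=NS
)
print(name, control)
```

The control URL extracted here is the endpoint the next step (Control) sends SOAP actions to.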

Control

Having retrieved a description of the device, the control point can send actions to a device's service. To do this, a control point sends a suitable control message to the control URL for the service (provided in the device description). Control messages are also expressed in XML using the Simple Object Access Protocol (SOAP). Much like function calls, the service returns any action-specific values in response to the control message. The effects of the action, if any, are modeled by changes in the variables that describe the run-time state of the service.
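Concretely, a control message is a SOAP envelope POSTed to the service's control URL, with a SOAPACTION header naming the service type and action. A minimal sketch, using an illustrative RenderingControl SetVolume action (the helper function and argument values are invented for the example):

```python
def soap_action(service_type: str, action: str, args: dict):
    """Assemble the HTTP headers and SOAP body a control point would POST
    to a service's control URL (illustrative sketch)."""
    arg_xml = "".join(f"<{k}>{v}</{k}>" for k, v in args.items())
    body = (
        '<?xml version="1.0"?>'
        '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" '
        's:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">'
        f'<s:Body><u:{action} xmlns:u="{service_type}">{arg_xml}'
        f"</u:{action}></s:Body>"
        "</s:Envelope>"
    )
    headers = {
        "Content-Type": 'text/xml; charset="utf-8"',
        "SOAPAction": f'"{service_type}#{action}"',
    }
    return headers, body

headers, body = soap_action(
    "urn:schemas-upnp-org:service:RenderingControl:1",
    "SetVolume",
    {"InstanceID": "0", "Channel": "Master", "DesiredVolume": "40"},
)
print(headers["SOAPAction"])
```

The service replies with a matching SOAP response envelope carrying any action-specific return values, and the state-variable changes the action caused are what the eventing step below reports.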

Event notification

The next step in UPnP networking is event notification, or "eventing". The event notification protocol defined in the UPnP Device Architecture is known as GENA, an acronym for "General Event Notification Architecture". A UPnP description for a service includes a list of actions the service responds to and a list of variables that model the state of the service at run time. The service publishes updates when these variables change, and a control point may subscribe to receive this information. The service publishes updates by sending event messages. Event messages contain the names of one or more state variables and the current value of those variables. These messages are also expressed in XML. A special initial event message is sent when a control point first subscribes; this event message contains the names and values for all evented variables and allows the subscriber to initialize its model of the state of the service. To support scenarios with multiple control points, eventing is designed to keep all control points equally informed about the effects of any action. Therefore, all subscribers are sent all event messages, subscribers receive event messages for all "evented" variables that have changed, and event messages are sent no matter why the state variable changed (either in response to a requested action or because the state the service is modeling changed).
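The eventing behaviour described here, an initial event carrying all evented variables followed by a notification to every subscriber on every change, amounts to an observer pattern. The toy Python model below illustrates just that logic; it builds no GENA/HTTP messages, and all names are invented.

```python
class Service:
    """Toy model of a UPnP service's evented state variables."""

    def __init__(self, **state):
        self.state = dict(state)
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)
        # Initial event: names and values of ALL evented variables, so the
        # new subscriber can initialize its model of the service state.
        callback(dict(self.state))

    def set(self, name, value):
        self.state[name] = value
        # Every subscriber gets every change, no matter what caused it.
        for cb in self.subscribers:
            cb({name: value})

events = []
svc = Service(Volume=30, Mute="0")
svc.subscribe(events.append)     # triggers the initial "all variables" event
svc.set("Volume", 45)            # triggers a change notification
print(events)
```

In real GENA the callback is an HTTP NOTIFY request to the subscriber's callback URL and the payload is XML, but the fan-out logic is the same.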

Presentation

The final step in UPnP networking is presentation. If a device has a URL for presentation, then the control point can retrieve a page from this URL, load the page into a web browser, and depending on the capabilities of the page, allow a user to control the device and/or view device status. The degree to which each of these can be accomplished depends on the specific capabilities of the presentation page and device.

UPnP AV standards

UPnP AV stands for UPnP Audio and Video. On 12 July 2006 the UPnP Forum announced the release of version 2 of the UPnP Audio and Video specifications (UPnP AV v2), with new MediaServer version 2.0 and MediaRenderer version 2.0 classes. These enhancements are created by adding capabilities to the UPnP AV Media Server and MediaRenderer device classes that allow a higher level of interoperability between MediaServers and MediaRenderers from different manufacturers. Some of the early devices complying with these standards were marketed by Phillips under the Streamium brand name.

The UPnP AV standards have been referenced in specifications published by other organizations including Digital Living Network Alliance Networked Device Interoperability Guidelines,International Electrotechnical Commision IEC 62481-1, and Cable Televison Laboratories Open CAble Home Networking Protocol

UPnP AV components

  • UPnP MediaServer DCP - the UPnP server (a 'master' device) that stores media library information and streams media data (such as audio/video/picture files) to UPnP clients on the network.
  • UPnP MediaServer ControlPoint - which is the UPnP-client (a 'slave' device) that can auto-detect UPnP-servers on the network to browse and stream media/data-files from them.
  • UPnP MediaRenderer DCP - which is a 'slave' device that can render (play) content.
  • UPnP RenderingControl DCP - controls MediaRenderer settings (volume, brightness, RGB, sharpness, and more).
  • UPnP Remote User Interface (RUI) client/server - which sends/receives control-commands between the UPnP-client and UPnP-server over network, (like record, schedule, play, pause, stop, etc.).
    • Web4CE (CEA-2014) for UPnP Remote UI - a standard designed by the Consumer Electronics Association's R7 Home Network Committee: a web-based protocol and framework for Remote User Interfaces on UPnP networks and the Internet. This standard allows a UPnP-capable home network device to provide its interface (display and control options) as a web page to display on any other device connected to the home network. That means a device on a UPnP home network can be controlled through a browser-based communications method for CE devices, using Ethernet and a special version of HTML called CE-HTML.
  • QoS (Quality of Service) - an important (but not mandatory) service function for use with UPnP AV (Audio and Video). QoS refers to control mechanisms that can provide different priority to different users or data flows, or guarantee a certain level of performance to a data flow in accordance with requests from the application program. UPnP AV mostly delivers streaming media, often near-real-time or real-time audio/video data, which must be delivered within a specific time or the stream is interrupted. QoS guarantees are especially important if the network capacity is limited, for example on public networks like the Internet.
  • Quality of Service for UPnP consists of Sink Device (client-side/front-end) and Source Device (server-side/back-end) service functions, with classes such as: Traffic Class, which indicates the kind of traffic in the traffic stream (for example, audio or video); Traffic Identifier (TID), which identifies data packets as belonging to a unique traffic stream; Traffic Specification (TSPEC), which contains a set of parameters that define the characteristics of the traffic stream (for example, operating requirements and scheduling); and Traffic Stream (TS), which is a unidirectional flow of data that originates at a source device and terminates at one or more sink device(s).

NAT traversal

One solution for Network Address Translation (NAT) traversal, the Internet Gateway Device (IGD) protocol, is implemented via UPnP. Many routers and firewalls expose themselves as Internet Gateway Devices, allowing any local UPnP controller to perform a variety of actions, including retrieving the external IP address of the device, enumerating existing port mappings, and adding and removing port mappings. By adding a port mapping, a UPnP controller behind the IGD can enable traversal of the IGD from an external address to an internal client.
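Adding a port mapping is done with the WANIPConnection service's AddPortMapping action. Below is a sketch of the SOAP body a controller would POST to the IGD's control URL; the argument names follow the standard WANIPConnection service template, while the helper function, addresses and ports are illustrative.

```python
def add_port_mapping(external_port: int, internal_client: str,
                     internal_port: int, protocol: str = "TCP",
                     description: str = "example mapping") -> str:
    """Build the body of a WANIPConnection AddPortMapping request
    (illustrative sketch; real requests are wrapped in a SOAP envelope)."""
    return (
        '<u:AddPortMapping '
        'xmlns:u="urn:schemas-upnp-org:service:WANIPConnection:1">'
        "<NewRemoteHost></NewRemoteHost>"          # empty: any remote host
        f"<NewExternalPort>{external_port}</NewExternalPort>"
        f"<NewProtocol>{protocol}</NewProtocol>"
        f"<NewInternalPort>{internal_port}</NewInternalPort>"
        f"<NewInternalClient>{internal_client}</NewInternalClient>"
        "<NewEnabled>1</NewEnabled>"
        f"<NewPortMappingDescription>{description}</NewPortMappingDescription>"
        "<NewLeaseDuration>0</NewLeaseDuration>"   # 0: mapping never expires
        "</u:AddPortMapping>"
    )

# Forward external port 8080 to port 80 on a LAN host (invented addresses).
body = add_port_mapping(8080, "192.168.1.42", 80)
print(body)
```

Note that nothing in this request authenticates the caller, which is precisely the weakness the next section discusses: any local process that can speak HTTP to the IGD can open a hole in the firewall.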

Problems with UPnP

Lack of Default Authentication

The UPnP protocol, by default, does not implement any authentication, so UPnP device implementations must implement their own authentication mechanisms or implement the Device Security Service. There also exists a non-standard solution called UPnP-UP (Universal Plug and Play - User Profile), which proposes an extension to allow user authentication and authorization mechanisms for UPnP devices and applications.

Unfortunately, many UPnP device implementations lack authentication mechanisms, and by default assume local systems and their users are completely trustworthy.

Most notably, routers and firewalls running the UPnP IGD protocol are vulnerable to attack, since the framers of the IGD implementation omitted a standard authentication method. For example, Adobe Flash programs are capable of generating a specific type of HTTP request, which allows a router implementing the UPnP IGD protocol to be controlled by a malicious web site when someone with a UPnP-enabled router simply visits that web site. The following changes can be made silently by code embedded in an Adobe Flash object hosted on a malicious website:

  • Port forward internal services (ports) to the router's external-facing side (i.e. expose computers behind a firewall to the Internet).
  • Port forward the router's web administration interface to the external-facing side.
  • Port forwarding to any external server located on the Internet, effectively allowing an attacker to attack an Internet host via the router, while hiding their IP address.
  • Change DNS server settings so that when victims believe they are visiting a particular site (such as an on-line bank), they are redirected to a malicious website instead.
  • Change the DNS server settings so that when a victim receives any software updates (from a source that isn't properly verified via some other mechanism, such as checking that a digital certificate has been signed by a trusted source), they download malicious code instead.
  • Change administrative credentials to the router/firewall.
  • Change PPP settings.
  • Change IP settings for all interfaces.
  • Change WiFi settings.
  • Terminate connections.

This only applies to the "firewall-hole-punching" feature of UPnP; it does not apply when the IGD does not support UPnP or UPnP has been disabled on the IGD. Also, not all routers can have such things as DNS server settings altered by UPnP, because much of the specification (including LAN Host Configuration) is optional for UPnP-enabled routers.

Other Issues

  • UPnP uses HTTP over UDP (known as HTTPU and HTTPMU for unicast and multicast), even though this is not standardized and is specified only in an Internet-Draft that expired in 2001.
  • UPnP does not have a lightweight authentication protocol, while the available security protocols are complex. As a result, some UPnP devices ship with UPnP turned off by default as a security measure.

Future developments

UPnP continues to be actively developed. In fall 2008, the UPnP Forum ratified the successor to UPnP 1.0, UPnP 1.1.

The standard DPWS was a candidate successor for UPnP, but UPnP 1.1 was selected by the forum.

UPnP InternetGatewayDevice's WANIPConnection service has a competing solution known as NAT-PMP, an IETF draft introduced by Apple Inc. in 2005. However, NAT-PMP is focused only on NAT traversal. UPnP InternetGatewayDevice is currently being evolved to version 2.



Plug and play

In computing, plug and play is a term used to describe the characteristic of a computer bus, or device specification, which facilitates the discovery of a hardware component in a system without the need for physical device configuration or user intervention in resolving resource conflicts.

Plug and play refers both to the traditional boot-time assignment of device resources and driver identification, and to hotplug systems such as USB and FireWire.

History of Device Configuration

In the beginnings of computing technology, the hardware logic was just a collection of building blocks, and the relationships between them had to be completely redesigned to accommodate different calculating operations. These changes were usually done by connecting some wires between modules and disconnecting others. The very earliest of mechanical computing devices such as the IBM punchcard accounting, tabulating and interpreting machines were programmed entirely in this manner, by the use of a quick-swap control panel wired to route signals between configuration sockets.

As general-purpose computing devices developed, these connections and disconnections were instead used to specify locations in the system address space where an expansion device should appear, in order for the device to be accessible by the central processing unit. If two or more of the same device were installed in one computer, it would be necessary to assign the second device to a separate, non-overlapping region of the system address space so that both could be accessible at the same time.

Some early microcomputing devices such as the Apple II required the end-user to physically cut some wires and solder others together to make these configuration changes. The changes were intended to be mostly permanent for the life of the hardware.

Over time the need developed for more frequent changes and for easier changes to be made by unskilled computer users. Rather than cutting and soldering connections, the header and jumper was developed. The header consists of two or more vertical pins arranged in an evenly-spaced grid. The jumper is a small conductive strip of metal clipped across the header pins. The conductive jumper strip is commonly encased in a plastic shell to help prevent electrical shorting between adjacent jumpers.

Jumpers have the unfortunate property of being easy to misplace if not needed, and are difficult to grasp in order to remove them from headers. To help make these changes easier, the DIP switch, also known as a dual in-line package switch, was developed. The DIP switch has small rocker or sliding switches enclosed in a plastic shell and usually numbered for easy reference. DIP switches usually come in units of four or eight switches; longer rows of switches can be made by combining two or more units. DIP switches are particularly useful where a long string of jumpers would be closely packed together, or where four or more jumpers would be used in combination to configure one device function. DIP switches also have a particular advantage for configuration settings which are likely to be changed more frequently than once every few years. (Because of the inconvenience of setting them, jumpers are typically used for settings that are not expected to need to be changed unless the device is removed from one computer and installed in another, an infrequent occurrence for internal devices in consumer desktop PCs.)

As computing devices spread further out into the general population, there was ever greater pressure to automate this configuration process. One of the first major industry efforts towards self-configuration was made by IBM with the creation of their Personal System/2 line of computers using the Micro Channel Architecture (MCA). This was a giant leap forward, as expansion devices had absolutely no jumpers or DIP switches.

However, IBM's first attempt at self-configuration had a few major problems. In an attempt to simplify device setup, every piece of hardware was issued with a disk containing a special file used to auto-configure the hardware to work with the computer. (If the device required one or more drivers for specific operating systems, they were usually included on the same disk.) Without this disk the hardware would be completely useless and the computer would not boot at all until the unconfigured device was removed.

MCA also suffered for being a proprietary technology. Unlike with its previous PC bus design, the AT bus, IBM did not publicly release specifications for MCA, and it actively pursued patents to block third parties from selling unlicensed implementations; the developing PC clone market did not want to pay royalties to IBM in order to use this new technology. The PC clone makers instead developed EISA, an extension of the existing non-PnP AT bus standard, which they also further standardized and renamed ISA (to avoid IBM's "AT" trademark). With few vendors other than IBM supporting it with computers or cards, MCA eventually failed in the marketplace. Most vendors of PC compatibles stayed largely with ISA and manual configuration, while EISA offered the same type of auto-configuration featured in MCA. (EISA cards required a configuration file as well.)

In time, many ISA cards incorporated, through proprietary and varied techniques, hardware to self-configure or to provide for software configuration; often the card came with a configuration program on disk that could automatically set the software-configurable (but not itself self-configuring) hardware. Some cards had both jumpers and software-configuration, with some settings controlled by each; this compromise reduced the number of jumpers that had to be set, while avoiding great expense for certain settings, e.g. nonvolatile registers for a base address setting. The problems of required jumpers continued on but slowly diminished as more and more devices, both ISA and other types, included extra self-configuration hardware. However, these efforts still did not solve the problem of making sure the end-user has the appropriate software driver for the hardware.



Internet Protocol Suite

The Internet Protocol Suite (commonly known as TCP/IP) is the set of communications protocols used for the Internet and other similar networks. It is named for two of its most important protocols: the Transmission Control Protocol (TCP) and the Internet Protocol (IP), which were the first two networking protocols defined in this standard. Today's IP networking represents a synthesis of several developments that began to evolve in the 1960s and 1970s, namely the Internet and LANs, which emerged in the mid- to late 1980s, together with the advent of the World Wide Web in the early 1990s.

The Internet Protocol Suite, like many protocol suites, may be viewed as a set of layers. Each layer solves a set of problems involving the transmission of data, and provides a well-defined service to the upper-layer protocols based on using services from lower layers. Upper layers are logically closer to the user and deal with more abstract data, relying on lower-layer protocols to translate data into forms that can eventually be physically transmitted.

The TCP/IP model consists of four layers. From lowest to highest, these are the link layer, the internet layer, the transport layer, and the application layer.
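
The layering can be seen from an application's point of view with a short loopback exchange: the program only reads and writes bytes at the application layer, while the operating system supplies the transport, internet, and link layers beneath it (a minimal sketch using Python's standard socket module; the address and message are arbitrary):

```python
import socket
import threading

def echo_once(server_sock):
    """Accept one connection and echo whatever arrives back to the sender."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# The application simply exchanges bytes; TCP (transport), IP (internet),
# and the loopback interface (link) are all handled by the OS stack.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)
print(reply)  # b'hello'
```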

History

The Internet Protocol Suite resulted from research and development conducted by the Defense Advanced Research Projects Agency (DARPA) in the early 1970s. After initiating the pioneering ARPANET in 1969, DARPA started work on a number of other data transmission technologies. In 1972, Robert E. Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf, the developer of the existing ARPANET Network Control Program (NCP) protocol, joined Kahn to work on open-architecture interconnection models with the goal of designing the next protocol generation for the ARPANET.

By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, where the differences between network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible. Cerf credits Hubert Zimmerman and Louis Pouzin, designer of the CYCLADES network, with important influences on this design.

The design of the network included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. Using a simple design, it became possible to connect almost any network to the ARPANET, irrespective of their local characteristics, thereby solving Kahn's initial problem. One popular saying has it that TCP/IP, the eventual product of Cerf and Kahn's work, will run over "two tin cans and a string."

A computer called a router is provided with an interface to each network, and forwards packets back and forth between them. Requirements for routers are defined in RFC 1812 (Requirements for IP Version 4 Routers).

The idea was worked out in more detailed form by Cerf's networking research group at Stanford in the 1973–74 period, resulting in the first TCP specification. (The early networking work at Xerox PARC, which produced the PARC Universal Packet protocol suite, much of which existed around the same period of time, was also a significant technical influence; people moved between the two projects.)

DARPA then contracted with BBN Technologies, Stanford University, and University College London to develop operational versions of the protocol on different hardware platforms. Four versions were developed: TCP v1, TCP v2, a split into TCP v3 and IP v3 in the spring of 1978, and then stability with TCP/IP v4 — the standard protocol still in use on the Internet today.

In 1975, a two-network TCP/IP communications test was performed between Stanford and University College London (UCL). In November, 1977, a three-network TCP/IP test was conducted between sites in the US, UK, and Norway. Several other TCP/IP prototypes were developed at multiple research centres between 1978 and 1983. The migration of the ARPANET to TCP/IP was officially completed on January 1, 1983, when the new protocols were permanently activated.

In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer networking. In 1985, the Internet Architecture Board held a three day workshop on TCP/IP for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use.

Layers in the Internet Protocol Suite

The concept of layers

The TCP/IP suite uses encapsulation to provide abstraction of protocols and services. Such encapsulation usually is aligned with the division of the protocol suite into layers of general functionality. In general, an application (the highest level of the model) uses a set of protocols to send its data down the layers, being further encapsulated at each level.
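
The encapsulation described above can be sketched with simplified, made-up headers; these formats are illustrative only, not the real TCP, IP, or Ethernet wire formats:

```python
import struct

def transport_wrap(payload, src_port, dst_port):
    # Toy "transport header": source port, destination port, payload length
    return struct.pack("!HHH", src_port, dst_port, len(payload)) + payload

def internet_wrap(segment, src_ip, dst_ip):
    # Toy "internet header": 4-byte source and destination addresses
    return struct.pack("!4s4s", src_ip, dst_ip) + segment

def link_wrap(packet):
    # Toy "link header": just a 2-byte frame length
    return struct.pack("!H", len(packet)) + packet

app_data = b"GET /"
frame = link_wrap(internet_wrap(transport_wrap(app_data, 49152, 80),
                                b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02"))
# Each layer prepended its own header to the data handed down from above:
# 2 (link) + 8 (internet) + 6 (transport) + 5 (data) = 21 bytes
print(len(frame))  # 21
```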

This may be illustrated by an example network scenario, in which two Internet host computers communicate across local network boundaries constituted by their internetworking gateways (routers).

(Figure: TCP/IP stack operating on two hosts connected via two routers, showing the corresponding layers used at each hop.)
(Figure: Encapsulation of application data descending through the protocol stack.)

The functional groups of protocols and methods are the application layer, the transport layer, the internet layer, and the link layer. Note that this model was not intended to be a rigid reference model into which new protocols must fit in order to be accepted as a standard.

Different authors have interpreted the RFCs differently regarding the question of whether the Link Layer (and the TCP/IP model) covers physical layer issues, or whether a hardware layer is assumed below the Link Layer. Some authors have used other names for the Link Layer, such as network interface layer, to avoid confusion with the data link layer of the seven-layer OSI model. Others have attempted to map the Internet Protocol model onto the OSI model. The mapping often results in a model with five layers, where the Link Layer is split into a Data Link Layer on top of a Physical Layer. In literature with a bottom-up approach to Internet communication, in which hardware issues are emphasized, those are often discussed in terms of a Physical Layer and a Data Link Layer.

The Internet Layer is usually directly mapped onto the OSI model's Network Layer, a more general concept of network functionality. The Transport Layer of the TCP/IP model, sometimes also described as the host-to-host layer, is mapped to OSI Layer 4 (Transport Layer), sometimes also including aspects of OSI Layer 5 functionality. OSI's Application Layer, Presentation Layer, and the remaining functionality of the Session Layer are collapsed into TCP/IP's Application Layer. The argument is that these OSI layers do not usually exist as separate processes and protocols in Internet applications.

However, the Internet protocol stack has never been altered by the Internet Engineering Task Force from the four layers defined in RFC 1122. The IETF makes no effort to follow the OSI model although RFCs sometimes refer to it. The IETF has repeatedly stated that Internet protocol and architecture development is not intended to be OSI-compliant.

RFC 3439, addressing Internet architecture, contains a section entitled "Layering Considered Harmful".

Implementations

Most operating systems in use today, including all consumer-targeted systems, include a TCP/IP implementation.

Unique implementations include Lightweight TCP/IP, an open-source stack designed for embedded systems, and KA9Q NOS, a stack and associated protocols for amateur packet radio systems and personal computers connected via serial lines.


What is Broadband

In telecommunication

Broadband in telecommunication refers to a signaling method that includes or handles a relatively wide range (or band) of frequencies, which may be divided into channels or frequency bins. Broadband is always a relative term, understood according to its context. The wider the bandwidth, the greater the information-carrying capacity. In radio, for example, a very narrow band will carry Morse code; a broader band will carry speech; a still broader band is required to carry music without losing the high audio frequencies required for realistic sound reproduction. An antenna described as "normal" may be capable of receiving a certain range of channels; one described as "broadband" will receive more channels. In data communications, an analog modem will transmit a bandwidth of 56 kilobits per second (kbit/s) over a telephone line; over the same telephone line, a bandwidth of several megabits per second can be handled by ADSL, which is described as broadband (relative to a modem over a telephone line, although much less than can be achieved over a fiber optic circuit).

In data communications

Broadband in data can refer to a broadband network or broadband Internet, and may have the same meaning as above, so that data transmission over a fiber optic cable would be referred to as broadband compared to a telephone modem operating at 56,000 bits per second. However, a worldwide standard for what level of bandwidth and network speed actually constitutes broadband has not been determined.

However, broadband in data communication is frequently used in a more technical sense to refer to data transmission where multiple pieces of data are sent simultaneously to increase the effective rate of transmission, regardless of the data signaling rate. In network engineering this term is used for methods where two or more signals share a medium. Broadband Internet access, often shortened to just broadband, is high-data-rate Internet access, typically contrasted with dial-up access using a 56k modem.
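
The idea of two or more signals sharing a medium can be illustrated with a toy frequency-division scheme: two channels each place an amplitude on a different carrier frequency over the same "wire", and each is recovered by correlating against its own carrier (the carrier frequencies and amplitudes here are arbitrary):

```python
import math

N = 1000                      # samples per symbol period
f1, f2 = 5, 9                 # carrier cycles per period (orthogonal over N samples)
a1, a2 = 0.7, 0.3             # amplitude carried by each channel

# Both channels are summed onto the same shared medium.
wire = [a1 * math.cos(2 * math.pi * f1 * n / N) +
        a2 * math.cos(2 * math.pi * f2 * n / N) for n in range(N)]

def recover(samples, f):
    """Correlate against one carrier to pull its amplitude off the shared medium."""
    return 2 / N * sum(s * math.cos(2 * math.pi * f * n / N)
                       for n, s in enumerate(samples))

print(round(recover(wire, f1), 3), round(recover(wire, f2), 3))  # 0.7 0.3
```

Because the two carriers are orthogonal over the sampling window, each receiver sees only its own channel, which is the essence of frequency-division sharing.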

Dial-up modems are limited to a bitrate of less than 56 kbit/s (kilobits per second) and require the full use of a telephone line—whereas broadband technologies supply more than double this rate and generally without disrupting telephone use.

In DSL

The various forms of DSL services are broadband in the sense that digital information is sent over a high-bandwidth channel (located above the baseband voice channel on a single pair of wires).

In Ethernet

A baseband transmission sends one type of signal using a medium's full bandwidth, as in 100BASE-T Ethernet. Ethernet, however, is the common interface to broadband modems such as DSL data links, and has a high data rate itself, so is sometimes referred to as broadband. Ethernet provided over cable modem is a common alternative to DSL.

In power-line communication

Power lines have also been used for various types of data communication. Although some systems for remote control are based on narrowband signaling, modern high-speed systems use broadband signaling to achieve very high data rates. One example is the ITU-T G.hn standard, which provides a way to create a high-speed (up to 1 Gbit/s) LAN using existing home wiring (including power lines, but also phone lines and coaxial cable).

In video

Broadband in analog video distribution is traditionally used to refer to systems such as cable television, where the individual channels are modulated on carriers at fixed frequencies. In this context, baseband is the term's antonym, referring to a single channel of analog video, typically in composite form with an audio subcarrier. The act of demodulating converts broadband video to baseband video.

However, broadband video in the context of streaming internet video has come to mean video files that have bitrates high enough to require broadband internet access in order to view them.

Broadband video is also sometimes used to describe IPTV and video on demand.


Internet Popularity

A CEA study in 2006 found that dial-up Internet access is on a notable decline in the U.S. In 2000, dial-up Internet connections accounted for 74% of all U.S. residential Internet connections. The US demographic pattern for (dial-up modem users per capita) has been more or less mirrored in Canada and Australia for the past 20 years.

Dial-up modem use in the US had dropped to 60% by 2003, and in 2006 stood at 36%. Voiceband modems were once the most popular means of internet access in the U.S., but with the advent of new ways of accessing the Internet, the traditional 56K modem is losing popularity.


Voice Modem

Voice modems are regular modems that are capable of recording or playing audio over the telephone line. They are used for telephony applications. See Voice modem command set for more details on voice modems. This type of modem can be used as an FXO card for private branch exchange systems (compare V.92).


Deep-space Telecommunications

Many modern modems have their origin in deep-space telecommunication systems of the 1960s.

Differences between deep-space telecom modems and landline modems

  • digital modulation formats that have high doppler immunity are typically used
  • waveform complexity tends to be low, typically binary phase shift keying
  • error correction varies mission to mission, but is typically much stronger than most landline modems
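
Binary phase-shift keying, mentioned in the list above, maps each bit to one of two opposite carrier phases; a minimal sketch (the sample counts and carrier frequency are chosen arbitrarily):

```python
import math

SAMPLES = 100   # samples per bit period
CYCLES = 4      # carrier cycles per bit period

def bpsk_modulate(bits):
    """Bit 1 -> carrier, bit 0 -> carrier shifted 180 degrees (negated)."""
    signal = []
    for b in bits:
        sign = 1.0 if b else -1.0
        signal.extend(sign * math.cos(2 * math.pi * CYCLES * n / SAMPLES)
                      for n in range(SAMPLES))
    return signal

def bpsk_demodulate(signal):
    """Coherent detection: correlate each bit period with the reference carrier."""
    bits = []
    for i in range(0, len(signal), SAMPLES):
        corr = sum(signal[i + n] * math.cos(2 * math.pi * CYCLES * n / SAMPLES)
                   for n in range(SAMPLES))
        bits.append(1 if corr > 0 else 0)
    return bits

sent = [1, 0, 1, 1, 0]
print(bpsk_demodulate(bpsk_modulate(sent)))  # [1, 0, 1, 1, 0]
```

With only two phases per symbol the waveform is simple and the decision threshold is far from either symbol, which is part of why BPSK tolerates noisy, Doppler-shifted deep-space links so well.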


Home networking

Although the name modem is seldom used in this case, modems are also used for high-speed home networking applications, especially those using existing home wiring. One example is the ITU-T G.hn standard, which provides a high-speed (up to 1 Gbit/s) local area network using existing home wiring (power lines, phone lines and coaxial cables). G.hn devices use OFDM to modulate a digital signal for transmission over the wire.


Broadband

ADSL modems, a more recent development, are not limited to the telephone's voiceband audio frequencies. Some ADSL modems use discrete multitone modulation (DMT), a form of coded orthogonal frequency-division multiplexing.

Cable Modems use a range of frequencies originally intended to carry RF television channels. Multiple cable modems attached to a single cable can use the same frequency band, using a low-level media access protocol to allow them to work together within the same channel. Typically, 'up' and 'down' signals are kept separate using frequency division multiple access.

New types of broadband modems are beginning to appear, such as two-way satellite and power-line modems.

Broadband modems should still be classed as modems, since they use complex waveforms to carry digital data. They are more advanced devices than traditional dial-up modems as they are capable of modulating/demodulating hundreds of channels simultaneously.

Many broadband modems include the functions of a router (with Ethernet and Wi-Fi ports) and other features such as DHCP, NAT and firewall capabilities.

When broadband technology was introduced, networking and routers were unfamiliar to consumers. However, many people knew what a modem was as most internet access was through dial-up. Due to this familiarity, companies started selling broadband modems using the familiar term modem rather than vaguer ones like adapter or transceiver.

Many broadband modems must be configured in bridge mode before they can use a router.


Mobile Modems and Routers

Modems which use mobile phone networks (UMTS, HSDPA, EV-DO, WiMAX) are known as cellular modems. Cellular modems can be embedded inside a laptop or appliance, or they can be external to it. External cellular modems are datacards and cellular routers. The datacard is a PC Card or ExpressCard which slides into a PCMCIA/PC Card/ExpressCard slot on a computer. The best-known brand of cellular modem datacards is the AirCard made by Sierra Wireless. (Many people refer to all makes and models as AirCards, when in fact this is a trademarked brand name.) Nowadays there are also USB cellular modems that use a USB port on the laptop instead of a PC Card or ExpressCard slot. A cellular router may or may not have an external datacard (AirCard) that slides into it. Most cellular routers do allow such datacards or USB modems, an exception being the WAAV, Inc. CM3 mobile broadband cellular routers. Cellular routers may not be modems per se, but they contain modems or allow modems to be slid into them. The difference between a cellular router and a cellular modem is that a cellular router normally allows multiple people to connect to it (since it can route, or support multipoint-to-multipoint connections), while the modem is made for one connection.

Most GSM cellular modems come with an integrated SIM card holder (e.g., the Sierra 881). The CDMA (EV-DO) versions do not use SIM cards, but use an Electronic Serial Number instead.

The cost of using a cellular modem varies from country to country. Some carriers implement flat-rate plans for unlimited data transfers. Some have caps (or maximum limits) on the amount of data that can be transferred per month. Other countries have plans that charge a fixed rate per unit of data transferred (per megabyte or even kilobyte downloaded); this tends to add up quickly in today's content-filled world, which is why many people are pushing for flat data rates.

The newest cellular modem technologies (UMTS, HSDPA, EV-DO, WiMAX) offer data rates fast enough that they are also considered broadband cellular modems, competing with the other broadband modems described below.


WiFi and WiMax

Wireless data modems are used in the Wi-Fi and WiMAX standards, operating at microwave frequencies.

Wi-Fi is principally used in laptop and notebook computers for Internet connections (via wireless access points) and the Wireless Application Protocol (WAP).


Radio Modems

Direct-broadcast satellite, Wi-Fi, and mobile phones all use modems to communicate, as do most other wireless services today. Modern telecommunications and data networks also make extensive use of radio modems where long-distance data links are required. Such systems are an important part of the PSTN, and are also in common use for high-speed computer network links to outlying areas where fibre is not economical.

Even where a cable is installed, it is often possible to get better performance or make other parts of the system simpler by using radio frequencies and modulation techniques through a cable. Coaxial cable has a very large bandwidth; however, signal attenuation becomes a major problem at high data rates if a digital signal is used. By using a modem, a much larger amount of digital data can be transmitted through a single piece of wire. Digital cable television and cable Internet services use radio frequency modems to provide for the increasing bandwidth needs of modern households. Using a modem also allows for frequency-division multiple access, making full-duplex digital communication with many users possible over a single wire.

Wireless modems come in a variety of types, bandwidths, and speeds. Wireless modems are often referred to as transparent or smart. They transmit information modulated onto a carrier frequency, allowing many wireless communication links to operate simultaneously on different frequencies.

Transparent modems operate in a manner similar to their phone-line modem cousins. Typically, they are half duplex, meaning that they cannot send and receive data at the same time. Transparent modems are typically polled in a round-robin manner to collect small amounts of data from scattered locations that do not have easy access to wired infrastructure. Transparent modems are most commonly used by utility companies for data collection.

Smart modems come with a media access controller inside which prevents random data from colliding and resends data that is not correctly received. Smart modems typically require more bandwidth than transparent modems, and typically achieve higher data rates. The IEEE 802.11 standard defines a short range modulation scheme that is used on a large scale throughout the world.
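
The resend behaviour described above can be sketched as stop-and-wait ARQ: the receiver verifies a checksum and the sender retransmits until a frame arrives intact (a toy protocol for illustration, not any real modem's media access controller):

```python
import zlib

def make_frame(payload: bytes) -> bytes:
    # Append a CRC-32 so the receiver can detect corruption.
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_frame(frame: bytes):
    payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    return payload if zlib.crc32(payload) == crc else None

def send_with_retries(payload, channel, max_tries=5):
    """Retransmit until the receiver's checksum passes (stop-and-wait ARQ)."""
    for attempt in range(1, max_tries + 1):
        received = check_frame(channel(make_frame(payload)))
        if received is not None:
            return received, attempt
    raise RuntimeError("gave up after repeated corrupted frames")

# A channel that corrupts the first transmission, then behaves.
state = {"calls": 0}
def flaky_channel(frame):
    state["calls"] += 1
    if state["calls"] == 1:
        return b"\x00" + frame[1:]   # clobber the first byte
    return frame

data, attempts = send_with_retries(b"telemetry", flaky_channel)
print(data, attempts)  # b'telemetry' 2
```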


List of Dial-up Speeds

Note that the values given are maximum values; actual speeds may be slower under certain conditions (for example, noisy phone lines). For a complete list, see the companion article, list of device bandwidths. Note that baud means symbols per second.

Connection                               Bitrate (kbit/s)
110 baud (Bell 101 modem)                0.1
300 baud                                 0.3
1200 bps (600 baud)                      1.2
2400 bps (600 baud)                      2.4
2400 bps (1,200 baud)                    2.4
4800 bps (1,600 baud)                    4.8
9600 bps (2,400 baud)                    9.6
14.4 kbps (2,400 baud)                   14.4
28.8 kbps (3,200 baud)                   28.8
33.6 kbps (3,429 baud)                   33.6
56 kbps (8,000/3,429 baud)               56.0/33.6
56 kbps (8,000/8,000 baud)               56.0/48.0
Bonding modem (two 56k modems)           112.0/96.0
Hardware compression (variable)          56.0-220.0
Hardware compression (variable)          56.0-320.0
Server-side web compression (variable)   100.0-1,000.0
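
The baud and bit-rate columns are related by bitrate = baud × bits per symbol, with later modems packing more bits into each symbol through denser modulation; a quick check against a few rows of the table:

```python
# (bits per second, symbols per second) pairs taken from the table above
rows = [(1200, 600), (2400, 600), (9600, 2400), (14400, 2400), (33600, 3429)]

for bps, baud in rows:
    # Bits carried by each symbol = bit rate divided by symbol rate.
    print(f"{bps} bps / {baud} baud = {bps / baud:.2f} bits per symbol")
```

The progression from 2 bits per symbol up to nearly 10 reflects the move from simple phase modulation to large trellis-coded QAM constellations.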


Compression by the ISP

As telephone-based 56k modems began losing popularity, some Internet service providers such as NetZero and Juno started using pre-compression to increase throughput and maintain their customer base. For example, the Netscape ISP uses a compression program that squeezes images, text, and other objects at the server, just prior to sending them across the phone line. The server-side compression operates much more efficiently than the on-the-fly compression of V.44-enabled modems. Typically website text is compacted to 4% of its original size, increasing effective throughput to approximately 1,300 kbit/s. The accelerator also pre-compresses Flash executables and images to approximately 30% and 12% of their original sizes, respectively.

The drawback of this approach is a loss in quality, where the graphics become heavily compacted and smeared, but the speed is dramatically improved such that web pages load in less than 5 seconds, and the user can manually choose to view the uncompressed images at any time. The ISPs employing this approach advertise it as "DSL speeds over regular phone lines" or simply "high speed dial-up".


Using Compression to Exceed 56k

Today's V.42, V.42bis and V.44 standards allow the modem to transmit data faster than its basic rate would imply. For instance, a 53.3 kbit/s connection with V.44 can transmit up to 53.3 × 6 = 320 kbit/s using pure text. However, the compression ratio tends to vary due to noise on the line, or due to the transfer of already-compressed files (ZIP files, JPEG images, MP3 audio, MPEG video). At some points the modem will be sending compressed files at approximately 50 kbit/s, uncompressed files at 160 kbit/s, and pure text at 320 kbit/s, or any value in between.
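
The spread between compressed-file and pure-text throughput comes from how compressible the payload already is. This can be demonstrated with a general-purpose compressor (zlib here stands in for V.44, which is a different algorithm; the exact ratios are illustrative only):

```python
import random
import zlib

random.seed(0)

# Highly redundant text compresses well; random bytes (which approximate
# already-compressed ZIP/JPEG/MP3 data) do not compress at all.
text = b"The quick brown fox jumps over the lazy dog. " * 200
already_compressed = bytes(random.getrandbits(8) for _ in range(len(text)))

for label, payload in [("plain text", text),
                       ("random bytes", already_compressed)]:
    ratio = len(zlib.compress(payload, 9)) / len(payload)
    print(f"{label}: compressed to {ratio:.0%} of original size")
```

A modem-side compressor sees the same effect on the wire: redundant text multiplies the effective link rate, while pre-compressed files pass through at roughly the raw line speed.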

In such situations a small amount of memory in the modem, a buffer, is used to hold the data while it is being compressed and sent across the phone line, but in order to prevent overflow of the buffer, it sometimes becomes necessary to tell the computer to pause the datastream. This is accomplished through hardware flow control using extra lines on the modem–computer connection. The computer is then set to supply the modem at some higher rate, such as 320 kbit/s, and the modem will tell the computer when to start or stop sending data.


PCM and Digital Lines

In the late 1990s, Rockwell and U.S. Robotics introduced new technology based upon the digital transmission used in modern telephony networks. The standard digital transmission in modern networks is 64 kbit/s, but some networks use a part of the bandwidth for remote office signaling (e.g., to hang up the phone), limiting the effective rate to 56 kbit/s per DS0 channel. This new technology was adopted into the ITU V.90 standard and is common in modern computers. The 56 kbit/s rate is only possible from the central office to the user site (downlink), and in the United States, government regulation limits the maximum transmitted power, reducing the maximum achievable rate to 53.3 kbit/s. The uplink (from the user to the central office) still uses V.34 technology at 33.6 kbit/s.
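
The 64 and 56 kbit/s figures follow directly from the telephone network's sampling parameters: 8,000 samples per second at 8 bits each, with in-band signaling effectively robbing one bit per sample (a back-of-the-envelope check):

```python
SAMPLE_RATE = 8000       # voice samples per second in the digital telephone network
BITS_PER_SAMPLE = 8      # PCM resolution of one DS0 channel

ds0 = SAMPLE_RATE * BITS_PER_SAMPLE               # full channel: 64,000 bit/s
robbed_bit = SAMPLE_RATE * (BITS_PER_SAMPLE - 1)  # signaling steals 1 bit/sample

print(ds0, robbed_bit)  # 64000 56000
```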

Later, in V.92, the digital PCM technique was applied to increase the upload speed to a maximum of 48 kbit/s, but at the expense of download rates. For example, a 48 kbit/s upstream rate would reduce the downstream to as low as 40 kbit/s, due to echo on the telephone line. To avoid this problem, V.92 modems offer the option to turn off the digital upstream and instead use a 33.6 kbit/s analog connection, in order to maintain a high digital downstream of 50 kbit/s or higher. V.92 also adds two other features. The first is the ability for users who have call waiting to put their dial-up Internet connection on hold for extended periods while they answer a call. The second is the ability to connect to one's ISP more quickly, achieved by remembering the analog and digital characteristics of the telephone line and using that saved information to reconnect rapidly.
