Domain Name Registrations Kept Growing in 2009

The Internet domain name industry had a reasonably good year in 2009, even as the global economic downturn raged. According to the latest Domain Name Industry Brief from VeriSign, the total base of domain names registered across all Top-Level Domains (TLDs) grew in 2009.

VeriSign reported that in 2009 the base expanded by 15 million domain names, to a total of 192 million domain registrations across all TLDs.

Helping to lead the way were the .com and .net TLDs, which at the end of 2009 accounted for 96.7 million domain names. The 2009 tally represents a 7 percent increase over the total number of .com and .net domain names at the end of 2008. The company also said that during the fourth quarter of 2009 alone, it added 7.3 million new .com and .net registrations. VeriSign manages both the .com and .net registries under contract from ICANN.

The growth isn't the only milestone for the .com domain. On March 15, VeriSign will celebrate the 25th anniversary of the first .com name -- Symbolics.com -- which was assigned in 1985.

The .com and .net domain names aren't the only ones growing. The total number of country code Top-Level Domain (ccTLD) registrations also continued to rise in 2009. In total, VeriSign reported 78.6 million ccTLD registrations at the end of 2009, an increase of 7.5 million domain names from 2008.

Overall, there are now more than 240 ccTLDs in use, with China's .cn remaining the most popular ccTLD, followed by Germany's .de and the United Kingdom's .uk.

While China has been the top ccTLD since the third quarter of 2008, the rate of growth in the .cn ccTLD has actually slowed.

"The .cn base, which had been experiencing remarkable growth as high as 467 percent year over year, slowed its growth and ended the fourth quarter with a one percent decline in its base," VeriSign's report stated.

Sitting behind all those domain names is the global Domain Name System (DNS), which VeriSign helps to administer. As domain names have grown, so too has the load on the DNS. VeriSign reported that during the fourth quarter of 2009, it hit peaks of 61 billion DNS queries per day. The average daily DNS query load was 52 billion, an increase of 48 percent over the same period in 2008.
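Those totals are easier to grasp as per-second rates. The quick conversion below is a back-of-the-envelope sketch based only on the figures VeriSign reported, not on anything else in the brief:

```python
# Rough scale check on VeriSign's reported Q4 2009 DNS query volumes.
SECONDS_PER_DAY = 24 * 60 * 60

average_daily_queries = 52e9   # reported average per day
peak_daily_queries = 61e9      # reported peak per day

print(f"Average: {average_daily_queries / SECONDS_PER_DAY:,.0f} queries per second")
print(f"Peak:    {peak_daily_queries / SECONDS_PER_DAY:,.0f} queries per second")
# The average works out to roughly 600,000 queries per second, sustained around the clock.
```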

In 2009, VeriSign improved its DNS capabilities by way of Project Titan, a $100 million effort to improve capacity by a factor of 10.

source:enterprisenetworkingplanet.com


U.S. Preps Major Broadband Plan

U.S. regulators will announce a major Internet policy this week to revolutionize how Americans communicate and play, proposing a dramatic increase in broadband speeds that could let people download a high-definition film in minutes instead of hours.

Dramatically increasing Internet speeds to 25 times the current average is one of the myriad goals to be unveiled in the National Broadband Plan by the Federal Communications Commission on Tuesday.

The highly anticipated plan will make a series of recommendations to Congress and is aimed at spurring the ever-changing communications industry to bring more and faster online services to Americans as they increasingly turn to the Internet to communicate, pay monthly bills, make travel plans and be entertained by movies and music.

"This is a fairly unique event," said Paul Gallant, an analyst with Concept Capital. "The FCC really has never been asked to design a broad regulatory shift like this. Broadband is important and difficult because it threatens every established communications sector."

Some details of the plan have trickled out in the last few weeks including how to find spectrum to meet an anticipated explosion of handset devices capable of playing movies and music in addition to handling emails and voice calls.

But some carriers like AT&T Inc and Qwest Communications International Inc were irked last month when the agency's chief, Julius Genachowski, announced that the FCC would propose in the plan a goal of 100 Mbps speeds to be in place at 100 million American homes in 10 years. The current average is less than 4 Mbps.
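For a sense of what that jump means in practice, here is a rough, hedged calculation behind the "minutes instead of hours" claim. The film size is an assumption (roughly 5 GB for a high-definition movie) and is not a figure from the plan or the article:

```python
# Rough download-time comparison for an HD film (assumed ~5 GB; not from the article).
FILM_SIZE_GB = 5
film_size_bits = FILM_SIZE_GB * 8e9  # decimal gigabytes -> bits

for label, mbps in [("current average (~4 Mbps)", 4), ("proposed goal (100 Mbps)", 100)]:
    seconds = film_size_bits / (mbps * 1e6)
    print(f"{label}: about {seconds / 60:.0f} minutes ({seconds / 3600:.1f} hours)")
# ~4 Mbps: roughly 167 minutes (2.8 hours); 100 Mbps: roughly 7 minutes.
```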

In a sign of tension between the FCC and carriers, Qwest called it "a dream" and AT&T reacted by saying the FCC should resist calls for "extreme forms of regulation."

Since the FCC announcement, Cisco Systems Inc announced it would introduce a router that can handle Internet traffic up to 12 times faster than rival products. Google Inc has also gotten in on the hype, saying it plans to build a super-fast Internet network to show that it can be done. The FCC has praised both announcements.

The plans could also touch off tensions with television broadcasters, who will be asked to give up spectrum to wireless carriers who desperately need it for their mobile devices, such as the iPhone and BlackBerry.

The FCC plans to let them share in the profits of auctions structured to redistribute the spectrum.

"We've developed a plan that is a real win-win for everyone involved and we have every expectation that it will work," Genachowski said in an interview with Reuters.

"We've certainly heard from a number of broadcasters who told us they think this is a promising direction and are getting ready to roll up their sleeves with us," he said.

The FCC also wants to make sure that anchor institutions -- government buildings, schools, libraries and healthcare facilities -- get speeds of about 1 gigabit per second by 2020.

The full broadband plan is expected to be released at a Tuesday meeting among the FCC's five members who are expected to discuss the results and recommendations of the roadmap, which was mandated by Congress. Congress may have to pass legislation to enact some portions of the plan.

FCC officials have said some of the goals are aspirational and should be viewed as a "living, breathing" document for the next decade in hopes of helping 93 million Americans without broadband get connected.

Achievable

"It is both aspiration and achievable," Genachowski said.

The Obama administration has touted the plan as a way to create jobs and make energy use more efficient.

"It will be a call to action," said Blair Levin, who heads the FCC's broadband task force which has collected data and comments from the industry, academics and the public as well as from three dozen public workshops.

The FCC has placed most of its attention on broadband policy which Darrell West, director of governance studies at the Brookings Institution, called "the signature issue" since Genachowski took over the helm in late June.

"It means that broadband is going to drive other types of policy decisions and it really sets the parameters for telecommunications and new applications," West said.

FCC officials have said that the plan will not take sides on technology or applications, but they want to lay the groundwork to spur innovation and job creation.

Officials have said the plan will ask Congress to fund up to $16 billion to build an emergency public safety system.

It would also tell lawmakers that a one-time injection of $9 billion could accelerate broadband reach to the 4 percent of Americans who do not have access. Otherwise they could let the FCC carry out a 10-year plan to realign an $8 billion U.S. subsidy program for universal broadband access instead of universal phone access.

Experts call the plan ambitious but question if the FCC, which plans to spin off a series of rule-making proposals linked to the plan, can realistically make good on its recommendations.

"There's so little progress on this stuff in Washington," said Rob Atkinson, who heads the Information Technology and Innovation Foundation.

"I think Chairman Genachowski has a real opportunity to bring different warring interests under 50-75 percent of the plan."

Copyright 2010 Reuters
source:internetnews.com


iPad, Apple's Tablet Computer, Priced at About Rp 4 Million (US$499)


Apple's most anticipated new gadget has finally appeared. iPad is the name of Apple's tablet computer. Apple CEO Steve Jobs himself introduced the device, which is priced at US$499, or roughly Rp 4 million.

Yes, the price is one of the surprises of the iPad's debut, because analysts had previously predicted that the gadget would sell in the range of US$1,000.

However, Apple appears to be quite sensitive to the lingering economic recession, and so it did not set too steep a price for its new flagship.

Even so, the roughly Rp 4 million price is for the lowest-end version, which has Wi-Fi connectivity and 16 GB of solid-state storage. The most expensive version, equipped with 3G connectivity and 64 GB of storage, is priced at US$829.

Judging from Jobs' explanation, the iPad seems intended more as an entertainment device than as a tool for work: it can be used as an electronic reader (e-reader), for gaming, for surfing the Internet, and for watching videos.

"We want to start the year 2010 by introducing a revolutionary device," Jobs said, quoted from Reuters (28/1/2010).

Interested in buying one right away? Be patient: Apple plans to bring the iPad to market within the next two months.

source:detikinet.com


Internet Protocol

The Internet Protocol (IP) is a protocol used for communicating data across a packet-switched internetwork using the Internet Protocol Suite, also referred to as TCP/IP.

IP is the primary protocol in the Internet Layer of the Internet Protocol Suite and has the task of delivering distinguished protocol datagrams (packets) from the source host to the destination host solely based on their addresses. For this purpose the Internet Protocol defines addressing methods and structures for datagram encapsulation. The first major version of the addressing structure, now referred to as Internet Protocol Version 4 (IPv4), is still the dominant protocol of the Internet, although its successor, Internet Protocol Version 6 (IPv6), is being actively deployed worldwide.

IP encapsulation

Data from an upper layer protocol is encapsulated as packets/datagrams (the terms are basically synonymous in IP). Circuit setup is not needed before a host may send packets to another host that it has previously not communicated with (a characteristic of packet-switched networks), thus IP is a connectionless protocol. This is in contrast to public switched telephone networks that require the setup of a circuit for each phone call (connection-oriented protocol).
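As a concrete illustration of this encapsulation, the sketch below prepends a minimal IPv4 header to an upper-layer payload using Python's standard `struct` and `socket` modules. It is a simplified teaching sketch (the checksum is left at zero and no options are included), not a production packet builder, and the addresses are documentation examples:

```python
import socket
import struct

def encapsulate_ipv4(payload: bytes, src: str, dst: str, proto: int = 17) -> bytes:
    """Prepend a minimal 20-byte IPv4 header to an upper-layer payload (sketch only)."""
    version_ihl = (4 << 4) | 5           # version 4, header length = 5 x 32-bit words
    total_length = 20 + len(payload)
    header = struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl,                     # version + IHL
        0,                               # DSCP / ECN
        total_length,                    # total datagram length
        0,                               # identification
        0,                               # flags + fragment offset
        64,                              # time to live
        proto,                           # upper-layer protocol (17 = UDP)
        0,                               # header checksum (left zero in this sketch)
        socket.inet_aton(src),
        socket.inet_aton(dst),
    )
    return header + payload

packet = encapsulate_ipv4(b"hello", "192.0.2.1", "198.51.100.7")
print(len(packet), "bytes:", packet.hex())
```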

Services provided by IP

Because of the abstraction provided by encapsulation, IP can be used over a heterogeneous network, i.e., a network connecting computers may consist of a combination of Ethernet, ATM, FDDI, Wi-Fi, token ring, or others. Each link layer implementation may have its own method of addressing (or possibly the complete lack of it), with a corresponding need to resolve IP addresses to data link addresses. This address resolution is handled by the Address Resolution Protocol (ARP) for IPv4 and Neighbor Discovery Protocol (NDP) for IPv6.

Reliability

The design principles of the Internet protocols assume that the network infrastructure is inherently unreliable at any single network element or transmission medium and that it is dynamic in terms of the availability of links and nodes. No central monitoring or performance measurement facility exists that tracks or maintains the state of the network. To reduce network complexity, the intelligence in the network is purposely located mostly in the end nodes of each data transmission; cf. the end-to-end principle. Routers in the transmission path simply forward packets to the next known gateway that matches the routing prefix for the destination address.
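In other words, a router's forwarding decision is essentially a longest-prefix match of the destination address against its forwarding table. The sketch below uses Python's standard `ipaddress` module with a made-up, three-entry table to illustrate the idea:

```python
import ipaddress

# A toy forwarding table: prefix -> next hop (illustrative values only).
routing_table = {
    ipaddress.ip_network("0.0.0.0/0"):   "default-gateway",
    ipaddress.ip_network("10.0.0.0/8"):  "gateway-a",
    ipaddress.ip_network("10.1.0.0/16"): "gateway-b",
}

def next_hop(destination: str) -> str:
    """Return the next hop for the longest prefix matching the destination."""
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest (most specific) prefix wins
    return routing_table[best]

print(next_hop("10.1.2.3"))   # gateway-b (the /16 is the most specific match)
print(next_hop("192.0.2.9"))  # default-gateway (only the /0 matches)
```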

As a consequence of this design, the Internet Protocol only provides best effort delivery and its service can also be characterized as unreliable. In network architectural language it is a connection-less protocol, in contrast to so-called connection-oriented modes of transmission. The lack of reliability allows any of the following fault events to occur:

  • data corruption
  • lost data packets
  • duplicate arrival
  • out-of-order packet delivery; meaning, if packet 'A' is sent before packet 'B', packet 'B' may arrive before packet 'A'. Since routing is dynamic and there is no memory in the network about the path of prior packets, it is possible that the first packet sent takes a longer path to its destination.
The only assistance that the Internet Protocol provides in Version 4 (IPv4) is to ensure that the IP packet header is error-free through computation of a checksum at the routing nodes. This has the side-effect of discarding packets with bad headers on the spot. In this case no notification is required to be sent to either end node, although a facility exists in the Internet Control Message Protocol (ICMP) to do so.
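The checksum in question is a 16-bit ones'-complement sum over the header's 16-bit words, as described in RFC 1071. A minimal sketch of the calculation looks like this:

```python
def ipv4_header_checksum(header: bytes) -> int:
    """Compute the 16-bit ones'-complement checksum over an IPv4 header.

    The checksum field itself must be zeroed before calling this (per RFC 1071).
    """
    if len(header) % 2:
        header += b"\x00"                   # pad to an even number of bytes
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:                      # fold any carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A router recomputes this after decrementing the TTL and silently discards the
# packet if the stored checksum does not match.
```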

IPv6, on the other hand, has abandoned the use of IP header checksums for the benefit of rapid forwarding through routing elements in the network.

The resolution or correction of any of these reliability issues is the responsibility of an upper layer protocol. For example, to ensure in-order delivery the upper layer may have to cache data until it can be passed to the application.

In addition to issues of reliability, the dynamic nature and diversity of the Internet and its components provide no guarantee that any particular path is actually capable of, or suitable for, performing the data transmission requested, even if the path is available and reliable. One of the technical constraints is the size of data packets allowed on a given link. An application must ensure that it uses proper transmission characteristics. Some of this responsibility also lies in the upper layer protocols between application and IP. Facilities exist to examine the maximum transmission unit (MTU) size of the local link, as well as for the entire projected path to the destination when using IPv6. The IPv4 internetworking layer has the capability to automatically fragment the original datagram into smaller units for transmission. In this case, IP does provide re-ordering of fragments delivered out-of-order.
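Fragmentation works in units of 8-byte blocks: each fragment's header carries an offset counted in 8-byte units plus a more-fragments flag. The sketch below, with illustrative sizes rather than figures from the text, shows how a datagram payload would be split to fit a given link MTU:

```python
def fragment_payload(payload_len: int, mtu: int, header_len: int = 20):
    """Yield (offset_in_8_byte_units, fragment_size, more_fragments) for each fragment."""
    max_frag = (mtu - header_len) // 8 * 8   # payload per fragment, a multiple of 8 bytes
    offset = 0
    while offset < payload_len:
        size = min(max_frag, payload_len - offset)
        more = (offset + size) < payload_len
        yield offset // 8, size, more
        offset += size

# Example: a 4,000-byte payload crossing a link with a 1,500-byte MTU (e.g. Ethernet).
for fragment in fragment_payload(4000, 1500):
    print(fragment)
# -> (0, 1480, True), (185, 1480, True), (370, 1040, False)
```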

Transmission Control Protocol (TCP) is an example of a protocol that will adjust its segment size to be smaller than the MTU. User Datagram Protocol (UDP) and Internet Control Message Protocol (ICMP) disregard MTU size thereby forcing IP to fragment oversized datagrams.

IP addressing and routing

Perhaps the most complex aspects of IP are IP addressing and routing. Addressing refers to how end hosts are assigned IP addresses and how subnetworks of IP host addresses are divided and grouped together. IP routing is performed by all hosts, but most importantly by internetwork routers, which typically use either interior gateway protocols (IGPs) or exterior gateway protocols (EGPs) to help make IP datagram forwarding decisions across IP-connected networks.
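As a small, concrete example of how addresses are grouped into subnetworks, Python's standard `ipaddress` module can divide a prefix into smaller subnets. The prefix used here is a documentation range (RFC 5737), chosen purely for illustration:

```python
import ipaddress

network = ipaddress.ip_network("192.0.2.0/24")   # RFC 5737 documentation prefix

# Divide the /24 into four /26 subnetworks of 64 addresses each.
for subnet in network.subnets(new_prefix=26):
    first_host = subnet.network_address + 1
    last_host = subnet.broadcast_address - 1
    print(f"{subnet}: {subnet.num_addresses} addresses, hosts {first_host} - {last_host}")
```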


How broadband satellite Internet works



Satellites have brought Internet access to places where IP communications seemed impossible. In this section, we explain how satellite Internet works. You will understand how bytes of information, or simply a mouse click, travel all the way from your computer to the satellite, to our NOC, and back.

VSAT Systems uses commercial satellite connections as a high-speed digital link between our customers and the U.S. Internet backbone. The main components of a satellite system are the following:

1. Ground-based electronic equipment

  • The VSAT dish: It refers to what most people call their dish. VSAT units are two-way satellite ground stations with dishes that typically range from 0.75m to 1.8m in diameter. VSAT Systems offers VSAT antennas between 1.2m and 2.4m in diameter, depending on the application and location.
  • The indoor modem: A satellite modem facilitates data transfers using a communications satellite as a relay. VSAT Systems end users typically use the iDirect 3100 series Modem.
  • The teleports: The teleport is the earth station that controls communications across the space link. The teleport is the heart of the VSAT Systems satellite Internet system. VSAT Systems has three 6.3m VertexRSI antennae, transmitters, control systems, redundant links to the Internet, plus auxiliary power and HVAC.
  • The Network Operations Center (NOC): The facility which controls all communications over the satellite link. The NOC monitors for power failures, satellite signal issues and other performance issues that may affect the network. The VSAT Systems NOC is located in Akron, Ohio.

2. Satellite equipment

  • The satellite: In a geostationary or geosynchronous orbit 22,236 miles above the earth’s surface, a satellite completes one revolution in exactly the same amount of time that it takes the Earth to rotate one full turn on its axis. Thus, the satellite always appears at the same position above the Earth. This eliminates the need for satellite dishes at the user location to track the satellite, which greatly simplifies their construction and reduces their cost. These satellites, used for a variety of purposes such as broadcast and telecommunications, can also be used to provide Internet access at any location on Earth.
  • Transponder space segment: The communications channels on a satellite that both receive and retransmit data. Modern satellites carry between 36 and 72 separate transponders all running at different frequencies. These frequency segments are used for transmission of data.
  • Internet Backbone: The backbone is a large collection of interconnected, high-capacity, commercial, government, and academic data routes and core routers that carry data. They connect with other countries and continents around the world.

3. Here’s how the process works - in 5 easy to understand steps:

  • End user computer is connected to your network, which in turn is connected to the Internet by VSAT Systems. You open a web browser, and type in a web address. End user computer sends a request for a transfer of data - both transmit and receive.
  • That request is sent from the end user PC, through their home network, to the indoor satellite modem which modulates the signal and passes it to the VSAT dish. The VSAT dish converts this signal to an RF signal and sends it to a satellite located in the geostationary orbit at the speed of light - 186,000 miles per second.
  • The satellite in the geostationary orbit receives this signal and sends it to one of the VSAT Systems teleports in Akron, Ohio. Although the packets of information travel tremendous distances through the space segment, they traverse fewer networks, because the number of inter-domain and intra-domain routers is greatly reduced, which helps minimize latency.
  • The request then goes to VSAT Systems’ NOC, which retrieves the requested website from the web server, across the U.S. Internet backbone.
  • The whole cycle is then reversed and the requested data is available to the user. A 90,000 mile journey, through millions of dollars of infrastructure and sophisticated equipment, all in less than 700 milliseconds (a rough latency estimate is sketched below).
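That 90,000-mile figure corresponds to four trips through the space segment: up and down for the request, then up and down again for the response. The hedged estimate below covers only propagation delay through space, ignoring terrestrial hops, modulation, and processing time:

```python
# Rough propagation delay for a geostationary satellite round trip (sketch only).
GEO_ALTITUDE_MILES = 22236
SPEED_OF_LIGHT_MILES_PER_SEC = 186000

# Request: dish -> satellite -> teleport; response: teleport -> satellite -> dish.
hops = 4
distance_miles = hops * GEO_ALTITUDE_MILES
delay_ms = distance_miles / SPEED_OF_LIGHT_MILES_PER_SEC * 1000

print(f"{distance_miles:,} miles of space-segment travel")  # about 89,000 miles
print(f"~{delay_ms:.0f} ms of propagation delay")           # about 478 ms, before any processing
```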


CES: The coolest laptops of 2010's show




LAS VEGAS--We saw dozens of new laptop models at CES this year, and though the vast majority of them were next-step upgrades of existing models, there were a handful that really grabbed our attention, either because they brought something new to the game, or because they were excellent examples of their category.

We've already rounded up the various slate/tablet devices, so we'll concentrate on traditional laptop-shaped systems (although we'll make an exception for the Lenovo U1 Hybrid, which docks its tablet screen to become a standard Windows 7 machine).


Verizon looks for more revenue in wireless data

Verizon Wireless, the nation's largest wireless provider, is reportedly revamping its existing wireless data prices and is considering implementing a usage-based billing model for its upcoming 4G wireless services as it tries to squeeze out more revenue from wireless data services.

Starting January 18, Verizon is expected to tweak its wireless data plan for what it considers its "3G multimedia" phones so that subscribers will pay the same price as customers using a smartphone. The news of the new pricing rates was reported Wednesday by the blog Broadband Reports, which obtained internal documents about the changes from a Verizon employee.

A Verizon Wireless spokeswoman declined to comment on the new pricing plan.

The news comes just a few days after Verizon Wireless' chief technology officer Dick Lynch was quoted by The Washington Post as saying that the wireless operator is also considering implementing usage-based billing for services it will soon introduce on its upcoming 4G wireless network.

These two pieces of news suggest that Verizon Wireless is looking to find the sweet spot in wireless data pricing. As the company's revenue base shifts toward data and away from voice services, Verizon and other wireless operators are looking carefully at how best to maximize their profitability.

It's no secret that prices for mobile voice services are dropping. And as a result, phone companies are competing more aggressively on price. A few months ago Sprint Nextel announced its Any Mobile, Anytime plan that allows subscribers to call any cell phone in the U.S. regardless of carrier for $69.99 a month.

Earlier this week, MetroPCS, a smaller regional operator targeting the prepaid phone market, lowered the price of its service, undercutting similar plans from other prepaid providers, such as Sprint's Boost Mobile. This new offering will include all taxes and fees for plans that range between $40 and $60 a month.

Clearly, a price war is emerging on voice services.

Meanwhile, operators are trying to squeeze more revenue out of their data services. This is likely why Verizon is looking at increasing the price of its data plans for mid-tier, non-smartphone devices and why it is thinking hard about going with a usage-based model for its 4G wireless network, rather than offering an all-you-can-eat plan.

According to Broadband Reports, Verizon Wireless is planning to force nearly every subscriber to sign up for a data plan. Even customers using its basic "simple feature" phones will be required to have a data plan. And soon all subscribers signing up for a multimedia device will be required to sign up for a data plan. Up to this point, only certain multimedia phones, such as the Samsung Rogue, have required a data plan. And customers with smartphones have already been required to get data plans.

Starting January 18, the data plans for all non-smartphones will change, Broadband Reports said.

Last year, Verizon changed its non-smartphone data plans to offer multimedia phone subscribers two options for data service. They could either subscribe to a $9.99 per month plan that offered 25 megabytes of data, with a $0.50 charge for each additional megabyte over the cap, or subscribe to a $19.99 per month plan that offered 75MB with a $0.30 charge for each MB over the cap.

Broadband Reports says the new pricing model will increase the price of the top plan to $29.99 a month. In exchange for the higher price, subscribers will reportedly be given unlimited access to data. This tier of service will also include mobile e-mail service.

Meanwhile, customers opting for the $9.99 plan will still get the 25MB usage cap, but they will be charged $0.20 for each megabyte over the cap instead of $0.30 per megabyte, the blog said.
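To see how the two reported tiers compare for a given month of use, here is a quick cost sketch. The plan figures are the ones reported by Broadband Reports above; the usage levels are arbitrary examples:

```python
# Monthly cost under the reported non-smartphone data tiers (figures as reported above).
def capped_tier_cost(usage_mb: float, base: float = 9.99, cap_mb: float = 25,
                     overage_per_mb: float = 0.20) -> float:
    """Cost of the $9.99 / 25MB plan with a per-megabyte overage charge."""
    return base + max(0.0, usage_mb - cap_mb) * overage_per_mb

UNLIMITED_COST = 29.99

for usage in (20, 60, 100, 200):    # example monthly usage in megabytes
    capped = capped_tier_cost(usage)
    cheaper = "25MB tier" if capped < UNLIMITED_COST else "unlimited tier"
    print(f"{usage:>3} MB: 25MB tier ${capped:.2f} vs unlimited ${UNLIMITED_COST} -> {cheaper}")
# At roughly 125 MB a month, the $29.99 unlimited plan becomes the cheaper option.
```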


This change essentially offers the same all-you-can-eat plan for multimedia phone subscribers that it requires its smartphone customers to buy.

For some heavy-data users this will be a great deal. But for most consumers, it's likely overkill. Most multimedia phone subscribers only use between 25MB and 100MB of data per month, according to Broadband Reports. The 25MB plan may be too little for these customers, but an unlimited plan offers much more than what many consumers need.

And this excess capacity costs consumers.

The business strategy is very similar to how gyms, like the New York Sports Club, make money. For $89 a month, someone can get a full membership to the gym with access day or night at any location. Some people will use their memberships to the fullest, working out seven days a week and using multiple gym locations throughout the week.

But many will go to the gym much less regularly, and they will never go to a facility other than the one where they originally joined as a member. And yet each member pays the same amount every month, regardless of how much they use their membership.

This billing method works out well for companies when many customers use a fraction of the resources available. But when the majority actually go to the gym regularly or access wireless data services, in the case of wireless operators, then these businesses start to lose money, because they have to invest more in infrastructure.

When this happens, a usage-based billing model is more advantageous to the business.

The switch to usage-based billing
This is exactly the model that Verizon plans to switch to when it completes its 4G wireless network. Lynch told the Washington Post last week that it's very likely that Verizon will do away with flat rate pricing when it rolls out its 4G wireless and will instead charge customers based on how much bandwidth they use.

"The problem we have today with flat-based usage is that you are trying to encourage customers to be efficient in use and applications but you are getting some people who are bandwidth hogs using gigabytes a month and they are paying something like megabytes a month," Lynch told the Post. "That isn't long-term sustainable. Why should customers using an average amount of bandwidth be subsidizing bandwidth hogs?"

AT&T is seeing the effects of this problem with the popular Apple iPhone. AT&T reports that its iPhone users consume more data than other 3G wireless customers. This has resulted in strains on the network. And now iPhone users are complaining about poor service, especially in urban areas where iPhone usage is high.

AT&T's head of wireless Ralph de la Vega said last year that AT&T needs to come up with a different way to price its service to incentivize customers to use less data.

Usage-based billing, or asking subscribers to pay for what they use, has increasingly been seen by executives as the answer to this problem. And now the idea has support from the two Republican Federal Communications Commission commissioners, Robert McDowell and Meredith Attwell Baker.

McDowell said during a public appearance at the CES tradeshow last weekend that wireless companies should be able to experiment with different pricing models, according to The Hill, a blog covering Capitol Hill. He said that allowing an all-you-can-eat model to persist will lead to gridlock on the wireless Internet.

Baker agreed and even suggested that people may soon have to pay for "roaming."

So what does all this mean for consumers? Well, in the short term, it means many Verizon Wireless customers are likely to pay more for more service than they actually need. And in the future, 4G wireless subscribers are likely to pay more for services they actually use. Don't expect any great bargains in wireless data now or in the future. In either case, Verizon and other wireless operators will make sure they can get as much money as they can from the increasing number of people who subscribe to their data services.

source:cnet.com
