Traffic jams on the infobahns

COMMUTERS stuck in traffic on the Leesburg turnpike in Northern Virginia are just a few hundred yards away from an even bigger jam, one that stretches around the world. This is a traffic jam of bits and bytes, digital data travelling through one of the main arteries of the Internet.

With thousands of new users signing on every day, the Net's routes are brimming with traffic - so much so that some observers within the industry predict its imminent demise due to overcrowding.

"Demand for the Internet is outstripping supply," says Mark Luker, programme director of the National Science Foundation's network infrastructure project in the US. But work is under way to provide more capacity through technical solutions, and ways are being explored to ration the Internet by price, through fees based on the speed and quantity of data.

The facility in Northern Virginia is a computer interchange, called Mae East, where 46 major and minor Internet service providers come together to exchange data travelling across the Internet.

MFS Communications, which runs Mae East and its California counterpart - yes, it's called Mae West - describes it as "the busiest public exchange point on the Internet". Peak loads run as high as 53 million bytes per second.

Until April 1995, the National Science Foundation (NSF) managed and financed the common "network backbone" underpinning most of the Internet.

Before the network was phased out in favour of privately managed networks, traffic increased steadily from 1.3 trillion bytes a month in March 1991 to 17.8 trillion bytes a month in late 1994 - the equivalent of transferring the entire contents of the Library of Congress every four months.

The private system sends data across a host of different company networks and crowded interchange points. As the load continues to increase, MFS Communications and the Internet providers can add more fibre-optic lines and switching equipment, but that is expensive and may still fail to keep up with rising usage.

Some providers are setting up smaller, one-to-one interchanges to relieve the burden on Mae East. MCI Communications, which has seen a 5,000 per cent increase in Internet traffic since the beginning of 1995, has set up 22 circuits for one-to-one exchanges.

For now, the best solution is to keep adding hardware, Luker says. "But in the long run, you can solve problems by adding knowledge to the system instead of brute force," he argues.

The NSF recently announced 13 grants for the development of high-speed networking technologies and software projects. One project hopes to reduce congestion by developing "caching" systems, in which copies of popular sites are held at duplicate locations around the world.

Normally, if an Internet user in Japan tries to download a Web page in New York, the data has to travel all the way around the world - even if another user in Japan has just downloaded the same page. With a cache, the second user could grab the page from a closer computer that the data had already gone through.

"A cache automatically duplicates the pages that are used most often," Luker says. The NSF had sponsored a study at the National Laboratory for Advanced Networking Research of a cache that "is showing great improvements in traffic".

A procedure called multicasting could also reduce repetitive data transfers, says Robert Hagens, MCI's director of Internet engineering. Multicasting sends a stream of data such as sound or video across the Net that can be accessed by many users.

"If you look at the radio stations beginning to appear on the Internet, they commonly - require each listener to open a connection to the station so you've got a lot of duplication," Hagens says. "With multicast, you put that data into a stream of packets that only get duplicated when they have to."

New ways of charging for Internet usage might also alter the traffic patterns. At the moment, most users - in the US at least - pay a flat rate for Internet service regardless of how much capacity they use. Sending email is considerably less taxing than sending live video, but users pay the same for both.

"The Internet is a mature technology but an immature economy," says Hal Varian, dean of the University of California's School of Information Management and Systems. The current pricing model does not provide incentives to Internet providers to offer high quality service, he says.

People who need high-priority channels for sending live video might have to pay more, Varian says, while basic tasks such as sending email will remain essentially free. Varian also predicts that Internet service providers will begin paying each other based on the amount of traffic they exchange - much as phone carriers make settlement payments to foreign telecommunications companies for completing international calls.
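
As a hedged illustration of how such traffic-based settlement between two providers might be worked out, here is a small arithmetic sketch; the settlement rate and traffic volumes are invented for the example and do not come from the article.

```python
# Hypothetical settlement between two providers, A and B, based on the
# traffic each hands off to the other - analogous to the payments phone
# carriers make for completing international calls.

rate_per_gigabyte = 0.50   # assumed settlement rate, in dollars

traffic_a_to_b_gb = 120.0  # data provider A asks provider B to carry
traffic_b_to_a_gb = 45.0   # data provider B asks provider A to carry

# Each provider owes for the traffic it hands to the other; the net payment
# flows from the heavier sender to the lighter one.
net_gb = traffic_a_to_b_gb - traffic_b_to_a_gb
payment = abs(net_gb) * rate_per_gigabyte
payer = "A" if net_gb > 0 else "B"

print(f"provider {payer} pays ${payment:.2f} for this settlement period")
```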

Much of the slowdown experienced by Web users is caused by limits at each end of a connection rather than by delays moving across the network. Web pages reside on computer servers that can be overwhelmed when too many requests arrive at the same time. And many individuals access the Internet through relatively slow modem links.
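
A back-of-the-envelope sketch of that last point: even ignoring congestion entirely, a typical page takes several seconds to squeeze through a dial-up modem, while the same amount of data passes through a backbone exchange in around a millisecond. The page size and modem speed below are assumed, typical mid-1990s values; the backbone figure simply reuses the Mae East peak rate quoted earlier, purely for scale.

```python
# Rough comparison of the time a Web page spends on the user's modem link
# versus the time the same data spends crossing a fast exchange point.

page_size_bytes = 50_000                # a modest Web page with a few images
modem_bits_per_second = 28_800          # a common dial-up modem of the era
exchange_bytes_per_second = 53_000_000  # peak load quoted for Mae East

modem_seconds = page_size_bytes * 8 / modem_bits_per_second
exchange_seconds = page_size_bytes / exchange_bytes_per_second

print(f"over the modem:        {modem_seconds:.1f} s")
print(f"through the exchange:  {exchange_seconds * 1000:.2f} ms")
```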

Such delays could be eased by a proposal from Sun Microsystems to create a new Internet standard for the way computers access each other's files. Called Web Network File System, the protocol reduces the burden on the computer holding the Web pages, speeding the transfer of files and allowing three times as many Internet surfers to gain access at one time, according to Sanjay Sinha, head of Sun's Solaris Server project.

User delays are real, but by some measures the Internet's performance has actually improved in recent years. Matrix Information and Directory Services of Austin, Texas, compiles a "weather report" which it publishes on the Internet. The report, updated every four hours, shows how long it takes a small message to travel from Matrix's headquarters to 4,500 major computers around the world and back.

Traffic clogging the Internet slows the round trips. The reports show huge fluctuations in Net traffic, with "rush hour" occurring on weekdays as business users log on (with the critical mass of users in North America having a huge effect on speeds on this side of the Atlantic). But the average delay declined by 30 per cent between January 1994 and January 1996, according to a report by Matrix.
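
A small sketch in the spirit of that "weather report": time a short round trip to a handful of hosts and average the delays. Matrix pinged 4,500 machines every four hours; the illustration below times a TCP connection instead, since ICMP ping needs special privileges, and the host list is only an example.

```python
# Measure approximate round-trip delay to a few hosts by timing how long it
# takes to open (and immediately close) a TCP connection to each one.

import socket
import time

hosts = ["example.com", "example.org", "example.net"]

def round_trip_ms(host, port=80, timeout=5.0):
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass                      # connection established, then closed
    except OSError:
        return None                   # host unreachable or too slow
    return (time.perf_counter() - start) * 1000

samples = [rtt for rtt in (round_trip_ms(h) for h in hosts) if rtt is not None]
if samples:
    print(f"average delay: {sum(samples) / len(samples):.0f} ms "
          f"across {len(samples)} hosts")
```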