
Researchers discover major roadblock in alleviating network congestion | MIT News

When users want to send data over the internet faster than the network can handle, congestion can occur – the same way that traffic jams congest the morning commute in a big city.

Computers and devices that send data over the internet break the data into smaller packets and use a special algorithm to decide how fast to send them. These congestion control algorithms seek to discover and fully utilize available network capacity while sharing it fairly with other users on the same network, all while minimizing the delay caused by data waiting in queues.
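As a rough illustration of this probe-and-back-off behavior (a generic textbook rule, not any specific algorithm discussed in the article), the classic additive-increase/multiplicative-decrease (AIMD) scheme grows the sender's congestion window slowly to discover capacity and cuts it sharply when congestion is detected:

```python
def aimd_update(cwnd: float, loss: bool,
                increase: float = 1.0, decrease: float = 0.5,
                min_cwnd: float = 1.0) -> float:
    """One round of additive-increase/multiplicative-decrease."""
    if loss:
        # Back off sharply when congestion (a lost packet) is detected.
        return max(min_cwnd, cwnd * decrease)
    # Otherwise probe gently for more capacity.
    return cwnd + increase

cwnd = 10.0
for loss_seen in [False, False, True, False]:
    cwnd = aimd_update(cwnd, loss_seen)
# cwnd evolves: 10 -> 11 -> 12 -> 6 -> 7
```

The asymmetry (gentle increase, drastic decrease) is what lets many independent senders converge toward a fair share without coordinating with each other.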

In the last decade, researchers in industry and academia have developed many algorithms that attempt to achieve high rates while controlling delays. Some of them, like the BBR algorithm developed by Google, are now widely used in many websites and applications.

But a group of researchers at MIT discovered that these algorithms can be deeply unfair. In a new study, they show that there is always a network scenario in which at least one sender receives almost no bandwidth compared to the others; this problem, known as starvation, cannot be avoided.

“What’s really surprising about this paper and the results is that, when you take into account the real-world complexity of network paths and all the things they can do to data packets, it is essentially impossible for delay-controlling congestion control algorithms to avoid starvation using current methods,” said Mohammad Alizadeh, associate professor of electrical engineering and computer science (EECS).

While Alizadeh and his co-authors have not found a traditional congestion control algorithm that can avoid starvation, there may be algorithms of a different kind that can prevent it. Their analysis also suggests that changing how these algorithms work, so that they allow larger variations in delay, could help prevent starvation in some network situations.

Alizadeh co-wrote the paper with first author and EECS graduate student Venkat Arun and senior author Hari Balakrishnan, the Fujitsu Professor of Computer Science and Artificial Intelligence. The research will be presented at the ACM Special Interest Group on Data Communications (SIGCOMM) conference.

Congestion control

Congestion control is a fundamental networking problem that researchers have been trying to solve since the 1980s.

A user’s computer does not know how fast to send data packets over the network because it lacks information, such as the quality of the network connection or how many other senders are using it. Sending packets too slowly makes poor use of the available bandwidth, but sending them too quickly can overwhelm the network, causing packets to be dropped. Dropped packets must be retransmitted, which leads to longer delays. Delays can also be caused by packets waiting in queues for a long time.

Congestion control algorithms use packet losses and delays as signals to detect congestion and decide how fast to send data. But the internet is complex, and packets can be delayed and lost for reasons unrelated to congestion. For example, data may be held in a queue along the way and then released in a burst with other packets, or the receiver’s acknowledgments may be delayed. The authors call delays that are not caused by congestion “jitter.”

Even if a congestion control algorithm measures delay perfectly, it cannot tell the difference between delay caused by congestion and delay caused by jitter. Delay caused by jitter is unpredictable and confuses the sender. Because of this ambiguity, different senders end up estimating delay differently, which causes them to send packets at unequal rates. Eventually, this leads to a situation where starvation occurs and someone gets shut out completely, explained Arun.
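A toy model of this ambiguity (illustrative numbers, not from the paper): the sender observes only a single round-trip-time sample, the sum of propagation delay, queueing delay, and jitter, so a congested path with no jitter and an idle path with jitter can produce indistinguishable measurements.

```python
import random

random.seed(1)

BASE_RTT = 50.0  # ms, fixed propagation delay (assumed value)

def measured_rtt(queueing_delay: float, max_jitter: float) -> float:
    # The sender sees only the sum; it has no way to split the extra
    # delay into a congestion part (queueing) and a jitter part.
    jitter = random.uniform(0.0, max_jitter)
    return BASE_RTT + queueing_delay + jitter

# A congested, jitter-free path...
congested = measured_rtt(queueing_delay=10.0, max_jitter=0.0)  # exactly 60.0 ms
# ...and an idle but jittery path can both report RTTs above BASE_RTT,
# so a delay-based sender cannot reliably tell them apart.
idle = measured_rtt(queueing_delay=0.0, max_jitter=20.0)
```

Any rule of the form "slow down when delay exceeds a threshold" will therefore sometimes react to jitter as if it were congestion, and vice versa.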

“We started this project because we lacked a theoretical understanding of congestion control behavior in the presence of jitter. To put it on a firmer theoretical footing, we built a mathematical model that is simple enough to think about but captures some of the complexities of the internet. It has been really exciting to have the math tell us things we didn’t know that also have practical relevance,” he said.

Studying starvation

The researchers fed their mathematical model to a computer, gave it a series of commonly used congestion control algorithms, and asked it to find an algorithm that could avoid starvation under the model.

“We couldn’t do it. We tested every algorithm we know of, and some new ones we developed. Nothing worked. The computer always found a situation where some people get all the bandwidth and at least one person gets essentially nothing,” Arun said.

The researchers were surprised by this result, especially since these algorithms are widely believed to be reasonably fair. They began to suspect that it was impossible to avoid starvation, a severe form of inequality. This inspired them to define a class of algorithms they called “delay-convergent algorithms” which they proved would always suffer from starvation under their network model. All existing congestion control algorithms that control delay (known to researchers) are delay-convergent.

The fact that such simple failure modes of these widely used algorithms remained unknown for so long shows how difficult it is to understand algorithms through empirical testing alone, Arun added. This emphasizes the importance of a solid theoretical foundation.

But all hope is not lost. While every algorithm they tried failed, there may be other, non-delay-convergent algorithms that could avoid starvation. For instance, an algorithm might deliberately vary its delay over a wide range, so that the range is larger than any delay that could occur due to network jitter.
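A sketch of that idea (hypothetical parameters and target function; this is not the authors' design): instead of converging to a single equilibrium delay, a non-delay-convergent scheme could sweep its delay target over a range deliberately wider than the worst-case jitter, so that congestion-induced delay remains observable above the jitter floor.

```python
import math

MAX_JITTER_MS = 20.0  # assumed worst-case non-congestive delay variation

def oscillating_target(step: int, base_ms: float = 10.0,
                       period: int = 50) -> float:
    """Hypothetical non-delay-convergent delay target: sweep over a
    range wider than worst-case jitter rather than converging to one
    equilibrium value."""
    swing = 2.0 * MAX_JITTER_MS  # the sweep range exceeds max jitter
    phase = (1.0 + math.sin(2.0 * math.pi * step / period)) / 2.0
    return base_ms + swing * phase

targets = [oscillating_target(s) for s in range(50)]
spread = max(targets) - min(targets)
# spread is close to 2 * MAX_JITTER_MS, i.e. wider than any jitter
```

The design trade-off is explicit: the algorithm accepts larger delay variation on purpose, in exchange for measurements that jitter alone cannot mimic.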

“To control delays, these algorithms have also tried to bound the variations in delay around a desired equilibrium, but there is nothing wrong with potentially creating greater delay variation to get better measurements of congestive delays. It is just a new design philosophy you would have to adopt,” Balakrishnan added.

Now, the researchers want to keep pushing to see if they can find or build an algorithm that eliminates starvation. They also want to apply this approach of mathematical modeling and computational proofs to other thorny, unsolved problems in networked systems.

“We are increasingly relying on computer systems for very critical things, and we need to put their reliability on a firmer conceptual footing. We have shown the surprising things you can discover when you take the time to come up with these formal specifications of what the problem actually is,” Alizadeh said.
