TOPIC: latency / bandwidth optimization

latency / bandwidth optimization 11 years 6 months ago #7435

  • carmatic
  • New Member
  • Posts: 14
  • Karma: 0
hello

in a network connection, can latency be optimized at the cost of bandwidth, and vice versa?

Re: latency / bandwidth optimization 11 years 6 months ago #7439

  • nske
  • Expert Member
  • Posts: 613
  • Karma: 0
I don't see how these two would be considered inverse quantities at the network level (I mean as a direct result of the way IP over Ethernet works, since you are probably referring to that kind of network).

On the contrary, while they do not depend on exactly the same factors, the two quantities tend to move in the same direction. For example, a sudden increase in latency (caused by collisions, say, or by inadequate system resources on one end) can limit the maximum usable bandwidth between the parties involved. Similarly, too much traffic relative to the capacity of the link introduces higher latency for everyone.

Moving up the OSI layers, things are less clear-cut: more protocol implementations exist, and in some of them we see features that could produce what the end user would perceive as a "trade-off between bandwidth and latency".

Obviously, every protocol carries some overhead, which in many cases can be interpreted as latency. Some protocols use or support compression to pass more data using less bandwidth. This saves bandwidth at the cost of delay (the time spent compressing and decompressing), but in most cases the delay is too small to be of concern, and you can't do much about it anyway. Whenever you do have the option of choosing whether to use data compression, compression implies some increase in latency for the benefit of bandwidth, but usually it is in your interest to use it.
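To make the trade-off concrete, here is a minimal sketch using Python's standard zlib module. The payload and compression level are made up for the demo; the point is just that you pay CPU time (latency) on each end to shrink the bytes on the wire (bandwidth):

```python
import time
import zlib

# Hypothetical payload: highly compressible text standing in for data
# that would cross the network.
payload = b"latency versus bandwidth trade-off " * 2000

start = time.perf_counter()
compressed = zlib.compress(payload, level=9)
compress_time = time.perf_counter() - start

print(f"original:   {len(payload)} bytes")
print(f"compressed: {len(compressed)} bytes")
print(f"compression took {compress_time * 1000:.2f} ms")

# Fewer bytes to send (bandwidth saved), but the compress/decompress
# step adds CPU time on each end (latency paid).
assert zlib.decompress(compressed) == payload
```

On a fast link the compression time can exceed the transfer time you saved, which is exactly why this is a trade-off rather than a free win.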

Another technique that could be considered to provide decreased latency to the end user "at the cost of bandwidth" is pre-fetching/pre-caching: roughly, getting data before you request it, so that it is already there when (and if) you do, usually alongside other data that you will never request. So you can consider this a waste of bandwidth in favor of potentially decreased latency.
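A toy sketch of that idea (the fetch function, the URLs, and the 50 ms delay are all hypothetical stand-ins for a real network request):

```python
import time

def fetch(url):
    """Stand-in for a slow network request."""
    time.sleep(0.05)  # simulate a 50 ms round trip
    return f"contents of {url}"

cache = {}

def prefetch(urls):
    # Spend bandwidth now: fetch pages the user *might* ask for.
    for url in urls:
        cache[url] = fetch(url)

def get(url):
    # If prefetching guessed right, the user sees near-zero latency.
    if url in cache:
        return cache[url]
    return fetch(url)  # cache miss: pay the full round trip

prefetch(["/page1", "/page2"])

start = time.perf_counter()
get("/page1")  # prefetched: served from the cache
hit_time = time.perf_counter() - start

start = time.perf_counter()
get("/page3")  # never prefetched: full round trip
miss_time = time.perf_counter() - start

print(f"hit:  {hit_time * 1000:.2f} ms")
print(f"miss: {miss_time * 1000:.2f} ms")
```

Note that "/page2" was fetched and then never used: that is the wasted bandwidth the technique accepts in exchange for the fast hit on "/page1".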

A technique that does not exactly exchange bandwidth for latency, but can be used to balance and guarantee some quality in both, is QoS. But that's an other chapter.

Hopefully you get the idea: bandwidth and latency are not inverse quantities by nature. Using specific techniques, one can be given priority over the other, or be spent to benefit the other, but these techniques are definitely not simple, general-purpose tools, and their effectiveness is quite limited and focused.

I think that is as specific as it can get, since we are talking in theory. If you have something specific in mind that you want to do, perhaps you'll get more specific replies if you specify it :)