I'm experimenting with multicast file transfer using a simple low-level multicast transmit/receive program.
I'm wondering why the performance premium of Gigabit Ethernet over Fast Ethernet (100 Mbps) is only around 10%. Of course I never expected to go 10 times faster, but I expected at least to double the speed.
I disabled all multicast/broadcast storm control on my NetGear switch. The switch has 50 Gbit of backplane bandwidth and there's no other network traffic on it (it's a lab environment). Both servers I'm using have integrated Broadcom Gigabit adapters with a high-speed PCI-E connection to the bus.
I suppose there's something I'm missing. Any ideas?
The program directly reports the time needed to transfer the file. My file is 1 GB, so the initial and final overhead is negligible. In any case, the transfer rate reported is around 80 Mbps on Fast Ethernet (very cool! almost the maximum speed of the LAN) and around 90 Mbps on Gigabit Ethernet (very poor). The hard drives are not the bottleneck, because I'm using a striped set of 15K RPM SCSI drives (disk-to-disk copy is much faster). My guess is that there's a limitation due to the 802.3 frame size (1.5 KB), which is not enough for gigabit bandwidth. I'd like to try jumbo frames, but I cannot find any multicast application that uses them.
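To put a number on the frame-size guess, here's a back-of-envelope calculation of how many frames per second each line rate implies. The payload sizes (1500 bytes standard, 9000 bytes jumbo) are typical Ethernet MTU values, not anything specific to my setup:

```python
# Back-of-envelope: frames per second needed to saturate a given line rate.
# Payload sizes are assumed typical MTUs (1500 standard, 9000 jumbo).
def frames_per_second(line_rate_bps, payload_bytes):
    """Frames the sender must push each second to fill the line rate."""
    return line_rate_bps / (payload_bytes * 8)

print(round(frames_per_second(100e6, 1500)))  # Fast Ethernet:  ~8,333 frames/s
print(round(frames_per_second(1e9, 1500)))    # Gigabit:       ~83,333 frames/s
print(round(frames_per_second(1e9, 9000)))    # Gigabit jumbo: ~13,889 frames/s
```

So with standard frames, gigabit needs roughly 83,000 send calls and interrupts per second; if per-packet overhead (syscalls, interrupts, copies) is what's saturating, jumbo frames would cut that by a factor of six.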
Another guess is that there's some bandwidth limitation in Windows sockets. I'll try the same later on a Linux box.
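One concrete thing to check on the socket side: default UDP send/receive buffers on many systems are only tens of KB, which can throttle a gigabit stream long before the NIC does. A minimal sketch of creating a multicast sender socket with an enlarged buffer (the group address, port, and 4 MB size are arbitrary illustration values, not from my setup):

```python
import socket
import struct

# Hypothetical values for illustration only.
MCAST_GROUP = "239.1.1.1"
MCAST_PORT = 5000
BUF_SIZE = 4 * 1024 * 1024  # request 4 MB; the OS may clamp or round this

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask for a larger send buffer than the (often small) default.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_SIZE)
# Keep multicast TTL at 1 so packets stay on the local segment.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 1))

# Check what the OS actually granted (Linux reports double the clamped request).
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(granted)
sock.close()
```

If the granted value comes back far below the request, the OS limit (e.g. `net.core.wmem_max` on Linux) is the thing to raise; an equivalent check on the receiver with `SO_RCVBUF` matters even more, since receive-buffer overruns silently drop multicast packets.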
UFTP is capable of high speeds over gigabit. Between two Solaris 8 servers (SunFire 240) over a gigabit crossover link with no other traffic, I was able to get file transfer speeds of around 400 Mbps. Granted, this is an ideal network situation, but it demonstrates that the software can handle this kind of speed. It bettered standard FTP over the same link by about 10%. In a real-world scenario you might get a little less, since UFTP uses exactly as much bandwidth as you tell it to, which could potentially flood out other traffic on the network.
I've never had the environment to test UFTP between two Windows PCs at that speed, so I can't say exactly how it would behave. It would be interesting to see how the speeds you saw compare to standard FTP in this situation.
You'll probably get differing results for very large files as opposed to medium-sized ones. I did some similar tests a while ago between Windows 2003 servers and graphed the network throughput using Task Manager. With a huge file, the transfer ran at blazing speed until the server's memory buffer was exhausted, then settled back to around a third of that value for the rest of the copy. With a smaller (but still big) file, the transfer fit fully into memory, so based on a simple size/time calculation it appeared faster.
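This averaging effect is easy to illustrate. With made-up numbers (900 Mbps while buffering in RAM, 300 Mbps once the buffer is exhausted, a hypothetical 500 MB of buffer), a simple size/time figure looks very different depending on how much of the file fit in memory:

```python
# Made-up numbers for illustration: a transfer that runs fast while the
# receiver buffers in RAM, then slow once the buffer is exhausted.
def average_mbps(file_mb, cached_mb, fast_mbps, slow_mbps):
    """Overall size/time throughput for a fast-then-slow transfer."""
    fast_part = min(file_mb, cached_mb)          # MB moved at the fast rate
    slow_part = max(0, file_mb - cached_mb)      # MB moved at the slow rate
    seconds = fast_part * 8 / fast_mbps + slow_part * 8 / slow_mbps
    return file_mb * 8 / seconds

print(round(average_mbps(500, 500, 900, 300)))   # small file, fully cached: 900
print(round(average_mbps(4000, 500, 900, 300)))  # huge file, mostly slow:   327
```

So the same link can report anywhere from the slow rate up to the fast rate depending purely on file size, which is worth keeping in mind when comparing 100 Mbps and gigabit numbers from a single 1 GB copy.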