Bandwidth Throttling
Guess what, people literally interpreted the statement as is. The 300GB file was split into small chunks, each slightly smaller than the 1GB limit, and the chunks were merged back together at the destination end. Although this does not violate the management rule, it makes no difference in terms of bandwidth utilisation. If the user is smart (or dumb) enough, he/she can initiate multiple downloads in parallel to accelerate the transfer, and that will definitely choke up the WAN link.
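For the curious, the workaround looks roughly like this; a minimal sketch using split and cat (the file names and the 900MB chunk size are made-up values for illustration):

```sh
# Source end: cut the big file into chunks just under the 1GB limit.
split -b 900M bigfile.tar bigfile.part.

# ...transfer bigfile.part.* through whatever channel is permitted...

# Destination end: glue the chunks back together (the suffixes sort in order).
cat bigfile.part.* > bigfile.tar
```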
Do you know that there are many ways to do bandwidth throttling:
- Client end
  - curl with --limit-rate <speed> (example below)
  - wget with --limit-rate=<amount>
  - the Net::FTP::Throttle Perl module
  - and many more
- Server end
  - lighttpd with the connection.kbytes-per-second setting in the configuration file (example below); see its traffic shaping documentation for more details
  - the mod_bw module in Apache
  - and many more
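To give a feel of how little effort is involved, here is a rough sketch of the client-end commands and the lighttpd setting mentioned above. The URL, file names, and rate figures are made up, and the config-append step is just for illustration; check the man pages and lighttpd documentation for your versions.

```sh
# Client end: cap the download at roughly 500 KB/s.
curl --limit-rate 500k -O http://fileserver.example.com/bigfile.tar
wget --limit-rate=500k http://fileserver.example.com/bigfile.tar

# Server end: add a per-connection limit to the lighttpd configuration file
# (512 KB/s per connection is an arbitrary figure), then reload lighttpd.
cat >> /etc/lighttpd/lighttpd.conf <<'EOF'
connection.kbytes-per-second = 512
EOF
```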
With bandwidth throttling, network utilisation can be controlled. It also avoids splitting/merging the file into small pieces and needing temporary storage to house those pieces.
Things to take note of: some of the older file systems and utilities can only handle files smaller than 4GB. See the comparison of file systems for details.
Labels: performance