If you want the Cliff's Notes version of bandwidth versus latency, think of a circuit as a bridge (the kind you drive your car across to get over water). Bandwidth would be the number of lanes on the bridge, which determines how many cars can get across during rush hour. Latency would be the time it takes a car to get across. On this circuit bridge, though, all the cars travel at roughly the same speed and are limited by the speed of light. So if you want to download a file fast, you want the bridge to be really wide rather than one lane: a wide bridge has a lot of throughput, which is to say high bandwidth. Latency is then determined by how long the bridge is, and how quickly cars can get onto it.

Latency can slow things down, of course, but even an RTT of, say, 600ms, which is pretty high, only adds about 0.6 seconds between the time a file is requested and the time it starts arriving. Packet loss is a different story: lost packets have to be resent. So if you are downloading a big file to some place with 30% packet loss, roughly 30% of the packets would actually need to be resent (and since the retransmissions can be lost too, the real overhead is a bit higher than that).
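To make the arithmetic concrete, here's a minimal back-of-the-envelope sketch in Python. The model is my own simplification, not anything from a real networking stack: it charges one RTT up front, divides the file size by the bandwidth, and inflates the transfer by the expected retransmissions, where a loss rate of p means each packet is sent 1/(1-p) times on average. It deliberately ignores TCP windowing and congestion control, which make loss and latency hurt far more in practice. All numbers are made-up example values.

```python
def expected_sends_per_packet(loss_rate: float) -> float:
    """With loss rate p, each packet is sent 1/(1-p) times on average,
    since retransmissions can themselves be lost."""
    return 1.0 / (1.0 - loss_rate)


def download_time(file_bytes: float, bandwidth_bps: float,
                  rtt_seconds: float, loss_rate: float = 0.0) -> float:
    """Rough time to fetch a file: one RTT to ask for it, then the
    transfer itself, inflated by retransmissions. Ignores TCP
    windowing/congestion control, so this is a lower bound."""
    transfer = (file_bytes * 8 * expected_sends_per_packet(loss_rate)
                / bandwidth_bps)
    return rtt_seconds + transfer


# A 100 MB file over a 100 Mbit/s link (a "wide bridge") with a
# high 600ms RTT: the RTT adds only 0.6s to an ~8s transfer.
print(f"{download_time(100e6, 100e6, rtt_seconds=0.6):.1f} s")  # ~8.6 s

# Same link with 30% packet loss: each packet is sent ~1.43 times
# on average, so the transfer stretches noticeably.
print(f"{download_time(100e6, 100e6, rtt_seconds=0.6, loss_rate=0.3):.1f} s")  # ~12.0 s
```

Even this crude model shows the shape of the argument: the 600ms RTT is a one-time 0.6-second tax, while 30% loss taxes every packet for the whole transfer.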