08:39 am, 9 Apr 07
network bottleneck
I transferred some big files from my laptop to my roommate's NAS box last night and it was pretty slow. Lazyweb, please help me diagnose!
Over a gigabit ethernet cable the transfer (via rsync's own protocol) went at about 1.2 MB/sec, which works out to only about 10 Mbit/sec. My machine is a Thinkpad T41p. I think it's got gigabit ethernet 'cause that's what I read online, but that transfer speed isn't even close to 100 Mbit, let alone gigabit.
Repro: plugging a Mac laptop into the same cable managed to read files at maybe 30 MB/sec. (Memory hazy, but writing went at maybe 10-15 MB/sec.)
Local hardware: curiously, iostat showed my disk reading only tens of KB per second, so I guess that reading is broken somehow. Catting another big file to /dev/null while the transfer ran didn't slow it down, so the disk doesn't seem to be the bottleneck. CPU was mostly idle.
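(For the record, this is roughly the shape of the disk check I mean; exact flags from memory:)

    # watch per-disk throughput in KB/sec, refreshing every 2 seconds,
    # while the rsync runs in another terminal
    $ iostat -k 2
    # separately: read an unrelated big file to see whether the disk
    # can go faster than the network transfer is going
    $ cat /some/other/big/file > /dev/null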
Network configuration: mii-diag (does that work for all cards, or just some specific chipset?) indicated 100 Mbit full duplex, but it doesn't seem to know about gigabit, so maybe it's confused. (And as pointed out above, we were getting closer to 10 Mbit anyway.) ethtool claimed a "speed" of 1000 Mbit, but it's unclear whether that was the actual negotiated wire speed or just a setting that caps the speed.
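(The ethtool output I was squinting at was approximately this, abridged and from memory:)

    $ sudo ethtool eth0
    Settings for eth0:
            ...
            Speed: 1000Mb/s
            Duplex: Full
            Auto-negotiation: on
            ...
            Link detected: yes

(If I understand right, when "Link detected" is yes, that Speed line is supposed to report what was actually negotiated, not a cap. But corrections welcome.)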
What am I missing?
(Now that I've written all that, my suspicion is that maybe it was accidentally going over the wireless? But NetworkManager did all of its wireless-disconnection, ethernet-DHCPing stuff when I plugged the cable in, so I didn't think to verify...)
I have found this annoying in the past, and the only way I've found to rerank the interfaces is to turn them on and off with ifconfig. Once the system knows the wireless is gone, it falls back to the wired connection. If there's some better way to do this, perhaps the lazyweb can help me too.
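A minimal sketch of the dance I mean (interface names assumed: eth0 wired, eth1 wireless):

    # take the wireless down so the kernel stops preferring it
    $ sudo ifconfig eth1 down
    # make sure the wired interface is up, and re-DHCP it if needed
    $ sudo ifconfig eth0 up
    $ sudo dhclient eth0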
The NAS box supports the rsync protocol, so no ssh involved.
(I don't know what other protocols this NAS box speaks. Maybe FTP.)
It's not much of a NAS if it doesn't support some kind of mountable protocol, whether CIFS or NFS or whatever. Give those a shot if you can't do FTP.
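For instance, something along these lines (share and export names are made up; ask the box what it actually serves first):

    # CIFS: list the shares the NAS exports, then mount one
    $ smbclient -L //nas -N
    $ sudo mount -t cifs //nas/share /mnt/nas -o guest
    # or NFS, if it speaks that instead
    $ showmount -e nas
    $ sudo mount -t nfs nas:/export /mnt/nas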
Here's an example from my laptop. I have an 802.11 interface at eth1, and a VPN tunnel to work via vpnc.
I've started an HTTP connection to livejournal.com for demonstration purposes. Let's determine how those IP packets are being routed.
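The output looked roughly like this (reconstructed from memory; local addresses illustrative, columns trimmed):

    $ netstat -tn | grep :80
    tcp   0   0   192.168.202.100:45827   204.9.177.18:80   ESTABLISHED

    $ netstat -rn
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   Iface
    192.168.202.0   0.0.0.0         255.255.255.0   U       eth1
    192.168.0.0     0.0.0.0         255.255.0.0     U       tun0
    0.0.0.0         192.168.202.1   0.0.0.0         UG      eth1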
Consider the Foreign Address column of the netstat output. The HTTP connection is to 204.9.177.18. Now look at the routing table printed by netstat -rn. None of the entries match this IP specifically, so the kernel falls back to the default route (the entry for 0.0.0.0), which tells it to send the packets to my wi-fi gateway at 192.168.202.1 (and helpfully tells me that it will use eth1 to do so).
By comparison, let's look at a TCP stream going over the VPN tunnel.
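Say it's an ssh session to a machine on the work network. The addresses here are hypothetical; the point is a destination inside 192.168.0.0/16 but outside 192.168.202.0/24:

    $ netstat -tn | grep :22
    tcp   0   0   192.168.4.200:53124   192.168.4.10:22   ESTABLISHED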
Looking back at the route table, we see that this destination IP matches the route 192.168.0.0/16. Note that it does not match the route 192.168.202.0/24, because the third octet differs (and the third octet is covered by the /24 netmask). The kernel prefers the most specific (longest-prefix) match, so it will use the tun0 interface for that IP rather than the default route.
Now, it's possible that you have multiple default routes, and so on. In that case it can get more interesting (and often the easiest way to get your desired behavior is to just ifdown the interface you don't want to use).
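For example (route syntax from memory; check your man page):

    # see whether more than one default route is installed
    $ netstat -rn | grep '^0.0.0.0'
    # the blunt fix: take down the interface you don't want
    $ sudo ifdown eth1
    # or delete just that interface's default route
    $ sudo route del default dev eth1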
I had serious problems for a while with a Samba file share on FreeBSD that was causing my writes to the system to run at under 100 Kbps, on an 802.11b WiFi network otherwise capable of ~5 Mbps of TCP traffic. It turned out to be just some screwy smb.conf settings.
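I don't remember the exact culprit, but it was in the neighborhood of the socket options line; for reference, the stock tuning advice from the Samba lists of that era looked something like:

    # in smb.conf, under [global]
    socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192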
I recall there can be problems with gigabit ethernet autonegotiation that totally kill performance. If possible, verify that both sides of the link have negotiated the same speed and duplex settings.
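On the Linux end, checking and forcing that looks something like this (the switch or NAS end has its own knobs):

    # see what actually got negotiated
    $ sudo ethtool eth0
    # re-run autonegotiation
    $ sudo ethtool -s eth0 autoneg on
    # or, as a test, pin speed and duplex explicitly
    $ sudo ethtool -s eth0 speed 100 duplex full autoneg off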
Also, were you transferring big files or lots of small files? I wouldn't expect high throughput if you're copying, say, a giant source tree from a slow laptop hard drive.
I was transferring just one large file.