The parameters I used were "spray -l 8828 -c 100000 -d 1 10.10.10.10". This command sends 100000 packets of 8828 bytes each, with a 1 millisecond delay between packets. 8828 bytes was literally the largest packet I could send - down to the byte - without experiencing packet loss. The 1 millisecond delay was also necessary to avoid drops; without it the hardware simply couldn't cope.
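If you want to find that limit on your own hardware, a crude sweep like the one below narrows it down. This is just an illustrative sketch - the length range and the reduced packet count are my own choices, not what I actually ran:

# Try increasing packet lengths and watch for the first size that reports drops
for len in 8820 8824 8828 8832 8836; do
    echo "== length $len =="
    spray -l "$len" -c 10000 -d 1 10.10.10.10
done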
CAT-5e
Run #1:
sending 100000 packets of length 8828 to 10.10.10.10...
in 9.20 seconds elapsed time, 0 packets (0.00%) dropped.
Sent: 10866 packets/sec, 95932.7 KBytes/sec
Recvd: 10803 packets/sec, 95369.5 KBytes/sec
Run #2:
sending 100000 packets of length 8828 to 10.10.10.10...
in 9.17 seconds elapsed time, 0 packets (0.00%) dropped.
Sent: 10901 packets/sec, 96236.3 KBytes/sec
Recvd: 10837 packets/sec, 95669.5 KBytes/sec
Run #3:
sending 100000 packets of length 8828 to 10.10.10.10...
in 9.16 seconds elapsed time, 0 packets (0.00%) dropped.
Sent: 10914 packets/sec, 96350.9 KBytes/sec
Recvd: 10849 packets/sec, 95782.7 KBytes/sec
CAT-6
Run #1:
sending 100000 packets of length 8828 to 10.10.10.10...
in 9.11 seconds elapsed time, 0 packets (0.00%) dropped.
Sent: 10976 packets/sec, 96900.9 KBytes/sec
Recvd: 10911 packets/sec, 96326.6 KBytes/sec
Run #2:
sending 100000 packets of length 8828 to 10.10.10.10...
in 9.17 seconds elapsed time, 0 packets (0.00%) dropped.
Sent: 10903 packets/sec, 96252.9 KBytes/sec
Recvd: 10838 packets/sec, 95685.9 KBytes/sec
Run #3:
sending 100000 packets of length 8828 to 10.10.10.10...
in 9.24 seconds elapsed time, 0 packets (0.00%) dropped.
Sent: 10821 packets/sec, 95536.1 KBytes/sec
Recvd: 10758 packets/sec, 94977.2 KBytes/sec
As you can see, CAT-6 didn't give me a performance boost, probably because other bottlenecks are preventing better speeds. The bottlenecks might be anywhere from the router to the ethernet controllers or their drivers. So there you have it: CAT-5e should serve you just as well as CAT-6 unless you have hardware capable of reaching optimal Gigabit ethernet performance, which is unlikely unless you buy very expensive gear. The theoretical limit of Gigabit ethernet is about 125MB/s; on my LAN I only reached about 96MB/s. The ethernet controllers in my machines were integrated on the motherboard, which is what most people use, but you could spend some cash on expensive Intel adapters with dedicated processors for packet processing. That would probably give you a boost - but few people would do this, and I doubt many would even go as far as I did and patch the via-velocity driver (which is known to suck ass).
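For reference, here's the quick arithmetic behind those two figures - the 125MB/s ceiling is just the line rate divided by eight bits per byte, and framing overhead eats a bit more in practice:

# 1 Gbit/s / 8 = 125 MB/s theoretical ceiling; my 96 MB/s is roughly 77% of that
awk 'BEGIN { printf "%.0f MB/s ceiling, %.1f%% achieved\n", 1000/8, 96/125*100 }'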
Even if you do spend some extra cash on high-end ethernet adapters, you still have to account for disk I/O, which will also be a limiting factor during file transfers (it wasn't in my test, because spray neither reads from nor writes to disk). I was only able to reach about 50MB/s during a file transfer, roughly half of the 96MB/s I got using spray, and that's probably because of the disk I/O bottleneck on both ends.
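You can verify a disk bottleneck like that with a rough sequential test. A sketch using GNU dd - the file name and sizes are arbitrary, and the cache-drop step needs root:

# Sequential write; conv=fdatasync makes dd flush to disk before reporting the rate
dd if=/dev/zero of=./ddtest bs=1M count=1024 conv=fdatasync
# Drop the page cache first, otherwise the read test just measures RAM
sync && echo 3 > /proc/sys/vm/drop_caches
# Sequential read
dd if=./ddtest of=/dev/null bs=1M
rm ./ddtest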
Update1: I conducted a new test (on CAT-6) in which I replaced the Linksys router with a Netgear Gigabit switch. Now I get about 113MB/s and the test finishes about 2 seconds faster, down from about 9 to 7 seconds for 100000 packets of length 8828. I never re-tested with CAT-5e to see whether I'd exceeded that cable's limit. Here's the output:
sending 100000 packets of length 8828 to 10.10.10.10...
in 7.84 seconds elapsed time, 0 packets (0.00%) dropped.
Sent: 12760 packets/sec, 112645.5 KBytes/sec
Recvd: 12851 packets/sec, 113451.6 KBytes/sec
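As a quick cross-check of spray's math (assuming its KBytes are 1000-byte units, which is what makes the figures line up):

# packets/sec times bytes/packet, reported in 1000-byte KBytes
awk 'BEGIN { printf "%.1f KBytes/sec\n", 12760 * 8828 / 1000 }'   # ~112645, in line with the Sent rate above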
So the conclusion still stands: don't worry about CAT-5e vs CAT-6, because there are almost certainly bigger bottlenecks elsewhere.
Update2: This article summed it up very well.
Update3 (2016-09-20): I've done a lot more testing since I first published this post, and since then I've bought more powerful hardware. My router is now a Ubiquiti EdgeRouter PoE and the switch is an HP 1820-8G (Layer 2) - a convenient accessory to my HP MicroServer Gen8 (dual Broadcom adapters) running CentOS 7 with LACP trunked ports. My client computer is a dual-processor HP Z820 Workstation with integrated Intel adapters (only one was used in testing). This time around I did the benchmarks with iperf3 rather than the aging spray, and I used iperf's --format parameter to get the results in KBytes to make comparison with the old results easier.
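A minimal invocation along those lines would be the following - treat it as a sketch rather than my exact command line, though -f K (the short form of --format) is what gives KBytes/sec:

# On the server (10.10.10.10):
iperf3 -s
# On the client:
iperf3 -c 10.10.10.10 -f K

Here's the output: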
Connecting to host 10.10.10.10, port 5201
[ 4] local 10.10.10.193 port 49270 connected to 10.10.10.10 port 5201
[ ID] Interval           Transfer     Bandwidth
[ 4]   0.00-1.00   sec   112 MBytes  114144 KBytes/sec
[ 4]   1.00-2.02   sec   115 MBytes  115755 KBytes/sec
[ 4]   2.02-3.00   sec   112 MBytes  115820 KBytes/sec
[ 4]   3.00-4.00   sec   113 MBytes  115781 KBytes/sec
[ 4]   4.00-5.00   sec   113 MBytes  115702 KBytes/sec
[ 4]   5.00-6.00   sec   113 MBytes  115769 KBytes/sec
[ 4]   6.00-7.00   sec   113 MBytes  115678 KBytes/sec
[ 4]   7.00-8.00   sec   113 MBytes  115793 KBytes/sec
[ 4]   8.00-9.01   sec   113 MBytes  115678 KBytes/sec
[ 4]   9.01-10.00  sec   113 MBytes  115769 KBytes/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[ 4]   0.00-10.00  sec  1.10 GBytes  115589 KBytes/sec  sender
[ 4]   0.00-10.00  sec  1.10 GBytes  115589 KBytes/sec  receiver
iperf Done.
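To relate that to the Mbit figures below (assuming iperf3's K is a 1024-byte unit, which is my reading of its byte-based formats):

awk 'BEGIN { printf "%.0f Mbit/s\n", 115589 * 1024 * 8 / 1000000 }'   # ~947, i.e. just about 950Mbit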
I've managed to improve performance a little by getting better hardware, and the cabling is still pretty much the same (CAT-6). In my first round of tests I reached about 750Mbit, the second time about 880Mbit, and in this latest test just about 950Mbit. At this point there isn't all that much more I can do, and the performance gains get smaller and smaller. At work I did rudimentary Gigabit performance testing over CAT-5 and managed to reach about 960Mbit (albeit with high-end networking gear, far better than what I have at home), so even cables that old can still stand up very well today. This time I did get around to testing a CAT-5e cable with my current hardware, and the results are as expected:
Connecting to host 10.10.10.10, port 5201
[ 4] local 10.10.10.193 port 49342 connected to 10.10.10.10 port 5201
[ ID] Interval           Transfer     Bandwidth
[ 4]   0.00-1.01   sec   114 MBytes  114619 KBytes/sec
[ 4]   1.01-2.01   sec   113 MBytes  115513 KBytes/sec
[ 4]   2.01-3.01   sec   113 MBytes  115769 KBytes/sec
[ 4]   3.01-4.01   sec   113 MBytes  115641 KBytes/sec
[ 4]   4.01-5.01   sec   113 MBytes  115690 KBytes/sec
[ 4]   5.01-6.01   sec   113 MBytes  115641 KBytes/sec
[ 4]   6.01-7.01   sec   113 MBytes  115641 KBytes/sec
[ 4]   7.01-8.01   sec   113 MBytes  115769 KBytes/sec
[ 4]   8.01-9.01   sec   113 MBytes  115641 KBytes/sec
[ 4]   9.01-10.00  sec   113 MBytes  115641 KBytes/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[ 4]   0.00-10.00  sec  1.10 GBytes  115555 KBytes/sec  sender
[ 4]   0.00-10.00  sec  1.10 GBytes  115555 KBytes/sec  receiver
iperf Done.

Pretty much the same, about 950Mbit. So there you have it, confirmed several times over: hardware is generally what gets you performance, not cables. Once you get close to the Gigabit speed limit there's also no point in investing in more expensive gear to push further, because you'll be paying through the nose just to get those extra few megabits - at that point the main arguments for buying better gear are features (VLAN support, for example) and long-term stability. Of course, 10Gbit ethernet is a different matter - old CAT-5 cabling should (in theory) start experiencing serious packet loss there, but for me that remains untested.