Low throughput on ESXi 6.0 Update 3
-
Hi,
I have pfSense running on my ESXi host, which is very powerful (HPE DL380 Gen9, 2x 2697 CPUs, 512 GB RAM). I am on the latest versions of pfSense and open-vm-tools. My problem is that at best I get around 60 MB/s throughput, but without the firewall my PC can use the full 1 Gb/s.
I am using vmxnet3 NICs.
I checked System -> Activity and I see all 8 CPU cores utilized around 100%. What could be wrong? Is there any known bug?
Thanks
-
What you are seeing is WCPU and 100% idle, not 100% usage. The problem isn't with pfSense.
-
With a load of 0.02, pfSense isn't doing shit.. It's like floating around the pool with an umbrella drink ;) The complete opposite of "working" ;)
-
This picture is from when no file is transferring. So what is the problem? With this hardware, why is throughput this low?
-
This picture is from when no file is transferring. So what is the problem? With this hardware, why is throughput this low?
Probably:
- bad configuration
- bad hardware
- bad software
-
The hardware is good:
CPU: 8 cores @ 2.5 GHz
RAM: 8 GB
The configuration is two VLANs and allow any-to-any!
Can you tell me your speed test results?
-
My speed is 1Gbit if I do that
-
So take your internet/ISP out of it… Do a speed test from, say, one VLAN client to another VLAN client, then put something on the pfSense WAN where you're going to NAT, etc., and do some testing with an iperf client to a server.
For all we know you're running through a VPN and it sucks..
What NICs do you actually have in your ESXi host? How are your physical NICs connected to the physical network, and how are your vswitches configured? Are you doing any shaping on the port groups or vswitches?
My ESXi host is old.. an HP N40L with cheap NICs. Not even using vmxnet3; I used e1000 because vmxnet3 had problems reporting a duplex mismatch with CDP and LLDP.. And it could do over 200 Mbps LAN to LAN, and about 120 Mbps down from the internet.
Going to need way more info if you want help figuring out where your problem is. On a side note, why are you running 6.0? 6.5 has been out a long time.. I never understand why people don't upgrade. I can understand not jumping on an update day one, etc., but Update 3 came out in February 2017, so it's a year old; the current build on 6.0 is 6921384.
The current build for 6.5 is 7388607, which even my very-long-in-the-tooth HP N40L MicroServer runs without any problems..
-
Show us the system loading when you are pushing the maximum throughput.
It's better to use top at the command line:
top -aSH
Also, you stated 60MB/s. For clarity, that's 60 megabytes per second, i.e. 480 Mbps, right?
Steve
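As a quick sanity check of the byte-to-bit conversion (purely illustrative, nothing from the thread itself):

```python
def mbytes_to_mbits(mbytes_per_s):
    """Convert a rate in megabytes/s to megabits/s (1 byte = 8 bits)."""
    return mbytes_per_s * 8

# The reported 60 MB/s works out to 480 Mbps, well under line-rate gigabit.
print(mbytes_to_mbits(60))  # 480
```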
-
Hi guys, I don't know what's wrong, so I decided to change hardware. Now I'm running pfSense on an HP DL380 G6 server with 4 links in a lagg (LACP), with two interface VLANs defined on it, 110 and 111.
Now when I use a client in VLAN 111 and run iperf to VLAN 110, the maximum speed I get is 940 Mb/s, and meanwhile CPU usage rises to 40%. Right now the pfSense connection bandwidth is 4 Gb/s, and if we assume it splits between the VLANs, a client should be able to send data at 2 Gb/s and the server should receive at the same speed, but my firewall throughput is no more than 940 Mb/s. Is pfSense able to go further than that?
I need a firewall between my VLANs, nothing more. Do you know a better solution?
Thank you all.
-
Show us the system loading when you are pushing the maximum throughput.
It's better to use top at the command line:
top -aSH
Also, you stated 60MB/s. For clarity, that's 60 megabytes per second, i.e. 480 Mbps, right?
Steve
Yes, it's 60 megabytes per second.
-
In a lagg,
1+1+1+1 does not = 4 Gbps. It equals 1 and 1 and 1 and 1.. Really, in no scenario would a single client talking to a single server on the other side of the lagg be able to achieve anything more than 1.. And 940 Mbps is really good for a gig connection. You know the whole overhead thing with TCP ;)
Now if you had a bunch of clients talking to a bunch of servers and your lagg was an uplink between switches, then yeah, you could see all 4 members of the lagg being loaded up..
If you need more than 1 Gbps, then you need 10 GbE interfaces… Or do you have hardware that supports the new 802.3bz?
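For what it's worth, the ~940 Mbps figure is almost exactly what framing overhead predicts. A back-of-the-envelope sketch, assuming a standard 1500-byte MTU and TCP timestamps enabled (the constants are textbook Ethernet/IP/TCP header sizes, not measured from this setup):

```python
# Maximum TCP goodput over gigabit Ethernet with a 1500-byte MTU.
# On the wire, each frame costs: 1500 payload + 14 Ethernet header + 4 FCS
# + 8 preamble + 12 inter-frame gap = 1538 bytes.
# Usable TCP payload per frame: 1500 - 20 (IP) - 20 (TCP) - 12 (timestamps) = 1448.
WIRE_BYTES_PER_FRAME = 1500 + 14 + 4 + 8 + 12   # 1538
TCP_PAYLOAD_PER_FRAME = 1500 - 20 - 20 - 12     # 1448

goodput_mbps = 1000 * TCP_PAYLOAD_PER_FRAME / WIRE_BYTES_PER_FRAME
print(round(goodput_mbps, 1))  # ~941.5, which iperf typically reports as ~940
```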
-
Yes, the maximum single TCP transfer will still be ~1 Gb (940 Mbps).
What iperf command are you using? Try using multiple parallel streams: -P 4.
Can we see the top output?
Steve
-
Even if you create multiple sessions with iperf, they would not take multiple paths across a lagg unless your interfaces were doing some other sort of load balancing to force the connections over different physical paths. In a typical lagg, all sessions from one client to the same destination go over the same physical path.
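To illustrate the point: a typical LACP switch picks the member port by hashing the flow's addresses, so every session between the same client/server pair lands on the same 1 Gb link. A toy sketch of that behavior (the `member_for_flow` helper and the SHA-256 hash are made up for illustration; real switches use vendor-specific hashes over MAC/IP/port fields):

```python
import hashlib

LAGG_MEMBERS = 4  # four 1 GbE ports in the lagg

def member_for_flow(src, dst):
    """Pick a lagg member port by hashing the src/dst address pair
    (a simplified stand-in for a switch's L3 load-balance hash)."""
    digest = hashlib.sha256(f"{src}->{dst}".encode()).digest()
    return digest[0] % LAGG_MEMBERS

# Ten iperf sessions between the same two hosts all hash to the same port,
# so a single client/server pair never exceeds one member's 1 Gb/s.
flows = [member_for_flow("10.0.111.5", "10.0.110.9") for _ in range(10)]
print(set(flows))  # a single member index
```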
-
Mmm, good point. I assume this is LACP and not round-robin? (edit: yup)
Test multiple clients then.
Steve
-
Thanks for the answers. I'm not in the office right now; I'll test 4 different clients (I mean 4 sessions from different clients) to see whether I can get 4x 1 Gb/s.
-
That doesn't mean you will get connections over 4 paths on the other end (server side)..
4 in a lagg does not = 4 Gb; it just means 4 different gig paths.. Even distribution across the 4 different paths doesn't always come out even.. It depends on how your switch works out which physical path to use, and on the other end of that path if it's some server, etc..
This has been a misunderstanding for a long time.. Not really sure where the misunderstanding got started, but it is widespread across the internet..
A lagg is good for mitigating a failed port or a failed cable, etc., and it can give more bandwidth on an uplink between, say, switches.. But if you want more than 1 GbE, then you should use a faster connection.. 2G fiber, 10 GbE, 40 GbE, etc. Just because you have 4x 1 GbE in a lagg/port-channel/etherchannel/etc doesn't mean you're going to see 4 GbE go across it..
-
Thanks johnpoz, really useful information. In the lagg there are other options like round-robin and load balance; which one is better? Right now my Cisco switch and pfSense are configured for LACP.
I'm not looking to increase a single client session's bandwidth beyond 1 Gb, but I have 16 users who copy files on the network a lot.
-
So please draw out how your clients get to the server… What physical path(s) will the traffic flow through? Is pfSense routing between a client VLAN and a server VLAN?
You do understand that multiple VLANs on the same physical path are automatically a hairpin, right.. And routing from one VLAN to another VLAN that is on the same physical NICs is a double hairpin, etc.
So please draw up the full physical connections from a client to a server.. What Cisco switch(es)? pfSense is a VM, so how do you have this configured in pfSense, and how are the physical NICs connected to the vswitch?
-
So please draw out how your clients get to the server… What physical path(s) will the traffic flow through? Is pfSense routing between a client VLAN and a server VLAN?
You do understand that multiple VLANs on the same physical path are automatically a hairpin, right.. And routing from one VLAN to another VLAN that is on the same physical NICs is a double hairpin, etc.
So please draw up the full physical connections from a client to a server.. What Cisco switch(es)? pfSense is a VM, so how do you have this configured in pfSense, and how are the physical NICs connected to the vswitch?
All of my switches are Cisco. The core switch is a 3850X with all 10G ports, with 2x 10G uplinks to another 3850X switch that my ESXi host is connected to.
I figured out I hadn't changed my ESXi host management network over to the newly installed 10G Ethernet adapter. So the first thing I did today was switch the management network to the new Ethernet card. Then I gave the pfSense VM a vmxnet3 NIC with VLAN ID 4095, which is a trunk, created my VLANs, and made any-to-any rules. Now when I test with iperf, I get about 3 Gb/s.