Low throughput on ESXi 6.0 Update 3



  • This picture was taken when no file was transferring, so what is the problem? With this hardware, why is the throughput this low?



  • @mahdi87_gh:

    This picture was taken when no file was transferring, so what is the problem? With this hardware, why is the throughput this low?

    Probably:

    • bad configuration
    • bad hardware
    • bad software


  • The hardware is good:
    CPU: 8 cores @ 2.5 GHz
    RAM: 8 GB

    The configuration is two VLANs with allow any-to-any rules.
    Can you tell me what your speed test results are?



  • My speed is 1Gbit/s if I do that


  • Rebel Alliance Global Moderator

    So take your internet/ISP out of it… Do a speed test from, say, a VLAN-to-VLAN client, and then put something on the pfSense WAN where you're going to NAT, etc. And do some testing with an iperf client to a server, etc.

    For all we know you're running through a VPN and it sucks..

    What NICs do you actually have in your ESXi host?  How are your physical NICs connected to the physical network, and how are your vswitches configured?  Are you doing any shaping on the port groups or vswitches?

    My ESXi host is old.. an HP N40L with cheap NICs.  Not even using vmxnet3 - used e1000 because vmxnet3 had problems reporting a duplex mismatch with CDP and LLDP..  And it could do over 200Mbps LAN to LAN.. and about 120 down from the internet.

    Going to need way more info if you want help figuring out where your problem is.  On a side note - why are you running 6.0?  6.5 has been out a long time.. I never understand why people don't upgrade.  I can understand not jumping on an update day one, etc.  Update 3 came out in Feb of 2017.. so it's a year old; the current build is 6921384 on 6.0.

    Current for 6.5 is 7388607, which even my very long in the tooth HP N40L MicroServer runs without any problems..


  • Netgate Administrator

    Show us the system loading when you are pushing the maximum throughput.

    It's better to use top at the command line:

    top -aSH
    

    Also you stated 60MB/s. For clarity, that's 60 megabytes per second? 480Mbps, right?

    Steve
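The megabytes/megabits distinction above matters for every number in this thread; the conversion is just a factor of 8, as a quick sketch:

```python
# Convert a transfer rate reported in megabytes per second (MB/s)
# to megabits per second (Mbps): 1 byte = 8 bits.
def mbytes_to_mbits(mbytes_per_s: float) -> float:
    return mbytes_per_s * 8

# The 60 MB/s reported in this thread works out to 480 Mbps.
print(mbytes_to_mbits(60))  # 480.0
```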



  • Hi guys, I don't know what's wrong, so I decided to change hardware. Now I'm running pfSense on an HP DL380 G6 server with 4 links in a lagg (LACP), with two VLAN interfaces defined on it: 110 and 111.

    Now when I use a client in VLAN 111 and run iperf to VLAN 110, the maximum speed I get is 940Mbps, and meanwhile CPU usage rises to 40%. Right now the pfSense connection bandwidth is 4Gbps, and if we assume it splits between VLANs, a client should be able to send data at 2Gbps and the server should receive at the same speed. But my firewall throughput is no more than 940Mbps. Is pfSense able to go further than that?

    I need a firewall between my VLANs, nothing more. Do you know a better solution?

    Thank you all.



  • @stephenw10:

    Show us the system loading when you are pushing the maximum throughput.

    It's better to use top at the command line:

    top -aSH
    

    Also you stated 60MB/s. For clarity, that's 60 megabytes per second? 480Mbps, right?

    Steve

    Yes, it's 60 megabytes per second.


  • Rebel Alliance Global Moderator

    In a lagg,
    1+1+1+1 does not equal 4Gbps.

    It equals 1 and 1 and 1 and 1.. Really, in no scenario would a single client talking to a single server on the other side of the lagg be able to achieve anything more than 1.. and 940Mbps is really good for a gig connection.  You know the whole overhead thing with TCP ;)

    Now if you had a bunch of clients talking to a bunch of servers and your lagg was the uplink between switches, then yeah, you could see all 4 members of the lagg being loaded up..

    If you need more than 1Gbps - then you need 10GbE interfaces… Or do you have hardware that supports the new 802.3bz?
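The reason a single client/server pair can't exceed one member link can be sketched as a toy model: a lagg picks the member port by hashing fields of the flow (real switches hash MACs, IPs, sometimes ports; the addresses and hash below are made up for illustration), so the same pair always lands on the same 1Gbps port.

```python
# Toy model of lagg member selection: hash fields of the flow and use
# the result modulo the member count to pick a physical link. Real
# hash inputs vary by switch; this is only a sketch of the behavior.
def pick_member(src: str, dst: str, members: int = 4) -> int:
    return (hash((src, dst)) & 0xFFFF) % members

# The same client/server pair always maps to the same member link, so
# a single flow between them tops out at that one link's 1Gbps.
link = pick_member("10.0.111.5", "10.0.110.7")
assert all(pick_member("10.0.111.5", "10.0.110.7") == link for _ in range(100))
```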


  • Netgate Administrator

    Yes, the maximum single TCP transfer will still be ~1G (940Mbps).

    What iperf command are you using? Try using multiple parallel streams with -P 4.

    Can we see the top output?

    Steve


  • Rebel Alliance Global Moderator

    Even if you create multiple sessions with iperf, they would not take multiple paths across a lagg, unless your interfaces were doing some other sort of load balancing to force the connections over different physical paths.  In a typical lagg, all sessions from that client to the same destination go over the same physical path.


  • Netgate Administrator

    Mmm, good point. I assume this is LACP and not round-robin? (edit: yup)

    Test multiple clients then.

    Steve



  • Thanks for the answers; I'm not in the office right now. I'll test 4 different clients (I mean 4 sessions from different clients) to see if I can get 4x1Gbps or not.


  • Rebel Alliance Global Moderator

    That doesn't mean you will get connections over 4 paths on the other side (the server side)..

    4 in a lagg does not equal 4Gbps; it just means 4 different gig paths.. Distribution across the 4 different paths doesn't always come out even.. It depends on how your switch works out which physical path to use, and on the other end of that path if it's some server, etc..

    This has been a misunderstanding for a long time.. Not really sure where the misunderstanding got started, but it is widespread across the internet..

    A lagg is good for mitigating a failed port or a failed cable, etc., and it can give more aggregate bandwidth on an uplink between, say, switches.. But if you want more than 1GbE for a single flow, then you should use a faster connection: 2GbE fiber, 10GbE, 40GbE, etc.  Just because you have 4x 1GbE in a lagg/portchannel/etherchannel doesn't mean you're going to see 4GbE go across it..
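The "distribution doesn't always come out even" point can be illustrated with a sketch: hash 16 client-to-server flows (the addresses are made up, and CRC32 stands in for whatever hash a real switch uses) onto 4 member links and count how the flows land per link.

```python
from zlib import crc32

# Sketch: hash each (client -> server) flow onto one of four lagg
# members and tally the per-member flow counts. A hash-based lagg
# rarely balances flows perfectly evenly.
def member_for(flow: str, members: int = 4) -> int:
    return crc32(flow.encode()) % members

flows = [f"10.0.111.{c}->10.0.110.10" for c in range(1, 17)]  # 16 clients
load = [0, 0, 0, 0]
for flow in flows:
    load[member_for(flow)] += 1
print(load)  # flows per member link; the split depends entirely on the hash
```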



  • Thanks johnpoz, really useful information. In the lagg there are other options like round-robin and load balance; which one is better?  Right now my Cisco switch and pfSense configuration is LACP.
    I'm not looking to increase a single client session's bandwidth beyond 1Gbps, but I have 16 users who will copy files on the network a lot.


  • Rebel Alliance Global Moderator

    So please draw out how your clients get to the server… What are the physical path(s) traffic will flow through?  Is pfSense routing between a client VLAN and a server VLAN?

    You do understand that multiple VLANs on the same physical path is automatically a hairpin, right?  And routing from one VLAN to another VLAN that is on the same physical NICs is a double hairpin, etc.

    So please draw up the full physical connections from a client to a server..  What Cisco switch(es)?  pfSense is a VM, so how do you have this configured in pfSense, and how are the physical NICs connected to the vswitch?



  • @johnpoz:

    So please draw out how your clients get to the server… What are the physical path(s) traffic will flow through?  Is pfSense routing between a client VLAN and a server VLAN?

    You do understand that multiple VLANs on the same physical path is automatically a hairpin, right?  And routing from one VLAN to another VLAN that is on the same physical NICs is a double hairpin, etc.

    So please draw up the full physical connections from a client to a server..  What Cisco switch(es)?  pfSense is a VM, so how do you have this configured in pfSense, and how are the physical NICs connected to the vswitch?

    All of my switches are Cisco; the core switch is a 3850X with all ports 10G, with 2x10G uplinks to another 3850X switch that my ESXi host is connected to.
    I figured out I hadn't changed my ESXi host's management network to the newly installed 10G Ethernet adapter. So the first thing I did today was switch the management network to the new Ethernet card. Then I gave the pfSense VM a vmxnet3 adapter with VLAN ID 4095 (a trunk), created my VLANs and made any-to-any rules. Now when I test with iperf, I get about 3Gbps.


  • Rebel Alliance Global Moderator

    Testing from what to what, vmxnet3 to vmxnet3 over a vswitch?

    If all your ports are 10GbE, where and why would you have 4-port 1GbE laggs?



  • @johnpoz:

    Testing from what to what, vmxnet3 to vmxnet3 over a vswitch?

    If all your ports are 10GbE, where and why would you have 4-port 1GbE laggs?

    That was a physical server with 4 NICs.
    Yes, now I'm on a VMware VM with one vmxnet3 adapter; no need for laggs.

    But now I have another problem!
    I have defined two VLANs and allowed any to any on both. I can ping clients from one VLAN to another, but it's just ping!! Nothing else works: not HTTP, HTTPS, or even SMB.
    What could be wrong? Is it a driver issue? I disabled hardware large receive offload, but no luck.


  • Rebel Alliance Global Moderator

    Did you install the VMware tools?  Those got broken a while back.. Just use the Open-VM-Tools package.

    Without seeing your rules it's impossible to say what your problem is - maybe you "think" you allowed any-to-any when all you allowed was ICMP..  Maybe the host you're trying to ping has a host firewall - this is a very common user error.

    To be honest, is FreeBSD 11 even on the supported list for 6.0?  Pretty sure 11.x isn't supported until 6.5.



  • @johnpoz:

    Did you install the VMware tools?  Those got broken a while back.. Just use the Open-VM-Tools package.

    I installed Open-VM-Tools.

    @johnpoz:

    Without seeing your rules it's impossible to say what your problem is - maybe you "think" you allowed any-to-any when all you allowed was ICMP..  Maybe the host you're trying to ping has a host firewall - this is a very common user error.

    The host firewall is completely off,
    and about the rules, I'll attach a photo.

    @johnpoz:

    To be honest, is FreeBSD 11 even on the supported list for 6.0?  Pretty sure 11.x isn't supported until 6.5.

    So I have to upgrade the ESXi host?


  • Rebel Alliance Global Moderator

    "So I have to upgrade the ESXi host?"

    You should have done that a long time ago… What reason do you have for running 6.0?  6.5 came out in Nov 2016... Shoot, even your Update 3 is over a year old..

    While I am not saying 11.x will not work on 6.0... I just do not see a reason that you would not update.  Good luck calling for support from a company that doesn't list X on their compatibility list..

    And while it might work just fine - if so, why has the company not updated their list to say 11.x is supported on 6.0 Update 3?


  • Netgate Administrator

    Yeah, you should be running 6.5, but as johnpoz says it may work fine in 6.0.

    If you are seeing ICMP work but not TCP, it is often an asymmetric routing issue. Is there some other path between those VLANs?

    It could also be a packet size issue. Try pinging with much larger packets.

    Steve
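For the large-packet ping test suggested above, it helps to know the biggest ICMP payload that fits in a standard 1500-byte Ethernet MTU without fragmentation; a quick calculation (the ping invocation in the comment is the common Linux form, shown only as an example):

```python
# Largest unfragmented ICMP echo payload on a standard Ethernet link:
# the 1500-byte MTU minus the IPv4 header (20 bytes) and the ICMP
# header (8 bytes).
MTU = 1500
IPV4_HEADER = 20
ICMP_HEADER = 8
max_payload = MTU - IPV4_HEADER - ICMP_HEADER
print(max_payload)  # 1472 -> e.g. "ping -s 1472 <host>" on Linux
```

If pings up to 1472 bytes work but anything larger (with fragmentation disallowed) fails, the problem is a path MTU mismatch rather than firewall rules.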