Reduced performance LAN to LAN



  • With a default install, should I expect a drop in speed LAN to LAN?
    I'm almost 99% certain that I did a WAN-to-LAN test with these NICs some time ago and got 43000 KB/s.

    System info
    Model: AMD Athlon™ processor
    CPU speed: 1.33 GHz
    RAM: 512 MB
    em0: Intel(R) PRO/1000 Network Connection Version - 6.2.9 PCI
    em1: Intel(R) PRO/1000 Network Connection Version - 6.2.9 PCI

    Output from vmstat (CPU load ~50% under test)
     procs    memory        page                       disk    faults          cpu
     r b w    avm     fre   flt  re  pi  po   fr  sr  ad0     in    sy     cs  us sy id
     0 4 0  98108  413100   546   0   0   0  500   0    8  16710   640  23190   1 54 45
     0 3 0  97312  413372   304   0   0   0  299   0    0  16257   337  22760   1 45 55
     0 3 0  97312  413372     0   0   0   0    0   0    0  16400    60  22997   0 51 49
     0 3 0  97312  413372    44   0   0   0   38   0    0  16396   130  24321   0 48 52
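
    (For reference, a sketch of how numbers like these can be collected on FreeBSD; the 5-second interval is just an example:)

        # system-wide stats every 5 seconds while the benchmark runs
        vmstat 5
        # per-device interrupt counts and rates, handy for spotting NIC interrupt load
        vmstat -i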

    Test setup
                                 pfSense
    WinXP ---switch---+---em0  em1---+---switch--- WinXP

    WinXP <-> pfSense (either side):     43000 KB/s  (NetIO)
    WinXP <-> WinXP through pfSense:     25000 KB/s  (NetIO)
    WinXP <-> WinXP through pfSense:     15000 KB/s  (transferring a big file with Total Commander)
                                         Is Bill Gates really that bad :)
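
    (In case anyone wants to reproduce this, the NetIO runs looked roughly like the sketch below; the flags follow NetIO's usual usage, and <server-ip> is a placeholder:)

        # on the receiving box: NetIO in server mode, TCP
        netio -s -t
        # on the sending box: run the TCP benchmark against the server
        netio -t <server-ip>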

    http://pfsense.blogspot.com/2007/06/more-on-network-performance.html
    Could those things be related?



  • Which pfSense version are you using?

    You will see some reduction in performance when you drop pfSense in the middle, versus using a crossover cable between the two test machines. How much depends on your hardware and config. You have a relatively weak machine, and I'm sure both NICs are on the same PCI bus.

    Do you have polling enabled? (If not, don't enable it; it's slower.)
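
    One way to check on a FreeBSD 6.x-based box like this (a sketch; on 6.x polling is enabled per interface, and the available sysctls can vary by release):

        # an enabled interface shows POLLING in its options flags
        ifconfig em0 | grep -i polling
        # list any polling-related sysctls that exist
        sysctl kern.polling 2>/dev/null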



  • Polling is disabled (some polling info on this site: http://taosecurity.blogspot.com/2006/09/freebsd-device-polling.html).

    The NICs are on different IRQs, btw.

    Yes, same PCI bus (I'll do some math on that tomorrow :) )

    Version: see my signature below
        |
        |
       \|/



  • I'm a big fan of Richard Bejtlich's blog and books (Scott and I met and had lunch with him at BSDCan 2005, and I'm scheduled to meet up with him again in August to talk NSM and pfSense), and that's great info, but that polling post doesn't apply to firewalling. He's looking at it from a client and server perspective; firewalling is much different. I've done extensive testing over the last couple of weeks that consistently shows significant performance drops (25-30+%) in firewalling scenarios with polling enabled, regardless of the hz setting. That's beside the point though, since you aren't using it, but trust me, I know what I'm talking about.  ;)

    Sorry, I didn't see your version because sigs don't show up in the topic summary when you're replying.

    You can push roughly 1 Gbps across a PCI bus, depending on what other devices share the bus, and it probably differs somewhat from one machine to another. What you're seeing is about 200 Mbps, which is 400 Mbps on the bus (brought in one NIC, pushed out the other). I don't know if your processor is fast enough to max out the bus. I'm guessing that when you're pushing ~200 Mbps through the box, its CPU is pegged, and it's probably so interrupt loaded that the webGUI, console, and SSH are unresponsive. Is that the case?
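
    Spelled out, the bus math looks roughly like this (a sketch, assuming classic 32-bit/33 MHz PCI with nothing else heavy on the bus):

        # theoretical PCI bandwidth: 32 bits x 33 MHz ~= 1056 Mbps (~133 MB/s)
        echo $(( 32 * 33 ))           # ~1056 Mbps theoretical
        # forwarded traffic crosses the bus twice (in em0, out em1):
        echo $(( 25000 * 8 / 1000 ))  # ~200 Mbps forwarded -> ~400 Mbps on the bus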

    What you're seeing on traffic going through the box is more than half of what you get between an XP box and pfSense directly, even though forwarded traffic is really double the network load. I think you're doing quite well at 200 Mbps for a relatively weak processor.

    When you connect the XP boxes directly with a crossover cable, what do you see? I don't see anywhere that you've tried that, and it would be more indicative of the performance potential than what you can get between each side and pfSense.



  • Quote: I'm a big fan of Richard Bejtlich's blog and books (Scott and I met and had lunch with him at BSDCan 2005, and I'm scheduled to meet up with him again in August to talk NSM and pfSense), and that's great info, but that polling post doesn't apply to firewalling. He's looking at it from a client and server perspective; firewalling is much different. I've done extensive testing over the last couple of weeks that consistently shows significant performance drops (25-30+%) in firewalling scenarios with polling enabled, regardless of the hz setting. That's beside the point though, since you aren't using it, but trust me, I know what I'm talking about.  ;)

    I do trust you :) Yes, it seems to be an interesting blog he has.

    Quote: You can push roughly 1 Gbps across a PCI bus, depending on what other devices share the bus, and it probably differs somewhat from one machine to another. What you're seeing is about 200 Mbps, which is 400 Mbps on the bus (brought in one NIC, pushed out the other). I don't know if your processor is fast enough to max out the bus. I'm guessing that when you're pushing ~200 Mbps through the box, its CPU is pegged, and it's probably so interrupt loaded that the webGUI, console, and SSH are unresponsive. Is that the case?

    Only a slight delay in the GUI. I tried with a somewhat faster PC, an AMD Athlon XP 2400+ (2 GHz), which raised the transfer rate to 32000 KB/s, still with CPU usage around 50%. Would something like the iostat command show where the CPU is maxed out?
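
    (A sketch of commands that might help narrow that down; iostat is mostly about disk I/O, while these focus on CPU and interrupt load:)

        # per-device interrupt counts and rates
        vmstat -i
        # live process view including kernel/system threads, to see where CPU time goes
        top -S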

    Quote: When you connect the XP boxes directly with a crossover cable, what do you see? I don't see anywhere that you've tried that, and it would be more indicative of the performance potential than what you can get between each side and pfSense.

    First I got 22000 KB/s; later it dropped to 15000 KB/s  :'( I'll blame it on M$ for now.



  • You did not say what chipset is being used, but back when we were testing various machines I found that the nVidia nForce2 chipset supporting the Athlon had very poor PCI utilization. Total throughput through Intel PCI Gigabit Ethernet cards was about half of the saturated PCI throughput of 90 MB/s. Your reported numbers are very similar to what we saw.
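
    If it helps, the chipset can be identified from FreeBSD itself (pciconf is in the base system; the grep is just one way to pick out the bridges):

        # list PCI devices with vendor/device names; the bridges identify the chipset
        pciconf -lv | grep -B3 -i bridge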



  • VIA 8235/8237 (Apollo KM400/KM400A) host-to-PCI bridge

