    Firewall Performance - Suitable for >100Mbit?

    Firewalling
      tulsaconnect last edited by

      Is anyone using pfSense in transparent firewall mode with traffic >100 Mbit/s?  We are looking to replace our aging PIX 520 platform with something with more grunt, and Cisco ASAs and Juniper SSGs are a bit pricey.  We need something that can scale up to 1 Gbit/s of firewall throughput (we won't use any other features besides the firewall function).  We can throw whatever hardware is necessary at the box to accomplish this level of performance.  Would appreciate comments from others who are using pfSense with this sort of load.

      TIA.

        Perry last edited by

        Just some links…

        http://forum.pfsense.org/index.php/topic,4087.msg28786.html#msg28786

        http://forum.pfsense.org/index.php/topic,4700.msg28594.html#msg28594

        http://forum.pfsense.org/index.php/topic,4684.msg29181.html#msg29181

        As I read those, some server-class hardware with Intel NICs should do it.

        Report back with your specs and performance results if you end up with a killer pfSense system  ;)

        /Perry
        doc.pfsense.org

          cmb last edited by

          Yep, you'll be good with a nice server-class box (I suggest a Xeon with an 800 MHz FSB or faster) with dual onboard gig NICs, preferably with an Intel chipset. Use PCI-X Intel cards for any additional ports you need.

            tulsaconnect last edited by

            We plan to use Dell hardware, and unfortunately the newer Dells use Broadcom NICs, which aren't as well supported under FreeBSD as Intel NICs.  I wonder how the onboard NICs are connected internally – e.g. at something better than PCI-X or PCIe bus speeds?

              cmb last edited by

              Well, it doesn't really matter which bus the onboard cards are connected to; both PCI-X and PCI-e are substantially faster than a gigabit NIC. I'm not really sure what they're connected to, but I would imagine they're on a dedicated PCI-X or PCI-e bus of some sort.

              Which Dells are you looking at that have Broadcom? I haven't had a chance to mess with the latest generation, but I've worked with a ton of PowerEdge hardware over the past 10 years. You might want to look at a used 2850; they have dual onboard Intel cards and would be more than adequate for your stated needs. Plus, last I looked they were dirt cheap on eBay. I bought a 2850 several months back for about $2K: a dual 3.4 GHz with 4 GB RAM and 15K RPM drives. If you'd rather go direct to a reseller, I've bought a lot of PE stuff from scsistuff.com and they always have plenty of Dell gear on hand. Their customer service has been excellent as well.

              I'd be very leery of anything with Broadcom NICs, as the FreeBSD driver really seems to have some issues. I run stock FreeBSD installs on PE 2550s with Broadcom gig NICs, and if they were mission critical I'd have to use different NICs. They're OK for a general-purpose administration server, as a few drops here and there aren't really hurting anything. Reports of instability with Broadcom NICs pop up on the freebsd-stable list at least once a month, it seems, and the problems never get resolved. Any Intel problems that come up get addressed by Intel employees; Broadcom doesn't seem to care about FreeBSD issues.
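
              If you want to see which driver a given box's onboard NICs attach to before committing to it, a quick check on FreeBSD looks something like this (a minimal sketch; the grep pattern is only illustrative, and Broadcom parts typically attach to bge or bce while Intel gigabit parts attach to em):

                  # list PCI NICs with the driver each one attached to (em = Intel, bge/bce = Broadcom)
                  pciconf -lv | grep -E '^(em|bge|bce)[0-9]'

                  # or just list the interface names the kernel created
                  ifconfig -l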

                tulsaconnect last edited by

                The PE 1950/2950s and 850/860s all use integrated Broadcom NICs (as well as the 1425SCs, I think).  We run a data center, so we have lots of 1850/2850s still available for use, which is probably what I will end up using, but the 2950/1950s are a lot faster and use less power, so they would be preferable if it weren't for the NIC issue.

                  Cry Havok last edited by

                  @tulsaconnect:

                  We plan to use Dell hardware, and unfortunately the newer Dells use Broadcom NICs, which aren't as well supported under FreeBSD as Intel NICs.  I wonder how the onboard NICs are connected internally – e.g. at something better than PCI-X or PCIe bus speeds?

                  From discussions with somebody I know, you'll find you get better throughput with add-in Intel Gbit cards even on boxes with onboard Intel Gbit.  He wasn't sure why, but even when the add-in cards had the same chips as the onboard cards he still saw better performance (beyond 100 Mbit/s, I don't remember the exact values) with the add-in cards.

                    tulsaconnect last edited by

                    Interesting – I wonder if PCI-X vs PCI-e makes any difference.  I've not tried any PCI-e NICs in FreeBSD 6.2 yet.

                      cmb last edited by

                      @Cry:

                      From discussions with somebody I know, you'll find you get better throughput with add-in Intel Gbit cards even on boxes with onboard Intel Gbit.  He wasn't sure why, but even when the add-in cards had the same chips as the onboard cards he still saw better performance (beyond 100 Mbit/s, I don't remember the exact values) with the add-in cards.

                      That could be true on certain hardware, but I very much doubt it's true on most hardware.

                      @tulsaconnect:

                      Interesting – I wonder if PCI-X vs PCI-e makes any difference.  I've not tried any PCI-e NICs in FreeBSD 6.2 yet.

                      Unlikely, though I don't have the equipment or the capability to test anything that big. PCI-X bus speed is 8 Gbps, and PCI-e is 8 times that. Either way, that's more than a gig NIC, or even several on a single bus, is going to be able to use.
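
                      (Back-of-the-envelope, just to show where that figure comes from: 64-bit PCI-X at 133 MHz moves roughly 64 × 133 × 10^6 ≈ 8.5 Gbit/s across the whole bus, while a single gigabit NIC needs at most 2 Gbit/s even at full duplex, so several NICs can share one bus before it becomes the bottleneck.)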

                      @tulsaconnect:

                      The PE 1950/2950s and 850/860s all use integrated Broadcom NICs (as well as the 1425SCs, I think).  We run a data center, so we have lots of 1850/2850s still available for use, which is probably what I will end up using, but the 2950/1950s are a lot faster and use less power, so they would be preferable if it weren't for the NIC issue.

                      I think your best bet would be an 1850 or 2850. The speed differential likely won't be measurable unless you need to stick a bunch of gig cards in one box. I seem to recall people having issues with the x9xx series PE boxes, and there was recently a thread on freebsd-stable saying the driver for the SAS controller in that series has some performance issues. If you try one, let us know how it goes.

                        tulsaconnect last edited by

                        Yes, the SAS controller on the x9xx series has had issues, but the RAID controller (PERC 5/i) seems well supported.  I'm running several 1950s (2 x dual-core 51xx) on 6.2-RELEASE with no issues under pretty heavy load (used as spam/virus scanners for inbound mail).  I've ordered a half dozen PCI-X Intel Pro/1000 MT dual-port cards which I will throw into a 1950 with 2 x 5160 CPUs and 4 GB RAM for my testing.  Any suggestions on a stress tool to measure raw throughput?  (Something in the FreeBSD ports collection would be preferred.)

                          cmb last edited by

                          I like netperf best; iperf also does well.

                          /usr/ports/benchmarks/netperf
                          /usr/ports/benchmarks/iperf
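
                          For a basic end-to-end throughput test through the firewall, something like the following works (a minimal sketch; the 30-second duration, the 4 parallel streams, and the <server-ip> placeholder are just illustrative):

                              # on a host behind the firewall, start the server side
                              iperf -s

                              # on a host on the far side, push traffic through the firewall
                              iperf -c <server-ip> -t 30 -P 4

                              # netperf equivalent: run netserver on the first host, then
                              netperf -H <server-ip> -t TCP_STREAM -l 30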

                            jtfinley last edited by

                            @Cry:

                            @tulsaconnect:

                             We plan to use Dell hardware, and unfortunately the newer Dells use Broadcom NICs, which aren't as well supported under FreeBSD as Intel NICs.  I wonder how the onboard NICs are connected internally – e.g. at something better than PCI-X or PCIe bus speeds?

                             From discussions with somebody I know, you'll find you get better throughput with add-in Intel Gbit cards even on boxes with onboard Intel Gbit.  He wasn't sure why, but even when the add-in cards had the same chips as the onboard cards he still saw better performance (beyond 100 Mbit/s, I don't remember the exact values) with the add-in cards.

                             From what I was told by an Intel engineer, the onboard NICs use shared resources and rely heavily on the CPU, whereas an add-in NIC usually has its own timer chips, etc.
