Netgate Discussion Forum

Firewall Performance - Suitable for >100Mbit?

Firewalling
12 Posts 5 Posters 5.2k Views
  • cmb
    last edited by Jun 10, 2007, 4:57 AM

    Yep, you'll be good with a nice server-class box (I suggest a Xeon with an 800 MHz FSB or faster proc) with dual onboard gig NICs, preferably an Intel chipset. Use PCI-X Intel cards for any additional ports you need.

    • tulsaconnect
      last edited by Jun 12, 2007, 12:44 AM

      We plan to use Dell hardware, and unfortunately the newer Dells are using Broadcom NICs, which aren't as well supported under FreeBSD as Intel NICs.  I wonder how the onboard NICs are connected internally – e.g., at something better than PCI-X or PCIe bus speeds?

      • cmb
        last edited by Jun 12, 2007, 2:57 AM

        Well, it doesn't really matter which bus the onboard cards are connected to; both PCI-X and PCIe are substantially faster than a gigabit NIC. I'm not sure exactly what they're connected to, but I'd imagine they're on a dedicated PCI-X or PCIe bus of some sort.

        Which Dells are you looking at that have Broadcom? I haven't had a chance to mess with the latest generation, but I've worked with a ton of PowerEdge hardware over the past 10 years. You might want to look at a used 2850; they have dual onboard Intel cards and would be more than adequate for your stated needs. Plus, last I looked they were dirt cheap on eBay. I bought a 2850 several months back for $2K: a dual 3.4 with 4 GB RAM and 15K RPM drives. If you'd rather go direct to a reseller, I've bought a lot of PE stuff from scsistuff.com and they always have plenty of Dell gear on hand. Their customer service has been excellent as well.

        I'd be very leery of anything with Broadcom NICs, as the FreeBSD driver really seems to have some issues. I run stock FreeBSD installs on PE 2550s with Broadcom gig NICs, and if they were mission-critical I'd have to use different NICs. They're OK for a general-purpose administration server, since a few drops here and there don't really hurt anything. Reports of instability with Broadcom NICs pop up on the freebsd-stable list at least once a month, it seems, and the problems never get resolved. Any Intel problems that come up get addressed by Intel employees; Broadcom doesn't seem to care about FreeBSD issues.

        • tulsaconnect
          last edited by Jun 12, 2007, 1:05 PM

          The PE 1950/2950s and 850/860s all use integrated Broadcom NICs (as well as the 1425SCs, I think).  We run a data center, so we have lots of 1850/2850s still available for use, which is probably what I will end up using, but the 2950/1950s are a lot faster and use less power, so they would be preferable if it weren't for the NIC issue.

          • Cry Havok
            last edited by Jun 12, 2007, 7:29 PM

            @tulsaconnect:

            We plan to use Dell hardware, and unfortunately the newer Dells are using Broadcom NICs, which aren't as well supported under FreeBSD as Intel NICs.  I wonder how the onboard NICs are connected internally – e.g., at something better than PCI-X or PCIe bus speeds?

            From discussions with somebody I know, you'll find you get better throughput with add-in Intel gigabit cards even on boxes with onboard Intel gigabit.  He wasn't sure why, but even when the add-in cards had the same chips as the onboard cards, he still saw better performance (beyond 100 Mbps; I don't remember the exact values) with the add-in cards.

            • tulsaconnect
              last edited by Jun 12, 2007, 7:30 PM

              Interesting – I wonder if PCI-X vs. PCIe makes any difference.  I haven't tried any PCIe NICs on FreeBSD 6.2 yet.

              • cmb
                last edited by Jun 13, 2007, 1:34 AM

                @Cry:

                From discussions with somebody I know, you'll find you get better throughput with add-in Intel gigabit cards even on boxes with onboard Intel gigabit.  He wasn't sure why, but even when the add-in cards had the same chips as the onboard cards, he still saw better performance (beyond 100 Mbps; I don't remember the exact values) with the add-in cards.

                That could be true on certain hardware, but I very much doubt it's true on most hardware.

                @tulsaconnect:

                Interesting – I wonder if PCI-X vs. PCIe makes any difference.  I haven't tried any PCIe NICs on FreeBSD 6.2 yet.

                Unlikely, though I don't have the equipment to test anything that big. PCI-X bus speed is roughly 8 Gbps, and PCIe is several times that. Either way, it's more than a gig NIC, or even several on a single bus, will ever be able to use.
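
                A quick back-of-the-envelope check of that bus arithmetic (the specific variants are assumptions, 64-bit/133 MHz PCI-X and a PCIe 1.0 x8 slot, not figures from the thread):

```python
# Rough theoretical bus bandwidths. Assumed variants: 64-bit/133 MHz
# PCI-X and a PCIe 1.0 x8 slot; actual boards differ.

gig_nic = 1.0                    # gigabit NIC line rate, Gbps

# PCI-X: 64-bit parallel bus at 133 MHz, shared by all devices on the bus
pci_x = 64 * 133.33e6 / 1e9      # -> ~8.5 Gbps

# PCIe 1.0: 2.5 GT/s per lane with 8b/10b encoding, per direction
pcie_x8 = 8 * 2.5 * (8 / 10)     # -> 16.0 Gbps for an x8 slot

print(f"PCI-X 64/133 : {pci_x:.1f} Gbps")
print(f"PCIe 1.0 x8  : {pcie_x8:.1f} Gbps")
print(f"gig NICs a PCI-X bus could saturate: {int(pci_x / gig_nic)}")
```

                Whatever the exact multiple, both buses dwarf a single gigabit NIC, which is the point being made here.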

                @tulsaconnect:

                The PE 1950/2950s and 850/860s all use integrated Broadcom NICs (as well as the 1425SCs, I think).  We run a data center, so we have lots of 1850/2850s still available for use, which is probably what I will end up using, but the 2950/1950s are a lot faster and use less power, so they would be preferable if it weren't for the NIC issue.

                I think your best bet would be 1850 or 2850. The speed differential likely won't be measurable unless you need to stick a bunch of gig cards in one box. I seem to recall people having issues with the x9xx series PE boxes, and there was recently a thread on freebsd-stable that the driver for the SAS controller in that series has some performance issues. If you try one, let us know how it goes.

                • tulsaconnect
                  last edited by Jun 15, 2007, 1:44 AM

                  Yes, the SAS controller on the x9xx series has had issues, but the RAID controller (PERC 5/i) seems well supported.  I'm running several 1950s (2 x dual-core 51xx) on 6.2-RELEASE with no issues under pretty heavy load (used as spam/virus scanners for inbound mail).  I've ordered a half dozen PCI-X Intel PRO/1000 MT dual-port cards, which I will throw into a 1950 with 2 x 5160 CPUs and 4 GB RAM for my testing.  Any suggestions on a stress tool to measure raw throughput?  (Something in the FreeBSD ports collection would be preferred.)

                  • cmb
                    last edited by Jun 19, 2007, 2:44 AM

                    I like netperf best; iperf also does well. Both are in ports:

                    /usr/ports/benchmarks/netperf
                    /usr/ports/benchmarks/iperf
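
                    A typical run looks something like this (the address 192.0.2.10 is a placeholder for the box under test; flags are classic iperf 2 and netperf usage):

```shell
# On the box under test (server side):
iperf -s                                     # iperf listens on TCP port 5001 by default
netserver                                    # netperf's companion daemon (port 12865)

# On the traffic generator (client side):
iperf -c 192.0.2.10 -t 30 -P 4               # 30-second test, 4 parallel TCP streams
netperf -H 192.0.2.10 -t TCP_STREAM -l 30    # 30-second TCP bulk-transfer test
```

                    For firewall testing you'd put the two endpoints on opposite sides of the box and push traffic through it, rather than running the tools on the firewall itself.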

                    • jtfinley
                      last edited by Jun 27, 2007, 2:58 PM

                      @Cry:

                      @tulsaconnect:

                      We plan to use Dell hardware, and unfortunately the newer Dells are using Broadcom NICs, which aren't as well supported under FreeBSD as Intel NICs.  I wonder how the onboard NICs are connected internally – e.g., at something better than PCI-X or PCIe bus speeds?

                      From discussions with somebody I know, you'll find you get better throughput with add-in Intel gigabit cards even on boxes with onboard Intel gigabit.  He wasn't sure why, but even when the add-in cards had the same chips as the onboard cards, he still saw better performance (beyond 100 Mbps; I don't remember the exact values) with the add-in cards.

                      What I was told by an Intel engineer was that the onboard NICs use shared resources and rely heavily on the CPU, whereas an add-in NIC usually has its own timer chips, etc.

                      Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.