Netgate Discussion Forum
Intel DN2800MT x64 2.0.3-2.1 bandwidth

    Hardware
Jbmeth007:

So I've been testing different distros, and so far pfSense is the only one that is sufficient for my home network, with the features I need for specific applications. Some people say that pfSense is more difficult to set up, while others are so simple and dumbed-down that I can't get them to do what I want, which actually makes them harder to set up.

Anyway, my setup is basically this:

Old Barracuda Spam 400 1U chassis, gutted (fits mini-ITX up to full ATX)
Intel PRO/1000 PT PCIe dual-port, with serial
Intel DN2800MT (Atom N2800, 1.86 GHz, 2C/2T), external PSU
2 GB DDR3 and a temporary 80 GB hard drive (waiting on my order to come in the mail: a 32 GB mSATA Samsung SSD)

At the moment it draws 20 W from the wall on a cheap variable-voltage 70 W Amazon-special $6.99 charger. With the hard drive disconnected it draws 11-12 W, and I suspect that with the mSATA card in place the total will most likely be near the 13-14 W mark. I haven't tweaked any settings; these measurements were taken sitting at a BIOS screen. Perhaps SpeedStep will lower it further, depending on the kernel.

Throughput testing: I was able to reach a 323 Mbps UDP maximum with sustained TCP at 240 Mbps, 0.9 ms round trip, as a straight firewall. To me this seems really low. Verified multiple times on both 32-bit and 64-bit builds of 2.0.3 and 2.1. According to reports, others were able to reach 300 Mbps TCP on a single-core N270. Something just doesn't seem right.

At the moment I have Hyper-Threading off to minimize latency. The Intel card is an x4 card but runs in the open-ended PCIe x1 slot; even a single Gen 1 lane is 2.5 GT/s, roughly 2 Gbps usable after 8b/10b encoding, so I'm not capping out there. On the CPU I'm seeing 40-77% utilization. Is there something I'm missing?

Is this in line with other dual-core Atoms in the 1.8 GHz range? The N270 is a single-core 1.6 GHz part reaching 300 Mbps, so in theory I should be at least 200 Mbps higher.

wallabybob:

The firewall is currently single-threaded, so multiple cores won't help increase firewall throughput, though additional cores will help if you are doing significant application processing.

        Throughput is heavily dependent upon packet size up until CPU saturation or link saturation. What is your test configuration?
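
(Since throughput depends so strongly on packet size, one way to see where the box tops out is to sweep the UDP datagram size with iperf. This is only a sketch, not the poster's actual method; the server address 192.168.1.10 is hypothetical, and an "iperf -s -u" instance is assumed to be running on the far side of the firewall:)

    # Sketch: sweep UDP datagram sizes to expose the packets-per-second limit.
    # Small datagrams stress the firewall far more than MTU-sized ones.
    for len in 64 512 1470; do
        iperf -c 192.168.1.10 -u -b 1000M -l $len -t 10
    done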

kejianshi:

          How did you do your throughput testing?  What was your method?

stephenw10 (Netgate Administrator):

            Indeed, I think we need to define your test method. Others with similar hardware have reported >500Mbps throughput.

            Steve

Jbmeth007:

I used two PCs, one on the WAN side and another on the LAN. Both have verified 900+ Mbps to my local NAS.

TamoSoft server/client; it tests throughput (TCP up/down, UDP up/down), loss, and RTT (latency).

Standard default MTU of 1500. I see activity on both cores while testing.

stephenw10 (Netgate Administrator):

                OK seems reasonable. What level of activity are you seeing on each core? Is one core pegged at 100%?

                Steve

Jbmeth007:

No, the highest I've seen was 87% on one particular core, but the average between the two would be around 60-ish.

jasonlitka:

@Jbmeth007:

I used two PCs, one on the WAN side and another on the LAN. Both have verified 900+ Mbps to my local NAS.

TamoSoft server/client; it tests throughput (TCP up/down, UDP up/down), loss, and RTT (latency).

Standard default MTU of 1500. I see activity on both cores while testing.

I've seen 900+ Mbps through my DN2800MT as well. My testing was done using iperf (both TCP and UDP) from my laptop to my desktop; that kind of routed test is sketched below. The two were on different interfaces with minimal firewall rules. I'm using a quad-port i350 NIC, not the 82574L onboard. I don't remember the CPU usage, but it wasn't 100% of a core.

                    I can break anything.
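
(For reference, a routed iperf test of the kind Jason describes might look like the sketch below. The address 10.0.0.10 is hypothetical, and this is not necessarily his exact invocation:)

    # Sketch: TCP throughput across the firewall with iperf.
    # On the desktop (server side, on a different interface than the laptop):
    iperf -s
    # On the laptop (client side): 30-second TCP test, 4 parallel streams.
    iperf -c 10.0.0.10 -t 30 -P 4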

Jbmeth007:

Yeah, I'm kind of baffled, as I am using an external Intel PRO/1000, and the DN2800MT has an onboard Intel PRO/1000 as well, but both see the same speeds, although it's about 40 Mbps quicker when both LAN ports are on the external card.

But when it comes to transferring files to the NAS on the switch, it flies.

Jbmeth007:

Actually, I just looked up the controllers: the onboard is an 82574L and the external ports are 82571GI. Both show up as Intel PRO/1000 on the terminal side.
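
(On FreeBSD/pfSense the exact controller behind each em(4) port can be confirmed from the shell; a minimal sketch:)

    # Sketch: list PCI IDs and device names for the em(4) ports.
    pciconf -lv | grep -A2 '^em'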

Jbmeth007:

Also, a side note: I don't have any rules in place at the moment, just a default install.
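
(If in doubt, the loaded ruleset can be double-checked from the shell; a minimal sketch:)

    # Sketch: show the active pf ruleset and state-table counters.
    pfctl -sr
    pfctl -si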

kejianshi:

                            You should be able to pull more bandwidth.  Strange.

Jbmeth007:

I agree. The link isn't saturated and the CPU isn't pegged. The switch in place is an Enterasys B3G124-48, fully capable of saturating a link. It's kind of starting to piss me off a bit. It's not heat related: the CPU is at 20.0°C and the card is cold to the touch. The onboard and the dual Intel PRO/1000 PT see about the same speed, the onboard being a tad slower; with both links on just the add-on NIC it's 40 Mbps faster. In the back of my head something tells me it's tunables in pfSense. I'm not sitting in front of it at the moment, just getting my thoughts together on what to try. Do you think it's possibly the offload settings?
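
(The usual suspects on FreeBSD/pfSense here are the NIC offload flags and the em(4) loader tunables. A hedged sketch, to be tried one change at a time; the values shown are illustrative, not known-good settings for this board:)

    # Sketch: toggle hardware offloads on one em(4) port from the shell.
    # In pfSense these correspond to System > Advanced > Networking options.
    ifconfig em1 -tso -lro          # disable TCP segmentation / large receive offload
    ifconfig em1 txcsum rxcsum      # re-enable checksum offload to compare

    # Sketch: candidate entries for /boot/loader.conf.local (values illustrative):
    # hw.em.rxd=2048
    # hw.em.txd=2048
    # kern.ipc.nmbclusters=131072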

kejianshi:

I use dual-port Intel PRO/1000s in my x16 slots… They move quick. I don't know; I've never had one max out without either high CPU load, a bad cable, a 10/100 NIC in the mix, or something like that.

Jbmeth007:

I just had an epiphany: perhaps the software for the Killer NIC 2100 card in one PC is implementing its QoS-style limiter. I will try again with the onboard NIC when I get home; both machines are known to chat at the 900+ mark. Hopefully that's the culprit; otherwise I may just have to get out the 50-gallon drum and some concrete mix. Seems pretty warm today.

stephenw10 (Netgate Administrator):

@Jason:

I've seen 900+ Mbps through my DN2800MT as well.

Seems surprisingly high for an Atom board. Did you do anything special? Is that actually through the firewall?

                                    Steve

kejianshi:

I think he is saying the NIC is capable… Well, we shall see.

Jbmeth007:

Nope, still pretty slow, although it did pick up a little more speed: 328 Mbps TCP and 503 Mbps UDP.

I'm looking at the system activity, and what I thought was 87% CPU utilization is actually 87% idle on cpu0. WTF!!! So it's actually using 13%.

The kernel {em1 que} and kernel {em2 que} threads show 5.8%.

kejianshi:

I'm thinking the firewall will only fully utilize one core (or thread).

So, can you turn off Hyper-Threading in the BIOS (if it's present) and try again… this time looking at per-core utilization (a quick way is sketched below)?

It's just a theory.
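
(A minimal way to watch per-core and per-thread usage from a pfSense shell, assuming the stock FreeBSD top:)

    # Sketch: -P shows per-CPU stats, -S/-H include kernel threads, -a full commands.
    top -aSHP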

Jbmeth007:

I did that from the get-go; standard practice for minimizing latency. Hyper-Threading is off. Right now, as we speak, I have the TCP offload engine off on both NICs (tried with it on as well).

232 Mbps

                                            last pid: 39883;  load averages:  0.21,  0.41,  0.24  up 0+00:05:08    23:24:26
                                            113 processes: 3 running, 93 sleeping, 17 waiting

                                            Mem: 57M Active, 19M Inact, 67M Wired, 284K Cache, 18M Buf, 1806M Free
                                            Swap: 4096M Total, 4096M Free

                                            PID USERNAME PRI NICE  SIZE    RES STATE  C  TIME  WCPU COMMAND
                                              11 root    171 ki31    0K    32K CPU0    0  4:12 88.28% [idle{idle: cpu0}]
                                              11 root    171 ki31    0K    32K RUN    1  3:45 83.98% [idle{idle: cpu1}]
                                                0 root    -68    0    0K  240K -      0  0:23  9.28% [kernel{em1 que}]
                                                0 root    -68    0    0K  240K -      1  0:22  8.40% [kernel{em2 que}]
                                            69333 root      47    0  6956K  1592K select  1  0:09  3.27% /usr/sbin/syslogd -s -c -c -l /var/dhcpd/va
                                            31969 root      76    0  142M 41148K piperd  1  0:07  2.69% /usr/local/bin/php{php}
                                            18343 root      44    0  5780K  1072K piperd  0  0:04  0.49% logger -t pf -p local0.info
                                                0 root    -16    0    0K  240K sched  1  0:44  0.00% [kernel{swapper}]
                                              257 root      76  20  6908K  1360K kqread  1  0:18  0.00% /usr/local/sbin/check_reload_status
                                            18245 root      44    0 11748K  2712K bpf    0  0:02  0.00% /usr/sbin/tcpdump -s 256 -v -S -l -n -e -tt
                                              12 root    -32    -    0K  272K WAIT    0  0:00  0.00% [intr{swi4: clock}]
                                              14 root    -16    -    0K    16K -      0  0:00  0.00% [yarrow]
                                            28465 root      76    0  136M 21540K wait    1  0:00  0.00% /usr/local/bin/php
                                            27988 root      76    0  136M 21540K wait    0  0:00  0.00% /usr/local/bin/php
                                            27735 root      44    0 24220K  3936K kqread  0  0:00  0.00% /usr/local/sbin/lighttpd -f /var/etc/lighty
                                                3 root      -8    -    0K    16K -      0  0:00  0.00% [g_up]
                                            59199 root      76  20  8296K  1776K wait    1  0:00  0.00% /bin/sh /var/db/rrd/updaterrd.sh
                                            22760 root      44    0  5780K  1460K select  1  0:00  0.00% /usr/local/sbin/apinger -c /var/etc/apinger
