# Maximum theoretical throughput

• So here's a question:
What is the theoretical maximum throughput of a pfSense box with two PCI NICs both on the same bus?

I realise that there are a lot of variables, so let's say that CPU speed is not a problem and define the bus as 32-bit, 33MHz.

I expected to find this question asked and answered a thousand times, but I've come up with only confusing half-facts. The maximum speed of a single NIC is the easy part:
the bus can transfer a maximum of 1066Mbps (32 × 33,333,333). The M for Mega is confusing here, as a Megabit is sometimes taken as 2^20 bits whereas MegaHertz is always 10^6, but that's an aside!
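A quick sanity check of that arithmetic (this just restates the figures above):

```python
# Theoretical peak of a 32-bit, 33MHz PCI bus.
bus_width_bits = 32
bus_clock_hz = 33_333_333           # the 33.33MHz PCI clock

peak_bits_per_s = bus_width_bits * bus_clock_hz
print(peak_bits_per_s / 1e6)        # ~1066.67 Mbit/s with Mega = 10^6

# The "Mega" ambiguity mentioned above: 10^6 (SI, used for clock
# rates and link speeds) vs 2^20 (binary, sometimes used for data).
print(peak_bits_per_s / 2**20)      # ~1017 Mbit/s with Mega = 2^20
```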

Anyway, the real question: in a firewall configuration, where you have to read in from one NIC and then write out again, is it half that? Or can the CPU read from one NIC and write to another simultaneously, full duplex?

The actual results from some testing we have done suggest it's around a quarter of that, 200-250Mbps. That seems low, but before I go looking for some other problem it would be nice to know what I should expect.

Any thoughts? Any results you have that might apply?

Steve

• There are several factors, as I think you know. First is switching: if you are only using a 100Mbit switch or card (you didn't specify), you will be limited to about 112Mbit/s (or 224Mbit/s in full duplex) even though the bus can do more.
Also, you ARE comparing theoretical to actual, so you must take memory and CPU speeds into account: incoming packets must be compared against a set of rules before passing. Generally, with anything faster than 500MHz, I wouldn't expect the CPU to be much of a problem.

Then there is network overhead from the IP stack.

Generally speaking, though, this old technology is more than capable of keeping up with most home and office internet connections, and even some datacenter deployments where the speed is low. Our connection is gigabit, but we are only contracted for 100Mbit, so our provider caps us at 250Mbit. We use the limiter in pfSense to drag that down to 100 so we don't overpay.

In testing, you also have to look at the two endpoints generating and receiving the data, to make sure they are capable of greater speed.

• Yes, gigabit NICs, I should have said.

I am really interested in what restrictions are imposed by being on a single PCI bus, assuming every other part of the system can work faster than that. Perhaps an unlikely scenario.
As I said, some recent testing by another forum member brought up the question: throughput seemed lower than I expected, and I started questioning my expectations. Looking into it further showed that although the box has PCI-e, and has NICs on PCI-e, the four on-board interfaces (sk0-3) are in fact on PCI. Though I may be misreading the output of pciconf:

```
[2.0-RC3][root@pfSense.localdomain]/root(1): pciconf -lc
hostb0@pci0:0:0:0:	class=0x060000 card=0x25908086 chip=0x25908086 rev=0x04 hdr=0x00
cap 09[e0] = vendor (length 9) Intel cap 2 version 1
vgapci0@pci0:0:2:0:	class=0x030000 card=0x25928086 chip=0x25928086 rev=0x04 hdr=0x00
cap 01[d0] = powerspec 2  supports D0 D3  current D0
pcib1@pci0:0:28:0:	class=0x060400 card=0x26608086 chip=0x26608086 rev=0x04 hdr=0x01
cap 10[40] = PCI-Express 1 root port max data 128(128) link x1(x1)
cap 05[80] = MSI supports 1 message
cap 0d[90] = PCI Bridge card=0x26608086
cap 01[a0] = powerspec 2  supports D0 D3  current D0
pcib2@pci0:0:28:1:	class=0x060400 card=0x26628086 chip=0x26628086 rev=0x04 hdr=0x01
cap 10[40] = PCI-Express 1 root port max data 128(128) link x1(x1)
cap 05[80] = MSI supports 1 message
cap 0d[90] = PCI Bridge card=0x26628086
cap 01[a0] = powerspec 2  supports D0 D3  current D0
pcib3@pci0:0:28:2:	class=0x060400 card=0x26648086 chip=0x26648086 rev=0x04 hdr=0x01
cap 10[40] = PCI-Express 1 root port max data 128(128) link x1(x1)
cap 05[80] = MSI supports 1 message
cap 0d[90] = PCI Bridge card=0x26648086
cap 01[a0] = powerspec 2  supports D0 D3  current D0
pcib4@pci0:0:28:3:	class=0x060400 card=0x26668086 chip=0x26668086 rev=0x04 hdr=0x01
cap 10[40] = PCI-Express 1 root port max data 128(128) link x1(x1)
cap 05[80] = MSI supports 1 message
cap 0d[90] = PCI Bridge card=0x26668086
cap 01[a0] = powerspec 2  supports D0 D3  current D0
uhci0@pci0:0:29:0:	class=0x0c0300 card=0x26588086 chip=0x26588086 rev=0x04 hdr=0x00
uhci1@pci0:0:29:1:	class=0x0c0300 card=0x26598086 chip=0x26598086 rev=0x04 hdr=0x00
uhci2@pci0:0:29:2:	class=0x0c0300 card=0x265a8086 chip=0x265a8086 rev=0x04 hdr=0x00
uhci3@pci0:0:29:3:	class=0x0c0300 card=0x265a8086 chip=0x265b8086 rev=0x04 hdr=0x00
ehci0@pci0:0:29:7:	class=0x0c0320 card=0x265c8086 chip=0x265c8086 rev=0x04 hdr=0x00
cap 01[50] = powerspec 2  supports D0 D3  current D0
pcib5@pci0:0:30:0:	class=0x060401 card=0x24488086 chip=0x24488086 rev=0xd4 hdr=0x01
cap 0d[50] = PCI Bridge card=0x24488086
isab0@pci0:0:31:0:	class=0x060100 card=0x26418086 chip=0x26418086 rev=0x04 hdr=0x00
atapci0@pci0:0:31:1:	class=0x01018a card=0x266f8086 chip=0x266f8086 rev=0x04 hdr=0x00
none0@pci0:0:31:3:	class=0x0c0500 card=0x266a8086 chip=0x266a8086 rev=0x04 hdr=0x00
mskc0@pci0:1:0:0:	class=0x020000 card=0x43201148 chip=0x436211ab rev=0x19 hdr=0x00
cap 01[48] = powerspec 2  supports D0 D1 D2 D3  current D0
cap 03[50] = VPD
cap 05[5c] = MSI supports 2 messages, 64 bit
cap 10[e0] = PCI-Express 1 legacy endpoint max data 128(128) link x1(x1)
mskc1@pci0:2:0:0:	class=0x020000 card=0x43201148 chip=0x436211ab rev=0x19 hdr=0x00
cap 01[48] = powerspec 2  supports D0 D1 D2 D3  current D0
cap 03[50] = VPD
cap 05[5c] = MSI supports 2 messages, 64 bit
cap 10[e0] = PCI-Express 1 legacy endpoint max data 128(128) link x1(x1)
mskc2@pci0:3:0:0:	class=0x020000 card=0x43201148 chip=0x436211ab rev=0x19 hdr=0x00
cap 01[48] = powerspec 2  supports D0 D1 D2 D3  current D0
cap 03[50] = VPD
cap 05[5c] = MSI supports 2 messages, 64 bit
cap 10[e0] = PCI-Express 1 legacy endpoint max data 128(128) link x1(x1)
mskc3@pci0:4:0:0:	class=0x020000 card=0x43201148 chip=0x436211ab rev=0x22 hdr=0x00
cap 01[48] = powerspec 2  supports D0 D1 D2 D3  current D0
cap 03[50] = VPD
cap 05[5c] = MSI supports 2 messages, 64 bit
cap 10[e0] = PCI-Express 1 legacy endpoint max data 128(128) link x1(x1)
skc0@pci0:5:0:0:	class=0x020000 card=0x43201148 chip=0x432011ab rev=0x13 hdr=0x00
cap 01[48] = powerspec 2  supports D0 D1 D2 D3  current D0
cap 03[50] = VPD
skc1@pci0:5:1:0:	class=0x020000 card=0x43201148 chip=0x432011ab rev=0x13 hdr=0x00
cap 01[48] = powerspec 2  supports D0 D1 D2 D3  current D0
cap 03[50] = VPD
skc2@pci0:5:2:0:	class=0x020000 card=0x43201148 chip=0x432011ab rev=0x13 hdr=0x00
cap 01[48] = powerspec 2  supports D0 D1 D2 D3  current D0
cap 03[50] = VPD
skc3@pci0:5:3:0:	class=0x020000 card=0x43201148 chip=0x432011ab rev=0x13 hdr=0x00
cap 01[48] = powerspec 2  supports D0 D1 D2 D3  current D0
cap 03[50] = VPD
none1@pci0:5:4:0:	class=0x100000 card=0x0001177d chip=0x0003177d rev=0x00 hdr=0x00
cap 07[e0] = PCI-X 64-bit supports 133MHz, 4096 burst read, 32 split transactions
cap 01[e8] = powerspec 2  supports D0 D3  current D0
cap 05[f0] = MSI supports 1 message, 64 bit

```

Steve

Edit: Re-reading that output after I posted it, I notice that also on bus 5 is the unused Cavium VPN accelerator, 0003:177D, and that is a PCI-X device. How can I find out what the bus speed/width is?

• Regular PCI is (1) a shared, arbitrated bus and (2) half duplex. This means (1) that only a single device can talk on the bus at a time, and all other devices must wait until the device using the bus stops; and (2) that a device cannot read and write at the same time. It's one way or the other.

Conceptually, the PCIe bus is like a high-speed serial replacement of the older PCI/PCI-X bus, an interconnect bus using shared address/data lines.
A key difference between PCIe bus and the older PCI is the bus topology. PCI uses a shared parallel bus architecture, where the PCI host and all devices share a common set of address/data/control lines. In contrast, PCIe is based on point-to-point topology, with separate serial links connecting every device to the root complex (host). Due to its shared bus topology, access to the older PCI bus is arbitrated (in the case of multiple masters), and limited to one master at a time, in a single direction. Furthermore, the older PCI's clocking scheme limits the bus clock to the slowest peripheral on the bus (regardless of the devices involved in the bus transaction). In contrast, a PCIe bus link supports full-duplex communication between any two endpoints, with no inherent limitation on concurrent access across multiple endpoints.

Source: the internet and Wikipedia.

• Yes, that's pretty much what I thought. So, half duplex. Not taking into account any processing time, or the fact that other devices may need the bus, the best throughput we could expect is half the bus speed. Yet we are actually seeing less than a quarter of the bus speed. Can that be explained by processing delays or other devices?  :-\
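Putting rough numbers on that reasoning (a sketch only; the efficiency figures at the end are assumed overhead factors, not measurements):

```python
# A forwarded packet crosses the shared PCI bus twice:
# once NIC A -> RAM (DMA in), once RAM -> NIC B (DMA out).
# On a half-duplex shared bus those transfers cannot overlap.
bus_peak_mbps = 32 * 33.333333      # ~1066 Mbit/s raw
crossings_per_packet = 2

ideal_forwarding_mbps = bus_peak_mbps / crossings_per_packet
print(ideal_forwarding_mbps)        # ~533 Mbit/s best case

# Arbitration, address phases, wait states and descriptor traffic
# all eat into this; the efficiency factors below are assumptions,
# shown only to see where the measured 200-250 Mbit/s might land.
for efficiency in (0.6, 0.5, 0.4):
    print(round(ideal_forwarding_mbps * efficiency))
```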

Steve

• I would think that if you are seeing a variable maximum rate, then yes, other devices and other processing would be to blame. If you are seeing a steady maximum, pegged hard at say 250Mbit/s, then possibly not, and there may be other factors, external or otherwise.

• Then there's the frequency of interrupts. A stream of 64-byte packets will generate a lot more interrupts than a stream of 1500-byte packets, and each interrupt has to be processed.
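To illustrate the difference, here is the worst-case interrupt rate (one interrupt per frame; real drivers coalesce interrupts, so this is an upper bound) at the ~250Mbit/s observed earlier:

```python
# Worst case: one interrupt per received frame.
rate_bps = 250e6                    # the ~250 Mbit/s seen in testing

for frame_bytes in (64, 1500):
    frames_per_s = rate_bps / (frame_bytes * 8)
    print(f"{frame_bytes}B frames: {frames_per_s:,.0f} interrupts/s worst case")
```

Small frames give over twenty times the interrupt load of MTU-sized frames at the same bit rate, which is why jumbo frames and interrupt moderation matter.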

• Yes, the interrupt load is certainly important. However, this is exactly the sort of thing I was thinking about when I first looked at the throughput: before I went off playing with jumbo frames or other tuning options, I wondered whether there was any point if the bus was the throttling point.

Steve

• I have a system with two gigabit NICs on two different buses (one PCI, the other PCI-X) and it is capable of maxing out at 540Mbit/s. I think the quarter speed comes from in/out on one NIC and then in/out on the second NIC: since the bus is half duplex, you are halving the half. If you can, try putting them on different buses and see if your throughput doubles. I would switch to PCI-e if possible; when this box gets replaced, I am using strictly PCI-e Intel gigabit NICs.

• Ah, well, that's interesting. Is that 540Mbps figure limited by CPU speed, do you think?
You'll see from my pciconf output that the box also has four PCI-e gigabit NICs, so testing is relatively easy. However, with the current CPU (1.5GHz Pentium-M) it can manage around 550Mbps in one direction using one NIC, with the data going to /dev/null on the box. In that setup I can see the CPU pegged at 100%, so it is clearly the throttling point. Using the PCI NICs it runs at around 80%.
Also, and this is really the nub of the matter, when testing and seeing ~200Mbps throughput between two PCI NICs, that is data travelling in one direction through the box. I can't really see why the data would have to cross the bus four times. Sure, there is some data in the other direction, but it's negligible.

I suspect that I simply haven't understood the process by which data moves through the box.

Steve

• Actually, that box has a dual-core CPU (even though the firewall only uses one core) at 3GHz, so that is not the limiting factor. Mine is that I have PCI, which is only half duplex: the ACKs also have to go back, and until an ACK is received the next packets are held in case a retransmission is needed. So I only get half speed even though I am using two PCI buses. I would use the PCI NICs for WAN (unless it is a gigabit WAN) and the PCI-e for LAN; WAN tends to be slower than LAN.

Good luck.