Traffic monitoring reduces bandwidth to a third
-
Hi,
our pfSense is running as a Proxmox VM. After we upgraded our internet connection, I decided to test the speed of our WAN and LAN interfaces using an iPerf server on pfSense.
I found out that the maximum I can get on LAN is around 300 Mbps, while all the other VMs and physical machines are capable of a full 1 Gbps.
While digging around I noticed that the bottleneck is caused by the network monitoring tools (be it ntopng or darkstat). When I disable them, the interface speed immediately jumps up to 1 Gbps. Also, when using iPerf in reverse mode, there is no speed drop.
I am aware of the fact that monitoring an interface slows things down, but surely not by two thirds.
Any advice?
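For reference, the tests look roughly like this (assuming iperf3 syntax; 192.168.1.1 is just a placeholder for the pfSense LAN address):

iperf3 -s                      (on pfSense, acting as the server)
iperf3 -c 192.168.1.1          (from a LAN client - this is where I only get ~300 Mbps)
iperf3 -c 192.168.1.1 -R       (reverse mode, pfSense sends to the client - no slowdown here)
-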
@ovecka said in Traffic monitoring reduces bandwidth to a third:
Hi,
our pfSense is running as a Proxmox VM. After we upgraded our internet connection, I decided to test the speed of our WAN and LAN interfaces using an iPerf server on pfSense.
I found out that the maximum I can get on LAN is around 300 Mbps, while all the other VMs and physical machines are capable of a full 1 Gbps.
While digging around I noticed that the bottleneck is caused by the network monitoring tools (be it ntopng or darkstat). When I disable them, the interface speed immediately jumps up to 1 Gbps. Also, when using iPerf in reverse mode, there is no speed drop.
I am aware of the fact that monitoring an interface slows things down, but surely not by two thirds.
Any advice?
The problem is that those packages/tools put the NIC into promiscuous mode to pick up all packets (so that they are allowed to look at them and process them). That is a fairly heavy added CPU load/latency. The bigger problem, however, is that it involves several context switches (kernel/user mode), and while those are costly in CPU time on physical hardware, they are REALLY costly in a VM - especially if paravirtualization is not used and the hardware is not fairly modern, virtualization-optimized equipment. So a performance reduction to 1/3 of the throughput does not sound unrealistic at all.
Even on the smaller physical ARM appliances (SG-1100/SG-2100 series), using those packages can cost you upwards of half the throughput.
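You can see the promiscuous mode for yourself from the pfSense shell; vtnet1 below is just an example name, substitute your actual LAN interface:

ifconfig vtnet1 | grep -i promisc

With ntopng or darkstat running, the interface's flags line includes PROMISC; stop the package and the flag goes away.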
-
@ovecka said in Traffic monitoring reduces bandwidth to a third:
our pfSense is running as a Proxmox VM.
Hi,
Proxmox (or any hypervisor) adds more overhead to network-related operations.
There are two choices:
- more power on the Proxmox host
- a dedicated hardware router, on which pfSense could use the hardware-accelerated features.
We never use hypervisors for routers anymore.
You run into more issues with that under heavy load.
Regards,
CAT
-
I've already tried adding more resources to the VM, to no avail: it now has 4 dedicated cores (Xeon E5540) and 16 GB of RAM. The CPU load barely ever reaches 40 %, averaging a mere 6 % during peak hours. I don't really think a lack of HW resources is the issue here.
-
@ovecka said in Traffic monitoring reduces bandwidth to a third:
I've already tried adding more resources to the VM, to no avail: it now has 4 dedicated cores (Xeon E5540) and 16 GB of RAM. The CPU load barely ever reaches 40 %, averaging a mere 6 % during peak hours. I don't really think a lack of HW resources is the issue here.
The problem is not a lack of actual CPU cycles. The problem is the latency: packets received on the NIC require several context switches across the hypervisor/VM combination before the traffic monitoring package gets its copy. Context switches take time - regardless of clock frequency and number of cores.
That latency (the turnaround time to finish processing each packet) becomes the bottleneck.
Imagine you have to move water from one point to another (10 meters apart) with a little bucket. No matter how quickly you run, whether you bring in a friend to help, or how powerful a runner you are, the real bottleneck in moving the water is the size of the bucket.
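A rough back-of-envelope illustration (the added per-packet latency here is an assumed example, not a measurement): at 1 Gbps with 1500-byte frames you handle about 1,000,000,000 / (1500 * 8) ≈ 83,000 packets per second, which leaves roughly 12 µs of budget per packet. If the capture/copy path adds even ~28 µs of context-switch latency per packet, you are at ~40 µs per packet, i.e. about 25,000 packets per second - which works out to roughly 300 Mbps, right around what you are seeing.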
-
@ovecka said in Traffic monitoring reduces bandwidth to a third:
I've already tried adding more resources to the VM
Have you tried passing through the NICs pfSense uses?
And using another NIC for the other functions on Proxmox?
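In case it helps, PCI passthrough on Proxmox looks roughly like this (sketch only - the VM ID 100 and PCI address 01:00.0 are placeholders, and this assumes an Intel box with VT-d enabled in the BIOS):

# on the Proxmox host: enable the IOMMU, then reboot
# (add intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub)
update-grub

# find the NIC you want to hand to pfSense
lspci | grep -i ethernet

# attach it to the pfSense VM
qm set 100 -hostpci0 01:00.0

pfSense then sees the physical NIC directly (e.g. as an igb or em interface) instead of a virtio one, while Proxmox keeps its own NIC for management and the other VMs.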