Well, the next thing I would check via Google searches is whether there are any outstanding bugs in ESXi with regard to IGMP. The fact that it works when you enable NIC passthrough really does point to the vSwitch as the most likely culprit. It might also be the virtual NIC, though. Try searching for hits on the VMXNET3 driver and multicast.
Thank you for the answer, but my CPU doesn't allow me to use ESXi; I can only use VMware Player or VMware Workstation.
The fastest way I've tried is one that
I found on the internet: I had to buy a USB Ethernet adapter and use the VMware Workstation setting that maps the USB adapter directly to the VM (VM only). The host OS (Win10) then cannot see the adapter, so the configuration is simple and clean :)
Before purchasing a USB adapter, please check whether pfSense/FreeBSD supports that hardware/chipset.
As an update on the topic, I have updated to 2.4.5-p1 and changed the virtual NIC driver to virtio instead of e1000.
This has greatly improved the stability of pfSense, and the network loss induced by high traffic has disappeared.
We still experience some random network loss, which is under investigation.
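For anyone wanting to make the same change, here is a minimal sketch, assuming a Proxmox host; the VM ID (100), the MAC address, and the bridge name (vmbr0) are placeholders, not values from this thread. Note that switching the model renames the interface inside pfSense from em0 to vtnet0, so be ready to reassign interfaces at the console.

```sh
# Show the current NIC definition (VM ID 100 is a placeholder)
qm config 100 | grep ^net
# Switch net0 from e1000 to virtio, keeping the existing MAC address
qm set 100 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
```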
@Erutan409 I've been running with it for a couple of days and discovered that my pfSense box has significantly increased CPU load, and services behind that particular interface feel throttled.
Again, I'm running on KVM, and I don't think it has any such paravirtual time synchronization; I run NTP on the host and have pfSense update from the host's ntpd once an hour.
In the meantime, I've managed to break my pfSense install by powering it off at the wrong point, so I'm going to reinstall the latest version from ISO and switch back to PV NICs. I'll update here again if I learn anything.
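For context, the hourly host sync mentioned above could look something like the following cron entry on pfSense. This is a guess at the setup, not the poster's actual config; 192.168.122.1 stands in for the KVM host's address.

```sh
# Step the clock from the host's ntpd once an hour (-u: use an unprivileged port)
0 * * * * /usr/sbin/ntpdate -u 192.168.122.1
```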
Edit 2: I think it's either a bug in the 2.4.5-p1 version of pfSense or something really weird is going on. I just switched back to a bare-metal installation and it's still happening. Exact same problem... I never noticed such behavior before. Any ideas, fellas?
The dedicated server (it runs Proxmox) has several VMs (Ubuntu Server). Each VM should be connected to one subdomain, and all subdomains (ex1.example.com, ex2.example.com, etc.) must be accessible via the external IP address (17*...**2).
You cannot use 17*...**2 for that, since it is NATted to Proxmox. With 1:1 NAT that public IP can be used by Proxmox only.
I know; it was done by the provider, but why? I think you only need the Proxmox management port, which is a single port, while 1:1 NAT directs every port to the internal IP.
You also have a second public IP, as you wrote. What about that?
I assume you want to use these subdomains for webservers. To redirect one public IP to multiple internal webservers, you can use the haproxy package; a sketch of the idea follows.
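Here is a minimal, hedged haproxy sketch of that host-header routing. The backend addresses (10.0.0.11/10.0.0.12) are placeholders, not taken from the thread:

```
frontend http-in
    bind :80                        # bound on the single public IP
    acl host_ex1 hdr(host) -i ex1.example.com
    acl host_ex2 hdr(host) -i ex2.example.com
    use_backend ex1_vm if host_ex1
    use_backend ex2_vm if host_ex2

backend ex1_vm
    server vm1 10.0.0.11:80 check   # Ubuntu VM behind ex1.example.com

backend ex2_vm
    server vm2 10.0.0.12:80 check   # Ubuntu VM behind ex2.example.com
```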
I've set up a new pfSense VM with nothing on it. The results were more or less the same, so the packages weren't interfering.
So I've set up a new bridge and set the MTU to 9000, and now I'm getting about 9 Gbit/s (sometimes 10 with pf disabled). I don't understand networking well enough to say why Linux VMs manage 20 Gbit/s at MTU 1500.
Other settings, such as CPU type, machine type, and rx off on the bridge, didn't affect the results much.
Now I'm curious what it would take to get to 20 Gbit/s.
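For reference, the bridge change described above could be done roughly like this, assuming a Proxmox-style Linux bridge; vmbr1 and ens1 are hypothetical names, and the VM's NIC and the pfSense interface MTU have to be raised to match as well.

```sh
# Runtime change on the host
ip link set dev ens1 mtu 9000    # physical port
ip link set dev vmbr1 mtu 9000   # the bridge itself
# Persistent variant in /etc/network/interfaces:
#   iface vmbr1 inet manual
#       bridge-ports ens1
#       bridge-stp off
#       bridge-fd 0
#       mtu 9000
```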
I virtualized mine maybe 3 months ago, partly as an "I'm stuck at home and need a project" and partly for energy savings. My server has the RAM and plenty of CPU, so what the hey, let's try. I didn't use passthrough; I decided instead to set up separate vSwitches/portgroups and just dedicate NICs that way, one for WAN and another for LAN, and put the pfSense VM in both portgroups. I'm using the free ESXi 6.7.
These days, mixing servers/IoT and desktops in the same LAN is probably a bigger security issue than virtualizing pfSense.
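In case it helps someone replicate that layout, here is a hedged esxcli sketch; the vSwitch, portgroup, and vmnic names are assumptions:

```sh
# One vSwitch + portgroup per dedicated physical NIC
esxcli network vswitch standard add --vswitch-name=vSwitch-WAN
esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch-WAN
esxcli network vswitch standard portgroup add --portgroup-name=WAN --vswitch-name=vSwitch-WAN

esxcli network vswitch standard add --vswitch-name=vSwitch-LAN
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch-LAN
esxcli network vswitch standard portgroup add --portgroup-name=LAN --vswitch-name=vSwitch-LAN
```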
An iperf to the WAN side of the pfSense VM over the internet shows 900Mbps. When I try to punch it through an OpenVPN site-to-site tunnel using the same config as you, I get 80Mbps. Both sides are 3.5GHz+ Xeon/Ryzen CPUs, but CPU usage on pfSense on both sides is 5%.
An iperf from the WAN -> LAN interface (on a different KVM bridge) also shows 800Mbps+.
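For anyone reproducing this, the tests above amount to something like the following (iperf3 assumed; both addresses are placeholders):

```sh
iperf3 -s                        # on the far side
iperf3 -c 203.0.113.10 -t 30     # to the WAN address over the internet
iperf3 -c 10.8.0.1 -t 30         # same host, via the OpenVPN tunnel address
```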
Similar issue here under ESXi 7.0 and pfSense 2.4.5-RELEASE-p1.
Created a VM with one virtual NIC; it booted and installed OK.
Powered off, added 2 more NICs, booted, and they were OK.
Added a 4th NIC while powered on; the VGA console showed that at the operating-system level it was detected as hotplugged with the correct MAC address, but it did not appear in the pfSense web interface.
Rebooted, still not there.
Powered off and back on, still not there.
Deleted the whole VM, re-created from scratch with 4 NICs, re-installed pfSense from ISO, all NICs present.
This is definitely a plausible explanation, however:
I was doing other Linux-based VM installs on the same ESXi machine at the same time and only had a problem with the pfSense instances, multiple times.
I am working from a MacBook, which does not have a Scroll Lock key (that I'm aware of). Is there a way to unwittingly send a Scroll Lock-equivalent keystroke, perhaps with the right key combo?
I researched this a bit and found some indication that Fn + Shift + F12 is the equivalent MacBook key combo for activating Scroll Lock. I would never have had any reason to use those keys while doing my ESXi or pfSense tasks. I also found an old thread here where someone seems to have the same problem with a MacBook randomly activating Scroll Lock, and people discuss other possible key combos.
Still, it's hard for me to buy the explanation that this bug would only manifest itself in an ESXi console window and only for pfSense.
Unfortunately, I've finished those projects for now and don't know when I'll next have the opportunity to explore the possibilities of what might have been behind this bug.
Next, the two best solutions:
Use vanilla Windows Pro: I was using pre-2.4.5 pfSense on FreeBSD 11.1, and FreeBSD 11.3 with pfSense 2.4.5-p1, and have not seen any issues. But note that this is a @home setup, more like a test-bed install, as VMs are meant to be.
The real (hardware) setup shouldn't be discarded:
You could even hide it inside the Win 2012 device if you have space constraints. No VM layer there, so many things simply can't go wrong.
I also had the idea to migrate the pfSense box from physical hardware to virtualization.
I tried ESXi, Unraid, and KVM with passthrough, and the results were horrible. Even with vmxnet3/virtio the results are poor.
The machine is capable of delivering 20Gb/s between 2 Linux VM hosts with vmxnet3, but with pfSense in between, max 700-800Mb/s :(
With pfSense with passthrough I maxed out an i5-4570 with an i350-T4 adapter and did not reach gigabit speed (only pfSense was running on the hardware). On bare metal it works perfectly.
Now I'm running pfSense bare metal on an E3-1220L v3 (13W) CPU, and it still beats the crap out of the i5-4570 in virtualization 😂.
pfSense is simply not designed to run properly in a virtualized environment.
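Before writing virtio off entirely, one commonly suggested knob is disabling hardware checksum offload for the vtnet driver. This is a hedged suggestion, not a guaranteed fix for the numbers above:

```sh
# In the GUI: System > Advanced > Networking > "Disable hardware checksum offload".
# The equivalent loader tunable can go in /boot/loader.conf.local:
echo 'hw.vtnet.csum_disable=1' >> /boot/loader.conf.local
# A reboot is required for loader tunables to take effect.
```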
@asdkjw I think this error will never be fixed, because it is an old UEFI error - http://freebsd.1045724.x6.nabble.com/failing-to-install-11-1R-on-VMWare-td6249644.html or http://freebsd.1045724.x6.nabble.com/Boot-failure-svn-up-from-this-morning-td6170968.html
The solution is to reinstall pfSense on Hyper-V 2012R2 as a Gen1 VM; then everything will work OK. It is the same in the pfSense 2.5 version.
Can't tell you for sure, but I've used both without issue. I'm currently using an IBM-branded i340-T4. Any severe packet loss I've experienced was ISP-related. But the cards are certainly cheap enough to try; OEM versions from Dell, HP, IBM, and others go for about $25. Here's a good link to find the OEM equivalents of that chipset and several others. Many on eBay include both height brackets.
"...that I route all traffic from the OVPNServer to an OVPNClient"
The OpenVPN client tunnel isn't up at that moment, but the OpenVPN server daemon is already starting, using routes that point to the OpenVPN client. This might explain the 'route' messages.
What happens if you deactivate the OpenVPN client and activate it manually after the system has booted?
Does it boot faster this way?
Maybe you could start the OpenVPN client with a cron command that starts the service (OpenVPN client) 30 seconds after booting?
And have it killed before you shut down, so it gets marked as "not running" and won't start during the next boot.
We have the "earlyshellcmd" commands. Let's invent the "lateshellcmd" ;)
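Until then, a hedged sketch of that delayed start as a shellcmd entry; the OpenVPN client instance ID (1) is an assumption, and the availability of the `pfSsh.php playback svc` helper depends on the pfSense version:

```sh
# Background a delayed start of OpenVPN client instance 1 at boot
nohup sh -c 'sleep 30; /usr/local/sbin/pfSsh.php playback svc start openvpn client 1' &
```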