ESXi vs Dedicated Hardware Platform.
-
Will running on ESXi impact routing performance or create any additional latency compared to a dedicated hardware platform?
If someone were to run a web server and pfSense on the same machine in the ESXi setup, how would the performance measure up? And would this even be a good plan to begin with, security-wise, with the data literally being stored on the same machine?
-
What type of hardware are you talking about? I would imagine that if you are using something like a Core i7 or any modern AMD processor with multiple cores, a high clock speed, and plenty of RAM, then you probably won't take much of a performance hit. That said, in my opinion there is nothing like running bare metal, although advancements in hypervisor technology may prove me wrong. I am truly amazed by the graphics performance Microsoft gets out of the Xbox One, which runs its Game OS in a VM!
-
I've had no problems running pfSense in a VM, except for stability issues with the x64 build a few years ago. I've run speed tests over an AES-256 IPsec tunnel and it ran at pipe speed (75 Mb/s). VLAN routing between VMs on different physical servers exceeded 1.6 Gb/s (using two GbE connections). I've never had latency issues either; we do a fair amount of VoIP (internal development and commercial products) over IPsec to our other locations and there's no significant delay.
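Throughput numbers like the ones above are usually gathered with a tool such as iperf3 run between hosts on either side of the tunnel. The basic idea of a pipe-speed test can be sketched with Python's stdlib sockets over loopback; this is purely illustrative (all sizes and names are made up here), and loopback numbers say nothing about real tunnel performance:

```python
# Minimal TCP throughput probe over loopback: a stand-in for what a
# real tool like iperf3 measures between hosts across a link or tunnel.
import socket
import threading
import time

PAYLOAD = b"\x00" * 65536        # 64 KiB chunks (illustrative size)
TOTAL_MB = 64                    # send 64 MiB in total

def sink(listener, result):
    """Accept one connection, drain it, and record observed Mb/s."""
    conn, _ = listener.accept()
    received = 0
    start = time.monotonic()
    while True:
        data = conn.recv(65536)
        if not data:
            break
        received += len(data)
    elapsed = time.monotonic() - start
    conn.close()
    result["mbps"] = (received * 8 / 1e6) / elapsed

listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]

result = {}
t = threading.Thread(target=sink, args=(listener, result))
t.start()

client = socket.create_connection(("127.0.0.1", port))
for _ in range(TOTAL_MB * 1024 * 1024 // len(PAYLOAD)):
    client.sendall(PAYLOAD)
client.close()
t.join()
listener.close()

print(f"throughput: {result['mbps']:.0f} Mb/s")
```

To test an actual VM-to-VM or tunnel path, you would run the sink on one host and the sender on the other, which is exactly what `iperf3 -s` / `iperf3 -c <host>` packages up for you.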
-
Will running on ESXi impact routing performance or create any additional latency compared to a dedicated hardware platform?
If someone were to run a web server and pfSense on the same machine in the ESXi setup, how would the performance measure up? And would this even be a good plan to begin with, security-wise, with the data literally being stored on the same machine?
I have been running on ESXi for well over a year now with no performance issues. Just last night I went to bare metal to see if I was missing anything in performance, but except for CPU temperature and HDD SMART diagnostics there is no difference.
That said, there is one thing you have to live with: if your ESXi host goes down, so does your entire network (especially if you have multiple VLANs like I do), and until it comes back up you are stranded. This is the only reason I switched to bare metal last night.
-
Just last night I went to bare metal to see if I was missing anything in performance, but except for CPU temperature and HDD SMART diagnostics there is no difference.
I ran into this issue too – I haven't found any useful documentation on writing my own CIM provider. Not that I think I should need one for a bog-standard SuperIO chip from 2006.