ESXi alternative for running pfSense
I have been running pfSense on various ESXi versions for a long time. It always worked flawlessly.
My ESXi host died; bad capacitors. So I decided to move ESXi to a different system and copy all of the VMs over.
Since the motherboard (Supermicro X9SBAA-F) has a Marvell 88SE9230 PCIe SATA 6Gb/s controller, ESXi does not see any disks during install.
I used the sata-xahci .vib to build a custom ESXi image. That way the system runs with the SATA controller.
Long story short: the host constantly spits out syslog messages saying the SATA links operate at 1.5Gb/s and then soft-resets the link, although the VMs on the host run like normal.
To make the system fully functional the way it is supposed to be, and to avoid possible file corruption, I want to switch to XenServer or Proxmox.
Reading a lot of posts about installing on XenServer, I notice lots of customisation regarding TCP offload etc. Maybe also not optimal for high performance.
Supermicro X9SBAA-F - Atom S1260 2C/4T 2GHz - 8GB ECC DDR3
Western Digital Harddisk 3.5" Red WD40EFRX 4TB
Disk speed in an Ubuntu VM on the Supermicro:
root@UbuntuVM:/# hdparm -Tt /dev/sda
Timing cached reads: 1554 MB in 2.00 seconds = 777.21 MB/sec
Timing buffered disk reads: 228 MB in 3.05 seconds = 74.75 MB/sec
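For what it's worth, hdparm only measures sequential reads. As a rough cross-check (a sketch; same /dev/sda as in the hdparm run above), a direct-I/O read with dd should land near the same ~75 MB/s figure:

```shell
# Read 256 MiB straight from the disk, bypassing the page cache,
# and let dd report the throughput. Requires root, like hdparm.
dd if=/dev/sda of=/dev/null bs=1M count=256 iflag=direct
```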
What would be a good alternative to ESXi? (XenServer, Proxmox, headless VirtualBox on Debian, bhyve, etc.)
Links to quality guides would be appreciated.
Before you go dropping ESXi, what version of the VIB are you using?
I used the PowerShell script from V-Front to create the custom ESXi image.
The script uses the latest versions of the .vib files that are requested when creating the custom ISO.
ESXi shows v6.0.0 build-4192238.
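If it helps, the installed VIB and its exact version can be listed from the ESXi shell (assuming the V-Front package name sata-xahci; adjust if yours differs):

```shell
# List installed VIBs and filter for the sata-xahci package.
# The output shows name, version, vendor and acceptance level.
esxcli software vib list | grep -i sata-xahci
```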
The SATA controller works now, but judging by the syslog it does not look that healthy.
It spits out about 910,000 syslog messages every day. It's a constant loop of the following messages:
vmkernel: res 00/00:00:00:00:00/00:00:00:00:00/00 Emask 0x3 (HSM violation)
vmkernel: cdb 12 01 00 00 ff 00 00 00 00 00 00 00 00 00 00 00
vmkernel: cpu3:33184)<3>ata8.00: cmd a0/00:00:00:00:01/00:00:00:00:00/a0 tag 0 pio 255 in
vmkernel: cpu3:33184)<3>ata8.00: irq_stat 0x40000001
vmkernel: cpu3:33184)<3>ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x2
vmkernel: cpu1:33184)<6>ata8: EH complete
vmkernel: cpu1:33184)<6>ata8.00: configured for UDMA/66
vmkernel: cpu2:33184)<6>ata8: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
vmkernel: cpu3:33184)<6>ata8: soft resetting link
Since this motherboard is not on the ESXi hardware compatibility list, this could be expected.
This is the reason I'm looking into a different platform to run pfSense and other VMs on.
ESXi is the easiest in my opinion, and pfSense seems to run with the lowest amount of overhead as a virtual machine.
And did you read the bug info about Marvell?
- See bug info above in "Read before commenting / 4."
Thanks for your reply.
I will try disabling the VT-d setting, if it is enabled by default.
Pretty sure I did not enable this feature, since I do not use pass-through on this system.
The manual for this motherboard does not show any setting to enable/disable VT-d support.
Via the vSphere Client the SATA controller shows up as a 6Gb/s interface and sees the WDC WD40EFRX disk.
Not sure if this is a cosmetic bug. Thanks for now on ESXi; I will dive deeper into this issue, and might just stick with ESXi.
[root@ESXi-Host:~] lspci -v | grep "Class 0106" -B 1
0000:01:00.0 SATA controller Mass storage controller: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller [vmhba0]
Class 0106: 1b4b:9230
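To see which driver ESXi actually bound to that controller (the sata-xahci VIB just maps extra PCI IDs onto the stock ahci driver), the adapter list can be checked from the ESXi shell:

```shell
# Show the storage adapters with their bound driver; with the sata-xahci
# mapping active, vmhba0 should be listed with the ahci driver.
esxcli storage core adapter list | grep -E 'vmhba0|Driver'
```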
The question remains though. :) A solid alternative for ESXi, or pfSense native with jails?
It should run on pretty much any hypervisor, but I would stick with a type 1 over a type 2. VirtualBox, Workstation, etc. are for testing out stuff, not for running VMs 24/7/365.
A solid alternative for ESXi or pfSense native with JAILS.
VMware is best in class. I would replace the mainboard before I dropped ESXi. But if you insist, there are only so many type 1 hypervisors out there. Try them until you find one that solves your problem.
Personally, I ran pfSense on KVM with Open vSwitch.
It worked like a charm, truth be told. Plus, with virtio drivers, the uplinks to the virtual pfSense were 10Gig. And using Open vSwitch, the 802.1Q part was done entirely outside of pfSense, on the virtual switch.
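For reference, that setup can be sketched roughly like this (bridge name, uplink interface, trunk VLANs and the ISO path are made-up examples, not from this thread; virt-install flags vary by version):

```shell
# Create an Open vSwitch bridge and trunk the physical uplink into it,
# so 802.1Q tagging is handled by the vSwitch instead of inside pfSense.
ovs-vsctl add-br br-lan
ovs-vsctl add-port br-lan eth1
ovs-vsctl set port eth1 trunks=10,20,30

# Create the pfSense guest under KVM with virtio NICs, which is what
# gives the 10Gig virtual uplinks mentioned above.
virt-install --name pfsense --memory 2048 --vcpus 2 \
  --disk size=16 --cdrom /path/to/pfSense.iso \
  --network bridge=br-lan,virtualport_type=openvswitch,model=virtio \
  --os-variant freebsd12.0
```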