Netgate 6100 SSD vs Assumptions...
-
Hello everyone - first-time poster here and I am brand-new to Netgate/pfSense. Coming from the EdgeRouter line I am very much at the 'baby steps' stage!
I've purchased a 6100 and before getting it all set up I did the obvious thing and added an SSD and installed a fresh image on ZFS; it was an easy and painless process.
Admittedly I added the first suitable M.2 SSD from my 'old' stock of SSDs. It is a Toshiba 256GB drive that has had a bit of a hard life but is still in reasonable health.
The first thing of note is that the pfSense dashboard lists it as a 205GB drive. I am not sure if that is just a pfSense quirk, a maximum capacity it can use, or simply the way it partitions a (relatively) large drive. Is this something I should ignore or resolve?
I have a few assumptions on how pfSense works with its storage but we all know where assumptions lead... so please correct me.
My first assumption is that pfSense, once booted, runs from RAM alone, so it only uses the SSD for swap, logging and supporting some of the more intensive packages. On that basis I presume a 256GB SSD is massive overkill (albeit providing more lifetime writes) and that sequential throughput is irrelevant, leaving small random reads/writes, low latency and reliability as the things to aim for when picking an SSD?
So, of the SSDs I have available (or could make available), I have four B-key drives to pick from:
- Toshiba 256GB (ie the one mentioned above)
- WD 128GB (same capacity as the 6100 Max but probably still overkill)
- Optane 16GB (high IOPS, low-latency, very reliable but small capacity)
- Optane 64GB* (as above but with better capacity)
I'm favouring Optane as they seem perfect for this role and are exceptionally reliable and long-lived. The 64GB seems a good size (unless my assumptions are wrong) and it would be good to find a use for these discontinued drives.
However, I cannot find anything anywhere about using Optane M.2 SSDs in Netgate appliances. Does anyone happen to know if they work OK?
*Optane M10 series, M.2 PCIe 3.0, 80mm
-
@robbiett said in Netgate 6100 SSD vs Assumptions...:
Admittedly I added the first suitable M.2 SSD from my 'old' stock of SSDs. It is a Toshiba 256GB drive that has had a bit of a hard life but still in reasonable health.
ZFS partitioning shaves off part of the drive.
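If you want to see exactly how the space was carved up, run these from a shell (nvd0 is the usual NVMe device name on these units; yours may differ):
gpart show nvd0
zpool list
gpart shows the boot, swap and ZFS partitions; zpool list shows the size ZFS actually has to work with.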
-
@robbiett said in Netgate 6100 SSD vs Assumptions...:
only used the SSD for swap, logging and supporting some more intensive packages
Those are listed at https://www.netgate.com/supported-pfsense-plus-packages btw. Disk usage almost entirely depends on log volume.
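Once packages are installed you can keep an eye on that with the usual FreeBSD tools from the shell (or Diagnostics > Command Prompt):
df -h
du -sh /var/log
The first shows overall filesystem usage, the second how much of it is logs.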
-
@rcoleman-netgate said in Netgate 6100 SSD vs Assumptions...:
ZFS partitioning shaves off part of the drive.
Ahh, of course - thank you.
@steveits said in Netgate 6100 SSD vs Assumptions...:
Those are listed at https://www.netgate.com/supported-pfsense-plus-packages btw. Disk usage almost entirely depends on log volume.
A helpful link, thank you.
So in my case I anticipate using ntopng & Suricata as the SSD-using packages, but I note your comment about logs driving the storage requirement.
-
Definitely use an SSD then.
To see more details about the drives on the system, try running:
geom disk list
-
@stephenw10 That command explains it all. Thank you.
Anyone have experience with Optane as the SSD or a reason not to try?
-
I have run 16G Optane drives:
[21.02.2-RC][root@6100.stevew.lan]/root: geom disk list
Geom name: nvd0
Providers:
1. Name: nvd0
   Mediasize: 14403239936 (13G)
   Sectorsize: 512
   Mode: r1w1e2
   descr: INTEL MEMPEK1J016GAH
   lunid: 5cd2e4211b870100
   ident: BTBT90920Y0H016N
   rotationrate: 0
   fwsectors: 0
   fwheads: 0

Geom name: nvd1
Providers:
1. Name: nvd1
   Mediasize: 14403239936 (13G)
   Sectorsize: 512
   Mode: r1w1e2
   descr: INTEL MEMPEK1J016GAH
   lunid: 5cd2e49a8cad0100
   ident: PHBT83330GG2016N
   rotationrate: 0
   fwsectors: 0
   fwheads: 0
That was just as a test, since they have the required M.2 slots.
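If you want to check health/wear on whichever drive you pick, smartctl ships with pfSense. The device node below is an assumption; confirm yours with nvmecontrol devlist or the geom output above:
smartctl -a /dev/nvme0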
-
@stephenw10 Outstanding and thank you very much.
I'll just have to find which machine the 64GB Optane is hiding in. I've found the 16GB one but clearly it isn't going to be up to the task.
Presumably I can nuke the existing eMMC image with:
gpart destroy -F mmcsd0
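Though before pulling the trigger I'll confirm that mmcsd0 really is the eMMC and not the new SSD:
gpart show mmcsd0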
Boot order looks good already:
efibootmgr -v
Boot to FW : false
BootCurrent: 001f
Timeout    : 0 seconds
BootOrder  : 0003, 0002, 0001, 0000
Boot0003* bootx64.efi PciRoot(0x0)/Pci(0x11,0x0)/Pci(0x0,0x0)/NVMe(0x1,a3-e5-89-00-04-0d-08-00)/HD(1,GPT,4977acb2-b5db-11ed-8ff5-90ec771b70aa,0x28,0x82000)/File(\efi\boot\bootx64.efi)
            nvd0p1:/efi/boot/bootx64.efi (null)
Boot0002* bootx64.efi PciRoot(0x0)/Pci(0x15,0x0)/USB(0x1,0x0)/HD(1,MBR,0x90909090,0x1,0x10418)/File(\EFI\BOOT\bootx64.efi)
Boot0001* bootx64.efi PciRoot(0x0)/Pci(0x1c,0x0)/eMMC(0x0)/Ctrl(0x0)/HD(1,GPT,edfff163-c4a3-11eb-98a4-44a8422fabb9,0x3,0x64000)/File(\efi\boot\BOOTx64.efi)
            mmcsd0p1:/efi/boot/BOOTx64.efi (null)
Boot0000* PXE-0 Fv(e35c4a77-3d00-4337-a625-b980a9e00f6c)/File(\pxe_0.nsh)
-
Yes, that should work. A new EFI boot entry for the new drive should be added automatically when you install to it.
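If the stale eMMC entry (Boot0001 in your output) is still listed after the reinstall, you should be able to remove it with:
efibootmgr -B -b 0001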
Steve
-
@stephenw10 Thanks Steve.