NanoBSD and SSD
I've read some of the other threads about SSD and NanoBSD and would like to hear more about your experiences and thoughts.
Does anybody know if using a 4GB image on a larger drive (I have two systems, each with one 32GB drive, for example) would wear out the drive, or does the SSD handle reads and writes so the wear is spread evenly across the whole drive? I have seen posts saying both that it will and that it won't.
If using only 4GB instead of the full drive size does cause the drive to wear out faster, does anybody know if there is a fairly easy way either to do a NanoBSD install using a larger drive size (for example 16GB or 32GB), or to alter the size on a running system?
Maybe this would be a good addition for a future NanoBSD release; it seems that many users have similar questions and want to use NanoBSD to increase the stability and MTBF of their systems.
I am currently running two identical systems (hardware-wise): one with NanoBSD 4GB (amd64) and one with the full install ("regular version" running in 32-bit mode). CARP and everything run great, but the system running NanoBSD seems to work slightly harder..?
What are your experiences?
The "firmware" in an SSD that implements the wear-levelling algorithm has no knowledge of what a partition is. It just knows that it has to present to the host a bunch of logical blocks from 1 to n that the host can write to. When asked for the data at logical block m, it has to return the data that was last written to block m, so it has to make sure to keep track of that!
It has no understanding that data in the first few logical blocks might have a partition table. Or that there might be special things called "boot blocks" stored at the beginning, end or middle of the logical block range. Or anything else about partition or file-system meta-data.
You can use a 32GB SSD with just a 4GB set of partitions on it (and actually most of that 4GB is files that are written once and then only read). Initially the 4GB of logical blocks will be on some 4GB of physical blocks. As blocks are re-written, the logical block numbers will get re-allocated to unused/less used physical blocks. Eventually all 32GB of physical blocks will have been written to at least once. Depending on the exact algorithm, it will work out that some physical blocks are not getting any more writes (because they hold a read-only exe for example). It will copy those logical blocks to some more heavily used physical blocks, then reuse the less-used physical blocks for new logical block writes…
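The remapping described above can be sketched as a toy model. This is purely illustrative, not any vendor's actual algorithm: it just shows how a flash translation layer that always picks the least-worn free physical block spreads writes to a small set of logical blocks across the whole physical drive.

```python
import random

class ToyFTL:
    """Toy flash translation layer: maps logical blocks to physical
    blocks, picking the least-worn free physical block on each write.
    Illustrative only -- real SSD firmware is far more complex."""

    def __init__(self, physical_blocks):
        self.erase_counts = [0] * physical_blocks   # wear per physical block
        self.l2p = {}                               # logical -> physical map
        self.free = set(range(physical_blocks))     # unmapped physical blocks

    def write(self, logical_block):
        old = self.l2p.get(logical_block)
        # Simple wear levelling: use the least-worn free physical block.
        new = min(self.free, key=lambda p: self.erase_counts[p])
        self.free.remove(new)
        self.erase_counts[new] += 1
        self.l2p[logical_block] = new
        if old is not None:
            self.free.add(old)  # old copy is stale, reusable after erase

# A "4GB partition" of 4 logical blocks on a "32GB drive" of 32 physical blocks.
ftl = ToyFTL(physical_blocks=32)
for _ in range(1000):
    ftl.write(random.randrange(4))   # the OS only ever touches 4 logical blocks

# Every physical block has taken writes, not just the first four:
print(min(ftl.erase_counts), max(ftl.erase_counts))
```

After 1000 writes to just four logical blocks, the wear counters show that all 32 physical blocks have been used, which is the point of the paragraph above: the partition boundary is invisible to the drive.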
So, no issue with having partitions on only a part of the logical drive space.
I would really like to be able to get reasonable detail about the wear-levelling algorithm used in each brand/model of SSD and CF card. Then a real decision could be made about which one is better suited to a particular file system and application load. For example, Netgate changed from supplying SanDisk CF cards to Kingston (for other reasons), but it would be good to know whether the SanDisk and Kingston algorithms differ, and if so, which one would suit a "typical" pfSense NanoBSD install better. Perhaps they all come from the same factory with the same microcode anyway, and the difference is just the marketing sticker, but it would be nice to know.
Can anyone point us at real information about the wear-leveling algorithm used in particular brands and models of SSD and CF card?
A modern SSD should wear-level across the whole drive. It wouldn't make sense for the partition size to matter: a partition is a logical division in the formatting of data on the drive, and the hardware layer doesn't "understand" such things. It simply sees data coming in, mapped to a particular logical sector, and maps it to physical memory locations as it sees fit. The logical sector the OS sees looks just like a physical sector; the ambiguity comes from the hardware's re-mapping, which neither the OS nor the BIOS sees. Nothing outside the drive (aside from some diagnostic tools that may be able to access such data) should be able to notice the difference.
That being the case, a 4GB partition should still wear across the full accessible data range in a modern drive that does wear leveling.
The main case against that would be if the drive had previously been full of data and was simply re-partitioned, with the newly freed space never wiped. But I think the same thing would happen if the partition were the full size and no TRIM / garbage collection had been done on that partitioned but (to the OS) free space. Remember, the hardware layer itself (the SSD) needs to be "told" in some way that the space the OS has deemed free is actually free. I would think your best-case scenario would be to use whatever tools your SSD manufacturer provides to run a full wipe on the drive before you install NanoBSD.
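That "needs to be told" point can be sketched with the same kind of toy model (again purely illustrative, not any vendor's real garbage collection). Without a TRIM, the drive must treat every logical block it was ever given as live data; a TRIM lets it drop the mapping and return the physical block to the free pool.

```python
class ToyFTLWithTrim:
    """Toy model of why TRIM matters: space the OS considers free still
    counts as live data inside the drive until the drive is told otherwise."""

    def __init__(self, physical_blocks):
        self.l2p = {}                            # logical -> physical map
        self.free = set(range(physical_blocks))  # reusable physical blocks

    def write(self, logical_block):
        if not self.free:
            raise RuntimeError("no free physical blocks: garbage collection "
                               "would have to copy live data before erasing")
        physical = self.free.pop()
        old = self.l2p.get(logical_block)
        self.l2p[logical_block] = physical
        if old is not None:
            self.free.add(old)

    def trim(self, logical_block):
        # The OS tells the drive this logical block no longer holds data,
        # so its physical block goes straight back to the free pool.
        old = self.l2p.pop(logical_block, None)
        if old is not None:
            self.free.add(old)

ftl = ToyFTLWithTrim(physical_blocks=8)
for lb in range(8):
    ftl.write(lb)          # the OS fills the disk once...
# ...then deletes the files. Without TRIM, the drive still sees 8 live blocks:
print(len(ftl.free))       # 0 free physical blocks
for lb in range(8):
    ftl.trim(lb)           # after TRIM, everything is reusable again
print(len(ftl.free))       # 8 free physical blocks
```

This is also why a full wipe (or a manufacturer secure-erase tool) before installing helps: it resets all the mappings at once, the same effect as trimming everything.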