SSD (Solid State Drive) and pfSense (Important)
-
Look up the lifetime data written spec for your SSDs, estimate how much you'll actually be writing, and decide if that is enough years of lifetime for you. I did that and realized that I'll likely be replacing the drives, and the computers they are in, long before I wear them out.
I've been upgrading my systems here by replacing old first-generation small SSDs with newer, faster, larger ones as I find good deals. The old SSDs are getting stuffed into USB 3 drive cases and used as big thumb drives. They all still show years of life left in their SMART stats, and I hate to toss them when good USB 3 carriers are so cheap.
https://smile.amazon.com/gp/product/B01FQ5R0PG/ref=oh_aui_detailpage_o01_s00?ie=UTF8&psc=1
-
Hi
Worth noting that wear leveling and any free space add more life. If you buy an 80GB SSD and are only using around 20GB of space at any one time, then the life of the SSD is essentially increased by around 4 times. This is because all that spare space still gets used to spread out the wear, and it's why a lot of people see much greater lifespans on their SSDs than the specifications might first suggest.
As for your question regarding a 120GB or 64GB SSD: the 120GB SSD is likely to have the longer lifespan, assuming you hold the same amount of data on both, because the 120GB SSD has much more spare space, and that spare space will be used to spread out the wear. It is also quite likely that the 120GB SSD has a fundamentally longer lifespan at the cell level anyway, as the 64GB M.2 will be packed tighter into fewer chips using smaller cells, and those smaller cells can't be written as many times.
It is also likely that both SSDs will be fine and outlast your use for them anyway.
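To put rough numbers on that (every figure below is an illustrative assumption, not a spec for any real drive):

```shell
# Back-of-envelope SSD endurance estimate. Wear leveling spreads
# writes over the whole drive, so total writable bytes scale with
# full capacity, not with the 20GB of data actually held.
CAPACITY_GB=80        # total drive capacity (assumed)
PE_CYCLES=3000        # rated program/erase cycles per cell (assumed)
WRITES_GB_PER_DAY=5   # average daily writes (assumed)
TOTAL_WRITABLE_GB=$(( CAPACITY_GB * PE_CYCLES ))
DAYS=$(( TOTAL_WRITABLE_GB / WRITES_GB_PER_DAY ))
echo "~$(( DAYS / 365 )) years at ${WRITES_GB_PER_DAY} GB/day"   # prints "~131 years at 5 GB/day"
```

With those assumed figures the drive dies of old age long before wear-out, which is Phil's point exactly.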
Regards
Phil
-
@Phil_D:
Hi
Worth noting that wear leveling and any free space adds more life.
…
Phil
Yep - I use Intel 320 160G refurbs from Newegg - $45 ea.
2 per server.
ESXi 5.5 runs on a thumb drive, with a copy on a 2nd thumb drive just in case.
Format the SSDs as 120G – each pfSense instance gets a 30G slice of each of the SSDs for a geom mirror.
One instance is up, the other is off.
I don't do HA, as a VM going down without an underlying host problem is so rare that it's not worth the bother.
From receiving the alert to bringing the other vm up is under 5 minutes, even if I'm asleep.
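A rough sketch of that slice-and-mirror layout at the FreeBSD level (the device names ada0/ada1, the 30G size, and the pf0 label are assumptions for illustration, not the poster's exact commands):

```shell
# Hypothetical: carve a 30G slice out of each of two disks and
# mirror them with geom_mirror. Names here are made up for the sketch.
gpart create -s gpt ada0
gpart create -s gpt ada1
gpart add -t freebsd-ufs -s 30G -l pf0a ada0
gpart add -t freebsd-ufs -s 30G -l pf0b ada1
gmirror load                                      # load the kernel module
gmirror label -v pf0 /dev/gpt/pf0a /dev/gpt/pf0b  # creates /dev/mirror/pf0
newfs /dev/mirror/pf0                             # filesystem on the mirror
echo 'geom_mirror_load="YES"' >> /boot/loader.conf
```

Each additional pfSense instance would get its own pair of slices and its own mirror.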
-
AFAIK we can run pfSense from a RAM disk; the setting is under Advanced System settings.
-
Indeed, you can set /var and /tmp to run from RAM drives to move the vast majority of file writes off the flash media if you're running from CF, for example. Just like Nano does currently.
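Outside of pfSense's GUI checkbox, the same idea on plain FreeBSD is just tmpfs mounts; a minimal sketch, with sizes as assumed examples:

```shell
# RAM-backed /tmp and /var so routine writes never touch flash.
# Sizes are illustrative assumptions; tune them to your log volume.
mount -t tmpfs -o size=64m  tmpfs /tmp
mount -t tmpfs -o size=128m tmpfs /var
df -h /tmp /var   # confirm both now live on tmpfs
```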
Steve
-
Indeed you can set /var and /tmp to run from ram drives to move the vast majority of file writes off the flash media if you're running from CF for example. Just like Nano does currently.
Steve
I would like to see a feature to back up /var and /tmp at an interval and at a controlled shutdown/reboot.
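Something close to that can be improvised with cron today; a sketch, assuming a RAM-backed /var and a hypothetical snapshot path:

```shell
# Hypothetical hourly snapshot of a RAM-backed /var to flash storage,
# so logs survive a reboot. tar is in the FreeBSD base system.
# crontab entry (assumed schedule):
#   0 * * * * tar -C /var -cf /root/var_snapshot.tar .
tar -C /var -cf /root/var_snapshot.tar .
# Run the same command from a shutdown rc hook for controlled reboots.
```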
-
Sorry to bump the old, old post, but it's happened before. :P
This post had me scared, as I had decided to try out a small M.2 drive for my build.
I guess no one caught this before, which is quite surprising. The failed drives in the thread had nothing to do with being SSDs, nor with SSD endurance. The OP and the other failures were all using Kingston SSDNow drives, which are known to be among the most unreliable SSDs on the planet, lol (at least those first ones). They just died; it had nothing to do with how much was written. They just flat out died after a month or 2 or 3, sometimes more. They were terrible.
Just wanted to throw that out there, as I keep coming across this thread while trying to decide which SSD to use. Setting aside the fact that newer SSDs have much better endurance, that has nothing to do with the few failures of the OP and the others; they were using the worst SSD ever made.
-
Why even use an SSD in such an application? It seems like it's just not suited for it. You might get a great speed bump for certain packages that use the storage quite a bit, but if you are not using those packages, it's pointless to have an SSD. An HDD would be a better fit and, what's more, it might well be the cheaper option.
That could be moot with pfSense supporting ZFS in the future, though. You could always create a boot mirror with a couple of USB flash drives. That way you would have a bit of redundancy, and USB drives are cheap compared to SSDs or HDDs.
In terms of performance (all other things being equal): SSD > USB > HDD, sure. But when you add in the cost and other parameters, I would think that for home users at least, the USB route might be the most viable.
Disclaimer: Offer cannot be combined with any other specific requirements that you may or may not have. YMMV. Some use cases may require a particular solution. Void where prohibited.
;)
-
Why even use a SSD in such an application? Seems like it's just not suited for it. You might get a great speed bump for certain packages that use the storage quite a bit, but if you are not using those packages, it's pointless to have an SSD. A HDD would be a better fit and what's more, it might likely be the cheaper option.
That could be moot with pfSense supporting ZFS in the future though. You could always create a boot mirror with a couple of USB flash drives. That way you would have a bit of redundancy and USB drives are cheap compared to SSDs or HDDs
In terms of performance (all other things being equal): SSD > USB > HDD – sure. But when you add in the cost and other parameters, I would think for home users at least, the USB route might be the most viable.
Disclaimer: Offer cannot be combined with any other specific requirements that you may or may not have. YMMV. Some use cases may require a particular solution. Void where prohibited.
;)
It's not about speed but the lack of mechanical components. Everything that moves decays relatively fast. Fewer moving parts means fewer things to go wrong. It's also why CF cards are used so much, and they aren't exactly fast. The same goes for DOMs, IDE flash modules and USB drives. All of them work great since they don't have moving parts (unless you count IBM Microdrives as CF cards).
-
@johnkeates:
It's not about speed but the lack of mechanical components. Everything that moves decays relatively fast. Less moving parts means less things to go wrong.
If only that were true!!
The number of SSD drive failures is what started this thread, which is now 8 pages long. Also, SSDs have a limited number of writes (although I believe that number is now quite high and the drive would probably last for years before hitting it).
I truly believe in using the right tool for the job. SSDs are great, but that doesn't mean they're great in every scenario. Again, it's a moving target too, since future technology might improve upon things.
-
@johnkeates:
It's not about speed but the lack of mechanical components. Everything that moves decays relatively fast. Less moving parts means less things to go wrong.
Only if that were true !!
The number of SSD drive failures is what started this thread which is now 8 pages long. Also SSD's have limited number of writes (although, I believe that number is now quite high and that the drive would probably last for years before hitting that number).
Well yeah, cheap consumer SSDs aren't suitable for this kind of work. That always was the case, same for CF cards – it's why there are 'industrial' CF cards and the normal ones you buy at random electronics stores.
-
@johnkeates:
It's not about speed but the lack of mechanical components. Everything that moves decays relatively fast. Less moving parts means less things to go wrong.
Only if that were true !!
The number of SSD drive failures is what started this thread which is now 8 pages long. Also SSD's have limited number of writes (although, I believe that number is now quite high and that the drive would probably last for years before hitting that number).
I truly believe in using the right tool for the job. SSDs are great, but that doesn't mean it's great in every scenario. Again, it's a moving target too since future technology might improve upon things.
Well, the SSD drive failures were again from pretty bad SSDs that didn't fail due to endurance, but rather due to bad design. It even happened again with a newer version of the SSDNow; they are just bad SSDs.
You're right, they have limited writes; the drive I just got on eBay has 10TB worth of writes before it's dead. Now think about that: 10TB. It would take an awfully long time to use up 10TB in logs lol.
On to the other reasons: power loss protection, no mechanical parts (i.e. less heat, less noise), size (for some), power consumption, etc., etc.
So for me, in my case, my build is in a very small 1U case; a hard drive doesn't work. At best I could fit a 2.5-inch hard drive, which will likely be less reliable than an SSD and cost the same. At the same time, in my case that HDD will generate more heat and use more power than an SSD. My server board has an M.2 slot, which means I do not have to stuff a 2.5in hard drive in there. Even if I did stuff a 2.5 drive in there, and I do mean stuff, the drive mount location barely fits a drive behind my I350-T4, and then I have to cable manage it as well, causing further issues and work.
I know that not every case is like mine; I am just answering the "why an SSD" from my needs.
ALSO, just FYI, these 8 pages are mostly people saying that the OP is wrong and SSDs do not die in 2 months. I fully believe his did; however, that was not due to SSD tech or write endurance, it was due to the fact that he was using a failure-prone SSD. Even low-endurance SSDs that are 16GB have something like 10TBW, and that is a lot. I can't speak for anyone else's setup, but I think it would be awfully hard to write 10TB of log files and such. (Swap I would still turn off.)
Google that SSD and you will see they failed after 2 months in read-only workloads; the SSD was bad, not the load he placed on it.
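To put that 10TBW in perspective (the daily log volume below is an assumed, deliberately generous figure):

```shell
# Years of life for a 10TBW drive if writes are dominated by logs.
TBW_GB=10240          # 10TB of rated writes, expressed in GB
LOGS_GB_PER_DAY=1     # assumed log volume for a home firewall
DAYS=$(( TBW_GB / LOGS_GB_PER_DAY ))
echo "~$(( DAYS / 365 )) years of logging"   # prints "~28 years of logging"
```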
-
@johnkeates:
It's not about speed but the lack of mechanical components. Everything that moves decays relatively fast. Less moving parts means less things to go wrong.
Only if that were true !!
The number of SSD drive failures is what started this thread which is now 8 pages long. Also SSD's have limited number of writes (although, I believe that number is now quite high and that the drive would probably last for years before hitting that number).
I truly believe in using the right tool for the job. SSDs are great, but that doesn't mean it's great in every scenario. Again, it's a moving target too since future technology might improve upon things.
Well, the SSD drive failures were again from pretty bad SSDs that didn't fail due to endurance, but rather due to bad design. It even happened again with a newer version of the SSDNow; they are just bad SSDs.
You're right, they have limited writes; the drive I just got on eBay has 10TB worth of writes before it's dead. Now think about that: 10TB. It would take an awfully long time to use up 10TB in logs lol.
That's the same thing I said.
On to the other reasons: power loss protection, no mechanical parts (i.e. less heat, less noise), size (for some), power consumption, etc., etc.
So for me, in my case, my build is in a very small 1U case; a hard drive doesn't work.
That's just not true. You just have to buy the right board in that case. For pfSense – if your board already has 2 Intel NICs, you don't need an add-on card... which would leave enough space even for a 3.5" drive. Even if you put in an add-on NIC or any other card, you can put in a 2.5" drive in many 1U cases (as you mentioned). As for power consumption, I would be looking at the CPU TDP and other things before I would look at how much power the drive is going to take. Remember now, that for this application (pfSense, where you aren't using a storage-heavy package) your drive is not going to be constantly spinning so as to affect the power consumption that drastically.
at best I could fit a 2.5inch hard drive, that will likely be less reliable then an SSD and cost the same. At the same time, in my case that HDD will generate more heat and use more power than an SSD. My server board has a m.2 slot, that means I do not have to stuff a 2.5in hard drive in there. Even if I did stuff a 2.5 drive in there, and I do mean stuff, the drive location mount barely fits a drive behind my I350 T4, now I have to cable mange it as well, causing further issues and work.
I know that not every case is like mine; I am just answering the "why an SSD" from my needs.
No, you don't have to "stuff" a 2.5" drive; it would fit quite comfortably. Agreed that in case you want to change the drive later, you might have to remove the card and a load of cables just to access the drive, OR you need to choose a better case.
You chose a server board with a M.2 slot (probably for a reason) but what's the TDP of your processor?
I don't know what your requirements are, but in order to save power, consider:
- a J3355 – it's a SoC, fanless (so no worries about noise) with a TDP of 10W
- a N3700 – another SoC, fanless – this I have seen on a server board with quad Intel NICs, so no need for a NIC card – with a TDP of 6W
You would save a lot more power on the CPU than on the mechanical-vs-SSD storage question. Again, like I mentioned in my earlier post as well, it does depend on what you plan to do with the machine. So don't go taking that as gospel, is all I am saying.
-
Yeah, most of the bad rep that SSDs had at one time was due to a number of failures in early models, some of which had bad firmware. One particular 8GB drive looked perfect for pfSense but failed with whatever OS it was running.
You should not have any issues running any recent SSD without any special measures. All the hardware we ship is a 'full' install running from flash/SSD.
However, you can choose to move /var and /tmp to RAM, which will decrease writes a lot. That also allows a normal drive to stop spinning if you've set that as allowed.
Steve
-
When I switched from an HDD (3.5", 5400rpm) to an SSD I saved 7W at the wall, fwiw. Obviously that's not a static number, but it should be a good frame of reference.
Overall SSD > HDD for pfSense in 99% of applications.
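For anyone costing out that 7W over a year (the electricity rate is an assumed example):

```shell
# What 7W running 24/7 works out to per year, at an assumed $0.12/kWh.
awk 'BEGIN {
  watts = 7
  kwh = watts * 24 * 365 / 1000              # annual energy in kWh
  printf "%.1f kWh/yr, ~$%.2f/yr at $0.12/kWh\n", kwh, kwh * 0.12
}'
# prints "61.3 kWh/yr, ~$7.36/yr at $0.12/kWh"
```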
-
When I switched from a HDD (3.5", 5400rpm) to an SSD I saved 7W at the wall, fwiw. Obviously that's not a static number but should be a good frame of reference.
I didn't expect that much. Good to know.
@TS_b: Overall SSD > HDD for pfSense in 99% of applications.
Agreed. But when you factor in cost, I'd say the USB route might be more viable for home use especially with mirrored drives which can be cheaply replaced if and when one fails.
-
When I switched from a HDD (3.5", 5400rpm) to an SSD I saved 7W at the wall, fwiw. Obviously that's not a static number but should be a good frame of reference.
I didn't expect that much. Good to know.
@TS_b: Overall SSD > HDD for pfSense in 99% of applications.
Agreed. But when you factor in cost, I'd say the USB route might be more viable for home use especially with mirrored drives which can be cheaply replaced if and when one fails.
The cost of a decent small SSD isn't very high: <$100 easily, <$50 for a good deal or an off brand. I'd be at least as confident in those as in a <$50 hard disk – I've had far more HDD failures than SSD failures, even in the span of time since SSDs became a thing. (Which is part of why I keep WTF'ing over this thread, which seems to exist in some alternate reality where spinning rust is reliable.) Yeah, an HDD is cheaper for a given volume of storage, but why on earth are you putting a lot of storage on a firewall? A 16G SSD is more than enough space, so what does it matter if a junky 1TB spinner costs less than 1TB of enterprise SSD?
-
When I switched from a HDD (3.5", 5400rpm) to an SSD I saved 7W at the wall, fwiw. Obviously that's not a static number but should be a good frame of reference.
I didn't expect that much. Good to know.
@TS_b: Overall SSD > HDD for pfSense in 99% of applications.
Agreed. But when you factor in cost, I'd say the USB route might be more viable for home use especially with mirrored drives which can be cheaply replaced if and when one fails.
The cost of a decent small SSD isn't very high: <$100 easily, <$50 for a good deal or an off brand. I'd be at least as confident in those as in a <$50 hard disk – I've had far more HDD failures than SSD failures, even in the span of time since SSDs became a thing. (Which is part of why I keep WTF'ing over this thread, which seems to exist in some alternate reality where spinning rust is reliable.) Yeah, an HDD is cheaper for a given volume of storage, but why on earth are you putting a lot of storage on a firewall? A 16G SSD is more than enough space, so what does it matter if a junky 1TB spinner costs less than 1TB of enterprise SSD?
True, which is why I am advocating a USB drive – 16GB or 32GB – even mirrored it would be cheaper than any HDD or SSD. A pair of SanDisk Cruzer 16GB (low profile) cost about $16-$17. Other USB drives could be even cheaper.
Agreed that SSD > USB > HDD as far as pfSense is concerned, since you don't have much to store. But the USB option is a lot cheaper than an SSD, with the same benefit of "no spinning rust". This is for home use – NOT enterprise. Enterprise solutions wouldn't bat an eyelid at a few hundred bucks for SSDs, I know.
USB 2 -- https://www.amazon.com/SanDisk-Cruzer-Low-Profile-Drive-SDCZ33-016G-B35/dp/B005FYNSZA/ -- $8.69 each
USB 3 -- https://www.amazon.com/SanDisk-Ultra-Flash-Drive-SDCZ43-016G-GAM46/dp/B01GK9921C/ -- $ 8.49 each
-
pfSense on ZFS in some sort of redundant raid configuration is a great option for using flash drives as your install media.
Just be sure to have enough RAM to utilize a RAM disk so you don't burn through your writes with logging.
Doing this you will have an install disk array that should last for years of use for <$20.
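A sketch of what that flash-drive mirror might look like (the device names and pool name are assumptions; for the boot pool specifically, the installer builds the bootable ZFS mirror for you):

```shell
# Hypothetical two-way ZFS mirror across cheap USB flash drives.
# da0/da1 and the pool name "pfboot" are made up for this sketch.
zpool create pfboot mirror da0 da1
zpool status pfboot           # both drives should show ONLINE
# If one stick dies later, swap it and resilver:
#   zpool replace pfboot da0 da2
```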
If you don't already have enough RAM to use a RAM disk it doesn't make sense to buy more RAM to be able to use flash drives as install media.
In that case get a small off-brand SSD; you can get them in the $25 range. They should be fine for pfSense uses.
-
Mirrored USB thumb drives? Good grief, just get a small SSD and call it a day, unless your time is literally worth nothing. That mess will almost certainly require manual intervention at the worst possible time.