SSD (Solid State Drive) and pfSense (Important)
-
With an APU2C4 use a 64-bit full install with serial console. Nano support will be ended in a future release anyway.
-
With an APU2C4 use a 64-bit full install with serial console. Nano support will be ended in a future release anyway.
I don't follow the forums. Dropping embedded means I have to source new hardware platforms as mine are running off CF. What's the timetable?
-
I read that 2.4 will be 64-bit only.
There will be a last 2.3.y for 32 bit and it will receive security updates for some time.
-
I have read this thread with great interest. I got my first SSDs late last year. Late adopter, partly because I wasn't in need of any system upgrades for a few years, and partly because of all the articles that suggested running an SSD would make the sky fall, global warming real, the stock market crash, my house burn down, etc. I have 80 MEGAbyte, 120 MEGAbyte, and 254 MEGAbyte SCSI and IDE hard disks from the dark ages of computing that still spin, and still work.
I just finished a Skylake-based pfSense system for home use; it will run a number of packages but not web caching (unless I can be convinced that the aforementioned things won't happen if I run an SSD). I was hoping to either turn off logging or send logs to my NAS. I'm running 8GB of RAM (can go to 16GB easily) and hoping to be able to turn off swap.
First question… Is all that necessary? Yes, I read the thread. But years of SSD-death threads have me programmed to be skeptical.
Second question... I have a two-year-old 120GB SSD (never used, just "older" tech I suppose), a brand new 64GB mSATA drive, and a brand new 64GB m.2 drive. Theoretically the m.2 device would be the most technologically sophisticated in terms of garbage collection, chip quality, etc. So would I be better off using the 120GB (more free space for cell writes) or the more modern device? If I can truly expect years of use out of these SSDs, I would consider using web caching...
Many thanks for this great thread.
-
Look up the lifetime data written spec for your SSDs, figure out about what you will be writing and decide if that is enough years of lifetime for you. I did that and realized that I'll likely be replacing the drives and computers they are in long before I wear them out.
I've been upgrading my systems here by replacing old first generation small SSDs with newer, faster larger ones as I find good deals. Old SSDs are getting stuffed in USB 3 drive cases and used as big thumb drives. The SSDs are all still showing years of life left in the SMART stats and I hate to toss them when good USB 3 carriers are so cheap.
https://smile.amazon.com/gp/product/B01FQ5R0PG/ref=oh_aui_detailpage_o01_s00?ie=UTF8&psc=1
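The "figure out what you will be writing and decide" step above can be sketched out; the TBW rating and the daily write rate below are made-up example numbers, not the specs of any particular drive:

```python
# Rough SSD lifetime estimate from the endurance (TBW) spec.
# Inputs are hypothetical examples: look up the real TBW for your
# drive and measure your actual write rate (SMART stats help here).

def years_of_life(tbw_terabytes: float, gb_written_per_day: float) -> float:
    """Years until the rated endurance is exhausted at a steady write rate."""
    total_gb = tbw_terabytes * 1000          # rated lifetime writes, in GB
    days = total_gb / gb_written_per_day     # days to burn through it
    return days / 365

# Example: a drive rated for 70 TBW, writing ~2 GB per day
print(round(years_of_life(70, 2), 1))  # 95.9 years
```

At rates like that you would indeed replace the computer long before the drive wears out.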
-
Hi
Worth noting that wear leveling and any free space add more life. If you buy an 80GB SSD and are only using around 20GB of space at any one time, then essentially the life of the SSD is increased by around 4 times. This is because all that spare space still gets used to spread out the wear, and it is why a lot of people see much greater life spans on their SSDs than the specifications first suggest.
So your question regarding a 120 or 64GB SSD, the 120GB SSD is likely to have the longest lifespan, assuming you hold the same amount of data on both, because the 120GB SSD has much more spare space, and that spare space will be used to spread out the wear. It is also quite likely the 120GB SSD has a fundamentally longer life span anyway on a cell level, as the 64GB m.2 will be packed tighter into less chips using smaller cells, and those smaller cells can't be written to as many times.
It is also likely that both SSDs will be fine and outlast your use for them anyway.
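The rule of thumb above is simple arithmetic; a quick sketch using the numbers from this post (it ignores write amplification and factory over-provisioning, so treat it as a rough model, not a spec):

```python
# Wear-leveling rule of thumb: with mostly static data, effective
# endurance scales roughly with (capacity / data actually held),
# because writes are spread across all the spare cells.

def life_multiplier(capacity_gb: float, data_held_gb: float) -> float:
    return capacity_gb / data_held_gb

print(life_multiplier(80, 20))   # 4.0 - the 80GB example above
print(life_multiplier(120, 20))  # 6.0 - the 120GB drive holding 20GB
print(life_multiplier(64, 20))   # 3.2 - the 64GB m.2 holding the same
```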
Regards
Phil
-
@Phil_D:
Hi
Worth noting that wear leveling and any free space adds more life.
…
Phil
Yep - I use Intel 320 160G refurbs from Newegg - $45 ea.
2 per server.
ESXi 5.5 runs on a thumb drive, with a copy on a 2nd thumb drive just in case.
Format the SSDs as 120G; each pf instance gets a 30G slice of each SSD for a geom mirror.
One instance is up, other is off.
I don't do HA as a vm going down without there being an underlying host problem is so rare that it's not worth the bother.
From receiving the alert to bringing the other vm up is under 5 minutes, even if I'm asleep.
-
AFAIK we can run pfSense from a RAM disk; there's a setting in the advanced system settings.
-
Indeed you can set /var and /tmp to run from ram drives to move the vast majority of file writes off the flash media if you're running from CF for example. Just like Nano does currently.
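On pfSense that is exposed as a setting in the GUI, but for illustration, the equivalent on a plain FreeBSD system would be fstab entries along these lines (the sizes are arbitrary examples, and anything stored in these filesystems is lost on reboot):

```
# Illustrative /etc/fstab entries putting /tmp and /var in RAM (FreeBSD tmpfs).
# Sizes are made-up examples; contents do not survive a reboot.
tmpfs   /tmp    tmpfs   rw,mode=1777,size=256m  0   0
tmpfs   /var    tmpfs   rw,size=512m            0   0
```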
Steve
-
Indeed you can set /var and /tmp to run from ram drives to move the vast majority of file writes off the flash media if you're running from CF for example. Just like Nano does currently.
Steve
I would like to see a feature to back up /var and /tmp at an interval and at a controlled shutdown/reboot.
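Nothing like that exists out of the box as far as I know, but the idea is easy to sketch. A minimal, hypothetical version in Python (the paths and interval are placeholders; a real implementation would also hook into the shutdown scripts rather than only run a loop):

```python
# Hypothetical sketch: periodically mirror a RAM-backed directory
# (e.g. /var) to persistent storage so its contents survive a reboot.
# Paths and interval are placeholder examples, not pfSense behaviour.
import shutil
import time
from pathlib import Path

def backup(ramdisk_dir: str, persistent_dir: str) -> None:
    """Replace the old snapshot with a fresh copy of the RAM disk."""
    dest = Path(persistent_dir)
    if dest.exists():
        shutil.rmtree(dest)               # drop the stale snapshot
    shutil.copytree(ramdisk_dir, dest)    # write a fresh one

def run_forever(ramdisk_dir: str, persistent_dir: str, interval_s: int = 3600) -> None:
    while True:                           # a controlled shutdown should call backup() too
        backup(ramdisk_dir, persistent_dir)
        time.sleep(interval_s)
```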
-
Sorry to bump the old old post, but it's happened before :P.
This post had me scared, as I had decided to try out a small M.2 drive for my build.
I guess no one caught this before, which is quite surprising. The failed drives in this thread had nothing to do with being SSDs, nor with SSD endurance. The OP and the other failures were all using Kingston SSDNow drives, which are known to be among the most unreliable SSDs on the planet lol (at least those first ones). They just died; it had nothing to do with how much was written. They just flat out died after a month or 2 or 3, sometimes more, but yeah, they were terrible.
Just wanted to throw that out there, as I keep coming across this thread while trying to decide which SSD to use. Setting aside the fact that newer SSDs have much better endurance, endurance has nothing to do with the few failures of the OP and the others; they were using the worst SSD ever made.
-
Why even use an SSD in such an application? It seems like it's just not suited for it. You might get a great speed bump for certain packages that use the storage quite a bit, but if you are not using those packages, it's pointless to have an SSD. An HDD would be a better fit, and what's more, it would likely be the cheaper option.
That could be moot with pfSense supporting ZFS in the future, though. You could always create a boot mirror with a couple of USB flash drives. That way you would have a bit of redundancy, and USB drives are cheap compared to SSDs or HDDs.
In terms of performance (all other things being equal): SSD > USB > HDD, sure. But when you add in the cost and other parameters, I would think for home users at least, the USB route might be the most viable.
Disclaimer: Offer cannot be combined with any other specific requirements that you may or may not have. YMMV. Some use cases may require a particular solution. Void where prohibited.
;)
-
Why even use an SSD in such an application? It seems like it's just not suited for it. You might get a great speed bump for certain packages that use the storage quite a bit, but if you are not using those packages, it's pointless to have an SSD. An HDD would be a better fit, and what's more, it would likely be the cheaper option.
That could be moot with pfSense supporting ZFS in the future, though. You could always create a boot mirror with a couple of USB flash drives. That way you would have a bit of redundancy, and USB drives are cheap compared to SSDs or HDDs.
In terms of performance (all other things being equal): SSD > USB > HDD, sure. But when you add in the cost and other parameters, I would think for home users at least, the USB route might be the most viable.
Disclaimer: Offer cannot be combined with any other specific requirements that you may or may not have. YMMV. Some use cases may require a particular solution. Void where prohibited.
;)
It's not about speed but the lack of mechanical components. Everything that moves decays relatively fast; fewer moving parts means fewer things to go wrong. It's also why CF cards are used so much, and they aren't exactly fast. Same goes for DOMs, IDE flash modules and USB drives. All of them work great since they don't have moving parts (unless you count IBM Microdrives as CF cards).
-
@johnkeates:
It's not about speed but the lack of mechanical components. Everything that moves decays relatively fast; fewer moving parts means fewer things to go wrong.
If only that were true!
The number of SSD drive failures is what started this thread, which is now 8 pages long. Also, SSDs have a limited number of writes (although I believe that number is now quite high and a drive would probably last for years before hitting it).
I truly believe in using the right tool for the job. SSDs are great, but that doesn't mean it's great in every scenario. Again, it's a moving target too since future technology might improve upon things.
-
@johnkeates:
It's not about speed but the lack of mechanical components. Everything that moves decays relatively fast; fewer moving parts means fewer things to go wrong.
If only that were true!
The number of SSD drive failures is what started this thread, which is now 8 pages long. Also, SSDs have a limited number of writes (although I believe that number is now quite high and a drive would probably last for years before hitting it).
Well yeah, cheap consumer SSDs aren't suitable for this kind of work. That was always the case, same for CF cards; it's why there are 'industrial' CF cards and the normal ones you buy at random electronics stores.
-
@johnkeates:
It's not about speed but the lack of mechanical components. Everything that moves decays relatively fast; fewer moving parts means fewer things to go wrong.
If only that were true!
The number of SSD drive failures is what started this thread, which is now 8 pages long. Also, SSDs have a limited number of writes (although I believe that number is now quite high and a drive would probably last for years before hitting it).
I truly believe in using the right tool for the job. SSDs are great, but that doesn't mean it's great in every scenario. Again, it's a moving target too since future technology might improve upon things.
Well, the number of SSD failures was again from pretty bad SSDs that didn't fail due to endurance, but rather due to bad design. It even happened again with a newer version of the SSDNow; they are just bad SSDs.
You're right, they have limited writes. The drive I just got on eBay has 10 TB worth of writes before it's dead. Now think about that: 10 TB. It would take an awfully long time to use up 10 TB in logs lol.
On to the other reasons: power loss protection, no mechanical parts (i.e. less heat, less noise), size (for some), power consumption, etc.
So for me in my case, my build is in a very small 1U case and a hard drive doesn't work; at best I could fit a 2.5" hard drive, which would likely be less reliable than an SSD and cost the same. At the same time, in my case that HDD would generate more heat and use more power than an SSD. My server board has an M.2 slot, which means I don't have to stuff a 2.5" drive in there. Even if I did stuff a 2.5" drive in there, and I do mean stuff, the drive mount barely fits a drive behind my I350-T4, and then I'd have to cable-manage it as well, causing further issues and work.
I know that not every case is like mine; I am just answering the "why an SSD" from my needs.
ALSO, just FYI, these 8 pages are mostly people saying that the OP is wrong and SSDs do not die in 2 months. I fully believe his did; however, that was not due to SSD tech or write endurance, it was because he was using a failure-prone SSD. Even low-endurance 16GB SSDs have something like 10 TBW, and that is a lot. I can't speak for anyone else's setup, but I think it would be awfully hard to write 10 TB of log files and such. (Swap I would still turn off.)
Google that SSD; you will see they failed after 2 months in read-only workloads. The SSD was bad, not the load he placed on it.
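For what it's worth, the "awfully hard to write 10 TB of logs" claim is easy to sanity-check; the daily log volume below is a made-up example, so measure your own:

```python
# Sanity check on a 10 TBW endurance rating against log writes.
# The 50 MB/day log rate is a hypothetical example.
tbw_gb = 10_000                  # 10 TB of rated writes, in GB
log_gb_per_day = 0.05            # ~50 MB of logs per day (assumed)
years = tbw_gb / log_gb_per_day / 365
print(round(years))  # 548 years
```

Even at 100x that assumed log rate, it would still take over five years of nothing but log writes to hit the rating.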
-
@johnkeates:
It's not about speed but the lack of mechanical components. Everything that moves decays relatively fast; fewer moving parts means fewer things to go wrong.
If only that were true!
The number of SSD drive failures is what started this thread, which is now 8 pages long. Also, SSDs have a limited number of writes (although I believe that number is now quite high and a drive would probably last for years before hitting it).
I truly believe in using the right tool for the job. SSDs are great, but that doesn't mean it's great in every scenario. Again, it's a moving target too since future technology might improve upon things.
Well, the number of SSD failures was again from pretty bad SSDs that didn't fail due to endurance, but rather due to bad design. It even happened again with a newer version of the SSDNow; they are just bad SSDs.
You're right, they have limited writes. The drive I just got on eBay has 10 TB worth of writes before it's dead. Now think about that: 10 TB. It would take an awfully long time to use up 10 TB in logs lol.
That's the same thing I said.
On to the other reasons: power loss protection, no mechanical parts (i.e. less heat, less noise), size (for some), power consumption, etc.
So for me in my case, my build is in a very small 1U case and a hard drive doesn't work.
That's just not true. You just have to buy the right board in that case. For pfSense – if your board already has 2 Intel NICs, you don't need an add-on card... which would leave enough space even for a 3.5" drive. Even if you put in an add-on NIC or any other card, you can put a 2.5" drive in many 1U cases (as you mentioned). As for power consumption, I would be looking at the CPU TDP and other things before I looked at how much power the drive is going to take. Remember that for this application (pfSense, where you aren't using a storage-heavy package) your drive is not going to be spinning constantly enough to affect the power consumption that drastically.
At best I could fit a 2.5" hard drive, which would likely be less reliable than an SSD and cost the same. At the same time, in my case that HDD would generate more heat and use more power than an SSD. My server board has an M.2 slot, which means I don't have to stuff a 2.5" drive in there. Even if I did stuff a 2.5" drive in there, and I do mean stuff, the drive mount barely fits a drive behind my I350-T4, and then I'd have to cable-manage it as well, causing further issues and work.
I know that not every case is like mine; I am just answering the "why an SSD" from my needs.
No, you don't have to "stuff" a 2.5" drive; it would fit quite comfortably. Agreed that in case you want to change the drive later, you might have to remove the card and a load of cables just to access the drive, OR you need to choose a better case.
You chose a server board with an M.2 slot (probably for a reason), but what's the TDP of your processor?
I don't know what your requirements are, but in order to save power, consider:
- a J3355 – it's a SoC, fanless (so no worries about noise), with a TDP of 10W
- a N3700 – another SoC, fanless (this I have seen on a server board with quad Intel NICs, so no need for a NIC card), with a TDP of 6W
You would save a lot more power on the CPU than on mechanical vs. SSD storage. Again, like I mentioned in my earlier post as well, it does depend on what you plan to do with the machine. So don't go taking that as gospel, is all I am saying.
-
Yeah, most of the bad rep that SSDs had at one time was due to a number of failures in early models, some of which had bad firmware. One particular 8GB drive looked perfect for pfSense but failed with whatever OS it was running.
You should not have any issues running any recent SSD without any special measures. All the hardware we ship is a 'full' install running from flash/SSD.
However you can choose to move /var and /tmp to RAM which will decrease writes a lot. That also allows a normal drive to stop spinning if you've set that as allowed.
Steve
-
When I switched from an HDD (3.5", 5400rpm) to an SSD I saved 7W at the wall, fwiw. Obviously that's not a static number but it should be a good frame of reference.
Overall SSD > HDD for pfSense in 99% of applications.
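A constant 7W is easy to put a number on; the electricity price below is a made-up example, so plug in your own rate:

```python
# What a constant 7 W saving adds up to over a year.
watts_saved = 7
kwh_per_year = watts_saved * 24 * 365 / 1000   # 61.32 kWh per year
price_per_kwh = 0.15                           # hypothetical $/kWh rate
print(round(kwh_per_year, 1))                  # 61.3
print(round(kwh_per_year * price_per_kwh, 2))  # 9.2 (dollars/year at that rate)
```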
-
When I switched from an HDD (3.5", 5400rpm) to an SSD I saved 7W at the wall, fwiw. Obviously that's not a static number but it should be a good frame of reference.
I didn't expect that much. Good to know.
@TS_b: Overall SSD > HDD for pfSense in 99% of applications.
Agreed. But when you factor in cost, I'd say the USB route might be more viable for home use, especially with mirrored drives which can be cheaply replaced if and when one fails.