SSD (Solid State Drive) and pfSense (Important)
-
I have a few older SSDs running in some pfSense boxes now, but I didn't buy them back then because it was a roll of the dice.
I waited and bought several of the ones proven not to die… Samsung drives.
-
Nah - years ago SSDs were generally flaky. Sure, there were some OK ones, but the problem is you didn't really have a way of knowing they were OK until they'd had some time to prove themselves.
Yeah, in the early days Intel ones were pretty good but expensive, and Toshiba ones were reliable but slow. Once Samsung came around, they started making good, reliable ones.
I started using SSDs in late 2009, and all of mine from that era died one way or another. Possibly because I went with OCZ :p
Pretty much the same experience.
I was an early adopter - went with OCZ and greatly regretted it.
Even in reference to those drives, the original thread was nonsense.
The OCZ issues were related to bad controller logic and had nothing to do with wear.
This thread runs in lockstep with another nonsense mantra that has been repeated ad nauseam by pros who should know better: RAID 5 is dead.
That idea was born from one math-challenged writer who used published warranty information from vendors to establish that an SSD-based RAID 5 array above a certain modest size would have a 50% chance of suffering a URE (unrecoverable read error) before it could complete a rebuild.
I won't bore everyone with the details, but his core flaw was confusing what the vendor would warranty with actual failure data.
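(For anyone who does want the details: the arithmetic behind that claim looks roughly like the sketch below. The 1-in-10^14 URE rate and the 12 TB rebuild size are assumptions pulled from typical spec sheets for illustration - and that URE figure is exactly the warranty/spec-sheet number, not measured failure data, which is the conflation being criticized here.)

```python
import math

# Rough sketch of the "RAID 5 is dead" arithmetic (illustrative only).
ure_per_bit = 1e-14        # spec-sheet unrecoverable read error rate, per bit
rebuild_read_tb = 12       # data that must be re-read to rebuild the array (assumed)

bits_read = rebuild_read_tb * 1e12 * 8
# Probability of at least one URE while reading that many bits:
p_ure = -math.expm1(bits_read * math.log1p(-ure_per_bit))
print(f"Chance of >=1 URE during rebuild: {p_ure:.0%}")   # ~62% for 12 TB
```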
Marketing of drives is largely about product segmentation.
Vendors adjust what they will cover based not on failure rates so much as on the need to distinguish their 'gold' and 'platinum' lines.
Enterprise purchasers are notorious for making buying decisions based on an 'enterprise' label, with little thought given to what actually backs it up.
The reads/writes covered by warranty have nothing to do with tested or projected failure rates.
If they did, then using this guy's math, it would be conclusive proof that ALL RAID 5 arrays are failure prone, and that they nearly all fail on every rebuild - which obviously is not correct.
RAID 5, along with every other parity-based RAID type, is a poor choice for most of us here, simply because rebuild times depend less on drive speed than on controller speed. A 20x faster SSD doesn't come with a 20x faster controller, so you still have rebuild times of hours at least, and days on larger arrays.
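(To put rough numbers on the rebuild-time point - a back-of-the-envelope sketch where the drive size and controller throughput are assumed values, not anyone's published figures.)

```python
# Parity rebuilds are gated by how fast the controller can read the surviving
# members and recompute parity, not by how fast one replacement drive can write.
drive_tb = 8                  # capacity of the failed member (assumed)
controller_rate_mb_s = 150    # effective rebuild throughput (assumed)

rebuild_seconds = (drive_tb * 1e12) / (controller_rate_mb_s * 1e6)
print(f"~{rebuild_seconds / 3600:.1f} hours to rebuild one {drive_tb} TB member")
# ~14.8 hours - and a 20x faster SSD doesn't shorten this if the controller
# is still the bottleneck.
```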
I only use spinning rust for bulk storage on second-tier systems.
Even our mainline servers use all-SSD storage, and back up to other SSD-based arrays.
The only spinning rust I've purchased recently was some HGST 1 TB drives to act as bulk storage on a couple of pfSense boxes that are running under vSphere on remote-site servers.
-
Yeah - I have exactly two of those HGST 1 TB drives running in RAID for data storage, and they seem to be solid for that purpose.
Although on the surface I may seem anti-SSD, I'm not. I just like the ones with a super solid track record.
I'm no longer an early adopter of new bleeding-edge tech.
Every time I've lived on the bleeding edge of tech, I've gotten bloody.
I'm more of a wait-and-watch sort of guy now. I think SSDs have come a long way since the beginning of this thread.
-
Even our mainline servers use all-SSD storage, and back up to other SSD-based arrays.
…
I WISH I had the budget for SSDs for mainline storage. Write endurance really doesn't tend to be a problem for file-server applications, as it's usually write-once, lie-dormant-for-years type activity.
For me it comes down to cost.
I have a 12-drive ZFS array in the basement with two striped six-drive RAIDZ2 vdevs. (I happen to be one of those who think only one redundant drive is risky.)
This is slower than SSDs for sure, but I speed it up with gobs of RAM for cache, 1 TB of SSD read cache (L2ARC), and a couple of mirrored SSDs dedicated to speeding up the ZIL (intent log).
48 TB in total. Doing that with SSDs would have been awesome, but it would have put it in the stratospheric realm cost-wise for me.
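(For anyone doing the math at home, a quick sketch of where the 48 TB comes from, assuming 6 TB drives - RAIDZ2 gives up two drives per vdev to parity.)

```python
# Usable space for two striped 6-drive RAIDZ2 vdevs (raw, before ZFS overhead).
vdevs = 2
drives_per_vdev = 6
parity_per_vdev = 2       # RAIDZ2
drive_tb = 6              # assumed drive size

usable_tb = vdevs * (drives_per_vdev - parity_per_vdev) * drive_tb
print(f"~{usable_tb} TB usable")   # ~48 TB, matching the pool described above
```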
-
Howdy didly doo,
I have been really fond of pfSense, and I have been reading up on this, forgive me, "old" topic.
My hardware isn't quite as sophisticated as what some of you have; I have 4 x SCB-7968A
(8x Gbit uplink, from Aewin). I have been using SSDs, and I should have read this topic ages ago,
since every router I have equipped with an SSD has failed. At first I thought it was the webGUI interface, but it turns out it's the number of writes that is killing the router all by itself.
I have to use full logging mode since they are used in a datacenter, so each 1 Gb copper interface has at least 16 IP addresses assigned.
Together with that, I had made modifications to read out every IP's data usage and have the router deliver me an email with that. Now, I have not used any of the "latest" models of SSD, but I'm surely wondering if it's the way to go for such bandwidth and logging requirements.
Since I have 4 of these devices, I'm wondering if I should go with platter drives… or has someone else experienced the same thing?
-
I'd suggest you use a regular hard drive for such write-intensive logging activity.
A regular 7200 RPM SATA drive is more than fast enough for pfSense.
If you are really picky about HDD speed, then get one of those 10K RPM or 15K RPM SAS drives.
-
I have to use full logging mode since they are used in a datacenter, so each 1 Gb copper interface has at least 16 IP addresses assigned.
…
Are you logging locally, or to syslog?
Unless you've hacked the circular logging off, there isn't much point to local logging if you're pushing that much data with granular logging.
There is an option to not log anything locally, which is a good idea in your use case, assuming you're shipping logs elsewhere.
-
So I bought a PC Engines APU2C4 system with a 16 GB mSATA SSD in it. I'm about to install pfSense but I'm not really sure if I need to go NanoBSD or the full install. Since this SSD is one of the newer ones, I believe I shouldn't be too concerned about write failures, but then again the only packages (for now) that I'm using are Squid and Lightsquid. Squid is well known to write a lot, as that is its purpose (caching). So with that, which version do you think I should go with?
-
With an APU2C4 use a 64-bit full install with serial console. Nano support will be ended in a future release anyway.
-
With an APU2C4 use a 64-bit full install with serial console. Nano support will be ended in a future release anyway.
I don't follow the forums. Dropping embedded means I have to source new hardware platforms as mine are running off CF. What's the timetable?
-
I read that 2.4 will be 64-bit only.
There will be a last 2.3.y release for 32-bit, and it will receive security updates for some time.
-
I have read this thread with great interest. I got my first SSDs late last year. I was a late adopter, partly because I wasn't in need of any system upgrades for a few years, and partly because of all the articles that suggested running an SSD would make the sky fall, make global warming real, crash the stock market, burn my house down, etc. I have 80 MEGAbyte, 120 MEGAbyte, and 254 MEGAbyte SCSI and IDE hard disks from the dark ages of computing that still spin, and still work.
I just finished a Skylake-based pfSense system for home use; it will run a number of packages but not web caching (unless I can be convinced that the aforementioned things won't happen if I run an SSD). I was hoping to either turn off logging or send logs to my NAS. I'm running 8 GB of RAM (can go to 16 GB easily) and hoping to be able to turn off swap.
First question… Is all that necessary? Yes, I read the thread. But years of SSD-death threads have me programmed to be skeptical.
Second question... I have a two-year-old 120 GB SSD (never used, just "older" tech I suppose), a brand new 64 GB mSATA drive, and a brand new 64 GB M.2 drive. Theoretically the M.2 device would be the most technologically sophisticated in terms of garbage collection, chip quality, etc. So would I be better off using the 120 GB (more free space for cell writes) or the more modern device? If I can truly expect years of use out of these SSDs, I would consider using web caching...
Many thanks for this great thread.
-
Look up the lifetime data written (TBW) spec for your SSDs, figure out roughly what you will be writing, and decide if that is enough years of lifetime for you. I did that and realized that I'll likely be replacing the drives and the computers they are in long before I wear them out.
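As a sketch of that calculation - the TBW rating and the daily write volume below are made-up numbers; plug in your drive's spec sheet and your own monitoring data:

```python
# Endurance estimate: rated lifetime writes divided by your actual write rate.
tbw_rating_tb = 70         # drive's rated total bytes written (hypothetical)
writes_gb_per_day = 20     # what you actually write per day (hypothetical)

years = (tbw_rating_tb * 1000) / writes_gb_per_day / 365
print(f"~{years:.1f} years before the TBW rating is exhausted")   # ~9.6 years
```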
I've been upgrading my systems here by replacing old first-generation small SSDs with newer, faster, larger ones as I find good deals. The old SSDs are getting stuffed into USB 3 drive cases and used as big thumb drives. They are all still showing years of life left in the SMART stats, and I hate to toss them when good USB 3 carriers are so cheap.
https://smile.amazon.com/gp/product/B01FQ5R0PG/ref=oh_aui_detailpage_o01_s00?ie=UTF8&psc=1
-
Hi
Worth noting that wear leveling and any free space add more life. If you buy an 80 GB SSD and are only using around 20 GB of space at any one time, then essentially the life of the SSD is increased by around four times. This is because all that spare space still gets used to spread out the wear, which is why a lot of people see much greater lifespans from their SSDs than the specifications might first suggest.
So regarding your question about the 120 GB versus 64 GB SSD: the 120 GB SSD is likely to have the longer lifespan, assuming you hold the same amount of data on both, because the 120 GB SSD has much more spare space, and that spare space will be used to spread out the wear. It is also quite likely the 120 GB SSD has a fundamentally longer lifespan at the cell level anyway, as the 64 GB M.2 will be packed tighter into fewer chips using smaller cells, and those smaller cells can't be written as many times.
It is also likely that both SSDs will be fine and outlast your use for them anyway.
Regards
Phil
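(A rough sketch of the wear-leveling arithmetic Phil describes. The per-cell program/erase cycle count is an assumed, typical figure, and the model is deliberately simplified - real drives add over-provisioning and write amplification on top of this.)

```python
# With wear leveling, writes get spread across all cells not pinned by static
# data, so unused capacity roughly multiplies the drive's usable write life.
capacity_gb = 80
data_held_gb = 20
pe_cycles = 3000           # program/erase cycles per cell (assumed)

ideal_total_writes_tb = capacity_gb * pe_cycles / 1000
life_multiplier = capacity_gb / data_held_gb
print(f"Ideal write ceiling: ~{ideal_total_writes_tb:.0f} TB")
print(f"Roughly {life_multiplier:.0f}x the life of the same drive kept full")
```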
-
@Phil_D:
Hi
Worth noting that wear leveling and any free space adds more life.
…
Phil
Yep - I use Intel 320 160 GB refurbs from Newegg - $45 ea.
2 per server.
ESXi 5.5 runs on a thumb drive, with a copy on a second thumb drive just in case.
Format the SSDs as 120 GB - each pfSense instance gets a 30 GB slice of each of the SSDs for a gmirror (GEOM mirror).
One instance is up, the other is off.
I don't do HA, as a VM going down without an underlying host problem is so rare that it's not worth the bother.
From receiving the alert to bringing the other VM up takes under 5 minutes, even if I'm asleep.
-
AFAIK we can run pfSense from a RAM disk; the setting is in the advanced system settings.
-
Indeed, you can set /var and /tmp to run from RAM disks to move the vast majority of file writes off the flash media if you're running from CF, for example. Just like Nano does currently.
Steve
-
Indeed, you can set /var and /tmp to run from RAM disks to move the vast majority of file writes off the flash media if you're running from CF, for example. Just like Nano does currently.
Steve
I would like to see a feature to back up /var and /tmp at an interval, and at a controlled shutdown/reboot.
-
Sorry to bump the old, old post, but it's happened before :P
This post had me scared, as I had decided to try out a small M.2 drive for my build.
I guess no one caught this before, which is quite surprising. The failed drives in this thread had nothing to do with being SSDs, nor with SSD endurance. The OP's failures, and the others', were all Kingston SSDNow drives, which are known to be among the most unreliable SSDs on the planet, lol (at least those first ones). It had nothing to do with how many writes; they just flat out died, after a month or two or three, sometimes more. But yeah, they were terrible.
Just wanted to throw that out there, as I keep coming across this thread while trying to decide which SSD to use. Setting aside the fact that newer SSDs have much better endurance, endurance has nothing to do with the few failures of the OP and the others - they were using one of the worst SSDs ever made.
-
Why even use an SSD in such an application? It seems like it's just not suited for it. You might get a great speed bump for certain packages that use the storage quite a bit, but if you are not using those packages, it's pointless to have an SSD. An HDD would be a better fit, and what's more, it would likely be the cheaper option.
That could be moot with pfSense supporting ZFS in the future, though. You could always create a boot mirror with a couple of USB flash drives. That way you would have a bit of redundancy, and USB drives are cheap compared to SSDs or HDDs.
In terms of performance (all other things being equal): SSD > USB > HDD - sure. But when you add in the cost and other parameters, I would think that for home users at least, the USB route might be the most viable.
Disclaimer: Offer cannot be combined with any other specific requirements that you may or may not have. YMMV. Some use cases may require a particular solution. Void where prohibited.
;)