SSD (Solid State Drive) and pfSense (Important)
-
I assume "3D" refers to TLC…
I'm glad drive manufacturers are getting more bang for the buck. haha.
Actually, they aren't bad from what I've read, but I've yet to test anything other than SLC personally.
I will soon, when one of my drives fails.
vNAND is TLC, except it has about 40k write cycles instead of the 1k-3k of normal TLC. SLC has about 100k write cycles, so that puts vNAND between a third and half of SLC, but with several times the density. Once you include the increased density, vNAND is the same as or better than SLC for the total number of bytes written.
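As a back-of-envelope check on that claim, here's the arithmetic as a rough sketch: the cycle counts come from the post above, while the 3x density factor and the capacities are just assumptions for illustration.

```python
# Total endurance ~ capacity x P/E cycles (ignoring write amplification
# and over-provisioning). Cycle counts from the post; density is assumed.
def total_writes_tb(capacity_tb: float, pe_cycles: int) -> float:
    return capacity_tb * pe_cycles

slc   = total_writes_tb(0.25, 100_000)   # hypothetical 256 GB SLC drive, ~100k cycles
vnand = total_writes_tb(0.75, 40_000)    # ~3x the density at ~40k cycles (assumed)
print(f"SLC: {slc:.0f} TB, vNAND: {vnand:.0f} TB")   # 25000 vs 30000 TB written
```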
-
I'm careful and slow to adopt new SSD tech because:
The love affair normally starts with customers saying the drives are perfect/fast/amazing, and anyone who doesn't have one is just using old junk… and then…
Then later you hear about either speed starting to crawl or drives dying.
Then the once-perfect drive, two years later when everyone has one, becomes the drive you must avoid at all costs because of firmware issues, wear-leveling issues, or something else.
So I prefer drives with a couple of years of track record, because a couple of years seems to be when you start hearing the horror stories.
-
I totally agree. My current rule is that an SSD must be out for about a year before I purchase it. I purchased an 840 EVO right after it came out and eventually had to flash it because of the performance issue. I waited for the 850 EVO to be about a year old. One site has a year-old 850 EVO with 1.5 PB written, and the drive is still going strong with 60% of its reserve pool left.
-
I totally appreciate all your suggestions and insights on this. As a newbie in the pfSense world who will be building my own router in the near future, all of this is worth more than money can buy. Thanks and gratitude. Of course, I will continue to monitor this thread and others. With that, I'm going to be on the hunt for some SLC SSDs from Intel if I'm able to find some. ;D
-
I'm going to be on the hunt for some SLC SSDs from Intel if I'm able to find some. ;D
Were you able to find some? I'm looking for some storage for my new build too :)
-
Were you able to find some? I'm looking for some storage for my new build too
Intel X25-E 32 GB SATA SLC SSD 2.5" Drive (SSDSA2SH032G101) for ~$345
Perhaps a SATA-DOM SSD (SLC) would be an alternative.
SATA DOM SSD, SLC, 7-PIN, Horizontal - 32 GB for ~135 €
-
Hi!
I bought one like that for my pfSense box. Not yet received.
It is a 32 GB mSATA SLC SSD, about 30 EUR:
http://www.aliexpress.com/item/New-Mini-PCIE-SSD-mSATA-32GB-SLC-Flash-Hard-Solid-State-Disk-For-PC-Laptop/1875478645.html
I asked the manufacturer, and they confirmed the drive supports the TRIM command.
You should confirm availability with the sellers, as my first order got canceled due to unavailability.
-
Hello,
I've registered here to clarify some things.
I know it's an old topic, but since it has stretched over four years, this should be the right place. First of all, write cycles are not as much of a concern as you might think; it's just a different perception. In fact, you really need to try hard to artificially wear out a normal modern SSD with writes faster than a regular HDD wears out mechanically.
Now, the first thing you have to understand is that an SSD automatically shuffles writes onto different cells and tries to distribute write counts evenly across every cell.
This also means the more free space you have, the better off you are. You can indeed wear out an SSD if you have very little free space and heavy writing: say you leave 2 GB free, the rest is static data, and you write several gigabytes a day, then you'll indeed be out of luck quickly.
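To put a rough number on that, here is a minimal sketch of the lifetime arithmetic, assuming wear levelling can only spread writes across the free area. The P/E cycle count, daily write volume, and write amplification figure are all assumptions for illustration, not specs for any particular drive.

```python
# Crude lifetime estimate: years until the freely-rotatable cells hit their P/E limit.
def lifetime_years(free_gb: float, pe_cycles: int, daily_writes_gb: float,
                   write_amplification: float = 2.0) -> float:
    total_writable_gb = free_gb * pe_cycles
    worn_per_year_gb = daily_writes_gb * write_amplification * 365
    return total_writable_gb / worn_per_year_gb

print(f"{lifetime_years(2, 3000, 5):.1f} years")    # only 2 GB free, 5 GB/day: ~1.6 years
print(f"{lifetime_years(100, 3000, 5):.1f} years")  # 100 GB free, same load: ~82 years
```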
Second, SSD RAID does NOT mirror the write pattern; that's nonsense, and whoever wrote that has no clue what they are talking about.
Which cell data is stored in is entirely up to the controller inside the SSD; it will pick the freshest cell and write to it. Now, the best thing you can do is use a big SSD, 128 GB or bigger, even if you only need a tiny bit of it.
The money savings are negligible here, but you make sure it never wears out. It doesn't even matter whether you partition it or not; any free space is used for garbage collection.
Forget terms like fragmentation and partitions in a physical area; those don't exist anymore, they are just virtual.
Let's say you have two partitions and four cells to write to; then it's entirely possible that
partition 1 is on cells 1 and 3, and partition 2 is on cells 2 and 4.
After a few delete and write cycles this could change to partition 1 on cells 3 and 4 and partition 2 on cells 1 and 2, so it really doesn't matter.
The more free space you have, the longer the lifetime in terms of write cycles.
Ideally, you'd log to RAM and archive the logs as compressed files from RAM to disk, or use a filesystem that compresses automatically. But even without that, on modern drives just use a 128 GB or a cheap 256 GB drive, or take two to mirror, and you're fine.
Swap, of course, should be used only in emergencies, so swappiness should be reduced to a minimum. But usually you should not be able to kill your SSD with your firewall logs :) And even if you do, don't worry, you will still be able to read; your firewall will crash, yes, but you can still mirror the data onto a new drive and proceed.
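For what it's worth, here is a minimal sketch of that "log to RAM, archive compressed copies to disk" idea. The paths are hypothetical and pfSense has its own RAM-disk option that works differently; this just illustrates the pattern.

```python
import gzip
import shutil
import time
from pathlib import Path

RAM_LOG_DIR = Path("/var/log-ram")      # assumed tmpfs/md-backed mount
ARCHIVE_DIR = Path("/var/log-archive")  # lives on the SSD

def archive_logs() -> None:
    """Compress each RAM-held log onto the SSD, then truncate the RAM copy."""
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    for log in RAM_LOG_DIR.glob("*.log"):
        target = ARCHIVE_DIR / f"{log.stem}-{stamp}.log.gz"
        with log.open("rb") as src, gzip.open(target, "wb") as dst:
            shutil.copyfileobj(src, dst)   # far fewer bytes actually hit the flash
        log.write_bytes(b"")               # truncate the RAM copy after archiving

if __name__ == "__main__":
    archive_logs()                         # e.g. run hourly from cron
```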
-
@b.o.f.h.:
Write cycles are not as much of a concern as you might think; you really need to try hard to artificially wear out a normal modern SSD with writes faster than a regular HDD wears out mechanically. […]
Yeah, write cycles may have been a concern when this thread was first started, with low-end SSDs, USB drives, CF cards and DOMs, but that really is not the case anymore.
Firstly, pfSense - even as a full, non-NanoBSD install - really doesn't write that much. Unless you have a Squid cache or another write-heavy package, it's only a matter of text log entries, and unless you've tweaked the log settings to be ridiculously verbose, this is minor.
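Here's a quick, assumed-numbers estimate of what plain-text logging amounts to; the log rate, entry size, and endurance rating below are illustrative guesses, not measurements.

```python
lines_per_sec = 10       # assumed: a fairly chatty firewall log
bytes_per_line = 200     # assumed: average size of a text log entry
daily_gb = lines_per_sec * bytes_per_line * 86_400 / 1e9

rated_tbw = 75           # assumed: a modest consumer-SSD endurance rating, in TB written
years = rated_tbw * 1000 / daily_gb / 365
print(f"{daily_gb:.2f} GB/day -> roughly {years:.0f} years to reach {rated_tbw} TBW")
```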
I have been using pfSense for ~5 years now, and in that time I've never had it installed on non-flash media, and I've never had any premature failure of flash devices.
I really feel like this is a legacy concern from back in the day when SSDs had higher write amplification and lower total write cycle counts than they do today.
-
say you leave 2 GB free, the rest is static data,
Part of any SSD or mSATA drive will be used by the controller as cache, and if there is too little free space left it won't really work well anymore. All in all, whether mSATA or a regular SSD, flash storage is a huge gain for smaller appliances, with an eye toward heat, noise and speed. Nothing to complain about anymore with that kind of storage, in my eyes. And the new M.2 SSDs will surely also be a good match for that kind of use.
-
Actually this thread was 99% bullshit 5 years ago too, unless you bought absolute gutter-trash drives. I recently re-purposed a pfSense box with an Intel G2 in it; the wear level was about 5% after 4 years, and that drive had already spent some time in a desktop PC before.
Keep in mind that every piece of hardware can suffer random failures.
-
Nah - years ago SSDs were in general flaky. Sure, there were some OK ones, but the problem is you didn't really have a way of knowing they were OK until they had some time to prove themselves.
-
Nah - years ago SSDs were in general flaky. Sure, there were some OK ones, but the problem is you didn't really have a way of knowing they were OK until they had some time to prove themselves.
Yeah, in the early days Intel ones were always pretty good but expensive, and Toshiba ones were pretty reliable but slow; once Samsung came around, they started making good, reliable ones.
I started using SSDs in late 2009, and all of mine from that era died one way or another. Possibly because I went with OCZ :p
-
I have a few old SSDs running in some pfSense boxes, but I didn't buy them back then because it was a roll of the dice.
I waited and bought several of the ones proven not to die… Samsung drives.
-
Nah - years ago SSDs were in general flaky. Sure, there were some OK ones, but the problem is you didn't really have a way of knowing they were OK until they had some time to prove themselves.
Yeah, in the early days Intel ones were always pretty good but expensive, and Toshiba ones were pretty reliable but slow; once Samsung came around, they started making good, reliable ones.
I started using SSDs in late 2009, and all of mine from that era died one way or another. Possibly because I went with OCZ :p
Pretty much the same experience.
I was an early adopter - went with OCZ and greatly regretted it.
Even in reference to those drives, the original thread was nonsense.
The OCZ issues were related to bad controller logic and had nothing to do with wear.
This thread runs in lockstep with another nonsense mantra that has been repeated ad nauseam by pros who should know better: RAID 5 is dead.
That idea was born from one math-challenged writer who used published warranty information from vendors to establish that an SSD-based RAID 5 array above a certain modest size would have a 50% chance of suffering a URE (unrecoverable read error) before it could complete a rebuild.
I won't bore everyone with the details, but his core flaw was confusing what the vendor would warranty with actual failure data.
Marketing of drives is largely about product segmentation.
Vendors adjust what they will cover based not on failure rates so much as on the need to distinguish their 'gold' and 'platinum' lines.
Enterprise purchasers are notorious for making buying decisions based on an 'enterprise' label, with little thought given to what backs it.
The reads/writes covered by warranty have nothing to do with tested or projected failure rates.
If they did, then using this guy's math would be conclusive proof that ALL RAID 5 arrays are failure-prone and that nearly all of them fail on every rebuild - which obviously is not correct.
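For context, here is roughly the spec-sheet arithmetic that "RAID 5 is dead" argument rests on, assuming a quoted rate of one unrecoverable bit error in 10^14 and independent errors. As argued above, these warranty-style figures shouldn't be read as measured failure data; this just shows where the 50% number comes from.

```python
import math

def p_ure_during_rebuild(data_read_tb: float, ure_rate_bits: float = 1e14) -> float:
    """Chance of at least one URE while reading data_read_tb terabytes,
    assuming independent bit errors at one error per ure_rate_bits bits."""
    bits_read = data_read_tb * 1e12 * 8
    return -math.expm1(bits_read * math.log1p(-1.0 / ure_rate_bits))

# e.g. a rebuild that must read ~8 TB from the surviving drives:
print(f"{p_ure_during_rebuild(8):.0%}")   # ~47% under these assumptions
```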
RAID 5, along with every other parity-based RAID type, is a poor choice for most of us here, simply because rebuild times don't depend on drive speed as much as on controller speed, and a 20x faster SSD doesn't come with a 20x faster controller, so you have rebuild times of hours at least, and days on larger arrays.
I only use spinning rust for bulk storage on second-tier systems.
Even our mainline servers use all SSDs for storage, and back up to other SSD-based arrays.
The only spinning rust I've purchased recently was some HGST 1 TB drives to act as bulk storage on a couple of pfSense boxes that are running under vSphere on remote-site servers.
-
Yeah - I have exactly 2 of those HGST 1 TB drives running in RAID for data storage, and they seem to be solid for that purpose.
Although on the surface I seem anti-SSD, I'm not. I just like the ones with a super solid track record.
I'm no longer an early adopter of new bleeding-edge tech.
Every time I've lived on the bleeding edge of tech, I've gotten bloody.
I'm more of a wait and watch sort of guy now. I think SSD has come a long way since the beginning of this thread.
-
Even our mainline servers use all SSDs for storage, and back up to other SSD-based arrays. […]
I WISH I had the budget for SSDs for mainline storage. Write endurance really doesn't tend to be a problem for file-server applications, as it is usually write-once, lie-dormant-for-years type activity.
For me it comes down to cost.
I have a 12-drive ZFS array in the basement with two striped 6-drive RAIDZ2 vdevs. (I happen to be one of those who think only one redundant drive is risky.)
This is slower than SSDs for sure, but I speed it up by having gobs of RAM for cache, 1 TB of SSD read cache, and a couple of mirrored SSDs dedicated to speeding up the ZIL (the ZFS intent log).
48 TB in total. Doing that with SSDs would have been awesome, but it would have put it in the stratospheric realm cost-wise for me.
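As a sanity check on the capacity, here's the usable-space arithmetic for two striped 6-drive RAIDZ2 vdevs. The post doesn't say what size the drives are, so both drive sizes below are assumptions.

```python
def raidz2_usable_tb(vdevs: int, drives_per_vdev: int, drive_tb: float) -> float:
    """RAIDZ2 spends two drives' worth of parity per vdev; the rest is usable
    (ignoring ZFS metadata overhead and the TB/TiB difference)."""
    return vdevs * (drives_per_vdev - 2) * drive_tb

print(raidz2_usable_tb(2, 6, 6.0))  # 6 TB drives: 48 TB usable
print(raidz2_usable_tb(2, 6, 4.0))  # 4 TB drives: 48 TB raw, 32 TB usable
```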
-
Howdy didly doo,
I have been really fond of pfSense, and I have been reading up on this, forgive me, "old" topic.
My hardware isn't quite as sophisticated as what some of you have; I have 4 x SCB-7968A
(8x Gbit uplink, from Aewin). I have been using SSDs, and I should have read this topic ages ago,
since every router I have equipped with an SSD has failed. At first I thought it was the webGUI interface, but it turns out it's the number of writes that is killing the router all by itself.
I have to use full logging mode since they are used in a datacenter, so each 1 Gb copper interface has at least 16 IP addresses assigned.
Together with that, I made modifications to read out every IP's data usage and have the router deliver me an email with it. Now, I have not used any of the "latest" models of SSD, but I'm surely wondering whether they are the way to go for that kind of bandwidth and logging load,
since I have 4 of these devices. I'm wondering if I should go with platter drives… or has someone experienced the same thing?
-
I'd suggest you use a regular hard drive for such write-intensive logging activity.
A regular 7200 RPM SATA drive is more than fast enough for pfSense.
If you are really picky about HDD speed, then get one of those 10K RPM or 15K RPM SAS drives.
-
Every router I have equipped with an SSD has failed… I have to use full logging mode since they are used in a datacenter… I'm wondering if I should go with platter drives, or has someone experienced the same thing?
Are you logging locally, or to syslog?
Unless you've hacked the circular logging off, there isn't much point to local logging if you're pushing that much data with granular logging.
There is an option to not log anything locally, which is a good idea in your use case, assuming you're shipping logs elsewhere.
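If you do ship logs elsewhere, the receiving end can be very simple. Here's a minimal sketch of a UDP syslog sink, assuming pfSense's remote syslog option is pointed at this host on port 514 (the standard syslog port, which usually needs root to bind); in practice you'd run a real syslog daemon, this just shows how little the sink has to do.

```python
import socket
from datetime import datetime, timezone

LISTEN_ADDR = ("0.0.0.0", 514)   # assumed listen address and standard syslog port

def run() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN_ADDR)
    with open("pfsense-remote.log", "a", encoding="utf-8") as out:
        while True:
            data, (src_ip, _port) = sock.recvfrom(8192)
            stamp = datetime.now(timezone.utc).isoformat()
            out.write(f"{stamp} {src_ip} {data.decode(errors='replace').rstrip()}\n")
            out.flush()   # every write lands on the log host, not the firewall's flash

if __name__ == "__main__":
    run()
```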