SSD (Solid State Drive) and pfSense (Important)
-
3 - 5 years on my Samsung SSDs - 96% - 99% left. No TRIM. On and writing 24/7, 365.
I'm guessing I won't see any failures before I get bored with the drive's performance.
SLC or MLC on those Samsung SSDs?
-
SLC…
-
@ryan29:
Has anyone tested any mSATA SSDs to see how they handle sudden power loss? There's an Anandtech article that does a better job of explaining the issue I'm concerned about than I could. An excerpt from the "Truth About Micron's Power-Loss Protection" section:
In the MX100 review, I was still under the impression that there was full power-loss protection in the drive, but my impression was wrong. The client-level implementation only guarantees that data-at-rest is protected, meaning that any in-flight data will be lost, including the user data in the DRAM buffer. In other words the M500, M550 and MX100 do not have power-loss protection – what they have is circuitry that protects against corruption of existing data in the case of a power-loss.
So only in-flight data will be lost? I guess I didn't get the memo when they changed the definition of sync. The whole explanation of MLC programming is well worth the read. It pretty much explains the results this guy saw IMO.
I have some of the 16GB PC Engines mSATAs which use a Phison PS3109-S9-J controller. On the Phison site, they reference SmartFlush and GuaranteedFlush trademarks. It sounds like this (PDF warning) which claims they don't ACK a FLUSH CACHE command until data actually exists on (non-volatile) NAND. Based on that, I feel fairly confident using them, but I haven't seen any independent analysis that verifies the claims. Has anyone else? Has anyone used those SSDs? Opinions?
Sync is only successful if it completes, at which point the data is no longer in-flight. Non-synced data is at the whim of the write cache.
Samsung has taken the approach of dynamically switching between MLC and SLC: data is quickly written as SLC, then re-written as MLC as time permits. This allows sync to return quickly.
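To make the sync behaviour concrete, here is a minimal sketch (Python, with a hypothetical file path) of what "synced" means from the software side: the write only counts as durable once fsync() returns, and even that guarantee depends on the drive honouring the flush command discussed above.

import os

path = "/tmp/example.log"  # hypothetical path, for illustration only

with open(path, "ab") as f:
    f.write(b"firewall log entry\n")  # data sits in application/OS buffers
    f.flush()                         # handed to the kernel, still volatile
    os.fsync(f.fileno())              # returns only after the kernel asks the drive
                                      # to flush its cache; if the drive ACKs the
                                      # flush before the data is on NAND, a power
                                      # loss here can still lose the "synced" write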
-
3 - 5 years on my Samsung SSDs - 96% - 99% left. No TRIM. On and writing 24/7, 365.
I'm guessing I won't see any failures before I get bored with the drive's performance.
SLC or MLC on those Samsung SSDs?
The new 850 series uses vNAND, which can be tuned between performance and longevity with no sacrifice in storage density. vNAND can have between 3,000 and 40,000 write cycles, depending on how it's tuned. The difference in performance is only between 35MB/s and 55MB/s per chip. I would gladly take 35MB/s for a 10x improvement in write cycles.
One tech site has their 850 Pro up to 1.5PB written and it still has 60% of its reserve pool left. Its wear-leveling count has been at 0 for months. Samsung claims they have an 850 120GB drive with 8PB written that still works.
The warranty is 5 years / 170TB written, or 10 years for the Pro, but with the same TBW figure.
Samsung has said they have accepted warranty claims on drives over the 150TB limit, provided they were under non-server loads, but I wouldn't bank on it.
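As a rough sanity check on those TBW figures, here is a back-of-the-envelope sketch (Python, with an assumed daily write volume; a pfSense box's real number depends entirely on logging and packages):

tbw_limit_tb = 150     # warranty TBW limit in TB -- use your drive's own rating
daily_writes_gb = 10   # assumed writes per day; a light pfSense box writes far less

days = (tbw_limit_tb * 1000) / daily_writes_gb
print(f"~{days:.0f} days (~{days / 365:.0f} years) to reach the warranty limit")
# With these assumptions: ~15000 days, roughly 41 years.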
-
Has anyone used a SATA DOM for an installation? I've got one in my FreeNAS box and I'm really happy with it.
Would a 16GB SATA DOM suffice, or should something larger be used for wear leveling?
I plan on updating hardware at some point and was thinking of using one for a small-footprint silent setup, unless they aren't recommended.
-
Some of the SATA SSD DOMs (disk on module) sound interesting, but unless space is really at a premium I think a regular SSD is going to be better priced. Really dig into the specs if you plan on reading/writing a lot, too; I'd avoid any that don't have a full-function controller in any case.
http://www.thessdreview.com/our-reviews/mach-xtreme-sata-dom-32gb-ssd-review-small-os-storage-postage-stamp-sized-form-factor/
-
SLC…
At the moment, I believe SLC SSDs are becoming more difficult to find than their MLC counterparts. I wonder, however, whether an enterprise SSD would suffice over a consumer SSD, e.g. the Samsung 845DC EVO over the 840 Pro. What other options are there for those of us getting ready to build our own pfSense router?
-
If price is a concern, I would look for an MLC SSD with a year or two of ratings not showing any issues. Lots of people report Intel to be a very solid performer.
I don't trust ANY brand-new tech in a pfSense install I have to rely on to just work, but if it's got a bit of time on the market and good ratings, it should be OK.
-
I think the new Samsung 850 series comes with 3D memory and a 5 year warranty if you are really worried about write wear.
http://smile.amazon.com/gp/product/B00OAJ412U/ref=smi_www_rco2_go_smi_1968491462
-
I assume "3D" refers to TLC…
I'm glad drive manufacturers are getting more bang for the buck. haha.
Actually, they aren't bad from what I've read, but I've yet to test anything other than SLC personally.
I will soon, when one of my drives fails.
-
Nope… It is the way they stack multiple layers of cells inside the chip. From the link I posted:
Samsung’s innovative 3D V-NAND flash memory architecture breaks through density, performance, and endurance limitations of today’s conventional planar NAND architecture. Samsung 3D V-NAND stacks 32 cell layers vertically resulting in higher density and better performance utilizing a smaller footprint.
or for more detail:
http://www.anandtech.com/show/7203/samsungs-3d-vertical-nand-set-to-improve-nand-densities
-
I assume "3D" refers to TLC…
I'm glad drive manufacturers are getting more bang for the buck. haha.
Actually, they aren't bad from what I've read, but I've yet to test anything other than SLC personally.
I will soon, when one of my drives fails.
vNAND is TLC, except it has about 40k write cycles instead of the 1k-3k of normal TLC. SLC has about 100k write cycles, so that puts vNAND between a third and a half of SLC, but with several times the density. Once you include the increased density, vNAND is the same as or better than SLC for the total number of bytes written.
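A quick sketch of that comparison, using the cycle counts quoted above and treating "bits per cell × rated cycles" as a crude proxy for the total data a cell can absorb (it ignores write amplification and error-correction differences):

# Crude per-cell endurance proxy: bits stored per cell * rated program/erase cycles.
nand_types = {
    "SLC":         {"bits_per_cell": 1, "cycles": 100_000},
    "planar TLC":  {"bits_per_cell": 3, "cycles": 3_000},
    "vNAND (TLC)": {"bits_per_cell": 3, "cycles": 40_000},
}

for name, n in nand_types.items():
    proxy = n["bits_per_cell"] * n["cycles"]
    print(f"{name:12s} -> {proxy:>7d} bit-writes per cell")
# SLC: 100000, planar TLC: 9000, vNAND: 120000 -- consistent with the claim that
# vNAND matches or beats SLC on total bytes written once density is counted.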
-
I'm careful and slow to adopt new SSD tech because:
The love affair normally starts with customers saying the drives are perfect/fast/amazing, and anyone who doesn't have one is just using old junk… and then...
Later you hear about either speed starting to crawl or drives crashing.
Then, two years later, once everyone has one, the once-perfect drive is the drive you must avoid at all costs because of firmware issues, wear-leveling issues, or something else.
So I prefer drives with a couple of years of ratings, because a couple of years seems to be the time you start hearing the horror stories.
-
I totally agree. My current rule is an SSD must be out for about a year before I purchase it. I purchased an 840 EVO right after it came out and I eventually had to flash it because of the performance issue. I waited for the 850 EVO to be about a year old. Some site has a 1 year old 850 EVO with 1.5PB written and the drive is still going strong with 60% of its reserve pool left.
-
I totally appreciate all your suggestions and insights on this. As a newbie to the pfSense world who will be building my own router in the near future, all of this is worth more than money can buy. Thanks and gratitude. Of course, I will continue to monitor this thread and others. With that, I'm going to be on the hunt for some SLC SSDs from Intel, if I'm able to find any. ;D
-
I'm going to be on the hunt for some SLC SSDs from Intel, if I'm able to find any. ;D
Were you able to find some? I'm looking for some storage for my new build too :)
-
Were you able to find some? I'm looking for some storage for my new build too
Intel X25-E 32 GB SATA SLC SSD 2.5" Drive (SSDSA2SH032G101) for ~$345
Perhaps a SATA-DOM SSD (SLC) would be an alternative.
SATA DOM SSD, SLC, 7-PIN, Horizontal - 32 GB for ~135 €
-
Hi!
I bought one like that for my pfSense box. Not yet received.
It is a 32GB mSATA SLC SSD, about 30 EUR:
http://www.aliexpress.com/item/New-Mini-PCIE-SSD-mSATA-32GB-SLC-Flash-Hard-Solid-State-Disk-For-PC-Laptop/1875478645.html
I asked the manufacturer and he confirmed the drive supports the TRIM command.
You should confirm availability with the sellers, as my first order got canceled due to unavailability.
-
Hello,
I've registered here to clarify some things.
I know it's an old topic, but since it has stretched over 4 years, this should be the right place. First of all, write cycles are not as much of a concern as you might think; it's just a different perception. In fact, you really have to try hard to artificially wear out a normal modern SSD with writes faster than a regular HDD wears out mechanically.
The first thing you have to understand is that an SSD automatically shuffles writes across different cells and tries to distribute write counts evenly over every cell.
This also means the more free space you have, the better off you are. You can indeed wear out an SSD if you have very little free space and heavy writing. Let's say you leave 2 GB free, the rest is static data, and you write several gigs a day; then you will indeed be out of luck quickly.
Second, an SSD RAID does NOT mirror the write pattern; that's nonsense, and whoever wrote that has no clue what he is talking about. Which cell the data is stored in is entirely up to the controller inside the SSD; it decides which cell is freshest and writes to it.
The best thing you can do is use a big SSD, 128 GB or larger, even if you only need a tiny bit of space. The money savings are negligible here, but you make sure it never wears out. It doesn't even matter whether you partition it or not; any free space is used for garbage collection.
Forget terms like fragmentation and partitions as physical areas; those don't exist anymore, they are purely virtual. Let's say you have 2 partitions and 4 cells to write to; then it's entirely possible that partition 1 is on cells 1 and 3, and partition 2 is on cells 2 and 4. After a few delete and write cycles this could change to partition 1 on cells 3 and 4 and partition 2 on cells 1 and 2, so it really doesn't matter. The more free space you have, the longer the lifetime in terms of write cycles (see the sketch below).
Ideally, you would log to RAM and archive the logs as zips from RAM to disk, or use a filesystem that compresses automatically. But even without that, on modern drives just use a 128 GB or a cheap 256 GB drive, or take two and mirror them, and you're fine.
Swap, of course, should only be used for emergencies, so swappiness should be reduced to a minimum, but you usually should not be able to kill your SSD with your firewall logs. :) And even if you do, don't worry, you will still be able to read; your firewall will crash, yes, but you can still mirror the data to a new drive and proceed.
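A minimal sketch of the free-space argument above (Python, with made-up numbers: an assumed per-cell cycle rating and two different amounts of free space), showing how spreading writes over the free pool determines how quickly cells are cycled:

# How free space stretches endurance: the controller spreads new writes over the
# free/spare area, so cycles per cell scale inversely with the free space.
rated_cycles = 3_000    # assumed P/E cycles per cell (an MLC-like figure)
daily_writes_gb = 5     # assumed writes per day

for free_gb in (2, 100):
    cycles_per_cell_per_day = daily_writes_gb / free_gb
    years_to_wear_out = rated_cycles / cycles_per_cell_per_day / 365
    print(f"{free_gb:>3d} GB free -> ~{years_to_wear_out:.1f} years to exhaust the cells")
# 2 GB free -> ~3.3 years; 100 GB free -> ~164 years. This ignores static-data
# rotation and write amplification, so treat it only as an illustration of the trend.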
-
@b.o.f.h.:
Yeah, write cycles may have been a concern when this thread was first started, with low-end SSDs, USB drives, CF cards and DOMs, but that really is not the case anymore.
Firstly, pfSense - even in a full, non-NanoBSD install - really doesn't write that much. Unless you run a Squid cache or another write-heavy package, it's only a matter of text log entries, and unless you've tweaked the log settings to be ridiculously verbose, this is minor (a rough estimate is sketched below).
I have been using pfSense for ~5 years now, and in that time I've never had it installed on anything but flash media, and I've never had a premature failure of a flash device.
I really feel like this is a legacy concern from back in the day when SSDs had higher write amplification and lower total write-cycle counts than they do today.
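To put "doesn't write that much" into rough numbers, here is a hedged sketch (Python, with assumed log rates, since the real figures depend entirely on your rules and packages):

# Rough daily write volume for a firewall that only writes text log entries.
log_lines_per_second = 10   # assumed; a quiet home firewall is often far lower
bytes_per_line = 200        # assumed average log line size

gb_per_day = log_lines_per_second * bytes_per_line * 86_400 / 1e9
print(f"~{gb_per_day:.2f} GB of logs per day")
# ~0.17 GB/day with these assumptions; at that rate a drive rated for 150 TBW
# would take on the order of 150000 / 0.17, very roughly 870,000 days, to reach its rating.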