Can a business-grade client SSD run 24/7?
-
I've read a few times over the years that SSDs last about as long as mechanical drives in terms of total data written. It just so happens that by the time you wear out an SSD's flash, your mechanical drive will have died due to being mechanical. It's less a question of which one will last longer and more of whether you want a mechanical drive that will die after 5-10 years due to mechanical wear, or an SSD that will die after 5-10 years due to write exhaustion but is 10x-1000x faster and uses less power.
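As a back-of-the-envelope check on that trade-off, you can work out how long a drive's rated endurance lasts at a given write rate. The TBW and daily-write figures below are assumptions for illustration, not any particular drive's specs:

```shell
# Rough SSD lifetime from a vendor TBW (terabytes-written) rating.
# Both figures below are assumed examples, not real specs.
tbw_rating_tb=150      # assumed rated endurance, in TB written
daily_writes_gb=20     # assumed average host writes per day, in GB
years=$(( tbw_rating_tb * 1000 / daily_writes_gb / 365 ))
echo "~${years} years to exhaust the rated endurance"
```

With those example numbers the rated endurance outlasts the usual 5-10 year mechanical lifespan by a wide margin, which is the point being made above.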
Power-loss failure modes are quite different, though. That's one thing to be aware of if you're concerned about data loss.
-
I am looking at some Supermicro 1U barebones (with fans, not fanless). I think the power supply will not be a problem.
The Intel 710 uses the same controller as the consumer-grade 320 series, plus HET MLC, which is much better than common MLC.
That said, the Intel 710 series looks old and has been withdrawn from some highly rated resellers. The Intel S3520 series is newer (released Q3 2016) and still available, even if it is slow.
-
On the budget SSD market, things have regressed reliability- and endurance-wise, so be careful there, although I expect SSDs marketed for business use are not in that price sector.
As an example, Kingston's latest three models at 60 GB capacity have regressed in random write performance, rated endurance (the TBW figure at which the warranty expires), and NAND type (TLC vs. MLC). Because of this I bought a batch of older SSD models off eBay, since the newer models are inferior, and the market is now flooded with unknown Chinese brands as well.
-
There is a fine line between "budget" and "crap".
"Budget" is buying the cheapest thing that satisfies the requirements
"Crap" is buying something that is so cheap that they forewent qualityNot to say that something more expensive has higher quality.
TechReport did an endurance test of SSDs and was able to write 2.4 PiB to a 256 GiB TLC Samsung 840 drive before it suddenly failed, but it did get hit by a power failure shortly before, and that may have contributed. And the 840s were known to have endurance issues. SSDs have gotten much better since, but there have been no new endurance tests that I'm aware of.
Even in the early days of SSDs, when they actually did wear out, the RMA rate for SSDs was half that of mechanical drives. Back in 2013, there was an article claiming reported failure rates for SSDs were a third of those for mechanical drives. My biggest concern would be sudden power loss and how whatever SSD you choose handles that. Some newer high-density mechanical drives can lose committed data on power loss.
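To put that 2.4 PiB figure in perspective, here is roughly how long it would take to write that much at an assumed 20 GiB/day workload (the workload number is an assumption, not from the test):

```shell
# How long 2.4 PiB of host writes would take at an assumed 20 GiB/day.
awk 'BEGIN {
  gib_written = 2.4 * 1024 * 1024   # 2.4 PiB expressed in GiB
  gib_per_day = 20                  # assumed daily write volume
  printf "%.0f years\n", gib_written / gib_per_day / 365
}'
```

That works out to centuries of typical desktop use, which is why wear-out is rarely the thing that actually kills a drive.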
-
Ok,
"I am looking at some 1u barebones of supermicro(with fans, not fanless). I think the power supply will not have problems.
intel 710 is using a same controller as consumer grade 320 series and HET MLC which is much better than common MLC.
By the way, intel 710 series looks old and was withdrawn from some high rating resellers. intel s3520 series is newer(released Q3 2016) and still there. Even it is slow."
So you're getting a rack-mount server but insist on using a single piece of media, even though you could easily mirror with ZFS and get good redundancy plus monitoring of drive health.
That 710 will outlast anything you buy from Kingston. The controller matching the 320 I take as the 320 being good, not the 710 being bad. Unless I'm mistaken, the Kingston you are looking at is standard MLC?
You never mentioned speed, only reliability. If the 710 isn't fast enough, that should be mentioned when asking how long it's going to last. The main reason I linked it is that it's CHEAP; the quality-to-cost value is awesome, so buy four and get 100 GB at 2x speed with some redundancy. That would probably cost you about what the one Kingston will.
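A minimal sketch of that four-drive ZFS layout, assuming OpenZFS and placeholder device names da1-da4 (a striped pair of mirrors: the capacity of two drives, roughly double the speed, and any one drive can fail):

```shell
# Four small SSDs as a striped pair of mirrors ("RAID 10" layout).
# Device names da1..da4 are placeholders; yours will differ.
zpool create tank mirror da1 da2 mirror da3 da4
zpool status tank            # confirm both mirrors show ONLINE
zpool set autotrim=on tank   # pass TRIM through to the SSDs (OpenZFS 0.8+)
```

ZFS will also surface per-drive read/write/checksum error counters in `zpool status`, which covers the health-monitoring point above.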
Oh yeah, and remember when I said I'd never seen Intel drives fail (even in sewer-inspection robots that ate lesser media)? I've seen Kingston fail before, and I just saw it again.
So, night before last, I'm in a datacenter racking a server for a customer (occasionally I work on stuff I don't own or manage), and one of the current tenants of the rack had angry flashing lights.
The reason? I'll give you three guesses. :) The drive was attached to a P420 controller in an HP ProLiant DL385 Gen8 server, running XenServer 6.5 with redundant PSUs fed clean power, and perfect temps: physically, an ideal environment.
But the combination of the HP RAID controller and XenServer 6.5 means the controller doesn't get to shuffle space like it should: TRIM is not passed to the drives, or must be manually invoked from XC.
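Where TRIM has to be invoked by hand, a minimal sketch on a Linux-based host might look like this (it only helps if every layer, filesystem through controller, actually passes discards to the drive):

```shell
# Manually discard unused blocks when TRIM is not passed automatically.
fstrim -v /                    # trim free space on the root filesystem once
# On distros with systemd, a weekly trim can be scheduled instead:
systemctl enable fstrim.timer
```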
And the most ironic part is that I'm fairly sure this would not have happened if they had only allowed use of half the space, but the drive was near full and the server not watched closely. After all, with nearly new 512 GB high-quality SSDs, why would they have a problem?
I pulled the drive and did a secure erase, and Kingston's utilities said it had 90% life left. The drives were installed in 2016, less than two years ago.
Beating on it with various utilities, the drive is fine as far as I can tell. Performance is normal, and I can write TBs to it.
Good luck with the build. Supermicro always makes nice stuff, and pfSense is glorious compared to XenServer.
-
For clarity, this was a 480 GB KC300: the same "Business" class, just one generation older than what you are looking at. It was even billed as "Enterprise", or at least its endurance and controller features were; same MTBF, near-identical endurance rating. It does have a different controller, though, and LSI doesn't make crap in my experience. Maybe they just sell the good batches to other vendors?
https://www.kingston.com/datasheets/skc300s3_us.pdf
-
TechReport tested drives that (a) are from older generations (the 840 is two or three generations old now, I believe) and (b) sit in a different price bracket.
The £30-£60 price bracket gets very little attention from the tech media, and the products coming out reflect that.
The quality in that bracket, in terms of random performance and endurance, has regressed significantly in the past few years.
One of the reasons is that the reputable brands have increased the minimum size of the drives they sell, and since the price per gigabyte hasn't gone down, they have effectively abandoned that market. Try to find a 30 or 60 GB Samsung 850/950 Pro/EVO, for example.
The failure rate for some of these new drives is worse than in those "first years": DOA rates have skyrocketed, and failure within three months is extremely high for some drives.
-
My budget is $1000, but I need more space for logs and cache, more CPU cores for Snort, and somewhat lower heat and power usage, so I passed on the XG-7100 and on WD's and Seagate's mechanical enterprise drives.
(I do know that each Snort process can handle around 200 Mbps and that each core handles one Snort process, based on Security Onion's default settings.)
The KC300 has a SandForce controller, which delivers very low performance once the drive has very little free space left (around 10%?).
In the 3D NAND age, whether TLC or MLC, consumer-grade SSDs usually use Marvell or SMI controllers, except for Samsung, which uses its own; Phison and SandForce are rare. For my part, I will pass on an MLC drive with a SandForce or SMI controller.
Micron has a document, "Over-Provisioning the Micron 1100 SSD for Data Center Applications", which also discusses leaving some spare space to make data-center workloads feasible on a client SSD.
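One common way to apply that over-provisioning advice is simply to leave part of the drive unpartitioned. A sketch, where /dev/sdX and the 80% figure are illustrative rather than anything Micron specifies:

```shell
# Over-provision a client SSD by leaving ~20% of it unpartitioned.
# Do this after a secure erase, so the controller knows the spare
# area is genuinely free, then never touch the unpartitioned tail.
parted --script /dev/sdX mklabel gpt mkpart primary 1MiB 80%
```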
By the way, I prefer highly rated online stores selling brand-new products, so I am now considering the S3520, KC400, and Micron 1100. Of course, the S3520 is data-center grade and costs more per GB, while the 1100 is TLC and costs less per GB.
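The Snort sizing mentioned above can be sketched as a quick core-count estimate (the 1 Gbps link rate is an assumed example; the ~200 Mbps-per-process figure is the one quoted from Security Onion's defaults):

```shell
# Cores needed for Snort at a given line rate, assuming ~200 Mbps
# per process and one process per core, rounding up.
link_mbps=1000
per_core_mbps=200
cores=$(( (link_mbps + per_core_mbps - 1) / per_core_mbps ))
echo "${cores} cores for ${link_mbps} Mbps"
```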
-
"...in a different price bracket."
Than what? "Budget" does not mean "cheap", just "cheaper" or "suboptimal but still functional". I have a $130 "budget" NIC that I use for pfSense, because it's cheaper than a $500 NIC. I am using two $150 "budget" SSDs for pfSense, because they're cheaper than $500 SLC enterprise drives. "Budget" means "cheaper", not "bottom of the barrel".
Just because someone's budget cannot afford proper parts does not make what they're asking for "budget" parts. They just want some knock-off that they can afford and that works better than nothing at all; they're trying to minimize the damage. If someone wants a part that fits within a specific budget, they can post the budget or price range they're looking for. Otherwise you get people saying something like "I need a budget terabit router" and everyone replying "well, here's a motherboard with a 1 Gb Realtek NIC". It makes no sense.
Can a "budget" SSD run 24/7 in a business environment? Sure it can. You just can't get a piece of crap. Can a $60 SSD? Probably not, at least not writing all the time. Can a $40 SSD? An SSD that is about the same price as a case fan shouldn't be used for anything.
-
"Budget" generally refers to the cheapest products that a company makes.
So, e.g., if Kingston makes $30 SSDs, then their $100 SSDs are not budget. But I guess we don't see eye to eye on this.
I have never before seen someone use the term the way you have, as in "look at these budget parts". What you said, I would phrase as "look at these parts I am buying within my budget".
But yes, that is exactly the point I was making: $30-$60 SSDs from a few years back were capable of what you say they are not capable of, namely 24/7 server-type use. Newer parts in the same price bracket are of lower spec than the older ones.
The $40 SSD in my pfSense unit has higher endurance than the $300 SSD in my PC. It's only priced at $40 because of the size of the drive; in the year of its release, its components were of equal quality to $400 drives made by the same manufacturer.