SSD (Solid State Drive) and pfSense (Important)
-
This topic was designed to provide information but also to gather feedback from other pfSense users operating on SSD drives. If you were hoping to make pfSense ultra reliable by adding an SSD, then this is a must read; otherwise you could actually decrease reliability compared to your current drive. SSDs are not simple upgrades for systems doing frequent disk writes. I wrote this after my SSD upgrade planning so that you know what you're dealing with.
Having a pfSense firewall/router in a production environment that handles and routes data 24/7 made me rethink the hardware approach and upgrade to prevent future problems.
The idea was to make the system run entirely solid-state, with virtually no moving parts to fail (except fans, and the system I chose has redundant fans). Similar to Cisco routers and other embedded solutions, designing an entirely solid-state system was as simple as replacing the mechanical hard drive with a solid state drive. However, it doesn't end there; there is actually much more to do, and failure to do so could result in an SSD lifetime much shorter than that of a typical mechanical drive. Without tuning, your SSD could last only months. That's right, months.
Almost all common SSD drives today use NAND flash cells. NAND flash cells can only handle about 10,000 writes before they become unusable. For this reason SSD manufacturers use controllers that distribute data for even wear. Even if you really only need a few gigabytes of space, make sure to upsize significantly on the SSD so the drive controller has more memory area to spread across. Some SSDs are better than others and use higher quality NAND flash components, so it's a good idea to look around for a good SSD and upsize considerably.
Now, no matter the SSD quality, there are several key factors on the pfSense system that you will have to address to maximize the life of your SSD. This applies to all SSD drives of any manufacturer. -
Use higher amounts of RAM and disable the swap file (during install, delete the /swap partition)
The swap file can quickly wear out your SSD if not disabled. Remember, 10k writes per memory cell is the limit. -
Disable firewall rule logging
The logging will also quickly wear out your SSD. If you require logging perhaps use a remote syslog server. -
RRD Graphing
The RRD graphing backend runs and writes graph files to the drive (SSD) every minute. Disable the RRD graphing backend.
** It may be a good feature for pfSense to add an option to change the RRD /tmp directory from hard disk to a RAM disk for SSDs. The only downside is that graphs will be lost on power down/reboot because RAM is volatile. -
Upgrade to pfSense 2.0 for features such as Hard Drive S.M.A.R.T. Status
pfSense 2.0 includes a very important tool if you're running an SSD, and that is S.M.A.R.T. monitoring. With this feature you will be able to check the SSD's S.M.A.R.T. logs for I/O errors and drive problems. Monitoring this can help you see a failure coming. -
Disable ANY packages that could frequently write to the disk. If you are using an SSD drive for reliability, it does require sacrificing many/most pfSense packages. If you want an ultra reliable and fast pfSense system, it's crucial that you are willing to give up these extras.
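One way to implement the RAM-disk idea from the RRD item above on a full install is FreeBSD's memory-backed file systems (md/mfs). This is a generic FreeBSD sketch, not a pfSense-supplied configuration; the mount points and sizes below are illustrative assumptions:

```
# /etc/fstab -- sketch: back write-heavy directories with memory file systems.
# mfs creates a md(4) RAM disk at mount time; contents are lost on reboot.
md    /tmp        mfs    rw,-s64m,noatime    2    0
md    /var/log    mfs    rw,-s32m,noatime    2    0
```

The -s option sets the RAM disk size, so keep these small enough that they can't starve the firewall of memory.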
As an alternative to SSDs, you could use hardware RAID with regular hard drives if you require all of the packages, logging and extras. In my experience, you want as little running as possible on a firewall. If you use an SSD, then you really have no choice; it's a must or you will have a failure. Time before failure depends on the SSD quality, the SSD controller, the SSD size and the frequency/size of disk writes. In some cases continuous writes and high disk traffic could damage the SSD in months.
This applies to all SSD drives no matter the brand, quality or features. Hybrid SSDs may work a bit differently because of the RAM buffer/cache, but you should still apply the same rules. Keep pfSense simple and eliminate frequent drive writing, and your system could last 10 years or more.
Question: Is this all absolutely necessary, or is it just "optimization"?
Answer: It is absolutely necessary; otherwise you're much better off using regular non-SSD drives in RAID. More than likely you chose SSDs for reliability, so this is crucial. Placing SSDs in RAID is almost useless because the wear patterns would be mirrored, and chances are they would both fail at the exact same time, assuming there are no manufacturing flaws. I simply cannot stress enough how dangerous it is to run SSDs with continuous writes. Your SSD will fail otherwise; the only question is when.
Hope this helps everyone thinking about SSDs, and please post all of your thoughts. I'm sure I missed other stuff.
** This does not affect NanoBSD/embedded versions of pfSense; installing the embedded version solves all of these problems because it runs entirely from RAM **
FIX: To solve the problem, install the embedded version of pfSense that runs NanoBSD. This version was aimed at compact flash cards, where limited writes are a factor. SSDs use the same technology, so make sure to install the embedded version (NanoBSD) if you use an SSD. The embedded version loads the entire file system into RAM so that RRD, logs and any other write-intensive tasks are handled in RAM instead of on disk. Just be prepared to lose your VGA console; you will need to use serial to configure. Default settings are 9600 baud, 8-n-1. Thank you for the input/ideas everyone.
– koukobin pointed out that there is a NanoBSD version that has VGA support. Thank you for the info!
-
thanks for this post.
thought I'd chime in myself and say that I had gotten a Kingston SSDNow 16 GB SSD, in hopes of having a fast and silent box. All went well for about two and a half months until, wham, the drive died.
got a new one from Kingston since it was well within the manufacturer's warranty. -
thanks for this post.
thought I'd chime in myself and say that I had gotten a Kingston SSDNow 16 GB SSD, in hopes of having a fast and silent box. All went well for about two and a half months until, wham, the drive died.
got a new one from Kingston since it was well within the manufacturer's warranty.
Hi – sorry this happened to you. The SSD I bought was a Kingston SSD S100 8 GB. I installed everything, got all of pfSense set up, and was able to change every setting except RRD logging. You can disable almost all writing except the RRD backend, which writes all of those graphs every minute. If we come up with a fix for that, the SSD concept would be great. The only way would be to disable RRD, and that's not a great option.
Hopefully your crash didn't take you down too long and everything turned out ok. I bailed at the last minute on the SSD idea until we figure it all out.
The manufacturers don't explain the dangers of SSDs, and they put the life at "1,000,000 hours" in their specs. The only problem is that figure must assume the drive is either off or read only. 10,000 writes per memory cell is not a lot at all, and everyone should think twice before using SSDs.
Now, the pfSense LiveCD loads the entire file system into RAM, which would be perfect. If they create an SSD version of pfSense that is in essence the LiveCD image but allows mounting a read-only file system for normal operations and only mounts read/write for config changes, then this system would be perfect for SSDs. I would rather lose my RRD data on reboots than lose the entire drive and take down all WAN traffic...
Very sorry that the system crashed; these SSDs sure are not what they seem.
-
Do you need to do all this if you use the nanobsd version?
I have been using compact flash drives for two years now without a single problem.
In the nanobsd version I think even the logs are kept in RAM.
-
Do you need to do all this if you use the nanobsd version?
I have been using compact flash drives for two years now without a single problem.
In the nanobsd version I think even the logs are kept in RAM.
That's a very good question. I myself have never used the nanobsd version, so I am not certain. Maybe that version does use RAM entirely and only mounts the SSD/drive read-only, going read/write for config changes. If that's the case, maybe I will go the nanobsd route. It's always a good idea to limit writes regardless, but that sure would be better than the regular version on an SSD.
-
Do you need to do all this if you use the nanobsd version?
I have been using compact flash drives for two years now without a single problem.
In the nanobsd version I think even the logs are kept in RAM.
Just checked on some things, and the nanobsd version may not be a concern. It looks like you're right and the file system is loaded into memory. Someone can hopefully confirm this for you, but I think you are ok.
-
Just confirmed: the NanoBSD version is not affected.
Here is information regarding your question from the pfSense site:
"The embedded version is specifically tailored for use with any hardware using Compact Flash rather than a hard drive. CF cards can only handle a limited number of writes, so the embedded version runs read only from CF, with read/write file systems as RAM disks. "
I will update my posting to exclude nanobsd/embedded version.
This may also be the way for others to use an SSD with pfSense without issues. I may do this myself. Thank you for posting; it sparked some ideas.
-
If you are testing this with a Nano installation you should be aware that there is currently a bug in 2.0rc1 which means that the filesystem is left RW after boot. It should be RO.
However it does not cause any undue writes as everything is set to run from RAM as you say.
If you check the filesystem don't be alarmed to find it's still RW, as I was! ;)
Steve
Edit: Bug listed here.
-
If you are testing this with a Nano installation you should be aware that there is currently a bug in 2.0rc1 which means that the filesystem is left RW after boot. It should be RO.
However it does not cause any undue writes as everything is set to run from RAM as you say.
If you check the filesystem don't be alarmed to find it's still RW, as I was! ;)
Steve
Edit: Bug listed here.
Hey Steve – thanks for the heads up! I did install pfSense back to the SSD. I went the embedded route, took my 8GB SSD drive and loaded the 4GB nano image (didn't really need the extra space). It booted, loaded into RAM and solved my problem. Everything, including RRD, /tmp and logs, now runs from RAM. According to the stats, disk writes are virtually nothing except for config file changes. The only downside to running embedded is you lose the VGA/keyboard console and have to configure over serial, but that's not really an issue. Who knows, maybe going this route will be more reliable because VGA isn't being loaded.
This looks like the perfect solution for SSDs (the same setup as CF). pfSense really did take these issues into account when they created the embedded/CF version.
Thanks for the help everyone.
-
Just checked on some things, and the nanobsd version may not be a concern. It looks like you're right and the file system is loaded into memory. Someone can hopefully confirm this for you, but I think you are ok.
The nanobsd version can be configured to write out the logs/RRD data at specific intervals, like 30 minutes or 1 hour etc. This allows you to retain information across reboots and yet heavily reduce the amount of I/O to the SSD/CF.
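The savings from batching those flushes is easy to estimate. A minimal sketch, where the per-flush size is a made-up illustrative number rather than a measured pfSense figure:

```python
# Estimate daily write volume to flash for different RRD flush intervals.
# kb_per_flush is a hypothetical illustrative value, not a pfSense constant.

def daily_writes_kb(flush_interval_min: int, kb_per_flush: int = 256) -> int:
    """Total kilobytes written per day at a given flush interval."""
    flushes_per_day = 24 * 60 // flush_interval_min
    return flushes_per_day * kb_per_flush

every_minute = daily_writes_kb(1)      # flushing every minute
every_half_hour = daily_writes_kb(30)  # batched: flushing every 30 minutes

# Going from 1-minute to 30-minute flushes cuts write volume 30x.
print(every_minute, every_half_hour, every_minute // every_half_hour)
```

Whatever the real per-flush size is, the ratio between the two intervals is what matters for wear.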
Next, on the point of the SSD: most controllers are optimized for Windows (unfortunately). They rely on the TRIM command to do garbage collection.
However, the Indilinx based drives (OCZ Vertex, Corsair X128 etc.) have hardware level garbage collection, which isn't as efficient but beats having none under *nix. This would be useful to improve general performance in the long run.
The Sandforce based drives offer very impressive write amplification and compression algorithms. This would be good for wear levelling and long term reliability. A write amplification of 0.5x means that the equivalent wear and tear in the long run is half the data being written. This is obtained through clever compression tricks, and since the RRD/log data is easily compressible, this is very effective as well. SSDs like the OCZ Vertex 2(e) and G.Skill Phoenix Pro use the Sandforce controllers.
I did a short write-up on the OCZ Vertex 2E drive here:
http://vr-zone.com/articles/old-dog-new-tricks-ocz-vertex-2e-reviewed/10323.html
The first section lists the very basic pros of using the Sandforce controller and how it helps reduce the write penalty in terms of both performance and wear and tear. Also in the testing is an Indilinx based drive, which shows the difference in effectiveness of the garbage collection algorithms between the two controllers, as well as the performance differences.
Of note is the difference between the compressible and incompressible data testing. If you're running embedded with mostly compressible data written (logs and such), the Sandforce is likely to be a better choice. If you run Squid and most of the content being cached is large compressed EXE files or video files, then the Indilinx is likely to be your best bet. -
If you want to have VGA and console output at the same time, you can use Hacom nanobsd Pfsense images:
http://www.hacom.net/catalog/pub/pfsense
-
SSD?
pfsense system spec:
wyse 3455xl
Ram 256 SDRAM
HD: 4 gig SSD IDE DOM
NIC: 4 Ethernet (intel) dr0-WAN, dr1-LAN, dr2-OPT1, dr3-OPT2
I installed on a 4 gig DOM SSD via the LiveCD about 10 days ago. I saw FJSchrankJr post a warning about SSDs. What is the best solution for me? Can I load the nanobsd image via the GUI to fix my problem, or do I have to do a clean reinstall?
Thanks
-
The nanobsd version can be configured to write out the logs/RRD data at specific intervals, like 30 minutes or 1 hour etc. This allows you to retain information across reboots and yet heavily reduce the amount of I/O to the SSD/CF.
Can you explain this process (or link to the page)? I just picked up a 4GB CF card to run my pfSense 2.0 install from and it'd be great to keep the graphs. Thanks!
Jim
-
What is the best solution for me? Can I load the nanobsd image by the Gui to fix my problem? or I have to reinstall from clean install.
You will want to switch to either the embedded install, selected when you install from CD, or a NanoBSD image. You will want to do it quickly, before your DOM dies! ;)
You can probably backup your config file and restore it to the fresh install. You can't switch the install type from the GUI.
Steve
-
Thanks Steve
-
FYI I've had a 1.2.3 full install running on a 40GB Intel SSD (a spare at the time) for at least 1.5 years so far. No issues.
-
Do you have a lot of RAM in that machine, enough to prevent swapping?
With a 40GB drive I imagine you will have a lot of empty space; the wear leveling on the drive will move data around such that 10,000 writes can go a long way.
Even so, I'd keep checking the SMART status of the drive if I were you.
Steve
-
Note the logs are kept in RAM whether you're running a full install or nanobsd.
A lot of people are running normal full installs on SSDs. Sure, they don't have infinite writes, but it's enough that on average they should last as long as a typical hard drive. It's unlikely you're touching swap at all on your firewall; if you are, you have serious issues.
Can you explain this process (or link to the page)?
Check Diagnostics>Nanobsd, the settings for the periodic automatic copy to CF of RRD data are there.
-
@cmb:
A lot of people are running normal full installs on SSDs. Sure, they don't have infinite writes, but it's enough that on average they should last as long as a typical hard drive. It's unlikely you're touching swap at all on your firewall; if you are, you have serious issues.
Hmmm, that's interesting. I assume you are talking about larger, HDD-replacement type drives such as tester_02's 40GB?
I did some research on this a while ago and came to the conclusion that it just wasn't worth the extra investment in a firewall appliance. The speed increase provided by an SSD is unlikely to be of much benefit except in boot-up time, which doesn't count for much.
I suppose there are enough people running Windows installs from SSDs that we'd be seeing failures by now if it was a problem.
I'd be interested to hear the reasons behind running an SSD from anyone doing so.
Steve
-
SSD?
pfsense system spec:
wyse 3455xl
Ram 256 SDRAM
HD: 4 gig SSD IDE DOM
NIC: 4 Ethernet (intel) dr0-WAN, dr1-LAN, dr2-OPT1, dr3-OPT2
I installed on a 4 gig DOM SSD via the LiveCD about 10 days ago. I saw FJSchrankJr post a warning about SSDs. What is the best solution for me? Can I load the nanobsd image via the GUI to fix my problem, or do I have to do a clean reinstall?
Thanks
Hi – if you are running off of the LiveCD, then everything is mounted in memory. However, if you install it, the file system will be loaded onto your hard drive.
The only way that I know of to switch to the nanobsd version is to do a clean install like I did. I should warn you that once you load the NanoBSD version, everything except the config files will be running in RAM, and 256MB might not be enough. In the past I had been running on 256MB with a hard drive install with no problem, but when I switched to the NanoBSD version (upgraded to 512MB RAM) I did run out of memory. I am not sure if it was some type of memory leak or cache, but I will have to install more memory because the system almost crashed.
The easiest way to upgrade is to make a config backup, write the NanoBSD image to the drive and boot. If your main firewall cannot go down, then try to find another machine to act as a mirror. That's how I did it; I just restored the config and I was up and running. Good luck!
I did notice on pfSense 1.2.3 that the Embedded option during the LiveCD install was not the NanoBSD version, just a version with VGA disabled. Make sure you use the NanoBSD version from the download mirrors.
CORRECTION: 256MB is more than enough RAM for a base embedded install. However, if you add additional packages, logging, etc., where memory usage will be higher, consider 512MB+ of memory.
-
FYI I've had a 1.2.3 full install running on a 40GB Intel SSD (a spare at the time) for at least 1.5 years so far. No issues.
Some SSD drives use controllers that spread wear across the entire drive. A 40GB drive gives the controller plenty of space to spread writes. If you are not running NanoBSD or an optimized regular install, you will eventually crash. SSDs and compact flash should be treated the same way. One of the main reasons for the NanoBSD version was to address the CF write limitations.
-
256MB should be no problem, as long as you don't have a memory leak! :P
That's the standard amount in the Watchguard X-Core and there are many people running that with NanoBSD.
Steve
-
256MB should be no problem, as long as you don't have a memory leak! :P
That's the standard amount in the Watchguard X-Core and there are many people running that with NanoBSD.
Steve
Hi Steve, I do agree! I am running pfSense 2.0 (NanoBSD) with 512MB of RAM and no packages installed. Overnight I went to about 98% memory usage and the DHCP server crashed. All logging is disabled, so either there is a cache building somewhere in the ramdisk or I do have a leak.
-
256MB should be no problem, as long as you don't have a memory leak! :P
That's the standard amount in the Watchguard X-Core and there are many people running that with NanoBSD.
Steve
Hi Steve, I do agree! I am running pfSense 2.0 (NanoBSD) with 512MB of RAM and no packages installed. Overnight I went to about 98% memory usage and the DHCP server crashed. All logging is disabled, so either there is a cache building somewhere in the ramdisk or I do have a leak.
Good news: my memory issue on 512MB is not a leak. I guess we're pushing more traffic than I thought, because as the MBUFs increase dynamically, the memory jumps up too. This was not an issue on the non-NanoBSD version because the file system was on disk, which left me much more RAM.
The NanoBSD version really is the perfect solution for running on an SSD or CF drive. I am not running the traffic shaper on this system, but if you do, make sure you have a lot of memory; the queues need lots of it.
If you put pfSense on the right hardware, run NanoBSD, use an SSD, a moderately high-speed CPU and a lot of memory, pfSense performance is terrific and rock solid.
-
@cmb:
A lot of people are running normal full installs on SSDs. Sure, they don't have infinite writes, but it's enough that on average they should last as long as a typical hard drive. It's unlikely you're touching swap at all on your firewall; if you are, you have serious issues.
Hmmm, that's interesting. I assume you are talking about larger, HDD replacement type drives such as tester_02's 40GB?
Yeah, 30+ GB.
In general, it's not worth it for a firewall in most environments. Unless you have something like Squid installed, the disk is almost never touched after you're booted up anyway.
-
@cmb:
@stephenw10:
@cmb:
A lot of people are running normal full installs on SSDs. Sure, they don't have infinite writes, but it's enough that on average they should last as long as a typical hard drive. It's unlikely you're touching swap at all on your firewall; if you are, you have serious issues.
Hmmm, that's interesting. I assume you are talking about larger, HDD replacement type drives such as tester_02's 40GB?
Yeah, 30+ GB.
In general, it's not worth it for a firewall in most environments. Unless you have something like Squid installed, the disk is almost never touched after you're booted up anyway.
Hi cmb: In the NanoBSD version the disk is not touched after boot, but if you are running the normal install and have not disabled log writing, RRD graphing, or any other write-intensive service, the SSD will be destroyed, quite possibly long before a typical hard drive would fail. 10k writes per cell is nothing, and if RRD and logs are constantly writing, each write is wearing out NAND cells. If you're using a 30GB SSD and the memory controller in the SSD utilizes wear leveling, then it has plenty of space to remap/spread across. However, if numerous writes are occurring constantly, then it's only a matter of time before it fails; 30GB just gives you more time. Hybrid SSDs are different because of the RAM cache; those would last a bit longer. Just use the NanoBSD version on an SSD and it will last a very long time.
Wear leveling: http://en.wikipedia.org/wiki/Wear_leveling
SSDs are a fairly new technology and are not ready for server environments with frequent writes, so it's very important to treat current SSD technology the same way you treat CF. The embedded pfSense (NanoBSD) version is perfect for SSDs. SSDs are not a simple drop-in replacement on the normal pfSense install; if you use the embedded/NanoBSD version, no additional steps are needed.
When running off of the LiveCD, everything is mounted in memory, but once installed, the file system lives on the drive and not in RAM. There is a RAM disk, but it is not used for the /tmp directory (used for RRD storage amongst other things) on the normal pfSense version. The embedded/NanoBSD version runs even the /tmp directory in RAM.
You're right about the swap file; typically it will never be used unless you run out of memory. However, the regular version of pfSense is just not optimized for CF/SSD memory, so disk writes will occur unless you disable the services I mentioned in the first post.
-
I think you may be underestimating the number of cells in a 30GB drive available for wear leveling.
If it were true that SSDs wear out rapidly, wouldn't we be seeing more failures among all the SSD netbooks and MacBook Airs?
Consider that Intel said, upon launching their 80GB X-25:
Our MLC SSD can allow a user to write 100GB/day every day for five years without wearing out the drive
However, that doesn't tie in with the second post in this thread. 16GB drive, 2.5 months, dead!
I suspect that it's very dependent on the algorithms in the drive controller.
Then again, I ran Windows 98 from a 128MB CF card for a few years with no problems! ::) (though I did disable swap)
Steve
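Intel's claim can be sanity-checked with a back-of-the-envelope model. This is a rough sketch only: it assumes perfect wear leveling and a single write-amplification factor, and the inputs are the round numbers from this thread, not vendor specifications. Real controllers behave far less ideally.

```python
# Rough SSD lifetime estimate under ideal wear leveling.
# All inputs are illustrative assumptions, not vendor specifications.

def lifetime_days(capacity_gb: float, cycles_per_cell: int,
                  writes_gb_per_day: float,
                  write_amplification: float = 1.0) -> float:
    """Days until every cell has used its rated program/erase cycles,
    assuming writes are spread perfectly evenly across the drive."""
    total_endurance_gb = capacity_gb * cycles_per_cell
    worn_gb_per_day = writes_gb_per_day * write_amplification
    return total_endurance_gb / worn_gb_per_day

# 80 GB MLC drive, 10k cycles per cell, 100 GB written per day:
days = lifetime_days(80, 10_000, 100)
print(days / 365)  # roughly 21.9 years under these ideal assumptions
```

Under ideal spreading the numbers support Intel's five-year claim with room to spare, which suggests that drives dying in months are failing because of poor wear-leveling algorithms or controller faults rather than raw cell endurance.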
-
I think you may be underestimating the number of cells in a 30GB drive available for wear leveling.
If it were true that SSDs wear out rapidly, wouldn't we be seeing more failures among all the SSD netbooks and MacBook Airs?
Consider that Intel said, upon launching their 80GB X-25:
Our MLC SSD can allow a user to write 100GB/day every day for five years without wearing out the drive
However, that doesn't tie in with the second post in this thread. 16GB drive, 2.5 months, dead!
I suspect that it's very dependent on the algorithms in the drive controller.
Then again, I ran Windows 98 from a 128MB CF card for a few years with no problems! ::) (though I did disable swap)
Steve
Yes, you're completely right.
There are a lot of NAND cells on a 30GB drive, but if logs, RRD graphs and swap files are being used, even 30GB will not be enough to stop it from crashing. Notebook computers are much different than server environments because writes to the disk are not continuous all day.
Different drives have different wear leveling, and Intel makes very good products, so I am sure their system is top notch, but because you really don't need much room on pfSense I would suggest just running a smaller SSD (8-16GB) on embedded pfSense.
pfSense did take into account every last detail of how important it is to restrict write activity on flash, as does NanoBSD. Run embedded pfSense/NanoBSD and your other components will go bad long before the SSD. I have included a link to a NanoBSD PDF with number examples, but please be advised that their 200,000 writes per cell figure is off because that is for commercial flash; consumer grade NAND can typically only do 10k writes. SSD is also a better choice than CF because it's faster and has wear leveling.
www.bsdcan.org/2006/papers/nanobsd.pdf
Supposedly, if it only does a few writes per day it could last 20+ years. Embedded pfSense does not do any writes unless you save a config, and I'm not sure how many times someone would change a setting in a day, but still. Ever since I upgraded to embedded I have not seen any activity on the disk at all (until I change something). It's great.
If someone is choosing CF or SSD, Kingston makes an 8GB SSD that sells for $39 on Newegg. Just run embedded pfSense and it will last forever (well – not quite). -
The way I saw the issue with SSDs: a CF adapter with a decent card was around $80 locally, and I could pick up a new 40GB drive at the same price on the day I was looking (I actually had a spare drive and used that instead).
Intel controllers are decent even with OSes that don't support TRIM (1.2.3), and I just wanted a quiet system. I remember reading at the time about the large amount of data an SSD can write per day before wearing out. I just could not see pfSense doing that, so I put the SSD in (speed was not a factor). It's lasted a long time, and it allowed me a full install (squid/snort).
I think if it did die in the next year, I'd put another one in. I don't like the CF flashing, and I like the quiet compared to a laptop drive (there is also the issue of laptop drives not being rated for constant usage).
Next time I'd either run another drive with 2.0 (the new FreeBSD is TRIM aware, I believe), or I'd go with one of the newer Kingston V100 drives that have more aggressive garbage collection for non-TRIM-aware OSes (Apple).
-
I think you may be underestimating the number of cells in a 30GB drive available for wear leveling.
If it were true that SSDs wear out rapidly, wouldn't we be seeing more failures among all the SSD netbooks and MacBook Airs?
Consider that Intel said, upon launching their 80GB X-25:
Our MLC SSD can allow a user to write 100GB/day every day for five years without wearing out the drive
However, that doesn't tie in with the second post in this thread. 16GB drive, 2.5 months, dead!
I suspect that it's very dependent on the algorithms in the drive controller.
Then again, I ran Windows 98 from a 128MB CF card for a few years with no problems! ::) (though I did disable swap)
Steve
The Sandforce controllers don't write directly to the NAND flash, and they do have overprovisioning. E.g. a 120GB Sandforce drive has only about 110GB of usable space but 128GB of real NAND flash.
What they do is actually compute the hash of the data and write the hash out. This is also compressed, so it takes up even less space. For log files and similarly highly compressible data, this results in less data being written to the cells. When the data is requested on a read, the controller takes the hash and generates the parity data to restore it.
Furthermore, the overprovisioned space will also reduce the actual wear on the usable area. That is why Sandforce can claim 0.5x write amplification, i.e. on average only half the amount of cells are being written to for an average set of data being stored to the drive.
That said, this is very different from wear levelling itself. Wear levelling is more about how writes are spread out throughout the cells to ensure that wear and tear is evened out.
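The combined effect of sub-1.0 write amplification and overprovisioning can be sketched numerically. The figures below are the ones quoted in this thread (128GB physical NAND, 110GB usable, 0.5x amplification), used purely for illustration:

```python
# Sketch: how compression (write amplification < 1) and overprovisioning
# reduce per-cell wear. Numbers are the illustrative ones from the thread.

def wear_cycles(host_writes_gb: float, write_amplification: float,
                physical_nand_gb: float) -> float:
    """Average full-drive write cycles consumed, assuming wear leveling
    spreads writes evenly across ALL physical NAND, including spare area."""
    nand_writes_gb = host_writes_gb * write_amplification
    return nand_writes_gb / physical_nand_gb

# Writing 1 TB of compressible host data to a 120GB-class Sandforce drive
# (~128 GB physical NAND, 0.5x write amplification):
print(wear_cycles(1024, 0.5, 128))   # 4.0 average cycles consumed

# Same host writes with no compression and no spare area to spread into:
print(wear_cycles(1024, 1.0, 110))   # more than double the wear
```

The point is that both effects multiply: less data hits the NAND, and what does hit it is spread over more cells.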
-
Our MLC SSD can allow a user to write 100GB/day every day for five years without wearing out the drive
However that doesn't tie in with the second post in this thread. 16GB drive, 2.5 months, dead!
I suspect that it's very dependent on the algorithms in the drive controller.
Then again I ran Windows 98 from a 128MB CF card for a few years with no problems! ::) (though I did disable swap)
Steve
The Sandforce controllers don't write directly to the NAND flash, and they do have overprovisioning. E.g. a 120GB Sandforce drive has only about 110GB of usable space but 128GB of real NAND flash.
What they do is actually compute the hash of the data and write the hash out. This is also compressed, so it takes up even less space. For log files and similarly highly compressible data, this results in less data being written to the cells. When the data is requested on a read, the controller takes the hash and generates the parity data to restore it.
Furthermore, the overprovisioned space will also reduce the actual wear on the usable area. That is why Sandforce can claim 0.5x write amplification, i.e. on average only half the amount of cells are being written to for an average set of data being stored to the drive.
That said, this is very different from wear levelling itself. Wear levelling is more about how writes are spread out throughout the cells to ensure that wear and tear is evened out.
Supposedly, industrial/commercial flash memory can do up to 200k writes per cell, compared to the 10k most consumer grade flash can do. I wonder if there is an SSD out there that uses this grade of flash, though I would imagine it costs a fortune! Nevertheless, running embedded pfSense on a cheap SSDNow 100 8GB is more than adequate for us.
I have a question, off topic a bit, for someone:
I am running about 1GB of RAM and about 74% is in use, which should not be the case. In researching this a bit I noticed about 5200 processes running so_recv inetd. We heavily use NAT reflection here, so can anyone explain how NAT reflection is handled in pfSense? This morning memory usage is down to 20% and the processes are down to 2000, however NAT reflection stopped working.
Trying to trace the problem, but I need to understand how pfSense is handling reflections. Probably the wrong place to post this.
-
An SSD that can do 2,000,000 writes? Industrial/military?
They show a picture of a fighter jet on the home page:
http://www.delkinoem.com/sata-drive-industrial.html#tab-2
Just hope they don't take the cheap route and use a Kingston SSD in military unmanned aerial vehicles…
-
Supposedly, industrial/commercial flash memory can do up to 200k writes per cell, compared to the 10k most consumer grade flash can do. I wonder if there is an SSD out there that uses this grade of flash, though I would imagine it costs a fortune! Nevertheless, running embedded pfSense on a cheap SSDNow 100 8GB is more than adequate for us.
It's called SLC; we're looking at 2 to 3 times the price of a similar unit using the regular MLC stuff.
The main thing is that most newer controllers are geared towards using cheaper MLC flash, so you typically get slower SSDs when you're on the lookout for SLC.
-
Supposedly industrial/commercial flash memory can do up to 200k writes per cell compared to the 10k most consumer grade
flash can do. I wonder if there is an SSD out there that uses this grade of flash, though I would imagine it costs a fortune! Nevertheless, running embedded pfSense on a cheap SSDNow 100 8GB is more than adequate for us.
It's called SLC; we're looking at 2 to 3 times the price of a similar unit using the regular MLC stuff.
The main thing is that most newer controllers are geared towards using cheaper MLC flash, so you typically get slower SSDs when you're on the lookout for SLC.
Yes, I noticed the speed was much slower.
-
Yes, I noticed the speed was much slower.
SLC NAND flash is actually faster on the whole than MLC, largely because it doesn't need to be written in blocks. However, most current SSDs using SLC are geared towards enterprise usage, where data security and reliability are very important.
You can't just wipe the data on an SSD like you would with a mechanical drive using a degaussing wand. Hence these SSDs carry additional ECC parity, which reduces performance, plus built-in encryption. The end result is an SSD that is slower than its MLC counterpart. Furthermore, most controllers are ported over directly from their MLC counterparts, so you lose the cell-level write capabilities anyway.
-
Actually, I did not know that… Up until the other day I didn't even realize there was industrial flash capable of that. Sure beats the days of EPROMs and UV erasers.
-
However, that doesn't tie in with the second post in this thread. 16 GB drive, 2.5 months, dead!
Remember that an SSD is just electronics and might die for some reason other than wearing out. I've seen cheap SSDs break after just weeks of very light laptop use, and I'm sure the NAND media was just fine and it was the controller that died. Also note that most good SSDs just go into read-only mode when they wear out, so there should be no data loss.
I have no experience yet with running pfSense on an SSD, but I'd use a quality SLC drive or a large enough (80+ GB) Intel MLC, maybe "short stroked", and would not worry about it too much. Small CompactFlash cards are a totally different thing, of course. I've been using a 160 GB Intel drive for 1.5 years in my laptop without disabling swap or doing anything else to reduce writes (and it's a Mac without TRIM support), and I even run a program called downtimed that touches the drive every 15 seconds. Now, after writing 3.5 TB to the drive, the SMART media wearout indicator says 98, which means I've used about 2% of the drive's life… I think this drive will outlive me.
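That wearout figure lives in the SMART attribute table. A minimal sketch of pulling it out of `smartctl -A` output, assuming the Intel-style attribute 233 Media_Wearout_Indicator (the excerpt below is fabricated, and other vendors use different attribute numbers):

```python
# Fabricated excerpt of `smartctl -A` output for an Intel SSD.
sample = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
225 Host_Writes_32MiB       0x0032   100   100   000    Old_age   112640
233 Media_Wearout_Indicator 0x0032   098   098   000    Old_age   0
"""

def wearout(smart_text):
    """Return the normalized value of attribute 233, or None if absent."""
    for line in smart_text.splitlines():
        fields = line.split()
        if fields and fields[0] == "233":
            return int(fields[3])  # VALUE column: counts down from 100
    return None

print(wearout(sample))  # -> 98
```

On a live FreeBSD/pfSense box you would feed in the output of something like `smartctl -A /dev/ad0` from the smartmontools package instead of the hard-coded sample.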
-
My full install at home as the main router/firewall ran from 2004 on a SanDisk Ultra III 1 GB until it gave out in January 2011. So 6-7 years, quite good for consumer grade. But of course your mileage could vary :). I had a spare 1 GB Kingston Elite Pro and I am running that for now; we'll see how long it lasts.
I ordered a 4 GB Transcend 133x for 10€ and I'm experimenting with a 2.0 setup on it, which will be taken into production when it's stable enough. No need for expensive SSDs.
-
My full install at home as the main router/firewall ran from 2004 on a SanDisk Ultra III 1 GB until it gave out in January 2011. So 6-7 years.
Hmm, interesting stuff. Did you have a lot of ram? Were you running any packages?
Steve
-
Yes, I always have enough. I had 512 MB at first and now there is 2 GB. Snort hogs a lot of memory :), and it was my only package that was always running, along with a couple of diagnostic/testing tool packages like iPerf. I also installed the dashboard when it was released.
I also deleted the page file at install.
I googled quickly and it seems the Ultra III might be an SLC type, more of a professional grade card. The Kingston Elite Pro is also SLC, while the Transcend is MLC: http://reviews.pricegrabber.co.uk/laptop-memory/m/47729654/ . Should have bought an Extreme III :). But I will build a setup on the Transcend, so we'll see if it really fails after a couple of months.
I have been meaning to install a dedicated logging service, LogAnalyzer, on a virtual machine: http://loganalyzer.adiscon.com/ . It would be nice to have all logs in a database in one place. This should also help extend the life of my CF installation, if the logs are really written to the CF. But I thought they were only written to a local RAM disk, since there is the option "Disable writing log files to the local ram disk" and they disappear after reboot.
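If the remote-logging plan goes ahead, pfSense's remote syslog just emits standard BSD syslog datagrams over UDP port 514. A minimal sketch of what goes on the wire (192.0.2.10 is a placeholder collector address; the PRI prefix follows RFC 3164):

```python
import socket

def syslog_packet(msg, facility=16, severity=6):
    """Build a BSD-syslog-style packet; PRI = facility * 8 + severity."""
    return f"<{facility * 8 + severity}>{msg}".encode()

pkt = syslog_packet("pf: test log line from the firewall")
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(pkt, ("192.0.2.10", 514))  # uncomment once a collector is listening
print(pkt)
```

Pointing pfSense at the LogAnalyzer VM under Status > System Logs > Settings gets the logs off the flash entirely, which is exactly the write-reduction goal of this thread.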