SSD (Solid State Drive) and pfSense (Important)
-
@stephenw10, @cmb:
A lot of people are running normal full installs on SSD. Sure, they don't have infinite writes, but it's enough that on average they should last as long as a typical hard drive. It's unlikely you're touching swap at all on your firewall; if you are, you have serious issues.
Hmmm, that's interesting. I assume you are talking about larger, HDD replacement type drives such as tester_02's 40GB?
Yeah, 30+ GB.
In general, it's not worth it for a firewall in most environments. Unless you have something like Squid installed, the disk is almost never touched after you're booted up anyway.
Hi cmb: In the NanoBSD version the disk is not touched after boot, but if you are running the normal install and have not disabled log writing, RRD graphing, or any other write-intensive service, the SSD will be worn out, quite possibly long before a typical hard drive would fail. 10k writes per cell is nothing, and if RRD and the logs are constantly writing, each write wears down the NAND cells. If you're using a 30GB SSD and the controller in the SSD uses wear leveling, then it has plenty of space to remap and spread writes across. However, if numerous writes are occurring constantly, it's only a matter of time before it fails; 30GB just buys you more time. Hybrid SSDs are a different case because of the RAM cache; those would last a bit longer. Just use the NanoBSD version on an SSD and it will last a very long time.
Wear leveling: http://en.wikipedia.org/wiki/Wear_leveling
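To put rough numbers on this argument, here is a back-of-the-envelope sketch. The write rate, the 10k cycle figure, and the write-amplification factor are all assumptions for illustration, not measurements from any particular firewall; the point is how much the lifetime estimate swings with the actual write rate.

```python
# Back-of-the-envelope SSD wear estimate for continuous small writes
# (logs, RRD updates). Every input below is an illustrative assumption.

capacity_gb = 30          # usable capacity available to the wear leveler
pe_cycles = 10_000        # assumed consumer MLC program/erase cycles per cell
write_amplification = 10  # assumed penalty for small, frequent writes

# Total host data the drive can absorb, assuming ideal wear leveling.
host_write_budget_gb = capacity_gb * pe_cycles / write_amplification

for rate_gb_per_day in (0.1, 1, 10, 100):
    years = host_write_budget_gb / rate_gb_per_day / 365
    print(f"{rate_gb_per_day:6.1f} GB/day of logging -> ~{years:,.0f} years to wear out")
```

Whether the verdict is "months", "years", or "decades" depends almost entirely on the assumed write rate and amplification, which is exactly what the posts in this thread disagree about.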
SSDs are a fairly new technology and are not ready for server environments with frequent writes, so it's very important to treat current SSD technology the same way you treat CF. The embedded pfSense (NanoBSD) version is perfect for SSDs. SSDs are not a simple drop-in replacement on a normal pfSense install; if you use the embedded/NanoBSD version, no additional steps are needed.
When running off the LiveCD it does mount in memory, but once installed the file system lives on the drive and not in RAM. There is a RAM disk, but it is not used for the /tmp directory (used for RRD storage amongst other things) on the normal pfSense version. The embedded/NanoBSD version runs even the /tmp directory in RAM.
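If you want to confirm on your own box whether /tmp (and /var) actually sit on a RAM disk, a quick sketch like the following can help. It simply parses the output of FreeBSD's mount command and assumes memory disks show up as /dev/md* devices, so treat it as illustrative rather than definitive.

```python
# Check whether /tmp and /var are memory-backed (md RAM disks) on FreeBSD/pfSense.
# Assumes memory disks appear as /dev/md*; adjust for your version if needed.
import subprocess

mount_output = subprocess.run(["mount"], capture_output=True, text=True).stdout
for line in mount_output.splitlines():
    device, _, rest = line.partition(" on ")
    mountpoint = rest.split(" ")[0] if rest else ""
    if mountpoint in ("/tmp", "/var"):
        ram_backed = device.startswith("/dev/md")
        print(f"{mountpoint} is on {device} "
              f"({'RAM disk' if ram_backed else 'regular disk'})")
```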
You're right about the swap file; typically it will never be used unless you run out of memory. However, the regular version of pfSense is just not optimized for CF/SSD storage, so disk writes will keep occurring unless you disable the other services I mentioned in the first post.
-
I think you may be underestimating the number of cells in a 30GB drive available for wear leveling.
If it were true that SSDs would wear out rapidly, wouldn't we be seeing more failures among all the SSD netbooks and MacBook Airs? Consider that Intel said, upon launching their 80GB X-25:
Our MLC SSD can allow a user to write 100GB/day every day for five years without wearing out the drive
However that doesn't tie in with the second post in this thread. 16GB drive, 2.5 months, dead!
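As a rough sanity check on that Intel claim (illustrative arithmetic only; the 10,000-cycle and write-amplification figures are assumptions, not Intel specifications):

```python
# How much wear does "100 GB/day for five years" actually imply on an 80 GB drive?
# The cycle rating and write amplification below are assumptions for illustration.

drive_gb = 80
gb_per_day = 100
years = 5
assumed_pe_cycles = 10_000
assumed_write_amplification = 3

total_host_writes_gb = gb_per_day * 365 * years          # ~182,500 GB
full_drive_writes = total_host_writes_gb / drive_gb      # ~2,281 overwrites
cycles_consumed = full_drive_writes * assumed_write_amplification

print(f"Host writes over 5 years: {total_host_writes_gb:,} GB "
      f"(~{full_drive_writes:,.0f} full-drive writes)")
print(f"Cycles consumed with WA={assumed_write_amplification}: "
      f"~{cycles_consumed:,.0f} of an assumed {assumed_pe_cycles:,}")
```

On those assumptions, wear alone doesn't explain a 16GB drive dying in 2.5 months; a controller fault or a pathological write pattern seems more plausible.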
I suspect that it's very dependent on the algorithms in the drive controller.
Then again I ran Windows 98 from a 128MB CF card for a few years with no problems! ::) (though I did disable swap)
Steve
-
I think you may be underestimating the number of cells in a 30GB drive available for wear leveling.
Yes, you're completely right.
There are a lot of NAND cells on a 30GB drive, but if logs, RRD graphs, and swap files are constantly in use, even 30GB will not be enough to keep it from failing. Notebook computers are very different from server environments because writes to the disk are not continuous all day. Different drives have different wear leveling, and Intel makes very good products, so I am sure their system is top notch; but because you really don't need much room for pfSense, I would suggest just running a smaller SSD (8-16GB) with embedded pfSense. pfSense did take into account every last detail of how important it is to restrict write activity on flash, as does NanoBSD.
Run embedded pfSense/NanoBSD and your other components will go bad long before the SSD. I have included a link to a NanoBSD PDF with example numbers, but please be advised that their 200,000 writes per cell is off, because that figure is for commercial flash; consumer-grade NAND can typically only do 10k writes. An SSD is also a better choice than CF because it's faster and has wear leveling.
www.bsdcan.org/2006/papers/nanobsd.pdf
Supposedly, if it only does a few writes per day it could last 20+ years. Embedded pfSense does not do any writes unless you save a config, and I'm not sure how many times someone would change a setting in a day, but still. Ever since I upgraded to embedded I have not seen any activity on the disk at all (until I change something). It's great.
If someone is choosing between CF and SSD, Kingston makes an 8GB SSD that sells for $39 on Newegg. Just run embedded pfSense and it will last forever (well – not quite).
-
The way I saw the issue with SSDs was that a CF adapter with a decent card was around $80 locally. I could pick up a new 40GB drive at the same price on the day I was looking at it (I actually had a spare drive and used that instead).
Intel controllers are decent even with OSes that don't support TRIM (1.2.3), and I just wanted a quiet system. I remember reading at the time about the large amount of data the SSD can write per day before wearing out. I just could not see pfSense doing that, so I put the SSD in (speed was not a factor). It's lasted a long time, and it allowed me a full install (Squid/Snort).
I think if it did die in the next year, I'd put another one in. I don't like the CF flashing, and I like the quiet compared to a laptop drive (there is also the issue of laptop drives not being rated for constant usage). Next time I'd either run another drive with 2.0 (the new FreeBSD is TRIM-aware, I believe), or I'd go with one of the newer Kingston V100 drives that have more aggressive garbage collection for non-TRIM-aware OSes (Apple).
-
I suspect that it's very dependent on the algorithms in the drive controller.
The SandForce controllers don't write directly to the NAND flash, and they do have overprovisioning.
E.g. a 120GB SandForce drive has only about 110GB of usable space but 128GB of real NAND flash. What they do is actually compute a hash of the data and write the hash out. This is also compressed, so it takes up even less space.
For log files and similarly highly compressible data, this results in less data being written to the cells. When the data is requested on a read, the controller takes the hash and generates the parity data to restore it.
Furthermore, the overprovisioned space will also reduce the actual wear on the usable area. That is why SandForce can claim 0.5x write amplification, i.e. on average only half the amount of cells are written for an average set of data being stored to the drive. That said, this is very different from wear levelling itself. Wear levelling is more about how writes are spread out across the cells to ensure that wear and tear is evened out.
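As a small illustration of why overprovisioning and write amplification matter for the wear math (the capacities, cycle rating, and amplification factors below are assumptions for illustration, not SandForce specifications):

```python
# Host-write budget implied by raw flash size, an assumed cycle rating, and
# different write-amplification factors. All values are illustrative only.

raw_nand_gb = 128   # physical flash, including the overprovisioned area
usable_gb = 110     # capacity exposed to the OS
pe_cycles = 10_000  # assumed program/erase cycles per cell

spare_pct = (raw_nand_gb - usable_gb) / usable_gb * 100
print(f"Overprovisioning: ~{spare_pct:.0f}% spare area for the controller to work with")

for write_amplification in (0.5, 1.0, 3.0):
    host_budget_tb = raw_nand_gb * pe_cycles / write_amplification / 1024
    print(f"write amplification {write_amplification}: "
          f"~{host_budget_tb:,.0f} TB of host writes before the cycle budget is spent")
```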
-
The SandForce controllers don't write directly to the NAND flash, and they do have overprovisioning.
Supposedly, industrial/commercial flash memory can do up to 200k writes per cell compared to the 10k most consumer-grade flash can do. I wonder if there is an SSD out there that uses this grade of flash, though I would imagine it costs a fortune! Nevertheless, running embedded pfSense on a cheap SSDNow 100 8GB is more than adequate for us.
I have a question that's a bit off topic for someone:
I am running about 1GB of RAM and about 74% is in use, which should not be the case. In researching this a bit I noticed about 5200 inetd processes in so_recv. We heavily use NAT reflection here, so can anyone explain how NAT redirection is handled in pfSense? This morning memory usage is down to 20% and the processes are down to 2000, but NAT reflection stopped working. I'm trying to trace the problem but need to understand how pfSense is handling reflections. Probably the wrong place to post this.
-
SSD that can do 2,000,000 writes? Industrial/military?
They show a picture of a fighter jet on the home page:
http://www.delkinoem.com/sata-drive-industrial.html#tab-2
Just hope they don't take the cheap route and use a Kingston SSD in military unmanned aerial vehicles…
-
Supposedly, industrial/commercial flash memory can do up to 200k writes per cell compared to the 10k most consumer-grade flash can do. I wonder if there is an SSD out there that uses this grade of flash, though I would imagine it costs a fortune!
It's called SLC; we're looking at 2 to 3 times the price of a similar unit using the regular MLC stuff.
The main thing is that most newer controllers are geared towards using cheaper MLC flash, so you typically get slower SSDs when you're on the lookout for SLC.
-
The main thing is that most newer controllers are geared towards using cheaper MLC flash, so you typically get slower SSDs when you're on the lookout for SLC.
Yes, I noticed the speed was much slower.
-
Yes, I noticed the speed was much slower.
SLC NAND flash is actually faster on the whole than MLC, largely because it doesn't need to be written to in blocks. However, most current SSDs using SLC are geared towards enterprise usage, where data security and reliability are very important.
You can't just wipe the data on an SSD like you would with a mechanical drive and a degaussing wand. Hence, these SSDs will have additional ECC parity, which reduces performance, and also built-in encryption. The end result is an SSD that is slower than its MLC counterpart. Furthermore, most controllers are ported over directly from their MLC counterparts, so you lose the cell-level write capabilities anyway.
-
Actually, I did not know that… Up until the other day I didn't even realize there was industrial flash capable of that. Sure beats the days of EPROMs and UV erasers.
-
However that doesn't tie in with the second post in this thread. 16GB drive, 2.5 months, dead!
Remember that an SSD is just electronics that might die for some reason other than wearing out. I've seen cheap SSDs break after just weeks of very light laptop use, and I'm sure the NAND media were just fine; it was the controller that died. Also note that most good SSDs just go into read-only mode when they wear out, so there should be no data loss.
I have no experience yet with running pfSense on SSD, but I'd use a quality SLC drive or a large-enough (80+ GB) Intel MLC, maybe "short stroked", and would not worry about it too much. Small compact flash cards are a totally different thing, of course. I've been using a 160 GB Intel drive for 1.5 years on my laptop without disabling swap or doing anything else to reduce writes (and it's a Mac without TRIM support), and I even run a program called downtimed that touches the drive every 15 seconds. Now, after writing 3.5 TB to the drive, the SMART media wearout indicator says 98, which means I've used about 2% of the drive's life… I think this drive will outlive me.
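For what it's worth, extrapolating from those SMART numbers looks roughly like this (a sketch that assumes the media wearout indicator falls linearly from 100 to 0, which real firmware only approximates):

```python
# Extrapolate remaining drive life from the SMART media wearout indicator.
# Assumes the indicator drops linearly from 100 (new) to 0 (worn out).

written_tb = 3.5        # host writes so far
wearout_indicator = 98  # current SMART value (started at 100)

fraction_used = (100 - wearout_indicator) / 100
projected_total_tb = written_tb / fraction_used
remaining_tb = projected_total_tb - written_tb

print(f"~{fraction_used:.0%} of rated life used so far")
print(f"Projected endurance: ~{projected_total_tb:.0f} TB total, "
      f"~{remaining_tb:.0f} TB still to go at the same write rate")
```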
-
My full install at home as main router/firewall was running from 2004 on a SanDisk Ultra III 1 GB until it gave out in January 2011. So 6-7 years, quite good for consumer grade. But of course your mileage may vary :). I had a spare 1 GB Kingston Elite Pro and I am running that for now; we'll see how long it lasts.
I ordered a 4 GB Transcend 133x for 10€ and I'm experimenting with a 2.0 setup on it; it will be put into production when it's stable enough. No need for expensive SSDs.
-
My full install at home as main router/firewall was running from 2004 on a SanDisk Ultra III 1 GB until it gave out in January 2011. So 6-7 years.
Hmm, interesting stuff. Did you have a lot of RAM? Were you running any packages?
Steve
-
Yes, I always have enough. I had 512 MB at first and now there is 2 GB. Snort hogs a lot of memory :), and that was my only package always running, plus a couple of diagnostic/testing tool packages like iPerf. I also installed the dashboard when it was released.
I also deleted the page file at install.
I googled quickly and it seems like the Ultra III might be SLC type, which is more like professional grade. The Kingston Elite Pro is also SLC. The Transcend is MLC: http://reviews.pricegrabber.co.uk/laptop-memory/m/47729654/ . Should have bought an Extreme III :). But I will build a setup on the Transcend, so we'll see if it really fails after a couple of months.
I have been meaning to install the dedicated logging service LogAnalyzer on a virtual machine (http://loganalyzer.adiscon.com/). It would be nice to have all logs in a database in one place. This should also help extend the life of my CF installation, if the logs are really written to CF. But I thought they are only written to the local RAM disk, because there is an option "Disable writing log files to the local ram disk" and they disappear after reboot.
-
I will build a setup on the Transcend, so we'll see if it really fails after a couple of months.
That will be useful real-world data, especially for me as I'm also running 133X Transcend cards.
Good luck! 8)
Steve
-
SSD that can do 2,000,000 writes? Industrial/military?
Off topic, but I worked at a large defense contractor a couple of years back on some bleeding-edge electronics to retrofit Abrams and Bradley vehicles to plug into the Army's FCS network, and I remember the SSD used in each one of those babies cost roughly $50K. And it was definitely not Kingston, lol. I'd imagine a UAV or aircraft would have even greater physical requirements, although I suppose a tank has to withstand a direct RPG/explosives hit, which can put a lot of stress on computer equipment.
Is there any way to verify whether a given CF card has wear leveling? Or can anyone recommend a certain brand/model they know has that feature? I'd hate to buy a cheapie 16GB card thinking it would be safe and have it die a couple of months out, but I'd like to save a few bucks over industrial CF at $100 a pop.
-
Newegg has some low capacity SSDs for cheap.
The 8GB Kingston SSDNow S100 SSD comes in at US$39.99 - http://www.newegg.com/Product/Product.aspx?Item=N82E16820139427
The 16GB at US$49.99 - http://www.newegg.com/Product/Product.aspx?Item=N82E16820139428
A 32GB ADATA SSD costs US$69.99 - http://www.newegg.com/Product/Product.aspx?Item=N82E16820211478
The OCZ Onyx 32GB weighs in at US$74.99 - http://www.newegg.com/Product/Product.aspx?Item=N82E16820227510
With prices like these, I don't think you should bother looking for CF cards with wear-levelling controllers. Most of them would be SLC-based industrial units that cost an arm and a leg. An SSD would definitely have wear levelling built-in and can plausibly be cheaper.
If you must use a CF card, then you can look for the Transcend CF300 and CF100i units on Newegg; they use SLC NAND and have wear levelling built-in.
-
Couldn't you just use an SLC drive instead of the cheap MLC drives and limit the number of writes? I would think this would be a good compromise for those that want to use packages and the full install. Of course the SLC drive will be more expensive, but you get what you pay for! :)
-
Thank you very much to everyone who participated in this topic.
All of the doubts I had about using SSD/flash drives have been covered.
Originally I had a hard time trying to figure out which image or install method was best to use, and what the drawback would be if I just went with the ISO install method onto flash media.
Well, the results are quite clear; it's exactly as I thought it would be in the worst-case scenario.
Basically, after reading all of the above, I now know I'll be better off using the NanoBSD (embedded) install rather than the regular method via the LiveCD.
Another alternative I had in mind, and was planning to use to avoid the COM-port annoyance of the NanoBSD version, was the memstick image, which is supposedly a LiveCD on a bootable USB stick.
Now my question is:
Would it work if I raw-write pfSense-memstick-2.0-RC3-i386-20110621-1650.img to an SSD?
I'll assume yes.
But then comes the issue of how to save the configuration and reload it automatically after reboot. Both methods have their trade-offs:
The NanoBSD image is already made and configured to that extent, but it lacks keyboard and video.
The USB stick version offers keyboard and video, but it lacks the option to keep and auto-load configuration changes. Hmmm… perhaps a workaround is to use something like Universal USB Installer (http://www.pendrivelinux.com/universal-usb-installer-easy-as-1-2-3/), which works for most Linux/BSD LiveCDs yet lets you keep the changes made during the session on a CASPER partition or file on the host USB drive, which in our case would be an SSD or flash drive that boots from ATA instead of USB.
Any thoughts on that?
Just for reference, the hardware I was using is currently running the old pfSense 1.2.3, which I plan to wipe and install clean using an 8GB Class 6 Transcend flash drive on ATA-2 (EIDE). It's an old:
Dual Pentium 233 MMX
512 MB RAM
3x PCI 10/100/1000 Mbit/s 3Com (WAN1+WAN2+LAN)
1x WiFi G (LAN)
So the only upgrade is really replacing the old, super slow, noisy, and hot 500MB SCSI 10 Mbyte/s HDD and SCSI controller with an 8GB SD-to-ATA adapter, which maxes out at 16 Mbyte/s,
then installing the new pfSense 2.0 on it.
It's a very well-kept antique machine, which serves no better purpose than being a load-balancing router/switch/firewall.
It's even too slow for a GUI Linux. :P