SSD (Solid State Drive) and pfSense (Important)
-
Do you have a lot of RAM in that machine, enough to prevent swapping?
With a 40GB drive I imagine you will have a lot of empty space; the wear leveling on the drive will shuffle data around so that 10,000 writes per cell can go a long way.
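To put rough numbers on that, here is a back-of-envelope sketch (illustrative only; the 1 GB/day write volume and the 10,000 cycles per cell are assumptions, and real controllers add some write amplification):

# Rough lifetime estimate with ideal wear leveling (illustrative arithmetic only).
capacity_gb = 40            # drive size, mostly empty space
pe_cycles = 10_000          # assumed program/erase cycles per cell (consumer NAND)
writes_per_day_gb = 1       # assumed logging/RRD churn on a full install

total_budget_gb = capacity_gb * pe_cycles            # 400,000 GB of cell writes available
print(total_budget_gb / writes_per_day_gb / 365)     # ~1,096 years at 1 GB/day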
Even so I'd keep checking the SMART status of the drive if I were you.
Steve
-
Note the logs are kept in RAM whether you're running a full install or nanobsd.
A lot of people are running normal full installs on SSDs. Sure, they don't have infinite writes, but they have enough that on average they should last as long as a typical hard drive. It's unlikely you're touching swap at all on your firewall; if you are, you have serious issues.
Can you explain this process (or link to the page)?
Check Diagnostics > NanoBSD; the settings for the periodic automatic copy of RRD data to CF are there.
-
@cmb:
A lot of people are running normal full installs on SSDs. Sure, they don't have infinite writes, but they have enough that on average they should last as long as a typical hard drive. It's unlikely you're touching swap at all on your firewall; if you are, you have serious issues.
Hmmm, that's interesting. I assume you are talking about larger, HDD replacement type drives such as tester_02's 40GB?
I did some research on this a while ago and came to the conclusion that it just wasn't worth the extra investment in a firewall appliance. The speed increase provided by an SSD is unlikely to be of much benefit except in boot-up time, which doesn't count for much.
I suppose there are enough people running a Windows install from an SSD that we'd be seeing failures by now if it was a problem. I'd be interested to hear the reasons behind running an SSD from anyone doing so.
Steve
-
SSD?
pfsense system spec:
wyse 3455xl
RAM: 256MB SDRAM
HD: 4 gig SSD IDE DOM
NIC: 4 Ethernet (Intel) dr0-WAN, dr1-LAN, dr2-OPT1, dr3-OPT2. I installed to a 4 gig DOM SSD from the Live CD and have been running it for about 10 days now. I saw FJSchrankJr post a warning about SSDs. What is the best solution for me? Can I load the NanoBSD image through the GUI to fix my problem, or do I have to do a clean reinstall?
Thanks
Hi – if you are running off of the LiveCD then everything is mounted in memory. However, if you install it, the file system will be written to your hard drive.
The only way that I know of to switch to the NanoBSD version is to do a clean install like I did. I should warn you that once you load the NanoBSD version everything except the config files will be running in RAM, and 256MB might not be enough. In the past I had been running on 256MB with a hard drive install with no problem, but when I switched to the NanoBSD version (upgraded to 512MB RAM) I did run out of memory. I am not sure if it was some type of memory leak or cache, but I will have to install more memory because the system almost crashed.
The easiest way to upgrade is to make a config backup, write the NanoBSD image to the drive, and boot. If your main firewall cannot go down, try to find another machine to act as a mirror. That's how I did it; then I just restored the config and I was up and running. Good luck!
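For the "write the NanoBSD image to the drive" step, the usual tools are dd(1) on BSD/Linux or physdiskwrite on Windows. Purely as an illustration of what that step does, here is a minimal Python sketch of a raw block copy; the image filename and target device are placeholders, and anything like this overwrites the target disk, so double-check the device node first:

# Illustrative only: dd(1) is the normal way to do this; this sketch just shows
# what a raw block copy does. IMAGE and TARGET are placeholders -- writing to
# the device destroys whatever is on that disk.
import shutil

IMAGE = "pfSense-nanobsd.img"   # hypothetical downloaded NanoBSD image file
TARGET = "/dev/da0"             # hypothetical CF/SSD device node

with open(IMAGE, "rb") as src, open(TARGET, "wb") as dst:
    shutil.copyfileobj(src, dst, length=1024 * 1024)  # copy in 1 MiB chunks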
I did notice that on pfSense 1.2.3 the Embedded option during the LiveCD install was not the NanoBSD version but just a version with VGA disabled. Make sure you use the NanoBSD image from the download mirrors.
CORRECTION: 256MB is more than enough RAM for a base embedded install. However, if you add packages, extra logging, etc., memory usage will be higher, so consider 512MB+.
-
FYI, I've had a 1.2.3 full install running on a 40GB Intel SSD (a spare at the time) for at least 1.5 years so far. No issues.
Some SSDs use controllers that spread wear across the entire drive, and a 40GB drive gives the controller plenty of space to spread writes over. If you are not running NanoBSD or an optimized regular install, you will eventually crash. SSDs and Compact Flash should be treated the same way; one of the main reasons for the NanoBSD version was to address the CF write limitations.
-
256MB should be no problem, as long as you don't have a memory leak! :P
That's the standard amount in the Watchguard X-Core and there are many people running that with NanoBSD.
Steve
-
256MB should be no problem, as long as you don't have a memory leak! :P
That's the standard amount in the Watchguard X-Core and there are many people running that with NanoBSD.
Steve
Hi Steve, I do agree! I am running pfSense 2.0 (NanoBSD) with 512MB of RAM and no packages installed. Overnight memory usage went to about 98% and the DHCP server crashed. All logging is disabled, so either there is a cache building somewhere in the RAM disk or I do have a leak.
-
256MB should be no problem, as long as you don't have a memory leak! :P
That's the standard amount in the Watchguard X-Core and there are many people running that with NanoBSD.
Steve
Hi Steve, I do agree! I am running pfSense 2.0 (NanoBSD) with 512MB of RAM and no packages installed. Overnight memory usage went to about 98% and the DHCP server crashed. All logging is disabled, so either there is a cache building somewhere in the RAM disk or I do have a leak.
Good news is that my memory issue on 512MB is not a leak. I guess we're pushing more traffic than I thought, because as the MBUFs increase dynamically the memory usage jumps up too. This was not an issue on the non-NanoBSD version because the file system was on disk, which left me much more RAM.
The NanoBSD version really is the perfect solution for running on an SSD or CF drive. I am not running the traffic shaper on this system, but if you do, make sure you have a lot of memory; the queues need plenty of it.
If you put pfSense on the right hardware (NanoBSD, an SSD, a moderately fast CPU and a lot of memory), performance is terrific and rock solid.
-
@cmb:
A lot of people are running normal full installs on SSDs. Sure, they don't have infinite writes, but they have enough that on average they should last as long as a typical hard drive. It's unlikely you're touching swap at all on your firewall; if you are, you have serious issues.
Hmmm, that's interesting. I assume you are talking about larger, HDD replacement type drives such as tester_02's 40GB?
Yeah, 30+ GB.
In general, it's not worth it for a firewall in most environments. Unless you have something like Squid installed, the disk is almost never touched after you're booted up anyway.
-
@stephenw10:
@cmb:
A lot of people are running normal full installs on SSDs. Sure, they don't have infinite writes, but they have enough that on average they should last as long as a typical hard drive. It's unlikely you're touching swap at all on your firewall; if you are, you have serious issues.
Hmmm, that's interesting. I assume you are talking about larger, HDD replacement type drives such as tester_02's 40GB?
Yeah, 30+ GB.
In general, it's not worth it for a firewall in most environments. Unless you have something like Squid installed, the disk is almost never touched after you're booted up anyway.
Hi cmb: In the NanoBSD version the disk is not touched after boot, but if you are running the normal install and have not disabled log writing, RRD graphing, or any other write-intensive service, the SSD will be destroyed, quite possibly long before a typical hard drive would be. 10k writes per cell is nothing, and if RRD and the logs are writing constantly, each write eats into the NAND cells. If you're using a 30GB SSD and its controller does wear leveling, then it has plenty of space to remap and spread writes across. However, if numerous writes are occurring constantly, it's only a matter of time before it fails; 30GB just gives you more time. Hybrid SSDs are different because of the RAM cache; those would last a bit longer. Just use the NanoBSD version on an SSD and it will last a very long time.
Wear leveling: http://en.wikipedia.org/wiki/Wear_leveling
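To make the disagreement concrete, here is a rough comparison of the two cases being argued about (purely illustrative; the write volume, hot-region size and cycle count are assumptions):

# Two scenarios being argued here (illustrative; the numbers are assumptions).
pe_cycles = 10_000          # assumed cycles per cell, consumer grade NAND
writes_per_day_mb = 500     # assumed log + RRD churn on a full install

# Without wear leveling, the same small "hot" region absorbs every write:
hot_region_mb = 100
print(pe_cycles / (writes_per_day_mb / hot_region_mb) / 365)   # ~5.5 years

# With ideal wear leveling across a 30 GB drive, writes spread over all cells:
drive_mb = 30 * 1024
print(drive_mb * pe_cycles / writes_per_day_mb / 365)          # ~1,680 years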
SSDs are a fairly new technology and are not ready for server environments with frequent writes, so it's very important to treat current SSD technology the same way you treat CF. The embedded pfSense (NanoBSD) version is perfect for SSDs. SSDs are not a simple drop-in replacement on the normal pfSense install, but if you use the embedded/NanoBSD version no additional steps are needed.
When running off of the LiveCD everything does mount in memory, but once installed the file system is on the drive, not in RAM. There is a RAM disk, but on the normal pfSense version it is not used for the /tmp directory (which holds RRD storage, amongst other things). The embedded/NanoBSD version runs even the /tmp directory in RAM.
You're right about the swap file; typically it will never be used unless you run out of memory. However, the regular version of pfSense is just not optimized for CF/SSD storage, so disk writes will keep occurring unless you disable the services I mentioned in my first post.
-
I think you may be underestimating the number of cells in a 30GB drive available for wear leveling.
If it were true that SSDs would wear out rapidly, wouldn't we be seeing more failures among all the SSD netbooks and MacBook Airs? Consider that Intel said, upon launching their 80GB X-25:
Our MLC SSD can allow a user to write 100GB/day every day for five years without wearing out the drive
However, that doesn't tie in with the second post in this thread: 16GB drive, 2.5 months, dead!
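As a sanity check on Intel's figure, the raw arithmetic looks like this (illustrative only; the 10,000 cycles per cell is an assumption for consumer MLC of that era):

# Sanity check on Intel's claim for an 80 GB drive (assumed 10k cycles per cell).
host_writes_tb = 100 * 365 * 5 / 1000        # 100 GB/day for 5 years ~= 182.5 TB
raw_endurance_tb = 80 * 10_000 / 1000        # 80 GB x 10k cycles = 800 TB of cell writes

# The claim holds as long as average write amplification stays below this ratio:
print(raw_endurance_tb / host_writes_tb)     # ~4.4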
I suspect that it's very dependent on the algorithms in the drive controller.
Then again I ran Windows 98 from a 128MB CF card for a few years with no problems! ::) (though I did disable swap)
Steve
-
I think you may be underestimating the number of cells in a 30GB drive available for wear leveling.
If it were true that SSDs would wear out rapidly, wouldn't we be seeing more failures among all the SSD netbooks and MacBook Airs? Consider that Intel said, upon launching their 80GB X-25:
Our MLC SSD can allow a user to write 100GB/day every day for five years without wearing out the drive
However, that doesn't tie in with the second post in this thread: 16GB drive, 2.5 months, dead!
I suspect that it's very dependent on the algorithms in the drive controller.
Then again I ran Windows 98 from a 128MB CF card for a few years with no problems! ::) (though I did disable swap)
Steve
Yes, you're completely right.
There are a lot of NAND cells on a 30GB drive, but if logs, RRD graphs and swap files are all in use, even 30GB will not be enough to stop it from eventually crashing. Notebook computers are much different than server environments because writes to the disk are not continuous all day. Different drives have different wear leveling, and Intel makes very good products, so I'm sure their system is top notch, but because you really don't need much room for pfSense I would suggest just running a smaller SSD (8-16GB) with embedded pfSense. pfSense did take into account every last detail of how important it is to restrict write activity on flash, as does NanoBSD.
Run embedded pfSense/NanoBSD and your other components will go bad long before the SSD. I have included a link to a NanoBSD PDF with worked numbers, but please be advised that their figure of 200,000 writes per cell is off, because that is for commercial flash; consumer grade NAND can typically only do 10k writes. An SSD is also a better choice than CF because it's faster and has wear leveling.
www.bsdcan.org/2006/papers/nanobsd.pdf
Supposedly, if it only does a few writes per day it could last 20+ years. Embedded pfSense does not do any writes unless you save a config, and I'm not sure how many times someone would change a setting in a day, but still. Ever since I upgraded to embedded I have not seen any activity on the disk at all (until I change something). It's great.
If someone is choosing between CF and SSD, Kingston makes an 8GB SSD that sells for $39 on Newegg. Just run embedded pfSense and it will last forever (well, not quite).
-
The way I saw the issue with SSDs was that a CF adapter with a decent card was around $80 locally, and I could pick up a new 40GB drive at the same price on the day I was looking (I actually had a spare drive and used that instead).
Intel controllers are decent even with OSes that don't support TRIM (1.2.3), and I just wanted a quiet system. I remember reading at the time about the large amount of data the SSD can write per day before wearing out; I just could not see pfSense doing that, so I put the SSD in (speed was not a factor). It's lasted a long time, and it allowed me a full install (Squid/Snort).
I think if it did die in the next year, I'd put another one in. I don't like the CF flashing, and I like the quiet compared to a laptop drive (there is also the issue of laptop drives not being rated for constant usage). Next time I'd either run another drive with 2.0 (the newer FreeBSD is TRIM-aware, I believe), or I'd go with one of the newer Kingston V100 drives that have more aggressive garbage collection for non-TRIM-aware OSes (Apple).
-
I think you may be underestimating the number of cells in a 30GB drive available for wear leveling.
If it were true that SSDs would wear out rapidly, wouldn't we be seeing more failures among all the SSD netbooks and MacBook Airs? Consider that Intel said, upon launching their 80GB X-25:
Our MLC SSD can allow a user to write 100GB/day every day for five years without wearing out the drive
However, that doesn't tie in with the second post in this thread: 16GB drive, 2.5 months, dead!
I suspect that it's very dependent on the algorithms in the drive controller.
Then again I ran Windows 98 from a 128MB CF card for a few years with no problems! ::) (though I did disable swap)
Steve
The Sandforce controllers don't write directly to the NAND flash and they do have overprovisioning.
e.g. a 120GB SandForce has only about 110GB of usable space but 128GB of real NAND flash. What they do is actually compute a hash of the data and write that out; this is effectively compressed, so it takes up even less space.
For log files and similarly highly compressible data, this results in less data being written to the cells. When the data is requested on a read, the controller takes the hash and generates the parity data to restore it.
Furthermore, the overprovisioned space will also reduce the actual wear on the usable area. That is why SandForce can claim 0.5x write amplification, i.e. on average only half the amount of cells are written to for a given set of data stored to the drive. That said, this is very different from wear levelling itself; wear levelling is more about how writes are spread out across the cells so that wear and tear is evened out.
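To see how overprovisioning and low write amplification stack up against the raw cycle count, here is a rough model (illustrative; the cycle count, write amplification and daily write volume are assumptions, not SandForce specifications):

# Rough model of a 120 GB SandForce-class drive (numbers are assumptions, not specs).
raw_nand_gb = 128           # physical flash, including the overprovisioned area
pe_cycles = 10_000          # assumed cycles per cell (consumer MLC)
write_amplification = 0.5   # assumed average, thanks to compression/dedup

host_budget_gb = raw_nand_gb * pe_cycles / write_amplification   # ~2,560,000 GB of host writes
host_writes_per_day_gb = 20                                      # assumed workload
print(host_budget_gb / host_writes_per_day_gb / 365)             # lifetime in years (~350)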
-
Our MLC SSD can allow a user to write 100GB/day every day for five years without wearing out the drive
However, that doesn't tie in with the second post in this thread: 16GB drive, 2.5 months, dead!
I suspect that it's very dependent on the algorithms in the drive controller.
Then again I ran Windows 98 from a 128MB CF card for a few years with no problems! ::) (though I did disable swap)
Steve
The Sandforce controllers don't write directly to the NAND flash and they do have overprovisioning.
e.g. a 120GB SandForce has only about 110GB of usable space but 128GB of real NAND flash. What they do is actually compute a hash of the data and write that out; this is effectively compressed, so it takes up even less space.
For log files and similarly highly compressible data, this results in less data being written to the cells. When the data is requested on a read, the controller takes the hash and generates the parity data to restore it.
Furthermore, the overprovisioned space will also reduce the actual wear on the usable area. That is why SandForce can claim 0.5x write amplification, i.e. on average only half the amount of cells are written to for a given set of data stored to the drive. That said, this is very different from wear levelling itself; wear levelling is more about how writes are spread out across the cells so that wear and tear is evened out.
Supposedly industrial/commercial flash memory can do up to 200k writes per cell compared to the 10k most consumer grade flash can do. I wonder if there is an SSD out there that uses this grade of flash, though I would imagine it costs a fortune! Nevertheless, running embedded pfSense on a cheap SSDNow 100 8GB is more than adequate for us.
I have a somewhat off-topic question for someone:
I am running with about 1GB of RAM and about 74% is in use, which should not be the case. In researching this a bit I noticed about 5200 processes running so_recv inetd. We heavily use NAT reflection here, so can anyone explain how NAT reflection is handled in pfSense? This morning memory usage is down to 20% and the processes are down to 2000, however NAT reflection stopped working. I'm trying to trace the problem but need to understand how pfSense handles reflection. Probably the wrong place to post this.
-
SSD that can do 2,000,000 writes? Industrial/military?
They show a picture of a fighter jet on the home page:
http://www.delkinoem.com/sata-drive-industrial.html#tab-2
Just hope they don't take the cheap route and use a Kingston SSD in military unmanned aerial vehicles…
-
Supposedly industrial/commercial flash memory can do up to 200k writes per cell compared to the 10k most consumer grade flash can do. I wonder if there is an SSD out there that uses this grade of flash, though I would imagine it costs a fortune! Nevertheless, running embedded pfSense on a cheap SSDNow 100 8GB is more than adequate for us.
It's called SLC; we're looking at 2 to 3 times the price of a similar unit using the regular MLC stuff.
The main thing is that most newer controllers are geared towards cheaper MLC flash, so you typically get slower SSDs when you're on the lookout for SLC.
-
Supposedly industrial/commercial flash memory can do up to 200k writes per cell compared to the 10k most consumer grade flash can do. I wonder if there is an SSD out there that uses this grade of flash, though I would imagine it costs a fortune! Nevertheless, running embedded pfSense on a cheap SSDNow 100 8GB is more than adequate for us.
It's called SLC; we're looking at 2 to 3 times the price of a similar unit using the regular MLC stuff.
The main thing is that most newer controllers are geared towards cheaper MLC flash, so you typically get slower SSDs when you're on the lookout for SLC.
Yes, I noticed the speed was much slower.
-
Yes, I noticed the speed was much slower.
SLC NAND flash is actually faster on the whole than MLC, largely because it doesn't need to be written to in blocks. However, most current SSDs using SLC are geared towards enterprise usage, where data security and reliability are very important.
You can't just wipe the data on an SSD like you would with a mechanical drive using a degaussing wand. Hence, these SSDs have additional ECC parity, which reduces performance, and encryption built in. The end result is an SSD that is slower than its MLC counterpart. Furthermore, most controllers are ported over directly from their MLC counterparts, so you lose the cell-level write capabilities anyway.
-
Actually, I did not know that… Up until the other day I didn't even realize there was industrial flash capable of that. Sure beats the days of EPROMs and UV erasers.