is this ok for a SSD setup?
-
Hello to all

I'm going through a learning curve with pfSense on my recently installed system:

2.4.5-RELEASE-p1 (amd64)
FreeBSD 11.3-STABLE
Lenovo m93p Tiny
480 GB SSD (Kingston A400 SATA 3, SA400S37/480G)
8 GB RAM

According to the SMART attribute 233 Flash_Writes_GiB, which I have been monitoring constantly, the write rate is about 1.5 GB per hour.
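As an aside (my own sketch, not something from this thread): since attribute 233 reports cumulative GiB written, the hourly rate falls out of two readings taken a known interval apart. The smartctl invocation and the sample numbers below are hypothetical:

```shell
# Hypothetical sketch: derive the write rate from two readings of SMART
# attribute 233 (Flash_Writes_GiB). On FreeBSD the raw value could be
# pulled with something like:
#   smartctl -A /dev/ada0 | awk '$1 == 233 { print $NF }'
first=1200    # GiB written at the first reading (made-up value)
second=1236   # GiB written 24 hours later (made-up value)
hours=24
awk -v a="$first" -v b="$second" -v h="$hours" \
    'BEGIN { printf "%.2f GiB/hour\n", (b - a) / h }'
# prints "1.50 GiB/hour"
```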
My log setup: (screenshot not included)
I have installed:
*Suricata
*pfBlockerNG
*Status_traffic_totals
*ntopng (after disabling this one, the write rate dropped to about 1 GB/hour)

Some questions:
*as I'm well aware the SSD will eventually wear out, do you think this is a decent write rate for my system?
*if not, is there any way to improve it?
*I find it hard to believe that both pfBlocker and Suricata can write 1 GB of data per hour. Is this normal behavior?
*I have seen mixed opinions regarding RAM disks for /var and /tmp. What would you recommend in my case?

Thanks!
-
Hmm, that does seem high.
RAM disks with Suricata and pfBlocker would need to be huge.
How is that drive mounted? What does
mount -p
show?

You could be hitting this if it's an older install: https://redmine.pfsense.org/issues/9483
Or it could be the counter is not showing values in the expected units.
Steve
-
@stephenw10
thanks Steve, this sounds concerning.

Regarding the issue you mentioned, this is a very recent install (about 1 week old).
here are my mount points:
# mount -p
zroot/ROOT/default  /               zfs    rw,noatime,nfsv4acls  0 0
devfs               /dev            devfs  rw                    0 0
zroot/var           /var            zfs    rw,noatime,nfsv4acls  0 0
zroot               /zroot          zfs    rw,noatime,nfsv4acls  0 0
zroot/tmp           /tmp            zfs    rw,nosuid,noatime,nfsv4acls  0 0
/dev/md0            /var/run        ufs    rw                    2 2
devfs               /var/dhcpd/dev  devfs  rw                    0 0
#
and my /etc/fstab file, which interestingly doesn't show any other partition (I don't know why):
# Device      Mountpoint  FStype  Options  Dump  Pass#
/dev/ada0p2   none        swap    sw       0     0
regarding the counter, is there any way to confirm this?
thanks again
-
I would not worry about the wear too much. According to the data sheet that drive is good for 160TB of writes. Or about 4,000 days at 1.5GB/hour.
https://images-na.ssl-images-amazon.com/images/I/710+8SNYSCL.pdf
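The arithmetic behind that estimate (a quick sanity check, assuming the 160 TBW figure from the data sheet and a steady 1.5 GB/hour):

```shell
# 160 TB of rated endurance at ~1.5 GB written per hour:
#   days = 160e12 bytes / (1.5e9 bytes/hour * 24 hours/day)
awk 'BEGIN { printf "%.0f days\n", 160e12 / (1.5e9 * 24) }'
# prints "4444 days"
```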
-
You are using ZFS so the fstab is no longer required to mount those.
You can probably see something useful with the correct incantation of iostat.
Steve
-
@stephenw10
thanks. I'm running this command; please correct me if I'm wrong:
top -m io
I'm seeing some occasional activity spiking (100%) under php, suricata, vnstat and cron, but nothing constant.
this is the iostat output
# iostat
 tty          md0            ada0           pass0          cpu
tin tout  KB/t tps MB/s  KB/t tps MB/s  KB/t tps MB/s  us ni sy in id
  0    2  0.00   0 0.00  23.75  20 0.47  0.40   0 0.00   2  0  0  0 97
#
@andyrh said in is this ok for a SSD setup?:
I would not worry about the wear too much. According to the data sheet that drive is good for 160TB of writes. Or about 4,000 days at 1.5GB/hour.
https://images-na.ssl-images-amazon.com/images/I/710+8SNYSCL.pdf
thanks for the information
-
What are you seeing from top?
Maybe try:
top -mio -ototal -SH
That will show you system processes too.
Not really something I've ever dug into too deeply but iostat looks like it gives more useful data.
You are certainly seeing higher values than I do:

[2.4.5-RELEASE][admin@2220.stevew.lan]/root: iostat
 tty          md0            ada0           pass0          cpu
tin tout  KB/t tps MB/s  KB/t tps MB/s  KB/t tps MB/s  us ni sy in id
170    1  0.00   0 0.00  31.28   1 0.03  0.00   0 0.00   0  0  0  0 99
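Not from the thread, but as a sketch of turning that output into a daily figure: with md0, ada0 and pass0 reported, ada0's MB/s lands in the 8th whitespace-separated field of the data line, so a captured line (here, the higher one from earlier in the thread) can be converted with awk:

```shell
# Sample iostat data line (tin tout, then KB/t tps MB/s for md0, ada0,
# pass0, then CPU); field 8 is ada0's MB/s.
line="0 2 0.00 0 0.00 23.75 20 0.47 0.40 0 0.00 2 0 0 0 97"
echo "$line" | awk '{ printf "ada0: %.1f GB/day\n", $8 * 86400 / 1024 }'
# prints "ada0: 39.7 GB/day"
```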
Steve
-
@andyrh said in is this ok for a SSD setup?:
Or about 4,000 days at 1.5GB/hour.
That's not even 11 years!
-
Mmm, I guess 0.47MB/s is ~1.7GB/h.
0.03MB/s seems about average on the systems I have here without ramdisks. It's 0 with ramdisks.
But still well within the expected drive life, I would think.
Steve
-
hello @stephenw10
thanks all for your feedback. Even though, at this rate, the SSD would probably last a while, I'm concerned that the write rate on my system is way higher than yours. Is there anything else I might check?
here are the outputs:
# iostat
 tty          md0            ada0           pass0          cpu
tin tout  KB/t tps MB/s  KB/t tps MB/s  KB/t tps MB/s  us ni sy in id
  0    2  0.00   0 0.00  23.50  20 0.46  0.40   0 0.00   2  0  0  0 97
#
# top -mio -ototal -SH
last pid: 53706;  load averages: 0.08, 0.12, 0.12  up 2+05:46:06  19:54:39
482 processes: 5 running, 454 sleeping, 23 waiting
CPU: 3.9% user, 0.1% nice, 0.7% system, 0.0% interrupt, 95.3% idle
Mem: 185M Active, 1599M Inact, 1182M Wired, 208K Buf, 4839M Free
ARC: 403M Total, 197M MFU, 165M MRU, 89K Anon, 1916K Header, 40M Other
     134M Compressed, 438M Uncompressed, 3.26:1 Ratio
Swap: 2048M Total, 2048M Free

  PID USERNAME  VCSW IVCSW  READ WRITE FAULT TOTAL PERCENT COMMAND
   19 root        20     0     1    41     0    42 100.00% zfskern{txg_thread_enter}
97223 root        13     0     2    10     0    12  75.00% php
87433 root         4     2     1     0     0     1   6.25% filterdns{imgur.com}
87433 root         2     0     1     0     0     1   6.25% filterdns{s.imgur.com}
83603 root         2     0     0     0     0     0   0.00% sshd
96315 root         0     0     0     0     0     0   0.00% php
96014 root         0     0     0     0     0     0   0.00% php
95949 root         2     0     0     0     0     0   0.00% lighttpd_pfb
95823 root         0     0     0     0     0     0   0.00% php_pfb{php_pfb}
95764 root        38     0     0     0     0     0   0.00% clog_pfb
88796 root         0     0     0     0     0     0   0.00% vnstatd
31964 root         0     0     0     0     0     0   0.00% sh
21628 unbound     12     0     0     0     0     0   0.00% unbound{unbound}
21628 unbound      8     0     0     0     0     0   0.00% unbound{unbound}
21628 unbound      8     2     0     0     0     0   0.00% unbound{unbound}
21628 unbound      8     0     0     0     0     0   0.00% unbound{unbound}
96844 root         0     0     0     0     0     0   0.00% dpinger{dpinger}
96844 root         8     0     0     0     0     0   0.00% dpinger{dpinger}
96844 root         8     0     0     0     0     0   0.00% dpinger{dpinger}
96844 root         2     0     0     0     0     0   0.00% dpinger{dpinger}
96844 root         0     0     0     0     0     0   0.00% dpinger{dpinger}
52515 root         0     0     0     0     0     0   0.00% openvpn
 7368 root       188     0     0     0     0     0   0.00% suricata{suricata}
 7368 root         7     0     0     0     0     0   0.00% suricata{RX#01-em0}
 7368 root         0     0     0     0     0     0   0.00% suricata{W#01}
 7368 root         2     0     0     0     0     0   0.00% suricata{W#02}
 7368 root         2     0     0     0     0     0   0.00% suricata{W#03}
 7368 root         0     0     0     0     0     0   0.00% suricata{W#04}
 7368 root         2     0     0     0     0     0   0.00% suricata{FM#01}
 7368 root         2     0     0     0     0     0   0.00% suricata{FR#01}
91862 root         0     0     0     0     0     0   0.00% sh
91704 root         0     0     0     0     0     0   0.00% sh
91670 root         0     0     0     0     0     0   0.00% sshg-blocker{sshg-blocker}
91670 root         0     0     0     0     0     0   0.00% sshg-blocker{sshg-blocker}
91373 root         0     0     0     0     0     0   0.00% sshg-parser
91117 root         0     0     0     0     0     0   0.00% cat
90879 root         0     0     0     0     0     0   0.00% sh
17118 root         0     0     0     0     0     0   0.00% nginx
16865 root         0     0     0     0     0     0   0.00% nginx
16643 root         0     0     0     0     0     0   0.00% nginx
13588 root         0     0     0     0     0     0   0.00% syslogd
13131 root         0     0     0     0     0     0   0.00% php-fpm{php-fpm}
93437 root         0     0     0     0     0     0   0.00% sh
92850 root         0     0     0     0     0     0   0.00% sh
92179 root         0     0     0     0     0     0   0.00% getty
92024 root         0     0     0     0     0     0   0.00% getty
91891 root         0     0     0     0     0     0   0.00% getty
91769 root         0     0     0     0     0     0   0.00% getty
91535 root         0     0     0     0     0     0   0.00% getty
91492 root         0     0     0     0     0     0   0.00% getty
91180 root         0     0     0     0     0     0   0.00% getty
-
I've got to think it's because of ZFS. It's not something I've ever looked at too closely, because the only place it's really an issue is booting from flash, and we don't use ZFS there.
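For background (my addition, not something verified in this thread): ZFS batches writes into transaction groups that flush on a timer, so even a mostly idle box sees periodic metadata churn on top of the data itself. The sysctl below is shown purely as an illustration; raising it trades bigger, less frequent flushes against losing more buffered data on sudden power loss:

```shell
# Illustrative only: inspect how often ZFS flushes a transaction group.
# On FreeBSD 11.x the default is believed to be 5 seconds.
sysctl vfs.zfs.txg.timeout
# A larger value batches more writes per flush, e.g.:
#   sysctl vfs.zfs.txg.timeout=30
```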
Steve
-
@stephenw10 should I consider UFS for my setup instead?
I don't have any big power-outage concerns since all my systems run on a UPS with power generator backup.
-
If you can easily test that I would do so.
I don't really think you need to worry either way.
Steve
-
@andresmorago said in is this ok for a SSD setup?:
Lenovo m93p Tiny
On a tangent, looks like a nice little box for home-baked.
Does it have a slot for a NIC(NOPE) or are you going to use the wireless, or VLANs on the single NIC?
-
@provels said in is this ok for a SSD setup?:
@andresmorago said in is this ok for a SSD setup?:
Lenovo m93p Tiny
On a tangent, looks like a nice little box for home-baked.
Does it have a slot for a NIC(NOPE) or are you going to use the wireless, or VLANs on the single NIC?

I added an additional mini PCIe ethernet card along with the NIC that already came with the machine.
@stephenw10 said in is this ok for a SSD setup?:
If you can easily test that I would do so.
I don't really think you need to worry either way.
Steve

thanks Steve. I think I can do that tonight and test.
after disabling a lot of log options from Suricata and pfBlockerNG, I was able to reduce the write rate:

[2.4.5-RELEASE][admin@svr00.moragomez.com]/root: iostat
 tty          md0            ada0           pass0          cpu
tin tout  KB/t tps MB/s  KB/t tps MB/s  KB/t tps MB/s  us ni sy in id
  0    0  0.00   0 0.00  17.86  18 0.32  0.40   0 0.00   2  0  0  0 98
-
hi again. @stephenw10
I finally reinstalled everything with UFS. So far, the only thing I disabled was Suricata.

These are my numbers so far; I'll keep testing and report back.
[2.4.5-RELEASE][admin@svr00.jjj.com]/root: iostat
 tty          md0            md1            ada0           cpu
tin tout  KB/t tps MB/s  KB/t tps MB/s  KB/t tps MB/s  us ni sy in id
  0    6  0.00   1 0.00  0.00   5 0.00  29.00   4 0.11   1  0  0  0 99
[2.4.5-RELEASE][admin@svr00.jjj.com]/root:
-
Yeah, I spoke to our devs about this since I'd never considered it. ZFS is expected to have a higher IO rate than UFS. What you are seeing in either case doesn't seem to be cause for alarm.
Steve
-
@stephenw10
thanks so much for your feedback and information. I'll stick with UFS from now on, since my basic setup doesn't need much.