WebGUI slow to make changes after 2.2.3
-
As already noted above and even in the 2.2.4 release notes: the only thing that helps is switching to permanent RW in Diagnostics - NanoBSD. Stop wasting your time.
I was always under the assumption that embedded installs should remain in RO mode; why is that no longer the case? 2.2.4 has already locked up on me twice within the span of two weeks, something 2.1.5 never did. Will my config remain OK through one of these sudden crashes if I switch to RW mode?
The long delays in making changes are VERY frustrating; it's not like this is a Cisco device where I can make instantaneous changes via a command line. These long delays are a waste of time as well.
-
Same issues here going from 2.1 to 2.2.4. Painfully trying to roll back now. Neither the RW/RO change nor the updater settings change appears to improve my issue. Like a few of the others, I am also on an embedded NanoBSD installation.
-
If your media is slow on 2.2.3+ then you need a new/different disk. High-quality CF cards and fast SD cards should be fine. I have a SanDisk in my ALIX (stated speed 30 MB/s, 200x) that saves nearly as fast on 2.2.3 and 2.2.4 as when there is no transition at all, and a crappy Kingston that takes ages in the same role.
Your choices are:
1. Keep the disk permanently RW – There is little risk here from the base system. Switching to RO is mostly a formality/safety belt. Packages may not be so kind.
2. Get a faster/better disk – they're cheap, and worth a few extra bucks (a quick benchmark sketch is below).
If you choose option 1, make sure you are on 2.2.4 or later. Do not stay on 2.2.3.
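If you want a rough idea of how your current card behaves before spending money, something like the following can be run from a shell. This is only a sketch: the device name (ada0) and the test file path are placeholders, the filesystem has to be mounted RW for the write portion, and it does add a little wear to the flash.

    # Read benchmark of the raw device (substitute your actual device for ada0)
    diskinfo -tv ada0
    # Time a burst of writes plus the sync that the RW->RO transition would force
    /usr/bin/time dd if=/dev/zero of=/root/writetest bs=64k count=256
    /usr/bin/time sync
    rm /root/writetest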
-
So I'm running 2.2.4 on a Soekris net5501 with a 133x CF card. A 133x card is roughly 20 MB/s, yet I see posts on the forum by people with 266x CF cards who are seeing the same issue.
CF cards are cheap and I have no problem buying one of the fastest ones out there, but judging from the experience of a few others that hasn't helped.
So at the moment no one (the devs included) has a firm idea of what the cause or the fix is.
-
We know the problem: slow/cheap/iffy disks are slow during the RW to RO transition, when it syncs all data to disk. The patch that let it work faster was unsafe because it shortcut that process. It's not just about the rated speed of the card, either. There is no proper, safe fix at the moment other than to skip the transitions (stay RW) or get a disk that works better/faster.
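For anyone wondering what that transition actually involves, it boils down to roughly the following. This is a simplified sketch, not the exact pfSense code (pfSense does this through its conf_mount_rw/conf_mount_ro helpers and also handles the /cf slice), but the slow part is the same sync/remount step.

    # Simplified view of the NanoBSD RW -> RO cycle
    /sbin/mount -u -w /    # remount root read-write
    # ...config change gets written here...
    /bin/sync              # flush dirty buffers -- slow cards stall here
    /sbin/mount -u -r /    # remount read-only; blocks until data is really on the media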
-
jimp, I for one am not using slow/cheap/iffy disks (I have used five different CF card types to test: Kingston, SanDisk, etc., and the slowest has been a 266x) and I still experience the problems.
For example: slow loading of the dashboard after login, EXTREMELY slow rule saves, EXTREMELY slow routing changes, and slow loading of any RRD graphs, just to name a few.
For instance, I have run tests on six different hardware firewalls, taking the SAME EXACT model CF card(s) and running 2.2.2 in each of them, and I do NOT experience the slowness across the board.
2.2.3-4 causes the slowness.
I currently have the systems set as RW and not RO. So I know for a fact that neither the cards nor the hardware configurations (rules & routing) are the issue.
The ONLY thing that changed was going from 2.2.2 release amd64 TO 2.2.3-4 release amd64 with the systems set as RW.
One example of the hardware I used in the test(s), which is used strictly for firewall purposes without other packages installed and with DNS forwarded, is listed below:
Hardware: ONE WatchGuard XTM 1050 (configured with 4 of the 1 Gbps interfaces assigned to diverse internet providers which are between 20-55% utilized tops) - (using NanoBSD instead of the HDD connection, long story as to why... but it has worked great until the latest release(s))
Specs: Intel(R) Xeon(R) CPU E5410 @ 2.33GHz - 8 CPUs: 2 package(s) x 4 core(s), load averages 0.18, 0.10, 0.11, with 17% of 8156 MB RAM in use - as it sits running 2.2.4 release amd64. Soooo...??
-
Well... So. UFS without the patch performs like garbage on CF. There's a serious performance bug somewhere that no one upstream ever fixed.
-
2.2.2 and before had the dangerous patch to speed things up. Yes, it was faster, but it was taking an unacceptably dangerous shortcut.
2.2.3 removed the patch (If you have registered for access to the -tools repo, you can see where the patch was removed here. It was already deactivated at that point).
Over a month ago I tried with several cards and different operating systems/versions while we were trying to understand the problem better. The Sandisk 200x card and the generic PC Engines 4GB card I have were the only ones that were "fast" on 2.2.3+. A Sandisk 100x and Kingston 133x were still slow.
Results of the testing I did (basically a while loop that repeated switching to RW, performing some filesystem operations, and then switching back to RO):
If you look at some of the columns for 2.2.2 you can see where the RO switch was basically a no-op, while others show it actually processing data out to disk.
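For reference, the loop was along these lines. This is a reconstruction of what was described above, not the exact script; it assumes the stock NanoBSD helper scripts, and the file sizes/paths are arbitrary.

    # Cycle RW -> some writes -> RO and time the RO remount each pass
    i=0
    while [ $i -lt 20 ]; do
        /etc/rc.conf_mount_rw
        dd if=/dev/zero of=/root/looptest bs=64k count=64 2>/dev/null
        rm -f /root/looptest
        /usr/bin/time /etc/rc.conf_mount_ro   # this is the step that hangs on slow cards
        i=$((i+1))
    done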
The patch is gone and not coming back, so the choices are, as stated: Stay RW or use a different disk. It's not just the rated speed of the disk, but likely also the quality of the controller on the disk.
FYI- Here is the disk I use/like and that is fast:
And it identifies as: ada0: <SanDisk SDCFH-004G HDX 6.03> CFA-0 device
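If you want to see what your own card reports, something like this will show the identify string (the device name ada0 is just an example; substitute yours):

    # How the CF card identifies itself ("ada0" is an example device name)
    grep ^ada0 /var/run/dmesg.boot
    camcontrol identify ada0 | head -n 15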
EDIT: Note: All of that testing was performed on an ALIX 2d3
-
Thank you, your last post was very informative and gave a much better understanding of the issue. I was THIS close to downgrading to 2.2.2, but I'll instead place an order for that specific CF card.
pfSense has recently been locking up on me randomly when I try to log in to the WebGUI, and I'm thinking this may be related. All routing stops but you can still ping the internal interface.
-
What "fixes" the issue? from disk to disk, regardless of type (flash / spinny) and write rate, isn't necessarily the disk controller, but how much cache the disk controller has to work with.
If it has enough cache to absorb all the random writes and spit them out to the physical layer on its own time, it tells the kernel it has all that data, and you don't see hanging issues.
If it doesn't have enough cache to absorb all the random writes, you wait until it has actually written most of it to the physical layer, then it tells the kernel it has all the data, and this is the hang you experience.
So, it's not a disk speed issue at all, it's that faster / larger / newer disks tend to have better controllers with more cache than slower / smaller / older disks.
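One way to see that effect is to time a burst of writes separately from the sync that pushes them to the physical media. This is only a rough illustration: the path is a placeholder, the filesystem must be mounted RW, the writes here are sequential for simplicity (real config saves are smaller and more scattered, which hurts even more), and the numbers will vary wildly by card and controller.

    # The dd mostly measures the kernel/controller cache absorbing the writes;
    # the sync forces everything out to the physical media.
    /usr/bin/time dd if=/dev/zero of=/root/cachetest bs=4k count=2048
    /usr/bin/time sync
    rm /root/cachetest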
It doesn't help that NanoBSD images are not 4k aligned, so writes take even longer than they should on flash & 4k drives. Hopefully this gets fixed for the 2.3 branch, or we get a NanoBSD installer so we can fix it manually.
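You can check how a given card is laid out (and whether the slice/partition start offsets land on 4k boundaries) with gpart, for example:

    # Offsets are shown in 512-byte sectors ("ada0" is an example device);
    # start values that are not multiples of 8 are not 4k-aligned.
    gpart show ada0
    gpart show -p ada0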
Want to test? Run a pfSense NanoBSD VM on a system with a RAID controller that has a decent-sized cache in write-back mode. Now try it with the controller cache disabled. Now try it with the controller cache disabled and running on a true 4k-sector drive. Spoiler: good, bad, worse.
-
So, what you are saying is that a Kingston CF 266x Ultimate card is not 'fast' enough, but the CF card you posted (pictured) is slower and yet is 'fast' enough?
Sorry, but that does not fly in my book, nor does it make any sense.
I know for a FACT that it is NOT any of the EIGHT different cards that I have tried (the slowest being the Kingston CF 266x Ultimate), across multiple hardware specs, that is causing the slowness.
The ONLY way to actually make the WebGUI usable is to change NanoBSD to RW, which to me is bad.
Downgrading to 2.2.2 on the SAME cards and the SAME hardware, everything runs perfectly. In fact, I actually achieve slightly better throughput up and down.
So, whatever was 'fixed' in 2.2.3 for NanoBSD introduced this slow (virtually unusable) state...
-
As already noted above and even in the 2.2.4 release notes: the only thing that helps is switching to permanent RW in Diagnostics - NanoBSD. Stop wasting your time.
…and for a firewall/router, that is bad and unacceptable in a production environment.
-
All of those points have already been addressed in previous replies. Please read the thread again and pay attention, as I will not repeat myself.