[ER] pfSense box unreachable after configuring LAGG failover interfaces for LAN/DMZ
-
Packages should persist through an update. When you update everything should be reinstalled automagically. However some don't as you've probably found. ::) Packages have to be kept up to date by their maintainers.
Steve
-
Packages should persist through an update. When you update everything should be reinstalled automagically. However some don't as you've probably found. ::) Packages have to be kept up to date by their maintainers.
When I say "persist", I mean "persist", not "automatic reinstall".
If I update Mac OS X, I don't have to reinstall Photoshop or even third-party device drivers afterwards, either. Yes, with pfSense things are (pretty much all, from what I can tell) reinstalled. But it takes about 3-4h for a few dozen packages to re-download and re-install, particularly if you don't have a blazing-fast drive, CPU and connection.
For something like a firewall, that's a damn long time. That's a few hours without backup DNS server, without e-mail filtering, without web server, without phone service, etc. (depending on what packages are installed and used.)
Right now, updates of pfSense are closer to a re-install than to a true update. IMO that is one of the biggest weak spots of pfSense when compared to most commercial boxes: there, an update is more or less the time it takes to upload the firmware, plus a reboot, with the only downtime for any service being the reboot.
The good thing is, the stable releases require such an upgrade fairly rarely (still a hassle), but it kind of becomes a grind when dealing with snapshot releases… So this is something that should be addressed at some point. Maybe the new package system makes this better; that would be awesome. Otherwise, it's maybe something for a 2.5 or 3.0 release of pfSense, but certainly something that should be on the radar in the long term.
-
OK, when I manually add firewall rules to let 443 and 22 pass, I'm seemingly not getting locked out.
I can, however, confirm that the anti-lockout rule does get nuked behind the user's back (because it's missing now), and that despite the fact that both the LAN and the DMZ (opt1) interfaces are disabled, which, particularly in combination, I consider dubious.
If we're concerned, it would be better to never open port 80 on the WAN and force https for the web configurator unless access is through the LAN or another trusted network.
I'd consider passwords going in the clear over the net a bigger problem than keeping encrypted access to the configuration interface open from public networks.
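For reference, the manual rules described above amount to roughly the following in raw pf syntax (a sketch only: the interface name em0 is an assumption, and in practice you would add these as WAN rules in the webGUI rather than editing pf rules by hand):

```
# Sketch: allow the webConfigurator (443) and SSH (22) in on the WAN
# interface. "em0" is an assumed interface name; (em0) matches the
# interface's own address.
pass in quick on em0 proto tcp from any to (em0) port { 443, 22 }
```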
-
Do you not get redirected to https if you try to connect to port 80?
I think I agree with you: switching the webgui access to LAN should only happen once LAN is enabled with a valid IP. That seems like an oversight. I have done similar things before, but not including a LAGG setup. You can usually get around it by making all the required changes before applying them.
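That redirect is easy to check from a client. A minimal sketch, where the helper function and the canned headers are my own illustration; against a real box you would feed it the output of `curl -sI http://192.168.1.1/` (LAN IP assumed):

```shell
# Hypothetical helper: does a set of response headers redirect to https?
is_https_redirect() {
  printf '%s\n' "$1" | grep -qi '^Location: *https://'
}

# Demo on canned headers resembling what a port-80 redirect returns:
hdrs='HTTP/1.1 302 Found
Location: https://192.168.1.1/'
if is_https_redirect "$hdrs"; then echo "redirects to https"; fi
```

If the firewall answers port 80 without a `Location:` header pointing at https, the webgui is being served in the clear.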
it takes about 3-4h for a few dozen packages to re-download and re-install
3-4h! :o What packages are you running that take that long?
My own box takes, maybe, a few minutes to reinstall the packages. I am only running a few small packages though.
Steve
-
Squid and Snort alone can take minutes for each … a slow connection would extend even that.
-
Do you not get redirected to https if you try to connect to port 80?
AFAIK only if you have https as the default AND you enable redirection. But if the web configurator is set to use http, then you can happily send all the info in the clear as long as the anti-lockout rule allows port 80.
My point, however, is that https should ALWAYS be the default, and http on a WAN link should essentially never happen. If someone sniffs the password, you might as well not have a firewall.
I'd hike the restrictions for allowing http to be enabled:
- system must have an active LAN link
- no anti-lockout rule on the WAN link that opens up port 80 to the system
Even on a LAN it's naive to assume that you're in a friendly environment, so I don't know why the console keeps asking me if I want to revert the web configurator to http whenever I make a minor change in the interface setup or something like that. Things should default to https, and people should have to jump through hoops if they really want to expose their sensitive passwords etc. to the public.
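If in doubt, the configured protocol can be checked directly in the config file. A sketch demonstrated on a sample fragment; on the firewall itself the real file is /conf/config.xml, and the exact `<webgui><protocol>` layout shown here is my assumption about its structure:

```shell
# Write a sample fragment mimicking pfSense's config.xml webgui section,
# then extract the protocol setting. On the firewall, swap
# /tmp/config.sample.xml for /conf/config.xml.
cat > /tmp/config.sample.xml <<'EOF'
<webgui>
  <protocol>https</protocol>
  <port></port>
</webgui>
EOF
grep -o '<protocol>[^<]*</protocol>' /tmp/config.sample.xml
```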
it takes about 3-4h for a few dozen packages to re-download and re-install
3-4h! :o What packages are you running that take that long?
My own box takes, maybe, a few minutes to reinstall the packages. I am only running a few small packages though.
Let's start with snort, squid3, then continue with pfBlocker, mailscanner, dansguardian, imspector, anti-virus (havp), vhosts, sipproxy, radius, postfix, pfflowd, arpwatch, mailreport, nmap, and a few more. And I haven't even thought of freeswitch or asterisk yet ;)
Admittedly, I don't have the fastest connection yet (pfSense is supposed to help, because all my WAN traffic needs to get tunneled out to the internet due to a numbskull ISP called Verizon), and a CF Card is not the fastest disk drive, either…
-
I would certainly agree that using http for admin access to a firewall is not a good idea. However, to be fair, those two things are enabled by default so you have to actively choose to use http.
Though I can't imagine ever wanting to do it, I wouldn't want to have that choice taken away from me.
Clearly your package list dwarfs mine. ::)
Steve
-
An impressive list, no doubt. There is something to be said about having too many services running on the firewall: it increases the risk of a bug in any one of them compromising security. Though having them there does help out those who need to consolidate, like myself.
-
Indeed. It's a problem that stems from the incredibly wide range of deployment scenarios that pfSense can fill. You have people using it in enterprise situations where alternatives would cost many $10,000s, whilst at the same time people are replacing a $50 home router. Expectations and requirements across that range are going to vary wildly. ;)
The new package system may help. I believe it allows for all dependencies to be included in the package. That may mean larger downloads though. Perhaps it's time to run a local package repo?
Steve
-
Packages must be reinstalled after a firmware update because the underlying OS could change in between, and there could be potential incompatibilities introduced in the meantime that new binaries will fix.
Sure, compatibility shims can help this but there is no way to avoid it and guarantee that things will work properly once the firmware update is done.
Also specifically in the case of NanoBSD the packages do not exist on the newly imaged slice so they have to be downloaded and reinstalled because they do not (and cannot) already exist/persist there.
To most end users the package reinstall is something that happens only when upgrading to a major release, which means every 6-9 months or so, give or take. The only people who feel the pain of the package reinstall process taking a while are those tracking snapshots very often. :-)
Also, HTTPS is the default out of the box, with port 80 redirecting to the HTTPS port in a way the browser will cope with neatly. HTTP has not been the default since 1.2.3.
-
Also, HTTPS is the default out of the box, with port 80 redirecting to the HTTPS port in a way the browser will cope with neatly. HTTP has not been the default since 1.2.3.
I know, and appreciate that. However, someone should check out the behavior at the console. There are a few operations, like setting an interface's IP address, which always end with the system asking me if I want to revert the web configurator to http. Why would I? If I had it set to http and it asked me to revert to https, that would be what's expected, but not the system asking me if I want to revert to a less secure method.
Could be that these are left-overs from the 1.2.3 era?
-
If someone fubar's their certificate or similar, there has to be a way to revert to http on the console.
It's a failsafe.
-
If someone fubar's their certificate or similar, there has to be a way to revert to http on the console.
It's a failsafe.
I understand that part. If it asked me in the context of resetting the web configurator password from the console, I'd understand, because then chances are someone can't get into the system anymore. But setting an interface address or something like that has rather little to do with reverting to http for the web interface.
It's not a big deal, just a minor nuisance: if one quickly answers various prompts with y [ret] y [ret], then before one knows it, one has also reverted to http.
Small pebble in the shoe, I can still walk ;)
-
When one is maintaining their firewall you'd think one would carefully examine any prompts to ensure that's what they really wanted to do ;-)
True that could be split off into its own option or moved, it's really there because of tradition - in 1.2.x that's where it was, as that menu option only dealt with resetting LAN functions.
Its scope was expanded for 2.x to cover other interfaces, but the other functions didn't get moved.
Inertia will probably keep it where it is, but I suppose that's up for debate.
-
When one is maintaining their firewall you'd think one would carefully examine any prompts to ensure that's what they really wanted to do ;-)
I agree, but then there's that infamous difference between theory and practice, between "should be" and "is"… ;)
Inertia will probably keep it where it is, but I suppose that's up for debate.
That's why I mention it :)
-
Clearly your package list dwarfs mine. ::)
and is well beyond what anyone should be running on a firewall, much less one with CF. 3-4 hours on CF is probably more like 5 minutes on an HD. In the vast majority of use cases, where people only upgrade when new official releases come out, package reinstallation is a requirement. It's not ideal when you're running snapshots and upgrading routinely, especially when it's on CF and you're setting the world record for the number of services running on a firewall. You can always get into the source and disable that.
-
@cmb:
Clearly your package list dwarfs mine. ::)
and is well beyond what anyone should be running on a firewall, much less one with CF.
Well, at some point in time a real SSD is going in there, but that's still in use elsewhere until some data recovery on some other drive can be done. I'm not putting a drive in there, because this is a fanless device, and I rather keep it cool than add all the heat from a regular drive in that device. It is, however, not one of these dog-slow CF cards, but probably about comparable in speed to a slow 2.5" drive.
Also, it's not yet clear which of these packages will survive the evaluation period, but when it boils down to it, it's essentially all stuff that isn't "out there": mail, virus filtering, IDS, VPN, etc., and looking at the CPU, it clearly feels bored. Note also: this is essentially protecting a net with half a dozen computers, incl. a small web server for internal use, behind it. So it's not like there are dozens of users hitting the system, and it would be ridiculous to have several devices taking over various aspects of network border security.
So it's like a big network shrunk down in size, which is perfect for playing around with stuff.
It will be interesting to see how much faster the update will be next time, because before, it went through my old firewall, which, as it turns out, slowed down my net considerably even though it wasn't doing much besides running IPsec…
Anyway, maybe the updater could become smarter in the sense that it differentiates between minor and major version updates, that way if minor bug fixes are pushed out, not everything has to be reinstalled.
I guess one way of doing this could be to use the package system: differentiate between standard packages and optional/3rd-party packages. That way the entire pfSense system could be broken down into packages, which can already be updated individually without disrupting the rest of the system. Then a full reinstall would only be required for a major OS upgrade, while things like changes in IPsec would result in an upgrade of a standard package.
-
It is, however, not one of these dog-slow CF cards, but probably about comparable in speed to a slow 2.5" drive.
Unfortunately, in NanoBSD CF cards are booted with DMA disabled, so it will be running at PIO4, quite a lot slower than a 2.5" HD, even if it's a super-rapid UDMA card. I believe this is due to a bug in the way FreeBSD handles IDE-mounted CF cards? It's a while since I looked into it. You could try removing:
hw.ata.atapi_dma="0"
hw.ata.ata_dma="0"
from /boot/loader.conf and see what happens. It may well fail to boot though so have a backup solution in place.
Steve
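To be safe about that edit, back the file up first and strip only those two lines. A sketch, demonstrated on a scratch copy; on a real NanoBSD install you would point it at /boot/loader.conf after remounting read-write (e.g. with /etc/rc.conf_mount_rw), and the extra kern.cam.boot_delay line just stands in for whatever other tunables the file holds:

```shell
# Demo on a scratch copy; replace /tmp/loader.conf.demo with
# /boot/loader.conf on the firewall itself.
LOADER=/tmp/loader.conf.demo
printf '%s\n' 'hw.ata.atapi_dma="0"' 'hw.ata.ata_dma="0"' \
              'kern.cam.boot_delay="3000"' > "$LOADER"
cp "$LOADER" "$LOADER.bak"    # keep a backup in case it fails to boot
grep -v -e '^hw\.ata\.atapi_dma=' -e '^hw\.ata\.ata_dma=' \
    "$LOADER.bak" > "$LOADER"
cat "$LOADER"                 # only the unrelated tunable remains
```

If the box won't boot afterwards, restoring the `.bak` copy from the loader prompt or a rescue environment puts things back as they were.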