Working on getting OpenVPN server bridging to fly.
-
Is there something here that rings a bell that perhaps wouldn't be immediately obvious to me? Something that runs as a background agent? I suppose it's possible that something odd happens in the state table that times out, or maybe a buffer is consistently filling up, but I'm having a hard time putting my finger on what would cause this kind of behavior.
Strange, I cannot think of anything that runs in the background that would change anything.
-
I just timed it. Five minutes on the dot. I can cron an ifconfig bridge0 deletem sis0/addm sis0 once every 4 minutes to mitigate the problem, but that sorta kills any kind of long-term constant-state communication. :P
Really have to ponder this. Doesn't appear to be a pf thing though.
-
Do a killall cron just to make sure it's nothing in there stepping on it.
-
Okay, done. Did an addm/deletem sis0 at 4:59:10 central time per my nice little mobile phone here. It's on the clock. We'll see how long it lasts. :D
-
Died at 5:04:20 pm central with no crons. Hmm….
addm/deletem sis0 of course revived it.
-
I'm out of time to work on this for now. I added a crontab to run the deletem/addm every 4 mins. It's a terrible, awful, dirty hack, but I'm hoping that the robustness of tcp/ip and associated apps will be able to resend and life will go on until I can figure out what is actually causing the issue to begin with. Any thoughts on debugging please post up! ;)
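For anyone who wants to reproduce the hack, it amounts to something like the following (the script name and path are made up; the interface and 4-minute interval are from above):

```shell
#!/bin/sh
# bridge-kick.sh (hypothetical name): drop sis0 out of the bridge
# and immediately re-add it, before the ~5-minute die-off hits.
# Run from cron every 4 minutes with an /etc/crontab line like:
#   */4  *  *  *  *  root  /usr/local/sbin/bridge-kick.sh
ifconfig bridge0 deletem sis0
ifconfig bridge0 addm sis0
```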
-
Couple things.
When it drops again, check ifconfig and look at the bridge status. Does it show blocking?
-
ifconfig bridge0 says - UP,BROADCAST,RUNNING,MULTICAST
To be fair, I'm not sure what causes an interface to go into BLOCKING mode, because I never (intentionally) use it. :\
I'm looking in the right place, right? Did my deletem/addm, came back. Shows the same thing.
-
Look underneath that, there is a blocking / forwarding / listening entry for each interface in the bridge.
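For reference, on FreeBSD the per-member state shows up indented under the bridge flags, roughly like this (sample output, not copied from this box; exact flags vary by release):

```shell
ifconfig bridge0
# bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
#         member: sis0 flags=3<LEARNING,DISCOVER>
#         member: tap0 flags=3<LEARNING,DISCOVER>
```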
-
When working, they read: learning, discover. Waiting for the next failure….
Failure happened. Same thing. LEARNING, DISCOVER. For grins I've enabled STP on both, although I really don't think this is a packet storm problem anymore, since I'm not seeing broadcasts coming across the bridge0, sis0, or tap0 interfaces. Probably a good measure anyway since at some point I need to duplicate this config on the other firewall.
-
You may be interested in this commit:
http://pfsense.com/cgi-bin/cvsweb.cgi/pfSense/usr/local/www/status_interfaces.php?rev=1.29.2.7;only_with_tag=RELENG_1
Shows the bridge status now under Status -> Interfaces
-
Hmm. Is it safe for me to grab that one file and plug it in, or is there something more formal I should do? (ie, cvs?)
-
Yeah, it's safe. Simply replace /usr/local/www/status_interfaces.php with that new one.
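Something like this should pull it down in one shot (the ~checkout~ cvsweb path is an assumption on my part; double-check the URL before trusting it):

```shell
# fetch the raw file from cvsweb and drop it in place (URL path assumed)
fetch -o /usr/local/www/status_interfaces.php \
  'http://pfsense.com/cgi-bin/cvsweb.cgi/~checkout~/pfSense/usr/local/www/status_interfaces.php?rev=1.29.2.7;only_with_tag=RELENG_1'
```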
-
Cool. They both show learning. Of course, after 5 mins it still dies, but they both show learning. ;)
Seriously, have to put this to rest for now. I'll come back to it later. :)
-
Testing remotely. Quick note - works great, except for a minor detail.
If you intend to use STP, DO NOT, I repeat, DO NOT enable STP on the tap interface. Your actual hardware interface is fine, but enabling it on the tap interface creates a really odd situation where traffic hits the endpoint tap interface and gets to your bridge, but nothing ever returns. Disabling STP on the tap interface resolves that problem.
Otherwise all is well. Just need to figure out why CARP chokes after 5 mins.
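For the record, STP is toggled per bridge member, so as I understand the bridge syntax the fix comes down to:

```shell
# enable STP on the physical member only; keep it off on the tap member
ifconfig bridge0 stp sis0
ifconfig bridge0 -stp tap0
```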
-
Another update. Looked like all was working just fine, until the firewall seized up completely. Same behavior as before, too. It responds to ctrl-alt-del by trying to shut down, but fails to actually do so. Has to be hard rebooted.
When I get a chance to power cycle it, I'll see if I can set up watchdog to mitigate this side effect until I can find the root cause. Again, if you have any speculations as to the cause, post up and I'll try it. Also, anyone without CARP who wants to try this, see what happens and let me know.
-
For the sake of discussion, I think I left off an option that might be causing an issue. Dunno yet:
dev-node tap-bridge
Here are the official OpenVPN docs on the matter. Surprised that I overlooked that directive.
http://openvpn.net/bridge.html
It claims that directive is only required under Windows though. Another comment is this:
A common mistake that people make when manually configuring an Ethernet bridge is that they add their primary ethernet adapter to the bridge before they have set the IP and netmask of the bridge interface. The result is that the primary ethernet interface "loses" its settings, but the equivalent bridge interface settings have not yet been defined, so the net effect is a loss of connectivity on the ethernet interface.
So, despite what I was reading elsewhere, it appears that the openvpn folks would prefer we do this:
ifconfig sis0 up
ifconfig tap0 up
ifconfig bridge0 create
ifconfig bridge0 addm sis0 addm tap0
ifconfig bridge0 172.16.10.2 netmask 255.255.255.0
The problem here, of course, is the impact this would have on CARP. I have sis0 in carp3, and I cannot do addm carp3. I don't know (and can't easily test at this moment) whether I can ifconfig bridge0 instead of sis0 and still have it able to join a carp cluster. If anyone wants to speak up on that point as well, please do. It will be about a week before I can safely test that (I think?). I might have an opportunity while in Montreal.
If this is indeed correct, then from pfSense's point of view, we need to be able to change the LAN interface (or in my case, OPT interface) to be bridge0 and not sis0. That way all rules are applied to the bridge and not to the physical interface, unless someone wants to step up with more information to say otherwise. I'm honestly just not finding much info regarding FreeBSD, bridging, and pf rules, only that you should create rules for one interface and not both, since doing both screws things up. I haven't found any documentation on whether rules should be applied specifically to the bridge or to the physical interfaces.
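One thing that may be relevant here: FreeBSD has sysctls that control whether pf sees packets on the member interfaces, on the bridge, or both. If the goal is rules on bridge0 only, a sketch would be (untested on pfSense; availability depends on the FreeBSD version underneath):

```shell
# filter on the bridge interface only, not on sis0/tap0 themselves
sysctl net.link.bridge.pfil_bridge=1
sysctl net.link.bridge.pfil_member=0

# then pf rules reference bridge0, e.g. a pf.conf line like:
#   pass in on bridge0 from 172.16.10.0/24 to any keep state
```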
Also, I'm puzzled by STP hosing things up on tap0. Doesn't make sense to me.
-
In Montreal now. Noticed that I can't actually set up a watchdog timer, as it requires kernel support (and it isn't in GENERIC), so oops. :)
Have to find another way for now.
Might I suggest we officially enable watchdog in the kernel? Seems like a very logical, sane thing to have in a firewall. If the kernel stops responding for x seconds, reboot the system.
-
We already support the Geode watchdog, but I do not plan on adding SW_WATCHDOG, as it may interfere with systems this late in the testing cycle.
We may be able to add it to 1.1.
-
Ah, cool. Thanks. Hopefully I'll have time later to rebuild with SW_WATCHDOG for my own purposes. Doesn't really fix the problem at hand, but makes me feel better to know the system will kick itself. ;)
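For anyone else wanting the same safety net, the rebuild amounts to one kernel option plus running watchdogd (the kernel config name below is a placeholder for whatever your custom config is called):

```shell
# in the custom kernel config (e.g. sys/i386/conf/MYKERNEL):
#   options SW_WATCHDOG
# rebuild/install the kernel, then arm the watchdog at boot:
watchdogd    # pats the watchdog; if the kernel wedges past the timeout, the box reboots itself
```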