Important Info: Inline IPS Mode with Suricata and VLANs
-
Inline IPS Mode Operation with VLANs
The Inline IPS Mode of blocking used in both the Suricata and Snort packages takes advantage of the netmap kernel device to intercept packets as they flow between the kernel's network stack and the physical NIC hardware driver. Netmap enables a userland application such as Suricata or Snort to intercept network traffic, inspect that traffic and compare it against the IDS/IPS rule signatures, and then drop packets that match a DROP rule.
But the netmap device currently has some limitations. It does not process VLAN tags, nor does it work properly with traffic shapers or limiters. When you use Inline IPS Mode on a VLAN-enabled interface, then you need to run the IDS/IPS engine on the parent interface of the VLAN. So for example, if your VLAN interface was vmx0.10 (which would be a VLAN interface with the assigned VLAN ID '10'), you should actually run the netmap device on the parent interface (so that would be vmx0 instead of vmx0.10).
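If you are not sure which physical NIC backs a given VLAN interface, here is a quick illustrative check from the shell (the VLAN name is just the example from above, and the exact output format varies by FreeBSD version):

```sh
# Show which parent (physical) interface a VLAN interface rides on.
ifconfig vmx0.10 | grep vlan
#   vlan: 10 vlanproto: 802.1q ... parent interface: vmx0
# The IDS/IPS instance (and therefore netmap) belongs on vmx0, not vmx0.10.
```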
The older netmap code that was in Suricata only opened a single host stack ring. That limited throughput as the single ring meant all traffic was restricted to processing on a single CPU core. So no matter how many CPU cores you had in your box, Suricata would only use one of them to process the traffic when using netmap with Inline IPS operation.
Recently the netmap code in Suricata was overhauled so that it supports the latest version 14 of the NETMAP_API. This new API version exposes multiple host stack rings when opening the kernel end of a network connection (a.k.a. the host stack). You can now tell netmap to open as many host stack rings (or queues) as the physical NIC exposes. With the new netmap code, Suricata can now create a separate thread to service each NIC queue (or ring), and those separate threads have a matching host stack queue (ring) for reading and writing data. So traffic loads can be spread across multiple threads running on multiple cores when using Inline IPS Mode. This new code is slated to be introduced upstream in Suricata 7.0, due for release shortly. I have backported this new netmap code into the Suricata 6.0.3 binary currently used in pfSense. And the OPNsense guys have also put the updated netmap code into their Suricata development branch.
But the new netmap code in the Suricata binary exposed a bug in the Suricata package GUI code. When running Suricata on a VLAN interface with Inline IPS Mode using the netmap device, the VLAN's parent interface should be passed to Suricata (and thus eventually to netmap). There are two reasons for this. First, if you pass netmap the VLAN interface name, it will actually create an emulated netmap adapter for the interface (because the VLAN interface itself is actually a virtual device). This is a performance-limited device. It is a software construct, and is quite slow to process traffic. The second issue with passing a VLAN interface name is that netmap itself is VLAN-unaware. The VLAN tags are not honored by the netmap device. So you gain nothing, and in fact lose performance, due to the emulated adapter that is created. Passing the parent interface name instead results in netmap opening the underlying physical interface device where it can take full advantage of any available multiple NIC queues (same as netmap rings).
So the soon-to-be-released 6.0.3_3 version of the Suricata GUI package will contain an important change. When you configure Suricata to run on a VLAN interface, the GUI code will automatically enable Suricata on the parent interface instead of the VLAN interface. A system log message will be logged to flag this change. It serves no useful purpose, and in fact will actually waste resources, to run multiple Suricata instances on VLANs defined on the same parent interface. As an example, assume you had the following VLANs defined:
vmx0.10 vmx0.20 vmx0.30
There is no reason to run three instances of Suricata here. Simply run a single instance on one of the VLANs. Suricata will automatically set itself up to run on the parent interface (vmx0 in the example given). Be sure promiscuous mode is enabled on the INTERFACE SETTINGS tab (it is "on" by default). Depending on exactly how your rules are written, you may need to customize the HOME_NET variable with this setup: you would want the IP subnets of all the defined VLANs included in the HOME_NET variable of the instance running on the parent interface. This is the default behavior, as Suricata will scan all locally-attached subnets and add them to the HOME_NET variable.
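As a rough way to double-check the result, you could inspect the generated per-interface configuration. The path below is an assumption based on where the package normally writes its files, and the subnets are placeholders, so treat this only as a sketch:

```sh
# Confirm the HOME_NET generated for the parent-interface instance covers
# every VLAN subnet (directory naming pattern is an assumption; adjust to
# whatever the package created on your system).
grep "HOME_NET" /usr/local/etc/suricata/suricata_*_vmx0/suricata.yaml
# Expect something along the lines of:
#   HOME_NET: "[192.168.10.0/24,192.168.20.0/24,192.168.30.0/24, ...]"
```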
-
Would running on the parent interface mean that other VLANs are potentially blocked even when you only intend to inspect traffic passing through a single VLAN?
-
@marc05 said in Important Info: Inline IPS Mode with Suricata and VLANs:
Would running on the parent interface mean that other VLANs are potentially blocked even when you only intend to inspect traffic passing through a single VLAN?
I've been preparing to write a follow-up to this original post as I have learned quite a bit about netmap and VLANs over the last several weeks. The short answer is that VLANs really should be avoided when using Inline IPS Mode. That's because the netmap kernel device, which is integral to how Inline IPS Mode blocks, neither understands nor parses VLAN tags. And in many instances (it's largely NIC driver dependent), it does not even get to see the VLAN tags.
I put a fix in the last Suricata update to try and make things better, but after further study I've concluded my fix is not ideal. There really is not a great way to handle VLANs automatically in the GUI code when using Inline IPS Mode with either of the two IDS/IPS packages (Snort and Suricata).
In my follow-on post that I'm still working on, I will try to explain a little better. But anyone running VLANs on an interface should really just forget trying to put the IDS/IPS on the same interface. You most certainly DO NOT want to run the instance on one of the VLANs. You are shooting yourself in the foot in terms of performance when doing so. And you can also make your firewall unstable doing that. The best way, and even this way is not ideal, is to run the IDS/IPS instance on the VLAN parent only. That of course means you can't have different rules for different VLANs (when using Inline IPS Mode). Any rules you put on the parent will apply to the parent and all of the VLAN children running on that interface.
-
Hi Bill! I have an update on my situation. I finally set up inline mode in Suricata on the parent interface of the LAN (ix0) and on the WAN2 DHCP interface (em0); the WAN PPPoE interface I left on Legacy mode as you suggested some time ago.
The only way I can run inline on ix0 without breaking the VLANs is to disable some hardware features.
In shellcmd I have: ifconfig ix0 -vlanhwcsum -vlanhwfilter -vlanhwtag
-
@xm4rcell0x said in Important Info: Inline IPS Mode with Suricata and VLANs:
Hi Bill! I have an update on my situation. I finally set up inline mode in Suricata on the parent interface of the LAN (ix0) and on the WAN2 DHCP interface (em0); the WAN PPPoE interface I left on Legacy mode as you suggested some time ago.
The only way I can run inline on ix0 without breaking the VLANs is to disable some hardware features.
In shellcmd I have: ifconfig ix0 -vlanhwcsum -vlanhwfilter -vlanhwtag
Yep. Some drivers have hardware handling of VLAN tags. And with the way that netmap is plumbed into the FreeBSD kernel stack, it does not get to see any of those tags. Also, any VLAN interface is actually a virtual interface in the OS. It is not a "real" interface. So with netmap, that means it becomes a single-queue "virtual NIC". When opening an instance on a VLAN interface, netmap will create its own virtual netmap adapter. This virtual adapter causes a big performance hit compared to running on the physical adapter.
Running on the parent is better as then netmap is running on an actual physical interface and can take advantage of however many queues the real NIC exposes. But even then, if the NIC hardware is doing things with checksum and VLAN tag offloading in hardware, netmap operation can get squirrely.
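If you want to see whether those hardware features are active on a given NIC, one quick (driver-dependent) check is to look at the interface's options flags; this is just an illustration using the ix0 example from above:

```sh
# Inspect the hardware offload capabilities currently enabled on the parent NIC.
ifconfig ix0 | grep options
# After running "ifconfig ix0 -vlanhwcsum -vlanhwfilter -vlanhwtag" the list
# should no longer include VLAN_HWCSUM, VLAN_HWFILTER, or VLAN_HWTAGGING.
```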
These are some of the details I've learned over the last few months. And many of them are not fixable without rewriting a large chunk of the way the kernel interacts with NIC drivers and the netmap device. So the problems are caused by the OS itself, and there is nothing the Suricata or Snort applications can do to fix it.
So the bottom line unfortunately remains this -- Snort and Suricata are not a good fit for VLAN interfaces when using Inline IPS Mode. Can it sort of work in some instances? Yes, but with poor performance compared to non-VLAN interfaces, and the potential to crash your firewall or freeze all traffic on the interface.
-
The only way I can run inline on ix0 without breaking the VLANs is to disable some hardware features.
In shellcmd I have: ifconfig ix0 -vlanhwcsum -vlanhwfilter -vlanhwtag
I came across this situation of Suricata Inline with VLANs using the latest versions as of this posting, and can confirm that the above command does still work - well, at least for Intel-based ix (tested with ixl).
What I have found, though, is that Suricata has to first start up and finish its engine initialization before the above command can be executed. Running the command before Suricata starts does not allow the VLANs to connect.
Process (for Intel ix):
1. Suricata Inline IPS only for the parent interface (ex. LAN).
1a. No Suricata for the VLANs - have the system running with Suricata operating. In this example, LAN is functional, but the VLANs are offline.
2. Execute a cron job to run the ifconfig command about 5 minutes later (the time may vary based on hardware and network).
2a. Cron example (if using the WebGUI):
Minute: @reboot; Hour through Day of Week fields are blank; User: root; Command: sleep 300 && ifconfig <NIC ID> -vlanhwcsum -vlanhwfilter -vlanhwtag
Where <NIC ID> is the identifier of the interface, such as ix0 or ixl1.
3. Verify that a device on the VLAN has a connection.
4. Run an IDS test from the device, such as: curl -A "BlackSun" www.google.com
5. Suricata should pop up the alert (granted, it depends on the rules in use) on the parent interface (ex. LAN), but with the IP address of the device (i.e. in the subnet of the VLAN).
A future possibility would be executing a script, rather than the cron command above, to check whether the VLAN(s) are up (like a ping check) and, if not, perform the ifconfig command or maybe down/up the interface; a rough sketch of that idea is below.
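Only as a rough, untested sketch of that script idea (the interface name, test address, and delay are all placeholders to adapt):

```sh
#!/bin/sh
# Wait for Suricata to finish its engine initialization, then clear the VLAN
# hardware offload flags only if a host on one of the VLANs stops answering.
NIC="ix0"
VLAN_TEST_IP="192.168.10.1"   # a device or gateway reachable only via a VLAN

sleep 300   # allow time for Suricata to finish starting up

if ! ping -c 3 "$VLAN_TEST_IP" > /dev/null 2>&1; then
    ifconfig "$NIC" -vlanhwcsum -vlanhwfilter -vlanhwtag
fi
```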
-
-
@hamilton said in Important Info: Inline IPS Mode with Suricata and VLANs:
The only way I can run inline on ix0 without breaking the VLANs is to disable some hardware features.
In shellcmd I have: ifconfig ix0 -vlanhwcsum -vlanhwfilter -vlanhwtag
I came across this situation of Suricata Inline with VLANs using the latest versions as of this posting, and can confirm that the above command does still work - well, at least for Intel-based ix (tested with ixl).
What I have found, though, is that Suricata has to first start up and finish its engine initialization before the above command can be executed. Running the command before Suricata starts does not allow the VLANs to connect.
Process (for Intel ix):
1. Suricata Inline IPS only for the parent interface (ex. LAN).
1a. No Suricata for the VLANs - have the system running with Suricata operating. In this example, LAN is functional, but the VLANs are offline.
2. Execute a cron job to run the ifconfig command about 5 minutes later (the time may vary based on hardware and network).
2a. Cron example (if using the WebGUI):
Minute: @reboot; Hour through Day of Week fields are blank; User: root; Command: sleep 300 && ifconfig <NIC ID> -vlanhwcsum -vlanhwfilter -vlanhwtag
Where <NIC ID> is the identifier of the interface, such as ix0 or ixl1.
3. Verify that a device on the VLAN has a connection.
4. Run an IDS test from the device, such as: curl -A "BlackSun" www.google.com
5. Suricata should pop up the alert (granted, it depends on the rules in use) on the parent interface (ex. LAN), but with the IP address of the device (i.e. in the subnet of the VLAN).
A future possibility would be executing a script, rather than the cron command above, to check whether the VLAN(s) are up (like a ping check) and, if not, perform the ifconfig command or maybe down/up the interface.
This might be something that could be integrated into the Suricata binary by the upstream folks (Suricata upstream, I mean). Probably what's happening is that during startup Suricata makes its own calls to the OS to set promiscuous mode. Just guessing here, but perhaps that OS call also results in any other settings the user made (such as the -vlanhw* customization settings) getting overwritten or reset to defaults?
When opening the netmap device, Suricata makes a call to its own internal function called SetIfaceFlags(). That function in turn calls the OS to set promiscuous mode.
There were also some FreeBSD bugs here as well, if I recall, where the NIC driver would accept, but still ignore, flags passed to disable hardware VLAN tagging and other hardware VLAN functionality. I don't recall which drivers were impacted, and I don't know if those bugs have been fixed yet.
-
Thanks for that...
And it seems I spoke too early, as the connection has stopped. So I've got to go back to figuring it out.
But at least there was some success. -
@hamilton said in Important Info: Inline IPS Mode with Suricata and VLANs:
Thanks for that...
And it seems I spoke too early, as the connection has stopped. So I've got to go back to figuring it out.
But at least there was some success.
Sorry it did not prove to be a lasting solution. VLANs and the netmap kernel device used for inline IPS mode just do not seem to get along well with each other....
Hopefully things improve with that in the future.
-
@bmeeks said in Important Info: Inline IPS Mode with Suricata and VLANs:
Inline IPS Mode Operation with VLANs
The Inline IPS Mode of blocking used in both the Suricata and Snort packages takes advantage of the netmap kernel device to intercept packets as they flow between the kernel's network stack and the physical NIC hardware driver. Netmap enables a userland application such as Suricata or Snort to intercept network traffic, inspect that traffic and compare it against the IDS/IPS rule signatures, and then drop packets that match a DROP rule.
But the netmap device currently has some limitations. It does not process VLAN tags, nor does it work properly with traffic shapers or limiters. When you use Inline IPS Mode on a VLAN-enabled interface, then you need to run the IDS/IPS engine on the parent interface of the VLAN. So for example, if your VLAN interface was vmx0.10 (which would be a VLAN interface with the assigned VLAN ID '10'), you should actually run the netmap device on the parent interface (so that would be vmx0 instead of vmx0.10).
The older netmap code that was in Suricata only opened a single host stack ring. That limited throughput as the single ring meant all traffic was restricted to processing on a single CPU core. So no matter how many CPU cores you had in your box, Suricata would only use one of them to process the traffic when using netmap with Inline IPS operation.
Recently the netmap code in Suricata was overhauled so that it supports the latest version 14 of the NETMAP_API. This new API version exposes multiple host stack rings when opening the kernel end of a network connection (a.k.a. the host stack). You can now tell netmap to open as many host stack rings (or queues) as the physical NIC exposes. With the new netmap code, Suricata can now create a separate thread to service each NIC queue (or ring), and those separate threads have a matching host stack queue (ring) for reading and writing data. So traffic loads can be spread across multiple threads running on multiple cores when using Inline IPS Mode. This new code is slated to be introduced upstream in Suricata 7.0, due for release shortly. I have backported this new netmap code into the Suricata 6.0.3 binary currently used in pfSense. And the OPNsense guys have also put the updated netmap code into their Suricata development branch.
But the new netmap code in the Suricata binary exposed a bug in the Suricata package GUI code. When running Suricata on a VLAN interface with Inline IPS Mode using the netmap device, the VLAN's parent interface should be passed to Suricata (and thus eventually to netmap). There are two reasons for this. First, if you pass netmap the VLAN interface name, it will actually create an emulated netmap adapter for the interface (because the VLAN interface itself is actually a virtual device). This is a performance-limited device. It is a software construct, and is quite slow to process traffic. The second issue with passing a VLAN interface name is that netmap itself is VLAN-unaware. The VLAN tags are not honored by the netmap device. So you gain nothing, and in fact lose performance, due to the emulated adapter that is created. Passing the parent interface name instead results in netmap opening the underlying physical interface device where it can take full advantage of any available multiple NIC queues (same as netmap rings).
So the soon-to-be-released 6.0.3_3 version of the Suricata GUI package will contain an important change. When you configure Suricata to run on a VLAN interface, the GUI code will automatically enable Suricata on the parent interface instead of the VLAN interface. A system log message will be logged to flag this change. It serves no useful purpose, and in fact will actually waste resources, to run multiple Suricata instances on VLANs defined on the same parent interface. As an example, assume you had the following VLANs defined:
vmx0.10 vmx0.20 vmx0.30
There is no reason to run three instances of Suricata here. Simply run a single instance on one of the VLANs. Suricata will automatically set itself up to run on the parent interface (vmx0 in the example given). Be sure promiscuous mode is enabled on the INTERFACE SETTINGS tab (it is "on" by default). Depending on exactly how your rules are written, you may need to customize the HOME_NET variable with this setup: you would want the IP subnets of all the defined VLANs included in the HOME_NET variable of the instance running on the parent interface. This is the default behavior, as Suricata will scan all locally-attached subnets and add them to the HOME_NET variable.
I hope I am not digging up too old of a thread, but I came across this when looking into whether this best practice changes when using VLANs defined at the hypervisor layer. I noticed in your example you use vmx0, but you also use the .10, etc., which suggests you are running on ESXi but defining VLANs at the pfSense level instead of the ESXi level.
So my question is: if defining the VLANs at the ESXi level, do you still only need to run on the parent interface, or do you need to define an instance for each VLAN?
-
@bigjohns97 said in Important Info: Inline IPS Mode with Suricata and VLANs:
So my question is: if defining the VLANs at the ESXi level, do you still only need to run on the parent interface, or do you need to define an instance for each VLAN?
Not sure I'm fully understanding your question. When you say "defining the VLANs at the ESXi level", do you mean configuring VLAN operation for the ESXi virtual switch? If so, then the answer is "yes, you would run Suricata on the physical interface" because the pfSense guest is tagging/processing the traffic. The ESXi virtual switch then just behaves as any hardware switch would and routes the traffic according to the VLAN tags applied by pfSense (when talking about trunk ports).
To simplify, when pfSense is involved in the VLAN tagging operation by either adding the necessary tags, or routing traffic via the tags, then Suricata needs to run on the physical interface and not the VLAN virtual interface in pfSense. When I say "physical interface", that can be one of three possibilities as follows:
1. For pfSense running on a bare-metal machine, that means the physical NIC such as em0, igc1, etc.
2. For pfSense running on a hypervisor with passthrough NICs, that's the same as #1 above because the guest OS will directly communicate with the hardware NIC via the passthrough.
3. For pfSense running on a hypervisor with virtual hardware, you still run on the parent interface of the virtual hardware such as vmx0, vtnet1, hn0, etc.
The VLAN ID nomenclature I referenced in the original post is how pfSense and FreeBSD identify VLAN interfaces. pfSense will use the physical interface name with a period and the VLAN ID appended.
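As a small illustration of that naming convention (the interface names here are only examples):

```sh
# List the VLAN (virtual) interfaces present on a pfSense/FreeBSD box.
ifconfig -g vlan
#   vmx0.10
#   vmx0.20
# Each entry is <parent interface>.<VLAN ID>; the parent (vmx0 here) is the
# interface Suricata/netmap should be attached to for Inline IPS Mode.
```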
-
@bmeeks That's just it though: when you define the VLAN ID on the vSwitch, the pfSense guest just sees it as another interface/network; it doesn't see any of the VLAN tags as they are stripped at the vSwitch. It still does the routing based upon layer 3, but it isn't aware of any VLAN IDs because they are defined at the hypervisor level.
-
@bigjohns97 said in Important Info: Inline IPS Mode with Suricata and VLANs:
@bmeeks That's just it though: when you define the VLAN ID on the vSwitch, the pfSense guest just sees it as another interface/network; it doesn't see any of the VLAN tags as they are stripped at the vSwitch. It still does the routing based upon layer 3, but it isn't aware of any VLAN IDs because they are defined at the hypervisor level.
Okay. I've never used ESXi at that level, so have no experience there. If pfSense simply sees it as either a different "physical" interface or just another IP stack, then you would run Suricata on whatever "interface" pfSense is showing on its INTERFACES menu.
Netmap operation is something that is happening inside the guest OS, so the rules I spelled out earlier apply only when the guest OS sees the VLAN tags themselves and needs to use them.
-
You have to use Virtual Guest Tagging (tag 4095) on the port group:
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-3BB93F2C-3872-4F15-AEA9-90DEDB6EA145.html -
@bmeeks I was thinking about this post and I have a possible solution although it is not elegant. I would like to hear your thoughts on this.
I like to optimize security so I’m naturally interested in Suricata and I want to keep using it in IPS mode.
Proposed solution:
- You use Suricata on the WAN interface, Inline IPS mode.
- You don't use Suricata on the LAN interface(s) of this pfSense, but you do configure the VLANs in this pfSense.
- You place a managed switch capable of 802.1Q in between. You configure the ports that lead to clients as untagged.
- You create a second pfSense x Suricata machine. This one only passes traffic through like a switch and uses Suricata Inline IPS on the LAN side of things, which is untagged traffic only, as the managed switch has already removed the tag for traffic passed on to this pfSense. And the managed switch will reapply the tag when untagged traffic returns from this second pfSense x Suricata box.
I think this should work theoretically. I'll try it out.
It might be a little overkill, but I like to keep both VLANs and Inline IPS mode, so I was thinking 'the tagging part is the problem'. The managed switch will place VLAN tags on the traffic it ingests and passes through to the main pfSense, because you've set that up in its configuration.
So the second pfSense x Suricata machine should be configured only for Suricata and not for routing or anything.
What do you think of this solution? I think it resolves the problem: you keep your VLANs and you keep Inline IPS mode.
I of course don't suggest purchasing a large number of pfSense x Suricata machines as this might be costly (or you could buy a smaller-sized but powerful enough machine for that section of the network), but at least for my small setup I think this will work. I'll just apply the same GID:SID Mgmt files on both Suricata instances as I want to block most things and only allow what I need.
-
@cyb3rtr0nian said in Important Info: Inline IPS Mode with Suricata and VLANs:
@bmeeks I was thinking about this post and I have a possible solution although it is not elegant. I would like to hear your thoughts on this.
I like to optimize security so I’m naturally interested in Suricata and I want to keep using it in IPS mode.
Proposed solution:
- You use Suricata on the WAN interface, Inline IPS mode.
- You don't use Suricata on the LAN interface(s) of this pfSense, but you do configure the VLANs in this pfSense.
- You place a managed switch capable of 802.1Q in between. You configure the ports that lead to clients as untagged.
- You create a second pfSense x Suricata machine. This one only passes traffic through like a switch and uses Suricata Inline IPS on the LAN side of things, which is untagged traffic only, as the managed switch has already removed the tag for traffic passed on to this pfSense. And the managed switch will reapply the tag when untagged traffic returns from this second pfSense x Suricata box.
I think this should work theoretically. I'll try it out.
It might be a little overkill, but I like to keep both VLANs and Inline IPS mode, so I was thinking 'the tagging part is the problem'. The managed switch will place VLAN tags on the traffic it ingests and passes through to the main pfSense, because you've set that up in its configuration.
So the second pfSense x Suricata machine should be configured only for Suricata and not for routing or anything.
What do you think of this solution? I think it resolves the problem: you keep your VLANs and you keep Inline IPS mode.
I of course don't suggest purchasing a large number of pfSense x Suricata machines as this might be costly (or you could buy a smaller-sized but powerful enough machine for that section of the network), but at least for my small setup I think this will work. I'll just apply the same GID:SID Mgmt files on both Suricata instances as I want to block most things and only allow what I need.
I think you should ditch the second pfSense and Suricata instance in this design and instead use a dedicated Linux box running Suricata using AF_PACKET mode for inline IPS. Suricata is much more performant on Linux now. Of course this would mean configuring Suricata on the Linux box using the CLI. You would have no GUI.
The problem with VLANs and Suricata on pfSense is the requirement to use the netmap kernel device for inline IPS mode. That device runs best on physical interfaces. To run on virtual interfaces such as VLANs it must run in an emulated mode that is much slower. Another issue is the requirement to use a host stack interface in the pfSense model. Suricata performs best when it can route traffic directly between two physical interfaces using inline IPS mode.
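For anyone curious what that looks like on the Linux side, here is a very rough sketch; the interface names and the suricata.yaml layout are assumptions, not a drop-in configuration:

```sh
# Sketch only: assumes /etc/suricata/suricata.yaml already pairs the two
# bridged NICs in its af-packet section, roughly:
#   af-packet:
#     - interface: eth0
#       copy-mode: ips
#       copy-iface: eth1
#     - interface: eth1
#       copy-mode: ips
#       copy-iface: eth0
# Suricata then copies packets between the two ports, dropping DROP-rule matches.
suricata -c /etc/suricata/suricata.yaml --af-packet
```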
-
@bmeeks said in Important Info: Inline IPS Mode with Suricata and VLANs:
@cyb3rtr0nian said in Important Info: Inline IPS Mode with Suricata and VLANs:
@bmeeks I was thinking about this post and I have a possible solution although it is not elegant. I would like to hear your thoughts on this.
I like to optimize security so I’m naturally interested in Suricata and I want to keep using it in IPS mode.
Proposed solution:
- You use Suricata on the WAN interface, Inline IPS mode.
- You don't use Suricata on the LAN interface(s) of this pfSense, but you do configure the VLANs in this pfSense.
- You place a managed switch capable of 802.1Q in between. You configure the ports that lead to clients as untagged.
- You create a second pfSense x Suricata machine. This one only passes traffic through like a switch and uses Suricata Inline IPS on the LAN side of things, which is untagged traffic only, as the managed switch has already removed the tag for traffic passed on to this pfSense. And the managed switch will reapply the tag when untagged traffic returns from this second pfSense x Suricata box.
I think this should work theoretically. I'll try it out.
It might be a little overkill, but I like to keep both VLANs and Inline IPS mode, so I was thinking 'the tagging part is the problem'. The managed switch will place VLAN tags on the traffic it ingests and passes through to the main pfSense, because you've set that up in its configuration.
So the second pfSense x Suricata machine should be configured only for Suricata and not for routing or anything.
What do you think of this solution? I think it resolves the problem: you keep your VLANs and you keep Inline IPS mode.
I of course don't suggest purchasing a large number of pfSense x Suricata machines as this might be costly (or you could buy a smaller-sized but powerful enough machine for that section of the network), but at least for my small setup I think this will work. I'll just apply the same GID:SID Mgmt files on both Suricata instances as I want to block most things and only allow what I need.
I think you should ditch the second pfSense and Suricata instance in this design and instead use a dedicated Linux box running Suricata using AF_PACKET mode for inline IPS. Suricata is much more performant on Linux now. Of course this would mean configuring Suricata on the Linux box using the CLI. You would have no GUI.
That’s useful information, better performance is always welcome. CLI is fine. I’m familiar with Linux, although still learning. I’ll learn the relevant CLI commands.
The problem with VLANs and Suricata on pfSense is the requirement to use the netmap kernel device for inline IPS mode. That device runs best on physical interfaces.
I understand, but I encountered the problem even when using Inline IPS mode on the physical parent interface of the LAN, because I followed what you said about not enabling IPS mode on a separate VLAN interface and instead enabling it on the parent physical interface. For me that would be fine; I don't mind that all rules will be the same for all VLANs. In Legacy mode everything works like a charm, but as soon as Inline IPS mode is enabled all VLANs break and the VLAN-aware AP becomes unreachable (even after everything is rebooted). (Also, I have other physical LAN interfaces running Inline mode, not related to these VLANs, without any problems.)
I think that because this emulated netmap kernel device is used, the VLAN tag gets stripped from the traffic so it becomes untagged (and then it's no longer clear where it needs to go, because there is no VLAN tag), since netmap only understands untagged traffic.
Hence my proposal to move IPS mode for the LAN to the section of the network that is untagged.
To run on virtual interfaces such as VLANs it must run in an emulated mode that is much slower. Another issue is the requirement to use a host stack interface in the pfSense model. Suricata performs best when it can route traffic directly between two physical interfaces using inline IPS mode.
I don't want to run Suricata on the individual VLANs, just on the parent interface. I heard hardware offloading might be a problem, but I already disabled that function. I take your point about slower performance also being undesirable, but I couldn't even get it to pass any traffic after Inline was enabled, and as soon as I enable Legacy everything works perfectly again on the LAN interfaces.
Any thoughts or is the separate box the way to go here?
-
@cyb3rtr0nian said in Important Info: Inline IPS Mode with Suricata and VLANs:
I think that because this emulated netmap kernel device is used, the VLAN tag gets stripped from the traffic so it becomes untagged (and then it's no longer clear where it needs to go, because there is no VLAN tag), since netmap only understands untagged traffic.
This is correct. The netmap implementation in FreeBSD does not deal with VLAN tags. They do indeed get stripped out. When I first read about the netmap device and was considering a true inline IPS mode of operation for Suricata (and Snort), netmap seemed like a perfect fit. But as I dived deeper into it and began creating the necessary interface code I came to realize that the FreeBSD netmap implementation is not ideally suited for inline IPS where you need to create a pipe between a physical NIC and the kernel (via the host stack). The host stack interface is what we use on pfSense to avoid consuming two physical interface ports for each network connection (two physical ports, IN and OUT, for the LAN or any other interface where you wanted to use Inline IPS Mode).
In the Linux implementation I mentioned, you would use two physical ports (for example, em0 and em1). Then Suricata would bridge those two physical ports and police the traffic between them by copying packets from one port to the other but analyzing them first. Any packets matching a DROP rule would not get copied. That's true Inline IPS operation, but it requires two physical ports per instance. That's not desirable for the vast majority of pfSense users.