Important Info: Inline IPS Mode with Suricata and VLANs
-
Inline IPS Mode Operation with VLANs
The Inline IPS Mode of blocking used in both the Suricata and Snort packages takes advantage of the netmap kernel device to intercept packets as they flow between the kernel's network stack and the physical NIC hardware driver. Netmap enables a userland application such as Suricata or Snort to intercept network traffic, inspect that traffic and compare it against the IDS/IPS rule signatures, and then drop packets that match a DROP rule.
But the netmap device currently has some limitations. It does not process VLAN tags, nor does it work properly with traffic shapers or limiters. When you use Inline IPS Mode on a VLAN-enabled interface, you need to run the IDS/IPS engine on the parent interface of the VLAN. So, for example, if your VLAN interface were vmx0.10 (a VLAN interface with the assigned VLAN ID '10'), you should actually run the netmap device on the parent interface (vmx0 instead of vmx0.10).
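Since pfSense names VLAN interfaces as &lt;parent&gt;.&lt;vlan-id&gt;, picking the netmap target is just a matter of stripping the suffix. A minimal sketch (the vlan_parent helper name is my own, not part of any package):

```shell
# Derive the netmap target (the parent interface) from a pfSense/FreeBSD
# VLAN interface name of the form <parent>.<vlan-id>.
vlan_parent() {
  # Strip the final ".<vlan-id>" suffix, e.g. vmx0.10 -> vmx0
  printf '%s\n' "${1%.*}"
}

vlan_parent vmx0.10   # prints: vmx0
```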
The older netmap code that was in Suricata only opened a single host stack ring. That limited throughput as the single ring meant all traffic was restricted to processing on a single CPU core. So no matter how many CPU cores you had in your box, Suricata would only use one of them to process the traffic when using netmap with Inline IPS operation.
Recently the netmap code in Suricata was overhauled so that it supports the latest version 14 of the NETMAP_API. This new API version exposes multiple host stack rings when opening the kernel end of a network connection (a.k.a. the host stack). You can now tell netmap to open as many host stack rings (or queues) as the physical NIC exposes. With the new netmap code, Suricata can now create a separate thread to service each NIC queue (or ring), and those separate threads have a matching host stack queue (ring) for reading and writing data. So traffic loads can be spread across multiple threads running on multiple cores when using Inline IPS Mode. This new code is slated to be introduced upstream in Suricata 7.0, due for release shortly. I have backported this new netmap code into the Suricata 6.0.3 binary currently used in pfSense. And the OPNsense guys have also put the updated netmap code into their Suricata development branch.
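For reference, the inline netmap pairing this describes looks roughly like the following in suricata.yaml (a sketch only: the interface name is an example, and on pfSense the package GUI generates this section for you):

```yaml
# One entry opens the NIC rings, the other the host stack rings
# ("^" suffix in netmap nomenclature); copy-mode: ips makes traffic
# flow through the rules engine in both directions.
netmap:
  - interface: vmx0
    threads: auto
    copy-mode: ips
    copy-iface: vmx0^
  - interface: vmx0^
    threads: auto
    copy-mode: ips
    copy-iface: vmx0
```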
But the new netmap code in the Suricata binary exposed a bug in the Suricata package GUI code. When running Suricata on a VLAN interface with Inline IPS Mode using the netmap device, the VLAN's parent interface should be passed to Suricata (and thus eventually to netmap). There are two reasons for this. First, if you pass netmap the VLAN interface name, it will actually create an emulated netmap adapter for the interface (because the VLAN interface is actually a virtual device). This is a performance-limited device: it is a software construct, and is quite slow to process traffic. The second issue with passing a VLAN interface name is that netmap itself is VLAN-unaware. The VLAN tags are not honored by the netmap device. So you gain nothing, and in fact lose performance, due to the emulated adapter that is created. Passing the parent interface name instead results in netmap opening the underlying physical interface device, where it can take full advantage of any available multiple NIC queues (the same as netmap rings).
So the soon-to-be-released 6.0.3_3 version of the Suricata GUI package will contain an important change. When you configure Suricata to run on a VLAN interface, the GUI code will automatically enable Suricata on the parent interface instead of the VLAN interface. A system log message will be logged to flag this change. It serves no useful purpose, and in fact will actually waste resources, to run multiple Suricata instances on VLANs defined on the same parent interface. As an example, assume you had the following VLANs defined:
vmx0.10 vmx0.20 vmx0.30
There is no reason to run three instances of Suricata here. Simply run a single instance on one of the VLANs; Suricata will automatically set itself up to run on the parent interface (vmx0 in the example given). Be sure promiscuous mode is enabled on the INTERFACE SETTINGS tab (it is "on" by default). Depending on exactly how your rules are written, you may need to customize the HOME_NET variable with this setup. You would want to include the IP subnets of all the defined VLANs in the HOME_NET variable of the instance running on the parent interface. This is the default behavior: Suricata will scan all locally attached subnets and add them to the HOME_NET variable.
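In suricata.yaml terms, a HOME_NET covering all three VLAN subnets would look something like this (the subnet values are made-up examples; on pfSense you would adjust HOME_NET through the GUI rather than editing the file):

```yaml
# Hypothetical subnets for VLANs 10/20/30 -- substitute your own networks.
vars:
  address-groups:
    HOME_NET: "[192.168.10.0/24,192.168.20.0/24,192.168.30.0/24]"
```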
-
Would running on the parent interface mean that other VLANs are potentially blocked even when you only intend to inspect traffic passing through a single VLAN?
-
@marc05 said in Important Info: Inline IPS Mode with Suricata and VLANs:
Would running on the parent interface mean that other VLANs are potentially blocked even when you only intend to inspect traffic passing through a single VLAN?
I've been preparing to write a follow-up to this original post as I have learned quite a bit about netmap and VLANs over the last several weeks. The short answer is VLANs really should be avoided when using Inline IPS Mode. That's because the netmap kernel device, that is integral to how Inline IPS Mode blocks, does not understand nor does it parse VLAN tags. And in many instances (it's NIC driver dependent in large part), does not get to even see the VLAN tags.
I put a fix in the last Suricata update to try and make things better, but after further study I've concluded my fix is not ideal. There really is not a great way to handle VLANs automatically in the GUI code when using Inline IPS Mode with either of the two IDS/IPS packages (Snort and Suricata).
In my follow-on post that I'm still working on, I will try and explain a little better. But anyone running VLANs on an interface should really just forget trying to put the IDS/IPS on the same interface. You most certainly DO NOT want to run the instance on one of the VLANs. You are shooting yourself in the foot when doing so in terms of performance. And you can also make your firewall unstable doing that. The best way, and even this way is not ideal, is to run the IDS/IPS instance on the VLAN parent only. That of course then means you can't have different rules for different VLANS (when using Inline IPS Mode). Any rules you put on the parent will apply to the parent and all of the VLAN children running on that interface.
-
Hi Bill! I have an update on my situation. I finally set up inline mode in Suricata on the parent interface of the LAN (ix0) and on the WAN2 DHCP (em0); the WAN PPPoE I left on Legacy Mode, as you suggested some time ago.
The only way I can run inline on ix0 without breaking the VLANs is to disable some hardware features.
In shellcmd I have: ifconfig ix0 -vlanhwcsum -vlanhwfilter -vlanhwtag
-
@xm4rcell0x said in Important Info: Inline IPS Mode with Suricata and VLANs:
Hi Bill! I have an update on my situation. I finally set up inline mode in Suricata on the parent interface of the LAN (ix0) and on the WAN2 DHCP (em0); the WAN PPPoE I left on Legacy Mode, as you suggested some time ago.
The only way I can run inline on ix0 without breaking the VLANs is to disable some hardware features.
In shellcmd I have: ifconfig ix0 -vlanhwcsum -vlanhwfilter -vlanhwtag
Yep. Some drivers have hardware handling of VLAN tags, and with the way that netmap is plumbed into the FreeBSD kernel stack, it does not get to see any of those tags. Also, any VLAN interface is actually a virtual interface in the OS; it is not a "real" interface. So with netmap, that means it becomes a single-queue "virtual NIC". When opening an instance on a VLAN interface, netmap will create its own virtual netmap adapter. This virtual adapter causes a big performance hit compared to running on the physical adapter.
Running on the parent is better, as then netmap is running on an actual physical interface and can take advantage of however many queues the real NIC exposes. But even then, if the NIC hardware is doing things with checksum and VLAN tag offloading in hardware, netmap operation can get squirrely.
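One way to see whether those hardware offloads are in play is to look for the VLAN capability flags in the interface's options line from ifconfig(8). A small sketch (the has_vlan_offload helper is my own, and ix0 is just an example name):

```shell
# Check an ifconfig(8) "options=" line for the hardware VLAN offload
# capabilities that interfere with netmap. The function only parses text,
# so it can be fed the output of ifconfig for any NIC.
has_vlan_offload() {
  printf '%s' "$1" | grep -Eq 'VLAN_HWTAGGING|VLAN_HWCSUM|VLAN_HWFILTER'
}

# Typical usage on FreeBSD/pfSense (ix0 is a placeholder):
#   if has_vlan_offload "$(ifconfig ix0)"; then
#     echo "hardware VLAN offloads still enabled on ix0"
#   fi
```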
These are some of the details I've learned over the last few months. And many of them are not fixable without rewriting a large chunk of the way the kernel interacts with NIC drivers and the netmap device. So the problems are caused by the OS itself, and there is nothing the Suricata or Snort applications can do to fix it.
So the bottom line unfortunately remains this -- Snort and Suricata are not a good fit for VLAN interfaces when using Inline IPS Mode. Can it sort of work in some instances? Yes, but with poor performance compared to non-VLAN interfaces, and the potential to crash your firewall or freeze all traffic on the interface.
-
The only way i can run inline on ix0 without breaking the vlans is to disable some hw feature.
On shellcmd i haveifconfig ix0 -vlanhwcsum -vlanhwfilter -vlanhwtag
I came across this situation (Suricata Inline with VLANs, using the latest versions as of this posting) and can confirm that the above command does still work, at least for Intel-based ix (tested with ixl).
What I have found, though, is that Suricata has to first start up and finish initializing its engine; only then can the above command be executed. Performing the command before Suricata starts fails to allow the VLANs to connect.
Process (for Intel ix):
1. Suricata Inline IPS only for the parent interface (e.g. LAN).
1a. No Suricata for the VLANs.
2. Have the system running with Suricata operating. In this example, LAN is functional, but the VLANs are offline.
3. Execute a cron job to run the ifconfig command about 5 minutes later (the time may vary based on hardware and network).
3a. Cron example (if using the WebGUI):
Minute: @reboot
Hour through Day of Week fields: blank
User: root
Command: sleep 300 && ifconfig <NIC ID> -vlanhwcsum -vlanhwfilter -vlanhwtag
where <NIC ID> is the interface identifier, such as ix0 or ixl1.
4. Verify that a device on the VLAN has a connection.
5. Run an IDS test from the device, such as: curl -A "BlackSun" www.google.com
6. Suricata should raise the alert (depending on the rules in use) on the parent interface (e.g. LAN), but with the IP address of the device (i.e. from the VLAN's subnet).
A future possibility would be to execute a script, rather than the cron command above, that checks whether the VLAN(s) are up (e.g. a ping check) and, if not, performs the ifconfig command or perhaps downs/ups the interface.
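That future possibility might be sketched as follows. Everything here is hypothetical: the function names are my own, and the NIC name and VLAN host address are placeholders to replace with your own values:

```shell
# Sketch: poll a host on the VLAN, and only strip the hardware VLAN
# offload flags from the parent NIC if the VLAN never comes up.

vlan_offload_args() {
  # Flags that disable hardware VLAN checksum, filtering, and tagging
  printf '%s' "-vlanhwcsum -vlanhwfilter -vlanhwtag"
}

vlan_up() {
  # One ping with a short timeout; returns 0 if the host answered
  ping -c 1 -t 2 "$1" > /dev/null 2>&1
}

fix_vlans() {
  nic="$1"; host="$2"; tries="${3:-5}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    vlan_up "$host" && return 0   # VLAN passing traffic; nothing to do
    i=$((i + 1))
    sleep 60
  done
  # VLAN still down after the retries: disable the hardware VLAN
  # offloads on the parent NIC (word-splitting the args is intentional)
  ifconfig "$nic" $(vlan_offload_args)
}

# Example (placeholder values):
#   fix_vlans ix0 192.168.10.1
```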
-
-
@hamilton said in Important Info: Inline IPS Mode with Suricata and VLANs:
The only way i can run inline on ix0 without breaking the vlans is to disable some hw feature.
On shellcmd i haveifconfig ix0 -vlanhwcsum -vlanhwfilter -vlanhwtag
I came across this situation, of Suricata Inline with VLANs using the latest versions as of this posting, and to confirm that the above command does still work. Well for Intel based ix (tested with ixl).
This might be something that could be integrated into the Suricata binary by the upstream folks (Suricata upstream, I mean). Probably what's happening is that during startup Suricata makes its own calls to the OS to set promiscuous mode. Just guessing here, but perhaps that OS call also results in any other settings the user made (such as the -vlanhw* customization settings) getting overwritten or reset to defaults?
When opening the netmap device, Suricata makes a call to its own internal function called SetIfaceFlags(). That function in turn calls the OS to set promiscuous mode.
There were also some FreeBSD bugs here as well, if I recall, where the NIC driver would accept, but still ignore, flags passed to disable hardware VLAN tagging and other hardware VLAN functionality. I don't recall which drivers were impacted, and I don't know if those bugs have been fixed yet.
-
Thanks for that...
And it seems I spoke too early, as the connection has stopped. So I've got to go back to figuring it out.
But at least there was some success.
-
@hamilton said in Important Info: Inline IPS Mode with Suricata and VLANs:
Thanks for that...
And seems I spoke too early as the connection has stopped. So got to go back to figuring it out.
But at least there was some success.
Sorry it did not prove to be a lasting solution. VLANs and the netmap kernel device used for Inline IPS Mode just do not seem to get along well with each other.
Hopefully things improve with that in the future.
-
@bmeeks said in Important Info: Inline IPS Mode with Suricata and VLANs:
Inline IPS Mode Operation with VLANs
I hope I am not digging up too old a thread, but I came across this when looking into whether this best practice changes when using VLANs defined at the hypervisor layer. I noticed in your example you use vmx0, but you also use the .10 etc., which suggests you are running on ESXi but defining VLANs at the pfSense level instead of the ESXi level.
So my question is: if defining the VLANs at the ESXi level, do you still only need to run on the parent interface, or do you need to define an instance for each VLAN?
-
@bigjohns97 said in Important Info: Inline IPS Mode with Suricata and VLANs:
So my question is: if defining the VLANs at the ESXi level, do you still only need to run on the parent interface, or do you need to define an instance for each VLAN?
Not sure I'm fully understanding your question. When you say "defining the VLANs at the ESXi level", do you mean configuring VLAN operation for the ESXi virtual switch? If so, then the answer is "yes, you would run Suricata on the physical interface" because the pfSense guest is tagging/processing the traffic. The ESXi virtual switch then just behaves as any hardware switch would and routes the traffic according to the VLAN tags applied by pfSense (when talking about trunk ports).
To simplify, when pfSense is involved in the VLAN tagging operation by either adding the necessary tags, or routing traffic via the tags, then Suricata needs to run on the physical interface and not the VLAN virtual interface in pfSense. When I say "physical interface", that can be one of three possibilities as follows:
- For pfSense running on a bare metal machine, that means the physical NIC such as em0, igc1, etc.
- For pfSense running on a hypervisor with passthrough NICs, that's the same as #1 above because the guest OS will directly communicate with the hardware NIC via the passthrough.
- For pfSense running on a hypervisor with virtual hardware, you still run on the parent interface of the virtual hardware such as vmx0, vtx1, hn0, etc.
The VLAN ID nomenclature I referenced in the original post is how pfSense and FreeBSD identify VLAN interfaces. pfSense will use the physical interface name with a period and the VLAN ID appended.
-
@bmeeks That's just it though: when you define the VLAN ID on the vSwitch, the pfSense guest just sees it as another interface/network. It doesn't see any of the VLAN tags, as they are stripped at the vSwitch. It still does the routing based on layer 3, but it isn't aware of any VLAN IDs because they are defined at the hypervisor level.
-
@bigjohns97 said in Important Info: Inline IPS Mode with Suricata and VLANs:
@bmeeks That's just it though: when you define the VLAN ID on the vSwitch, the pfSense guest just sees it as another interface/network. It doesn't see any of the VLAN tags, as they are stripped at the vSwitch. It still does the routing based on layer 3, but it isn't aware of any VLAN IDs because they are defined at the hypervisor level.
Okay. I've never used ESXi at that level, so have no experience there. If pfSense simply sees it as either a different "physical" interface or just another IP stack, then you would run Suricata on whatever "interface" pfSense is showing on its INTERFACES menu.
Netmap operation is something that is happening inside the guest OS, so the rules I spelled out earlier apply only when the guest OS sees the VLAN tags themselves and needs to use them.
-
You have to use Virtual Guest Tagging (tag 4095) on the port group:
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-3BB93F2C-3872-4F15-AEA9-90DEDB6EA145.html