Just a firewall, in hardware.
-
What is stopping you from getting a mainstream server board with two x8 PCIe slots and putting an i350-T4 in one?
-
I'd like to keep TDP low; otherwise I don't really know what processor or board hardware works best. That's kind of what I'm asking. Surely there's something that can handle a gigabit connection but not much more.
-
Any of the T-series processors will peak at 35W, and typically run well below that (e.g. the i5-4570T); undervolting the processor will shave more power off. I have mine running passively with an aftermarket heatsink. Going below that gets tricky for high performance unless you go Atom, and then boards become the limiting factor at the moment.
An i350 4-port NIC uses 5W. That is pretty low in the scheme of things as well; onboard may shave 1-2W off that, but it'd be marginal.
There are the new Broadwell Xeons, which may be of interest…
As for pass-through, when you do that you expose the box behind it to whatever comes in, so you're really not gaining any security. You're better off using NAT and forwarding ports if required.
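For reference, "passing through ports" on pfSense is just a NAT port forward, which under the hood becomes a pf rule. A hand-written sketch with illustrative interface and address values (in practice you'd configure this in the pfSense GUI rather than edit pf.conf by hand):

```
# Forward inbound TCP 443 arriving on the WAN address (em0 here is
# illustrative) to a single internal host, and allow the redirected traffic.
rdr on em0 proto tcp from any to (em0) port 443 -> 192.168.1.10 port 443
pass in on em0 proto tcp from any to 192.168.1.10 port 443
```

Only that one port reaches the internal host; everything else stays behind NAT, which is the security gain over exposing the whole box.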
-
In my opinion this design idea adds cost and complexity (which by itself lowers security as much as exposing virtualized NICs does) and breaks the intended low-power scheme.
I would either…
Get VM-host hardware that meets the requirements to also host pfSense. Pros: low power, low cost, "clean" design.
OR
Get the hardware required for a decent pfSense box and implement a VM host behind it for other things. Pros: best security, "clean" design, and messing with the VM server will never affect internet connectivity or security.
-
Just curious as to what your exact worry is regarding VMs and having non-passed-through NICs for a pfSense setup. I'd imagine there's a ton of people out there who do this, and I myself use Hyper-V and find that it works great.
It just seems like a lot of hassle and trouble/money for "possible" security risks that you're not even sure you'll eliminate. You're going to expose a NIC to external traffic one way or another. There's hardly a popular software solution out there that hasn't had security issues in the past, be it Java, Flash, Windows, Linux, Apple, or even OpenSSL.
-
Just curious as to what your exact worry is regarding VMs and having non-passed-through NICs for a pfSense setup.
Well, with a VM you have one or two additional operating systems (depending on the type of hypervisor) controlling the NIC, adding attack vectors below your actual firewall operating system.
Call me old-fashioned, but I won't virtualize my perimeter firewall until I can have dedicated NICs, at least on the WAN interface. I totally agree with the OP's security concerns.
I'd imagine there's a ton of people out there who do this…
A ton of people do it because it's easy, attractive, and saves money, not because it offers the absolute best security.
You're going to expose a NIC to external traffic one way or another.
True, but with dedicated hardware or virtualized NIC pass-through it is controlled only by FreeBSD, an operating system that has been tuned for network and security applications for 20+ years.
-
@P3R:
Just curious as to what your exact worry is regarding VMs and having non-passed-through NICs for a pfSense setup.
Well, with a VM you have one or two additional operating systems (depending on the type of hypervisor) controlling the NIC, adding attack vectors below your actual firewall operating system.
Call me old-fashioned, but I won't virtualize my perimeter firewall until I can have dedicated NICs, at least on the WAN interface. I totally agree with the OP's security concerns.
With Hyper-V you disable the host OS's access to the adapter (it's a simple checkbox) and connect it to a virtual switch. To me it's the same thing as hooking a modem to a switch which is only hooked to the VM. Are you worried there's some underlying exploit in the host OS? If so, what? It has no access to the NIC; it exists, but there's no IP or any form of connection. The only thing I could see being concerned about is if the port had some sort of IPMI or Intel management engine hooked to it, but that would get passed along with the adapter anyway, I'd think.
Just trying to understand the concerns.
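For concreteness, that "simple checkbox" corresponds to the switch's management-OS binding in Hyper-V. A sketch of the equivalent PowerShell, with made-up switch, adapter, and VM names:

```powershell
# Create an external virtual switch bound to the physical WAN NIC,
# with the management (host) OS denied any binding to it.
New-VMSwitch -Name "WAN-vSwitch" -NetAdapterName "Ethernet 2" -AllowManagementOS $false

# Attach the firewall VM's network adapter to that switch.
Connect-VMNetworkAdapter -VMName "pfSense" -SwitchName "WAN-vSwitch"
```

With `-AllowManagementOS $false` the host has no virtual adapter on that switch, so it holds no IP on the WAN segment, which is the isolation being described above.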
-
Yeah - I'd pretty much scrap your big-money all-hardware plan and get one or two boxes with enough processor to handle all your needs and run VMs.
VMs do have some issues, but not the issues you are worried about.
-
With hyper-v you disable the host OS' access to the adapter (it's a simple checkbox) and connect it to a virtual switch.
That sounds absolutely safe as long as the feature works exactly as documented, never has any bugs, MS never implements any backdoors (even if threatened by authorities), and you trust the administrator to never make the mistake of attaching any other VM to it or unticking the checkbox.
Personally I would still feel more comfortable with NICs dedicated (passed through) to a single VM. At the very least, it reduces the risk of administrative mistakes.
Are you worried there's some underlying exploit in the host OS?
I wouldn't call it worry (at least not constant worry), but I'm aware that every piece of software I use will have bugs that may develop into exploits in the future (any other assumption I would consider naive), and I do try to keep up to speed on the latest news around what I use. The software that is directly exposed to the internet (both the firewall itself and the open services) gets a bit more of my attention in that regard.
I respect that a lot of people (many smarter and more educated than me) think the many obvious advantages of mature virtualization products outweigh the added risks, but if someone doesn't even recognize that an additional software layer between the hardware and the firewall OS means added risk, then I probably won't take that person as seriously regarding security.
My firewalls will probably be virtualized in one way or another some years into the future, but I'm not there yet. I can live with being seen as old-fashioned… ;)
-
Yes, you can do this all on hardware, and yes, you can run pfSense in "transparent mode".
Anything you can do with VMs you can do with hardware. It's just more expensive.
Go for it. I run all hardware in a few places, but I will probably go to VM's on one machine in all those places eventually.
(I also distrust anything Microsoft touches)
-
Hypervisors are generally safe enough to run firewalls if you wish to be protected from random_hacker_x.
If you wish to be safe from government spying, I would suggest you stop using anything that has an Intel/AMD/ARM/… CPU. There's no telling what's in those black boxes.
I'd suggest you create your own CPU and also engineer your own NICs. All major telcos are also (willingly) in cahoots with the government; their equipment is all compromised stuff delivered by Cisco.
It's known that numerous Tor exit nodes are government honeypots, so creating your own private darknet with a fancy and secure protocol would be advisable. :)
-
Some of your sarcasm dripped on me by accident…
However, I think Putin is taking you dead seriously and planning to do exactly as you say.
-
That sounds absolutely safe as long as the feature works exactly as documented, never has any bugs, MS never implements any backdoors (even if threatened by authorities), and you trust the administrator to never make the mistake of attaching any other VM to it or unticking the checkbox.
Personally I would still feel more comfortable with NICs dedicated (passed through) to a single VM. At the very least, it reduces the risk of administrative mistakes.
For the bugs/backdoor issue, the same could be said for any software involved in networking, be it Tomato, OpenWrt, or your parents' default D-Link router, etc. Maybe I'm just not as paranoid as I should be, but I'd think that if Microsoft wanted to put a backdoor into something, they'd put it in something more ubiquitous than a Hyper-V virtual switch. They'd put it in Windows Update and/or IE. Bugs are everywhere, but Microsoft has been pretty adamant about network-layer isolation, as this is something desired in the enterprise for various reasons.
Also, I'm the admin, so I'm not overly worried about me unticking a box. Even if I did, Comcast is pretty strict about MAC addresses, to the point of it being painful. Anything without the VM's MAC address would be denied all internet access. Believe me, it took a good amount of time to get things working.
I wouldn't call it worry (at least not constant worry), but I'm aware that every piece of software I use will have bugs that may develop into exploits in the future (any other assumption I would consider naive), and I do try to keep up to speed on the latest news around what I use. The software that is directly exposed to the internet (both the firewall itself and the open services) gets a bit more of my attention in that regard.
I respect that a lot of people (many smarter and more educated than me) think the many obvious advantages of mature virtualization products outweigh the added risks, but if someone doesn't even recognize that an additional software layer between the hardware and the firewall OS means added risk, then I probably won't take that person as seriously regarding security.
My firewalls will probably be virtualized in one way or another some years into the future, but I'm not there yet. I can live with being seen as old-fashioned… ;)
I agree, there are always risks. Even in mature software like OpenSSL people find big exploits. I just don't think a virtualized environment adds that much risk. With the way that Hyper-V works, the only thing I feel is exposed would be the driver, and drivers are going to be exposed no matter what environment you're on.
Something CAN be said for having your firewall separate, in terms of downtime. If I have to do some maintenance and shut the server down, I would lose internet unless I had some sort of backup in place. Most people probably don't do a lot of hardware maintenance on appliances, which is their big selling point to me. I have an HTPC that can run my pfSense Hyper-V VM if necessary instead.
-
For the bugs/backdoor issue, the same could be said for any software involved in networking, be it Tomato, OpenWrt, or your parents' default D-Link router, etc.
But we're not here to talk about those, are we… ;)
Maybe I'm just not as paranoid as I should be…
It's your network and your security, I don't care. But you did ask why others were not as keen on virtualizing their firewalls and I answered.
Bugs are everywhere…
True. But since this seems to be the only argument presented for why virtualization is secure enough, I don't see us getting any further with this discussion.
Also, I'm the admin…
So am I and I make mistakes.
…so I'm not overly worried about me unticking a box.
You picked out a single thing that I put in there more for fun. What about the thousand other things that you could do wrong? Are you convinced Comcast will save you there as well?
Whether we admit it or not, everyone makes mistakes. In my opinion, a network and system design that tries to minimize the possibility of making mistakes, and the consequences of those that are made, is more secure.
With the way that Hyper-V works…
You come back to this all the time. I don't think virtualization is insecure when it works exactly the way it was designed and documented. I only worry about when it doesn't…
The bottom line: the risks of virtualized firewalls are acceptable to some because they love its many other advantages, but fewer software layers are always more secure than more software layers.
Then again, I'm trying to set up my virtualized lab firewall as we speak. ;D If I could only get that damn pass-through of NICs to work...
-
@P3R:
You picked out a single thing that I put in there more for fun. What about the thousand other things that you could do wrong? Are you convinced Comcast will save you there as well?
Whether we admit it or not, everyone makes mistakes. In my opinion, a network and system design that tries to minimize the possibility of making mistakes, and the consequences of those that are made, is more secure.
I'm unsure whether you've dealt with Hyper-V (I will admit I have not dealt with Xen or ESXi very much, aside from knowing that my hardware is about 85% compatible with it and 15% not), but it's a pretty common joke that they spend 2.5 years per checkbox. It's really not an in-depth and complicated piece of software, and it's even less flexible (IMO) than something such as VirtualBox. There are only a handful of mistakes I could make, and all of them are covered by the MAC address issue. It's not so much Comcast "saving" me; it's just that no internet traffic is allowed, as I don't pay for multiple IP addresses. Worst-case scenario, traffic is routed directly to my host OS for a few hours. That certainly wouldn't be ideal, but the number of exposed ports is minuscule when the OS detects a "public" network. Again, this is absolutely the worst-case scenario; I honestly can't think of how it could even happen unless I made a deliberate change that would result in loss of internet, which would be immediately apparent.
You come back to this all the time. I don't think virtualization is insecure when it works exactly the way it was designed and documented. I only worry about when it doesn't…
That's what I'm wondering: how could it not work properly? I've been using it for years and never noticed any issues; what type of problems concern you? I would think that anything that bugged out would result in a blue screen/reboot or simply a crash of the network before anything else. Most of these virtualization technologies are deployed at scales far beyond what the hobbyist/prosumer could ever remotely afford, all across the world. If there were problems, we likely wouldn't be the first ones to find them.
-
Wow.
I didn't think I'd have to explain how CVEs work on this forum. Forgive me for leaving pages of relevant information out of each of these things I'm discussing because it can be all found on the Internet or in a really good book on the subject.
Since it's already been used as an example: a week before Heartbleed (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0160) showed up, everyone would have said "what vulnerability?" The fact that it was in pretty much every still-used version of OpenSSL means that the vulnerability was always there, but nobody knew about it.
So let's look at how vulnerabilities work:
-
Programmers write software.
-
People and machines test that software, hopefully before it's released.
-
Sometimes issues are found which either cause instability or security issues or some other unwanted behavior.
-
Sometimes (but not always) those issues are reported back to the development team.
-
Sometimes (but not always) the issues reported back are fixed.
-
Sometimes the issue was found by a bad guy but not by a good guy.
-
In any case, it takes significant time before the issue is fixed, which means from inception to the time it's fixed the software is vulnerable.
-
If it was a bad guy who found the issue it will almost certainly not be reported until the bad guy has used it for his/her nefarious purposes.
-
During the hours/weeks/months/years between release and IMPLEMENTATION of the fix by the maintainer of the specific hardware running the software, the software is vulnerable and may have been exploited by a bad guy.
-
If the issue was discovered by a good guy, it will probably be submitted to a vulnerability database if it has security implications.
-
The folks who handle vulnerability databases work through their often large list of reports to find out which are valid, and evaluate the impact of the issue.
-
If the issue is critical and easy to exploit then they often contact the author before putting it on their public list.
What all that means is that any list you see, any exploit you hear about in the news, was known by a good guy days, weeks, or months before you ever heard about it. If the issue was discovered because of an active in-the-wild exploit, then it may have been known and used by the bad guys for years before being caught.
In programming, there are accepted code-quality metrics stating that software has a certain number of bugs per thousand lines of code. It's a statistical average, but generally speaking, if your software is lower than average you have to wonder what's lurking in there that you don't know about. Programmers strive for a small number of overall issues (by fixing known ones), and testers strive for a large number of known issues, meaning they've hopefully found most of them.
Heck with it, I googled it and this is the first hit. The first bit seems to say what I intend: http://users.ece.cmu.edu/~koopman/des_s99/sw_testing/
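The defect-density point translates directly into the layers argument below. A toy calculation (every number here is an assumption for illustration, not a measurement of any real product; published defect-density figures vary widely by study):

```python
# Back-of-the-envelope latent-defect estimate. A common textbook range is
# very roughly 1-25 defects per KLOC for shipped code; 15 is an assumed
# mid-range figure, and the code-base sizes below are hypothetical.
DEFECTS_PER_KLOC = 15

firewall_kloc = 100      # hypothetical size of a bare-metal firewall OS
hypervisor_kloc = 300    # hypothetical size of an added hypervisor layer


def latent_defects(kloc: int) -> int:
    """Expected latent defects at the assumed defect density."""
    return kloc * DEFECTS_PER_KLOC


# Stacking a hypervisor under the firewall grows the trusted code base:
print(latent_defects(firewall_kloc))                    # firewall alone: 1500
print(latent_defects(firewall_kloc + hypervisor_kloc))  # with hypervisor: 6000
```

The absolute numbers are meaningless; the point is that at any fixed defect density, more layers of code below the firewall means more latent bugs in the trusted path.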
All that being taken into account, I'm pretty confident that while there are not many known vulnerabilities in the software on my KVM host system, unknown ones almost certainly exist. Just as I'm pretty confident that pfSense has vulnerabilities that nobody knows about yet, and possibly also vulnerabilities that are known to the black hats but not to the maintainers. The same goes for all of this software.
So let's get on to hardware and software layers.
-
It's pretty much a law of nature that the more complicated something is the more likely it is to have problems.
-
Virtualization support in processors improves speed and security, and reduces the complexity of the software needed to host VMs.
-
VT-d on a board means there is hardware support for giving a device to a VM. If present, the host software needs to do only a little bit of work to hand the device over to the appropriate VM.
-
If VT-d is not present, then the host software needs to be more intimately involved in order to pass off that functionality.
-
In any event, firewall software directly on the metal is less complicated than firewall software in a VM. As was previously mentioned here, it's also more expensive.
-
The security implications of an attacker getting onto the VM host are much worse than simply breaking into a guest.
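On a Linux/KVM host with VT-d, that hand-over amounts to binding the NIC to a stub driver (vfio-pci) so the host never drives it. A sketch, with an illustrative PCI address; the vendor/device ID pair 8086:1521 is meant to be an i350 port, but check `lspci -nn` on real hardware rather than trusting this example:

```
# Load the stub driver, detach one NIC port from its host driver,
# and hand it to vfio-pci so a guest can own the device whole.
modprobe vfio-pci
echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
echo 8086 1521 > /sys/bus/pci/drivers/vfio-pci/new_id
```

After this, the host kernel has no network stack on that port; only the guest it is assigned to does, which is the security property being argued for here.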
So getting back to my intent with my network:
-
I want a small, cheap appliance which will be a full-time firewall on dedicated hardware, handling specific firewall rules, including passing VPN traffic down to a VPN endpoint.
-
Directly attached to that will be my C2758 box, which will have KVM on it and will host a full pfSense install.
-
The C2758 does NOT have VT-d support, so there is a software layer on the host system which handles WAN traffic to some degree, which is why I want something simpler upstream.
Tacked onto the end of this: it's my belief that two separate firewalls are more secure than a single firewall. If for some reason an intruder compromises my outer defense, they would still have to deal with another firewall which they will not have the tools to reach from the outer one. This may be a false assumption, but it's at least another brick in the wall.
-
…
@P3R: You come back to this all the time. I don't think virtualization is insecure when it works exactly the way it was designed and documented. I only worry about when it doesn't…
That's what I'm wondering: how could it not work properly? I've been using it for years and never noticed any issues; what type of problems concern you? I would think that anything that bugged out would result in a blue screen/reboot or simply a crash of the network before anything else. Most of these virtualization technologies are deployed at scales far beyond what the hobbyist/prosumer could ever remotely afford, all across the world. If there were problems, we likely wouldn't be the first ones to find them.
Part of what makes vulnerabilities so hard to find is that when software is used as originally intended by the authors, the vulnerability is not evident. If the code is closed-source, black hats can interrogate the software with invalid inputs or unexpected situations not anticipated by the developers and get behavior outside the scope of what a normal user would encounter. If it's open source, they can do their own code review and look for vulnerabilities, but IMO open source is less likely to be vulnerable, simply because more people are watching anything with critical exposure.
One simple example: when a simple web form is put up, a user can inject SQL into a text field and have that SQL execute against the database if proper care was not taken to prevent it. A normal user won't even think of trying something like that, but somebody with bad intent certainly would be interested.
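A minimal sketch of that injection scenario, using Python's built-in sqlite3 (the table and attacker input are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

# What an attacker might type into the web form's text field:
malicious = "x' OR '1'='1"

# Vulnerable: string formatting splices the input into the SQL itself,
# so the injected OR clause rewrites the query to match every row.
vulnerable = "SELECT * FROM users WHERE name = '%s'" % malicious
print(conn.execute(vulnerable).fetchall())          # leaks all rows

# Safe: a parameterised query treats the input as plain data, not SQL.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (malicious,)).fetchall())  # no match, empty list
```

The fix is exactly the "proper care" mentioned above: never build SQL by string concatenation; always bind user input as parameters.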
What's on the list of CVEs doesn't bother me, it's what's not on the list.
-
It's not so much Comcast "saving" me; it's just that no internet traffic is allowed, as I don't pay for multiple IP addresses.
It was you who started this part of the discussion by asking what others saw as potential issues with virtualizing firewalls, and yet you keep referring to very specific things about your situation that you think make you safe from all possible administrative mistakes.
I don't believe that you're totally invulnerable to the consequences of your mistakes, but I'm not smart enough to think of every different misconfiguration in all scenarios, so I can't give you a detailed example of when it could be dangerous.
Let's just say that for most of the rest of the world, human error is one of the risks, and with a virtualized firewall that risk is higher than with dedicated hardware.
…what type of problems concern you?
I've told you several times now, and I'm sorry, but I don't think you will get it any better if I tell you the same things again. I think we have to accept that we don't understand each other.
-
So getting back to my intent with my network:
Great!
I'm sorry for adding to the off-topic part but in the beginning I really thought it could be an interesting addition to the discussion. I was wrong. :-[
The C2758 does NOT have VT-d support…
I gave you my point of view and recommendations for better solutions in my first post to the thread.
…it's my belief that two separate firewalls are more secure than a single firewall.
Yes, that is a valid point, but normally you would then want two firewalls of different origin, to minimize the risk that they share the same vulnerabilities. Even if we like pfSense, you'd lose much of the two-firewalls-in-a-row advantage if both are the same. Maybe a true appliance-type firewall could be better as your first level of defense, then? You'd have plenty to choose from at whatever price level you feel is acceptable.
-
Part of what makes vulnerabilities so hard to find is that when software is used as originally intended by the authors, the vulnerability is not evident. If the code is closed-source, black hats can interrogate the software with invalid inputs or unexpected situations not anticipated by the developers and get behavior outside the scope of what a normal user would encounter. If it's open source, they can do their own code review and look for vulnerabilities, but IMO open source is less likely to be vulnerable, simply because more people are watching anything with critical exposure.
One simple example: when a simple web form is put up, a user can inject SQL into a text field and have that SQL execute against the database if proper care was not taken to prevent it. A normal user won't even think of trying something like that, but somebody with bad intent certainly would be interested.
What's on the list of CVEs doesn't bother me, it's what's not on the list.
I realize and appreciate that you're being informative and attempting to help me understand, but I already know what SQL injection attacks are and how to write stored procedures specifically to avoid them. I also know how VT-x and VT-d work. However, I don't know everything, far from it. That being said, I seem to be outnumbered, so I'll just finish up with a final post or two and let the thread get back to its original topic of your hardware. I do agree that there are chances of introducing bugs and vulnerabilities by using a virtualized platform, though I also feel that those bugs and vulnerabilities are so few and far between, and non-businesses are such a non-target, that the risk increase is absolutely minuscule. With my (admittedly limited) knowledge, I feel that the hypervisors are insulated enough from the network layer of the WAN that any concerning bugs are much more likely to happen in exposed services than in the hypervisor. What I mean is that my Vent or web server is MASSIVELY more likely to be targeted for vulnerabilities than the rather obscure surface area of a Hyper-V virtual adapter that's insulated against the host OS being exposed to the WAN.
Do you guys not have exposed services? I keep mine to an absolute minimum, but most networks have something exposed. (And I VLAN them off to a separate network, before anyone tries to explain that to me.)