Just a firewall, in hardware.
-
Hi,
I have one of these: http://www.supermicro.com/products/motherboard/Atom/X10/A1SRM-LN7F-2758.cfm
I intend this to be a VM host, upon which will reside pfSense for UTM and security as well as some other things as hardware capabilities permit.
The problem is, I don't like the idea of a VM host being exposed to the outside. Every VM hypervisor I've looked at has had significant security issues in the past, and I don't like the risk. This box has VT-x but not VT-d, so I can't donate the nic directly to the VM with no further involvement from the host.
So what I'd like is a good, solid board or appliance which can easily take on a gigabit Internet connection and handle basic firewall rules and enough logging to let me know when somebody's doing something fishy.
I want:
-
Intel CPU
-
Intel NICs – 2 of them. I wouldn't mind more being present, but I don't intend to use them right now.
-
4 GB RAM, preferably able to max out at 8 GB
-
Use embedded image, log to another box.
-
At least one x8 PCIe 3.0 slot to handle a 10 Gbps NIC, just in case my scenario changes.
My intent is for this appliance to be the external access point, have pfSense on the bare metal, and have a cable from that to my c2758 box. The c2758 box will host a VM containing a full install including VPN and will handle logging for the external appliance too.
I'm not sure if it's possible, but it would be really nice if the c2758 box could get the public IP address on the WAN port. I know my cable modem gives the public address to my existing router, but I'm not sure if pfSense can act like an invisible pass-through device.
Public IPs will be at a premium, meaning I'll probably have exactly one. I've never done a VPN before, but for the moment I'm going to need 6to4 and a VPN on it.
Thanks.
-
-
Does anyone have recommendations?
Thanks.
-
What is stopping you getting a mainstream server board with 2x8 PCI-e slots and putting an i350-t4 in one?
-
I'd like to keep TDP low; otherwise I don't really know what processor or board hardware works best. That's kinda what I'm asking – surely there's something that can handle a gigabit connection but not much more.
-
Any of the T-series processors will peak at 35 W and typically run well below that (e.g. the i5-4570T); undervolting the processor will shave more power off. I have mine running passively with an aftermarket heatsink. Going below that gets tricky for high performance unless you go Atom, and then boards become the limiting factor at the moment.
An i350 four-port NIC uses about 5 W. That is pretty low in the scheme of things as well; onboard may shave 1-2 W off that, but it'd be marginal.
There are the new Broadwell Xeons, which may be of interest…
As for pass-through, when you do that you expose the box behind it to whatever comes in, so you're really not gaining any security. Better off using NAT and forwarding ports if required.
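To illustrate, NAT plus a port forward is only a few lines in classic pf nat/rdr syntax. The interface names and the inner host address here are assumptions, and pfSense generates rules like these from its GUI rather than from a hand-written pf.conf:

```
# Assumed: em0 = WAN, em1 = LAN, inner web server at 192.168.1.10
nat on em0 from em1:network to any -> (em0)
rdr on em0 proto tcp from any to (em0) port 443 -> 192.168.1.10
pass in on em0 proto tcp from any to 192.168.1.10 port 443
```

Everything else stays unreachable from the outside; only the explicitly forwarded port is exposed.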
-
In my opinion this design idea adds cost and complexity (which by itself lowers security as much as exposing virtualized NICs does) and breaks the intended low-power scheme.
I would either…
Get VM-host hardware that meets the requirements to also host pfSense. Pro: low power, low cost, "clean" design.
OR
Get the hardware required for a decent pfSense box and implement a VM host behind it for other things. Pro: best security, "clean" design, and messing with the VM server will never affect internet connectivity or security.
-
Just curious as to what your exact worry is regarding VMs and having non-passed-through NICs for a pfsense setup. I'd imagine there's a ton of people out there who do this, and I myself use hyper-v and find that it works great.
It just seems like a lot of hassle and trouble/money for "possible" security risks that you're not even sure that you'll eliminate. You're going to expose a NIC to external traffic one way or another. There's hardly a popular software solution out there that hasn't had security issues in the past, be it java, flash, windows, linux, apple, or even openssl.
-
Just curious as to what your exact worry is regarding VMs and having non-passed-through NICs for a pfsense setup.
Well, with a VM you do have one or two (depending on the type of hypervisor) additional operating systems controlling the NIC, adding attack vectors below your actual firewall operating system.
Call me old-fashioned, but I won't virtualize my perimeter firewall until I can have dedicated NICs, at least on the WAN interface. I totally agree with the OP's security concerns.
I'd imagine there's a ton of people out there who do this…
A ton of people do it because it's easy, attractive, and saves money, not because it offers the absolute best security.
You're going to expose a NIC to external traffic one way or another.
True, but with dedicated hardware or virtualized NIC pass-through it is controlled only by FreeBSD, an operating system that has been tuned for network and security applications for 20+ years.
-
@P3R:
Just curious as to what your exact worry is regarding VMs and having non-passed-through NICs for a pfsense setup.
Well with a VM you do have one or two (depending on type of hypervisor) additional operating systems controlling the NIC, adding attack vectors below your actual firewall operating system.
Call me old-fashioned but I won't virtualize my perimeter firewall until I can have dedicated NICs, at least on the WAN interface. I totally agree with the OPs security concerns.
With hyper-v you disable the host OS' access to the adapter (it's a simple checkbox) and connect it to a virtual switch. To me it's the same thing as hooking a modem to a switch which is only hooked to the VM. Are you worried there's some underlying exploit in the host OS? If so, what? It has no access to the NIC; it exists, but there's no IP or any form of connection. The only thing I could see being concerned about is if the port had some sort of IPMI or Intel management engine hooked to it, but that would get passed along with the adapter anyway, I'd think.
Just trying to understand the concerns.
-
Yeah - I'd pretty much scrap your big-money all-hardware plan and get one or two boxes with enough processor to handle all your needs and run VMs.
VMs do have some issues, but not the issues you are worried about.
-
With hyper-v you disable the host OS' access to the adapter (it's a simple checkbox) and connect it to a virtual switch.
That sounds absolutely safe as long as the feature works only exactly as documented, won't ever have any bugs, MS will never implement any backdoors (even if threatened by authorities), and you trust the administrator to never ever make the mistake of attaching any other VM to it or untick the checkbox.
Personally I would still feel more comfortable with NICs dedicated (pass-through) to a single VM. At the very least, it reduces the risk for administrative mistakes.
Are you worried there's some underlying exploit in the host OS?
I wouldn't call it worry (at least not constantly), but I'm aware that all the software I use will have bugs that may develop into exploits in the future (any other approach I would consider naive), and I do try to keep up to speed on the latest news around what I use. The software that is directly exposed to the internet (both the firewall itself and the open services) gets a bit more attention in that regard from me.
I respect that a lot of people (many smarter and more educated than me) think the many obvious advantages of mature virtualization products outweigh the added risks, but if someone doesn't even recognize that an additional software layer between the hardware and the firewall OS means added risk, then I probably will not take that person as seriously regarding security.
My firewalls will probably be virtualized in one way or the other some years into the future, but I'm not there yet. I can live with being seen as old-fashioned… ;)
-
Yes you can do this all on hardware and yes you can run pfsense in "transparent mode".
Anything you can do with VMs you can do with hardware. It's just more expensive.
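For reference, a transparent pfSense is essentially a FreeBSD filtering bridge underneath. On plain FreeBSD the equivalent rc.conf sketch would look something like this (the interface names are assumptions):

```
# Assumed: em0 faces the modem, em1 faces the inner box.
cloned_interfaces="bridge0"
ifconfig_em0="up"
ifconfig_em1="up"
ifconfig_bridge0="addm em0 addm em1 up"
```

With the net.link.bridge.pfil_member sysctl enabled, firewall rules apply on the member interfaces, so the box can filter traffic without holding the public IP itself, which is exactly the "invisible pass-through" the OP asked about.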
Go for it. I run all hardware in a few places, but I will probably go to VM's on one machine in all those places eventually.
(I also distrust anything microsoft touches)
-
Hypervisors are generally safe enough to run firewalls if you wish to be protected from random_hacker_x.
If you wish to be safe from government spying, I would suggest you stop using anything that has an Intel/AMD/ARM/… CPU. There's no telling what's in those black boxes.
I'd suggest you create your own CPU and also engineer your own NICs. All major telcos are also (willingly) in cahoots with the government; their equipment is all compromised stuff delivered by Cisco.
It's known that numerous Tor exit nodes are government honeypots, so creating your own private darknet with a fancy and secure protocol would be advisable. :)
-
Some of your sarcasm dripped on me by accident…
However, I think Putin is taking you dead seriously and planning to do exactly as you say.
-
That sounds absolutely safe as long as the feature works only exactly as documented, won't ever have any bugs, MS will never implement any backdoors (even if threatened by authorities), and you trust the administrator to never ever make the mistake of attaching any other VM to it or untick the checkbox.
Personally I would still feel more comfortable with NICs dedicated (pass-through) to a single VM. At the very least, it reduces the risk for administrative mistakes.
For the bugs/backdoor issues, the same could be said for any software involved in networking, be it Tomato, OpenWrt, or your parents' default D-Link router, etc. Maybe I'm just not as paranoid as I should be, but I'd think that if Microsoft wanted to put a backdoor into something, they'd put it in something more ubiquitous than a hyper-v virtual switch. They'd put it in Windows Update and/or IE. Bugs are everywhere, but Microsoft has been pretty adamant about network-layer isolation, as this is something desired in enterprise for various reasons.
Also, I'm the admin, so I'm not overly worried about me unticking a box. Even if I did, Comcast is pretty strict about MAC addresses, to the point of it being painful. Anything without the VM's MAC address would be denied all internet access. Believe me, it took a good amount of time to get things working.
I wouldn't call it worry (at least not constantly), but I'm aware that all the software I use will have bugs that may develop into exploits in the future (any other approach I would consider naive), and I do try to keep up to speed on the latest news around what I use. The software that is directly exposed to the internet (both the firewall itself and the open services) gets a bit more attention in that regard from me.
I respect that a lot of people (many smarter and more educated than me) think the many obvious advantages of mature virtualization products outweigh the added risks, but if someone doesn't even recognize that an additional software layer between the hardware and the firewall OS means added risk, then I probably will not take that person as seriously regarding security.
My firewalls will probably be virtualized in one way or the other some years into the future, but I'm not there yet. I can live with being seen as old-fashioned… ;)
I agree, there are always risks. Even in mature software like OpenSSL, people find big exploits. I just don't think a virtualized environment adds that much risk. With the way that hyper-v works, the only thing I feel is exposed would be the driver, and drivers are going to be exposed no matter what environment you're on.
Something CAN be said for having your firewall separate in terms of downtime. If I have to do some maintenance and shut the server down, I would lose internet unless I had some sort of backup in place. Most people probably don't do a lot of hardware maintenance to appliances, which is their big selling point to me. I have an HTPC that can run my pfsense hyper-v VM if necessary instead.
-
For the bugs/backdoor issues, the same could be said for any software involved in networking, be it tomato, open-wrt, or your parent's default d-link router, etc.
But we're not here to talk about those, are we… ;)
Maybe I'm just not as paranoid as I should be…
It's your network and your security, I don't care. But you did ask why others were not as keen on virtualizing their firewalls and I answered.
Bugs are everywhere…
True. But since this seems to be the only argument presented for why virtualization is secure enough, I don't see us getting any further with this discussion.
Also, I'm the admin…
So am I and I make mistakes.
…so I'm not overly worried about me unticking a box.
You picked out a single thing that I put in there more for fun. What about the other thousand things that you could do wrong, are you convinced Comcast will save you there as well…
Whether we admit it or not, everyone makes mistakes. In my opinion, a network and system design that tries to minimize the possibility of making mistakes, and the consequences of those that are made, is more secure.
With the way that hyper-v works…
You come back to this all the time. I don't think virtualization is unsecure when it works exactly the way it was designed and documented. I only worry about when it doesn't…
The bottom line: the risks with virtualized firewalls are acceptable to some because they love the many other advantages, but fewer software layers are always more secure than more software layers.
Then again, I'm trying to setup my virtualized lab firewall as we speak. ;D If I could only get that damn pass-through of NICs to work...
-
@P3R:
You picked out a single thing that I put in there more for fun. What about the other thousand things that you could do wrong, are you convinced Comcast will save you there as well…
Whether we admit it or not, everyone makes mistakes. In my opinion, a network and system design that tries to minimize the possibility of making mistakes, and the consequences of those that are made, is more secure.
I'm unsure of whether you've dealt with hyper-v (I will admit I have not dealt with xen or esxi very much, aside from knowing that my hardware is about 85% compatible with it and 15% not), but it's a pretty common joke that they spend 2.5 years per checkbox. It's really not in-depth or complicated software, and it's even less flexible (imo) than something such as virtualbox. There's only a handful of mistakes that I could make, and of those they're all covered by the MAC address issues. It's not so much Comcast "saving" me; it's just that there's no internet traffic allowed, as I don't pay for multiple IP addresses. Worst-case scenario is that traffic is routed directly to my host OS for a few hours. That certainly wouldn't be ideal, but the number of exposed ports is minuscule in the event that the OS detects a "public" network. Again, this is absolutely the worst-case scenario; I honestly can't think of how this could even happen unless I made a deliberate change that would result in loss of internet. That would be immediately apparent.
You come back to this all the time. I don't think virtualization is unsecure when it works exactly the way it was designed and documented. I only worry about when it doesn't…
That's what I'm wondering, how could it not work properly? I've been using it for years and never noticed any issues; what type of problems concern you? I would think that anything that bugged out would result in a blue screen/reboot or simply a crash of the network before anything else. Most of these virtualization technologies are deployed in scales far beyond the breadth of what the hobbyist/prosumer could ever remotely afford, all across the world. If there were problems we likely wouldn't be the first ones to find them.
-
Wow.
I didn't think I'd have to explain how CVEs work on this forum. Forgive me for leaving pages of relevant information out of each of these things I'm discussing; it can all be found on the Internet or in a really good book on the subject.
Since it's already been used as an example: a week before Heartbleed (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0160) showed up, everyone would have said "what vulnerability?" The fact that it was in pretty much every still-used version of OpenSSL means the vulnerability was always there, but nobody knew about it.
So let's look at how vulnerabilities work:
-
Programmers write software.
-
People and machines test that software, hopefully before it's released.
-
Sometimes issues are found which either cause instability or security issues or some other unwanted behavior.
-
Sometimes (but not always) those issues are reported back to the development team.
-
Sometimes (but not always) the issues reported back are fixed.
-
Sometimes the issue was found by a bad guy but not by a good guy.
-
In any case, it takes significant time before the issue is fixed, which means from inception to the time it's fixed the software is vulnerable.
-
If it was a bad guy who found the issue it will almost certainly not be reported until the bad guy has used it for his/her nefarious purposes.
-
During the hours/weeks/months/years between release and IMPLEMENTATION of the fix by the maintainer of the specific hardware running the software, the software is vulnerable and may have been exploited by a bad guy.
-
If the issue was discovered by a good guy, it will probably be submitted to a vulnerability database if it has security implications.
-
The folks who handle vulnerability databases work through their often large list of reports to find out which are valid, and evaluate the impact of the issue.
-
If the issue is critical and easy to exploit then they often contact the author before putting it on their public list.
What all that means is that any list you see, any exploit you hear about in the news, was known by a good guy days, weeks or months before you ever hear about it. If the issue was discovered because of an active 'in-the-wild' exploit then it may have been known and used by the bad guys for years before being caught.
In programming, there are accepted code-quality metrics saying that software has a certain number of bugs per thousand lines of code. It's a statistical average, but generally speaking, if your software reports fewer known issues than average you have to wonder what's lurking in there that you don't know about. Programmers strive for a small number of overall issues (fixing known ones), and testers strive for a large number of known issues, meaning they've hopefully found most of them.
Heck with it, I googled it and this is the first hit. The first bit seems to be saying what I intend: http://users.ece.cmu.edu/~koopman/des_s99/sw_testing/
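As a back-of-envelope illustration of that defect-density idea (both numbers below are assumed round figures, not measurements of any particular hypervisor):

```shell
# Illustrative arithmetic only: codebase size and defect rate are assumptions.
kloc=500   # a hypervisor-sized codebase, in thousands of lines of code
rate=15    # latent defects per KLOC, a commonly cited industry ballpark
echo "$((kloc * rate)) latent defects"   # prints "7500 latent defects"
```

Even if only a tiny fraction of those defects are security-relevant, the expected number of undiscovered vulnerabilities is comfortably above zero, which is the whole point.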
All that being taken into account, I'm pretty confident that while there are not many known vulnerabilities to the software on my KVM host system, they almost certainly exist. Just as I'm pretty confident that pfSense has vulnerabilities that nobody knows about yet, and possibly also vulnerabilities that are known to the black hats but not by the maintainers, for all of this.
So let's get on to hardware and software layers.
-
It's pretty much a law of nature that the more complicated something is the more likely it is to have problems.
-
Virtualization support in processors improves speed and security, and reduces the complexity of the software needed to host VMs.
-
VT-d on a board means there is hardware support to give a device to a VM. If present, the host software needs only to do a little bit of work to hand the device over to the appropriate VM.
-
If VT-d is not present then host software needs to be more intimately involved in order to pass off functionality.
-
In any event, firewall software directly on the metal is less complicated than firewall software in a VM. As was previously mentioned here, it's also more expensive.
-
The security implications of an attacker getting onto the VM host are much worse than simply breaking into a guest.
So getting back to my intent with my network:
-
I want a small, cheaper appliance which will be a full-time firewall on hardware that needs to handle specific firewall rules, including passing VPN traffic down to a VPN endpoint.
-
Directly attached to that will be my c2758 box, which will have KVM on it and will host a pfSense full install.
-
The c2758 does NOT have VT-d support, so there is a software layer on the host system which handles WAN traffic to some degree. Which is why I want something simpler upstream.
Tacked onto the end of this, it's my belief that two separate firewalls are more secure than a single firewall. If for some reason an intruder compromises my outer defense they would still have to deal with another firewall which they will not have tools to reach from the outer one. This may be a false assumption but it's at least another brick in the wall.
-
-
…
@P3R: You come back to this all the time. I don't think virtualization is unsecure when it works exactly the way it was designed and documented. I only worry about when it doesn't…
That's what I'm wondering, how could it not work properly? I've been using it for years and never noticed any issues; what type of problems concern you? I would think that anything that bugged out would result in a blue screen/reboot or simply a crash of the network before anything else. Most of these virtualization technologies are deployed in scales far beyond the breadth of what the hobbyist/prosumer could ever remotely afford, all across the world. If there were problems we likely wouldn't be the first ones to find them.
Part of what makes vulnerabilities so hard to find is that when software is used as originally intended by the authors the vulnerability is not evident. Black hats, if the code is closed-source, can interrogate software with invalid inputs or unexpected situations not anticipated by the developers and get behavior which is outside the scope a normal user would encounter. If it's open source, they can do their own code review and look for vulnerabilities, but IMO open source is less likely to be vulnerable simply because more people are watching anything with critical exposure.
One simple example is when a simple web form is put up, a user can inject sql into a text field and have that sql execute against the database if proper care was not taken to prevent it. A normal user won't even think of trying something like that, but somebody with bad intent certainly would be interested.
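To make that concrete, here is a hypothetical sketch of why naive string concatenation is the problem (the table and field names are made up):

```shell
# A careless web form splices user input straight into the SQL text.
user_input="x' OR '1'='1"
query="SELECT * FROM users WHERE name = '${user_input}'"
echo "$query"
# prints: SELECT * FROM users WHERE name = 'x' OR '1'='1'
# The WHERE clause is now true for every row. Parameterized queries avoid
# this because user input never becomes part of the SQL text itself.
```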
What's on the list of CVEs doesn't bother me, it's what's not on the list.
-
It's not so much Comcast "saving" me, it's just that there's no internet traffic allowed as I don't pay for multiple IP addresses.
It was you that started this part of the discussion by asking what others saw as potential issues with virtualizing firewalls, and yet you keep referring to very specific things about your situation that you think make you safe from all possible administrative mistakes.
I don't believe that you're totally invulnerable to the consequences of your mistakes, but I'm not smart enough to think about every different misconfiguration in all scenarios, so I can't give you a detailed example of when it could be dangerous.
Let's just say that for most of the rest of the world, human error is one of the risks, and with a virtualized firewall that risk is higher than with dedicated hardware.
…what type of problems concern you?
I've told you several times now and I'm sorry but I don't think you will get it any better if I tell you the same things again. I think we have to accept that we don't understand each other.
-
So getting back to my intent with my network:
Great!
I'm sorry for adding to the off-topic part but in the beginning I really thought it could be an interesting addition to the discussion. I was wrong. :-[
The c2758 does NOT have VT-d support…
I gave you my point of view and recommendations for better solutions in my first post to the thread.
…it's my belief that two separate firewalls are more secure than a single firewall.
Yes, that is a valid point, but normally you would then want two firewalls of different origin to minimize the risk that they share the same vulnerabilities. Even if we like pfSense, you'd lose much of the two-firewalls-in-a-row advantage if both are the same. Maybe a true appliance-type firewall could be better as your first level of defense then? You'd have plenty to choose from at whatever price level you feel is acceptable.
-
Part of what makes vulnerabilities so hard to find is that when software is used as originally intended by the authors the vulnerability is not evident. Black hats, if the code is closed-source, can interrogate software with invalid inputs or unexpected situations not anticipated by the developers and get behavior which is outside the scope a normal user would encounter. If it's open source, they can do their own code review and look for vulnerabilities, but IMO open source is less likely to be vulnerable simply because more people are watching anything with critical exposure.
One simple example is when a simple web form is put up, a user can inject sql into a text field and have that sql execute against the database if proper care was not taken to prevent it. A normal user won't even think of trying something like that, but somebody with bad intent certainly would be interested.
What's on the list of CVEs doesn't bother me, it's what's not on the list.
I realize and appreciate that you're being informative and attempting to help me understand, but I already know what SQL injection attacks are and how to write stored procedures specifically to avoid them. I also know how VT-x and VT-d work. However, I don't know everything, far from it. That being said, I seem to be outnumbered, so I'll just finish up with a final post or two and let the thread get back to its original topic of your hardware. I do agree that there are chances to introduce bugs and vulnerabilities by using a virtualized platform, though I also feel that the bugs and vulnerabilities are so few and far between, and non-businesses are such a non-target, that the risk increase is absolutely minuscule. With my (admittedly limited) knowledge, I feel that the hypervisors are insulated enough from the network layer of the WAN that any bugs that should be concerning are much more likely to happen with exposed services than with the hypervisor. What I mean is that my vent or web server is MASSIVELY more likely to be targeted for vulnerabilities than the rather obscure surface area of a hyper-v virtual adapter that's insulated against the host OS being exposed to the WAN.
Do you guys not have exposed services? I keep mine to an absolute minimum, but most networks have something exposed. I VLAN them off to a separate network before anyone tries to explain that to me.
-
It was you that started this part of the discussion by asking what others saw as potential issues with virtualizing firewalls, and yet you keep referring to very specific things about your situation that you think make you safe from all possible administrative mistakes.
I don't believe that you're totally invulnerable to the consequences of your mistakes, but I'm not smart enough to think about every different misconfiguration in all scenarios, so I can't give you a detailed example of when it could be dangerous.
This is true, I pointed out specifics for my circumstances. There is always a chance of operator error with anything, and I will concede that adding a layer such as virtualization adds to those chances, however minimally. I will contend that this type of situation is a set-it-and-forget-it situation for most people, and that configuration issues can occur anywhere and everywhere.
I've told you several times now and I'm sorry but I don't think you will get it any better if I tell you the same things again. I think we have to accept that we don't understand each other.
I've asked for some specific examples of what you feel might go wrong, something like "the hardware abstraction layer might break" or something, but you've just said "bugs," "user error," or "possible vulnerabilities." My point is that there are potential vulnerabilities in everything, but if you're THAT afraid of bugs/vulnerabilities you're always going to have reasons to not use something. The very essence of hyper-v is the restriction of virtual machines and networks, it's inherent in the very function of how vswitches work. At this point you can point to openssl and heartbleed and say that things designed for protection can fail too, to which I would say that, in the event that this does become an issue, you will be so far down the totem pole in terms of people to attack you wouldn't be able to be seen imo. 99% of the stuff that hits my firewall is attempts to connect to an unsecure/barely secured SSH server or an exposed SQL database using the SA user, of which I'm fairly certain all of it is automated by a script just scouring the internet. I worked at an MSP that had its own data center for 6 years while I was in high school and college. They had exactly one instance of someone getting hacked, and when we looked into it, it was because their webserver administrator password was "password." That was far less destructive than the genius they hired who tried to format his ipod on a client's server, and ended up wiping the entire server, but that's a different story.
Yes, that is a valid point, but normally you would then want two firewalls of different origin to minimize the risk that they share the same vulnerabilities. Even if we like pfSense, you'd lose much of the two-firewalls-in-a-row advantage if both are the same. Maybe a true appliance-type firewall could be better as your first level of defense then? You'd have plenty to choose from at whatever price level you feel is acceptable.
If you're going to get to this point, you might as well make sure that the NICs are different brands, just in case there's some sort of firmware issue that could be exploited. You should also make sure you use different ECC RAM modules for them in case they're susceptible to bit-flipping attacks. Also different processor brands, in case a feature on one of them could be exploited.
-
@P3R,
I'm still looking at options for hardware, and using some of your recommendations as starting points.
My outer firewall will probably be Linux. It's something I know, and I can compile a kernel with almost all the features removed and throw together an insanely small flash-card image with nothing unnecessary on it. I can script the install to a flash card, and software updates would be swapping out the flash card.
This way there won't even be any support for features I don't want, and no way to instigate a connection from that box inward.
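As a sketch of what that minimal image's ruleset could look like (eth0/eth1, the inner host 10.0.0.2, and the VPN and syslog ports are all assumptions for illustration):

```shell
# Generate an iptables-restore file that drops everything by default and
# never originates traffic from this box except remote syslog.
cat > /tmp/outer-rules.v4 <<'EOF'
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
-A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
-A FORWARD -i eth0 -o eth1 -p udp --dport 1194 -d 10.0.0.2 -j ACCEPT
-A OUTPUT -o eth1 -p udp --dport 514 -d 10.0.0.2 -j ACCEPT
COMMIT
EOF
grep -c '^-A FORWARD' /tmp/outer-rules.v4   # prints 2
```

On the real box this file would be loaded at boot with iptables-restore; the OUTPUT policy of DROP (with a single exception for shipping logs inward) is what enforces "no connections instigated from that box," and the one FORWARD rule passes the assumed VPN traffic through to the inner pfSense box.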
I posted all that because it seems you don't understand it, or maybe just the nature of vulnerabilities: you don't anticipate where a future bug or vulnerability might be. If you knew where to look, someone could just pay more attention to that bit of code and everything would be cool.
Anything which is designed by a human can and probably does have faults. Our minds are not all-encompassing, and if a scenario comes up we hadn't thought of, then it's extremely possible that our hardware or software is lacking.
I really don't think the "small target" theory holds much water. If nothing else, attackers are interested in a new place to attack from, even if my computers don't hold any data they're interested in. And you can't anticipate the reasoning for what a black hat does either. The NSA is looking for intelligence it can use for fulfillment of its mission, and probably people who work there have a few of their own personal interests as well, who knows? Some guys want your money, some guys want your identity, some guys want to break anything they can touch, and some guys are after information of a different sort.
The universe we don't know is much bigger than the one we know. I intend to do what I can about the things I know, and account for as much that I don't know as I'm able.
-
Since you already have the knowledge, the Linux approach is perfect. If you search this forum you will find many suggestions of non-expensive hardware that should be adequate even for high speeds if necessary, considering the fairly simple task your outer firewall will have.
-
OP, it sounds like you know what you want, so I apologize for derailing this topic with conjecture. I'm curious as to what you plan to run on this appliance; in your first post you mention you want logging in case there's "something fishy" going on. Does that include packet filtering on the gigabit internet connection? It sounds like you'll need something at least ITX-sized if you plan on being prepared for 10Gbps. Do you have any size or noise preferences?
I'm not good at picking out appliance-like devices, but this may help other people offer suggestions.
-
I have a very specific idea of what I want to do, but I'm terrible at picking hardware. Which is why I started this thread.
The 10gbps nic isn't actually planned, but it would be nice to have something that could handle it. Thinking about it now, I should remove that as a requirement because I think it automatically bumps me up in price beyond any reasonable over-sizing of a gigabit connection.
I would prefer ecc-capable hardware though, which I don't think I added to my original spec.
Logging: My intent is a read-only boot for the outside firewall. Perhaps even dhcp-boot, and maintain the image on a server inside. Logging would necessarily be abbreviated so it can fit inside of a gigabit line. I guess that could technically be a reason for a third nic.
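For the log-to-another-box part, a minimal forwarding fragment would be enough, assuming rsyslog on the outer firewall; the address 192.168.1.10 is a placeholder for the inner log host:

```
# /etc/rsyslog.conf fragment (hypothetical address)
# forward kernel/firewall messages to the inner box; keep nothing locally
kern.*   @@192.168.1.10:514    # @@ = TCP, single @ would be UDP
*.info   @@192.168.1.10:514
```

With abbreviated firewall logging this easily fits beside normal traffic, though a third NIC would let it ride a dedicated management link.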
-
(quote) "Logging: My intent is a read-only boot for the outside firewall. Perhaps even dhcp-boot, and maintain the image on a server inside."
Read-only booting can be done using a CD-R, or a USB stick with a hardware write-protect switch. Configuration needs to live somewhere though, maybe on a separate writable storage device? Network booting or server-served images won't help you security-wise if the server itself is compromised: against local hardware-access attacks, network booting only helps if your image server cannot be compromised the same way the router can be.
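One way to lay that out, as a sketch: read-only root on the write-protected stick, a small separate writable partition for configuration, and logs in RAM so they only ever leave via the network. The device names and mount points below are illustrative, not a tested layout:

```
# /etc/fstab fragment (hypothetical devices and paths)
/dev/sda1  /           ext4   ro,noatime             0 1   # write-protected root
/dev/sda2  /etc/config ext4   rw,noatime,sync        0 2   # small writable config area
tmpfs      /var/log    tmpfs  size=64m,mode=0755     0 0   # logs in RAM, forwarded out
```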
Regarding hardware: would an X10SLV-Q and a Xeon E3 v3 (use a Dynatron K199 cooler) not fit the bill? There is no iKVM on that board, but other than that it gets pretty much everything done and can be mounted in a 1U case (be it an actual rack case or a wall-mounted or desktop case). It's about 500 euros to set up here; not sure what the prices would be at your location.
-
Is there a price range/region for what you're looking for? Maybe power considerations? I know you have an Atom board for your virtualization box, which leads me to think you'd want something similar in power draw. If you want something relatively solid but lightweight processing-wise, you could use a TS-140, which sells for cheap on Amazon. They come with the lowest-end i3 there is (4130) and 4GB of ECC 1600MHz memory, which fits your original specs. It also has 3x PCI Express expansion slots, 2 of which could easily support a modern NIC. I will say that the BIOS is a bit "meh" on it though. With an old gigabit CT card I had laying around it booted fine and fast. With the Intel 82571GB dual-port card I grabbed off of eBay it would hang at the BIOS screen for a minute and a half before reluctantly booting. It did work fine once booted; I just think it has some sort of code in the BIOS that looks for vendor information. It also could have been due to UEFI booting, but I didn't test non-UEFI. Mine idles at 30 watts from the wall with an SSD and an AMD 270x video card in it. I re-purposed it as an HTPC with a backup copy of my pfSense. They also have lots of USB3 ports, so you could get a USB3 drive with a write-protect switch and use that.
If you were looking for something more appliance like, the netgate store looks like it has some really neat light-weight/powered appliances that could suit your purpose. They have devices that run off of the SD cards like what you originally mentioned. They'd use less power, take up less space, and also in general be more appliance-y.
-
For price point, I guess I would be looking for "cheaper," meaning cheaper than some sort of QuickAssist-enabled hardware. Netgate has an RCC-VE-2440 appliance for $350 USD; I guess if the alternative were more than $250 I would have to think hard before going with the 2440. It's non-ECC, which is a drag, but it's already set up, which kinda makes up for it.
Size, I was thinking about 1u or an appliance. Power, I would hope for something less than my c2758 draws.
I guess a reiteration of specs based on what I think now:
-
Cheaper than QuickAssist hardware
-
1u or possibly desktop, 1u preferred
-
Probably going to be Linux
-
Prefer Intel
-
2x Intel gigabit nics
-
Ability to boot from dhcp or usb/msata
-
Start with 4g ecc, would like ability for more
-
PCIe-v3x8 would be nice, but not required.
-
Capable of easily handling gigabit routing and firewall duties.
-
Heavy lifting passed through to pfSense VM image (snort, VPN, etc)
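The "gigabit routing and firewall duties" item could be about as small as this, as a rough nftables sketch; the interface names `wan0`/`lan0` are placeholders, and the rules only illustrate the forward-only, log-drops idea rather than a complete policy:

```
# /etc/nftables.conf sketch (hypothetical interface names)
table inet filter {
  chain forward {
    type filter hook forward priority 0; policy drop;
    iifname "lan0" oifname "wan0" accept                              # inside -> out
    iifname "wan0" oifname "lan0" ct state established,related accept # replies only
    log prefix "fw-drop: " drop                                       # abbreviated logging
  }
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iifname "lo" accept
  }
}
```

Stateful forwarding at gigabit rates is well within reach of modest modern hardware; the heavy lifting stays on the inner box as listed above.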
-
-
If (quote) "all the heavy lifting is going to be done by the pfSense VM," my advice would be to look at secondhand Haswell-grade processors (i3, et al).
That said, the aforementioned Amazon buy looks very cheap/worthwhile for this task.
I would go for an i3 over an Atom because it has AVX2, and that allows some serious speed with DPDK…
-
Hi folks,
there are often two camps when someone talks about running pfSense in VMs: one camp loves it, and the other hates it and doesn't want to run it in production networks.
@kroberts
Did you perhaps think about installing OpenBSD and letting pfSense run in a jail? Could be a solution, as I see it.
(quote) "Intel CPU"
An Intel Xeon E3 or Xeon E5, or the new D-1500, would be great, but first it would help us to know more so we can get closer to the point and advise you. What exactly should this pfSense appliance do? Tasks? Users? Throughput?
(quote) "Intel nics – 2 of them. I wouldn't mind more being present but don't intend to use them right now."
Tyan S5530
ASRock D-1500 platform
Supermicro D-1500 platform
(quote) "4g RAM, preferably can max at 8"
Using ECC RAM can be good because the VPN keys are generated in RAM.
Alix APU 1C4 – little dog
Soekris net6801 (Q4/2015) – small bear
Lanner FW-8895 – great beast
(quote) "Use embedded image, log to another box."
In some cases that is good for security, but then you could, as recommended, install pfSense on one "normal" box and run the Squid, Snort, logging and AV tasks on another one.
(quote) "At least one 8-lane pcie-v3 slot to handle a 10gbps nic just in case my scenario changes."
HotLava Systems multiport NICs: high port density and plenty of capacity. Using original Intel chipsets can save money and PCIe slots, as I see it.
(quote) "Cheaper than QuickAssist hardware"
At this point I want us both to think about what you really want and/or what you really need! The word "cheap" combined with 10 GBit/s is clearly wishful thinking on your part: 10 GBit/s is not cheap and will not be cheap. On the back side of the pfSense box, I mean the connection to a DMZ or LAN switch, it might work, but 10 GBit/s on the front side, the WAN side, is a different matter, and neither is cheap! pfSense is still open source, but that doesn't mean it can handle everything on 35 € hardware.
(quote) "1u or possibly desktop, 1u preferred"
(quote) "Probably going to be Linux"
As a Squid proxy with AV, SquidGuard, Snort and logging, Linux would also be great, perhaps ClearOS- or CentOS-based. But that is not related to the pfSense hardware you are asking about here.
How urgent is VPN encryption in your scenario?
For how many people do you have to size this box?
What kind of traffic, and how much, is running through this box?
Would a smaller box for pfSense, and a larger one behind it as a Squid, Snort, AV and logging proxy, be better for you?
-