AWS Amazon Graviton Support
-
Is there a plan to add support for Amazon's EC2 Graviton (ARM) instances? It seems that FreeBSD would happily run there, according to this guy https://www.daemonology.net/blog/2022-05-23-FreeBSD-Graviton-3.html but it would seem that the pfSense kernel and base system are headed in a different direction.
-
-
There are no plans I'm aware of (yet). I don't think there would be any technical barrier but it would require some development work.
What was your use case for this?

Steve
-
@stephenw10 The graviton EC2 instances are apparently more cost-effective than the amd64 ones. The "use case" would be the same, to run pfSense in AWS? (not sure if that's what you're asking)
-
Ah, so simply the cost/throughput advantage?
-
@stephenw10 Do you have any numbers, even unofficial ones, on how the arm64 builds perform against the amd64 version? It's hard to do a proper cost/throughput analysis without an actual live run, but it would seem that the ARM builds of everything else are at least cheaper to run (the cost side of the analysis). For throughput/performance I guess it would depend on the amount of optimizations and hardware functions available.
Even without the analysis in, there's still an incentive to try arm64 since the "entry level" instances are so different. You could run pfSense on an `m` or even a `t` type of instance, but for a router you would probably need to run a `cXn` instead, or a `cXgn`, which is the reason for this post (numbers as of 2022/08/01 and prices in USD):

- the smallest amd64 would be a `c5n.large`, which has 2vCPU/5.3GB of RAM and costs $0.108/h
- the equivalent Graviton would be a `c6gn.large`, which has 2vCPU/4GB of RAM and costs $0.0864/h
- you also have a smaller Graviton option, the `c6gn.medium`, which has 1vCPU/2GB of RAM and costs $0.0432/h
- for reference, a general purpose `m6a.large` (2vCPU/8GB, AMD based) costs $0.0864/h; an `m5.large` (2vCPU/8GB, Intel based) costs $0.096/h
- a `t3.large` (2vCPU/8GB) costs $0.0832/h, very similar to the `m6a.large`, with the caveat that it is not a dedicated resource (those 2vCPU are "shared"). The positive would be that you can go very small with `t`, down to a `t3.nano` (1vCPU/0.5GB) which costs $0.0052/h, but none of those would be a good fit for a production deployment.

With this in mind, and assuming that the performance would be comparable between arm64 and amd64, there's a potential for some users to save over half of the budget in processing power (which is probably overprovisioned today).

The main reason to go with `cXn` instances is not only the "size" but the bandwidth available, which goes from ~25Gbps for the smaller ones up to 100Gbps for the metal ones. Keep in mind that with a general purpose `m5.large` you get a crazy amount of RAM for a router (8GB) and ~10Gbps of bandwidth.
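As a quick sanity check on the "save over half" claim, here is a minimal back-of-the-envelope sketch using the hourly prices quoted above. The 730 hours/month figure is an assumption (average month length), and the prices are the 2022/08/01 snapshot from this post, not current AWS pricing:

```python
# Illustrative cost comparison using the on-demand prices quoted above
# (USD/h as of 2022/08/01). Not authoritative: AWS prices change over time.
HOURS_PER_MONTH = 730  # assumed average hours in a month

prices_per_hour = {
    "c5n.large (amd64)": 0.108,
    "c6gn.large (arm64)": 0.0864,
    "c6gn.medium (arm64)": 0.0432,
}

# Rough monthly cost for a single always-on instance
monthly = {name: rate * HOURS_PER_MONTH for name, rate in prices_per_hour.items()}
for name, cost in monthly.items():
    print(f"{name}: ${cost:.2f}/month")

# Relative saving of the smallest Graviton vs the smallest amd64 option
saving = 1 - prices_per_hour["c6gn.medium (arm64)"] / prices_per_hour["c5n.large (amd64)"]
print(f"c6gn.medium saves {saving:.0%} vs c5n.large")
```

Dropping from a `c5n.large` to a `c6gn.medium` works out to a 60% reduction, which matches the "over half" estimate, assuming one vCPU and 2GB of RAM are enough for the workload.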
-
I have no numbers for that. As far as I know there have been no arm AWS builds and no plans for any as of now. Let me see if anything is planned internally...