pfSense not enabling port
-
Da pictures... The planning, Phase 1 is completed.
The aggregation switch is on its way, so that will be Phase 2 cable-wise by next weekend. Planning on getting another Topton on Black Friday. Will also then get the dual-port 10GbE X520 fiber card for the TrueNAS.
NetworkPhases 2.1-Day 1.jpg
NetworkPhases 2.1-Day 2.jpg
NetworkPhases 2.1-Cabling 1.0.jpg
-
@georgelza said in pfSense not enabling port:
press enough buttons and things get working, hehehehe
;)
now to unpack a bit what is what and then update diagrams for my own understanding...then onto next phase...
G
Glad to see that you finally got it to work... it's a great feeling!
And now you know which cables and modules to use going forward. And as you already found out, there are likely others that will work as well. I suppose you were just unlucky with the ones you had... And who knows, there might be a solution with the Topton NIC to make it behave like those Dell modules as well. I guess the thing is that since there is firmware in those modules, DACs as well, and even though the protocols are "standardized", it's still a matter of interpretation sometimes... And some vendors, like Intel, choose to "blacklist" unsupported devices inside some of their cards... Hence the command you tried earlier to "allow_unsupported device".
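For reference, the usual knobs for that look something like this; I'm going from memory here, so double-check the exact names for your driver version:
# Linux / Proxmox (ixgbe driver): allow third-party SFP+ modules
echo "options ixgbe allow_unsupported_sfp=1" > /etc/modprobe.d/ixgbe.conf
modprobe -r ixgbe && modprobe ixgbe   # or simply reboot

# pfSense / FreeBSD (ix driver): add a loader tunable and reboot
echo 'hw.ix.unsupported_sfp="1"' >> /boot/loader.conf.local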
One thought about your issue with the Docker application not "finding" VLAN 40...
You can of course add a specific VLAN tag to each interface you assign to a VM. And you don't have to set the bridge or anything to be "VLAN aware". But in this case Proxmox will untag all traffic towards the VM and tag it going out of the bridge towards the switch...
And since Docker can only "see" what the host VM is receiving (untagged traffic), neither Docker nor the containers will be able to attach to any specific VLAN.
To change the bridge interface (vmbr0, vmbr40) into a trunk interface, simply make it VLAN aware from the UI.
Then it will pass any and all VLANs from 2-4094, or something like that. And then on the VM OS, you have to handle the VLANs. Which I have never tried myself, but basically in Docker you will have to create an interface which you can then assign to the container. Something looking like this I suppose...
# create a macvlan network on the VLAN 40 sub-interface inside the VM
docker network create -d macvlan \
  --subnet=192.168.40.0/24 \
  --gateway=192.168.40.1 \
  -o parent=eth0.40 \
  vlan40_net

# then attach the container to it
docker run -d --name mycontainername --network vlan40_net some_image
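And just for completeness, making the bridge VLAN aware from the UI ends up as something like this in /etc/network/interfaces on the Proxmox host (the port name and address here are only placeholders for your setup):
auto vmbr0
iface vmbr0 inet static
    address 172.16.10.51/24   # management IP of the node (placeholder)
    gateway 172.16.10.1
    bridge-ports enp1s0       # physical NIC behind the bridge (placeholder)
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes     # turns the bridge into a trunk
    bridge-vids 2-4094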
-
@georgelza said in pfSense not enabling port:
Planning on getting another Topton on Black Friday. Will also then get the dual-port 10GbE X520 fiber card for the TrueNAS.
Remember the issues you had here... so when looking for X520 cards, do some googling and perhaps you can make use of your Dell links... When I got my X520s, I had read about how Intel is picky with which modules they accept. They actually whitelist (in the firmware of the card) which modules they accept, and anything that doesn't match is an "unsupported device". So I went for a Fujitsu-made Intel card and have had no trouble with it. There are plenty out there, and I hear Mellanox cards are great as well.
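If you want to see what a card actually reports for a given module before buying more of them, reading the SFP EEPROM is a quick sanity check (interface names below are just examples):
# Linux: dump the module EEPROM (vendor name, part number, etc.)
ethtool -m enp3s0

# FreeBSD / pfSense: verbose ifconfig shows the plugged module details
ifconfig -v ix0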
-
@Gblenn said in pfSense not enabling port:
One thought about your issue with the Docker application not "finding" VLAN 40...
This was not Docker, this was purely an Ubuntu VM on Proxmox. Trying to remember now if it automatically picked up an IP via DHCP in the end...
I did end up rebuilding the VM, thinking that maybe when I built it, the network published internally via Proxmox was not 100% and might have impacted the VM, so once I knew the networks were now more "right" I redid the VM... Lessons learned with how I want to manage the IP ranges etc. as I build, break down and rebuild...
PS: I ALSO did end up making vlan40 just lan40... as it's the only network sitting on that port on the pfSense, and it will have a dedicated port on the Proxmox hosts.
@Gblenn said in pfSense not enabling port:
Remember the issues you had here... so when looking for X520 cards, do some googling and perhaps you can make use of your Dell links... When I got my X520s, I had read about how Intel is picky with which modules they accept. They actually whitelist (in the firmware of the card) which modules they accept, and anything that doesn't match is an "unsupported device". So I went for a Fujitsu-made Intel card and have had no trouble with it. There are plenty out there, and I hear Mellanox cards are great as well.
Will need to find exactly what I'm looking for before the day, and then hope it's there on the day to add to the basket at a Black Friday special price.
Luckily the Dell/EMC transceivers work happily in the Unifi switches, so that's at least half of it sorted. Looking at the prices... I now have a known workable solution locally available... so might do the transceivers locally also.
Hehehe, this thread turned into a network/Proxmox/pfSense rabbit hole... Hope it ends up being useful for others... as I've noticed a lot of people now deploy their pfSense as a guest on Proxmox... so maybe this will help them.
I'm seriously impressed with these little Topton devices; they pack a mean amount of power in a nicely packaged solution.
With the i5-1335 and i5-1240 being two very good Proxmox CPU options, and I think the U300E is awesome as a physical pfSense host, it's got some serious single-thread capability. Now to figure out how I want to do the storage... I know I know too little, and there's probably a lot I could do better...
G
-
@georgelza said in pfSense not enabling port:
PS: I ALSO did end up making vlan40 just lan40... as it's the only network sitting on that port on the pfSense, and it will have a dedicated port on the Proxmox hosts.
Re the above, this might flip back to a VLAN... I will have to see if I patch the aggregation switch into the ix0 port on the pfSense or hang it off the Pro Max. I like the idea of the Pro Max, as I then better expose this in my Unifi Manager. Most of the traffic across it will stay on it... and the little that might come from another network will be small volume, so having to go up to the pfSense (as it will traverse a network and thus a FW rule) and back down, I don't think that's a major issue.
For those that have been following this thread: we're basically back at the phase 3a vs 3b discussion, which becomes more executable now that we have known working components.
G
-
Ha ha ha... and there I thought I was finished with questions...
Guess this is more Proxmox, but it might be of value to people
(sorry, the Proxmox forum seems to be dead and there are very few sections, nothing where I can see I can even ask this). So, with let's say limited 10GbE ports on the aggregation switch, and also not wanting to just eat up ports on my Pro Max, I do have to run some of the traffic over the same links. For Proxmox I see the following:
Main management interface
Main client connectivity
Proxmox cluster interconnect
Storage
Mapped to my network:
Main management => lan / 172.16.10.0/24
Main client connectivity => lan
Proxmox cluster => vlan30
Storage => vlan40
Currently have lan and vlan30 on the 2.5GbE copper and then vlan40 on the 10GbE fiber.
The other side of the storage is the TrueNAS, which will have 2 x 10GbE fiber connections.
Comment: an alternative could be to move vlan30 onto the fiber as well.
At the moment, the Proxmox install just ate up the entire 1TB NVMe card...
Thinking a shared Ceph volume might be a good idea to host the VM images and the ISOs on... so thinking of redoing the pmox1 install, giving the Proxmox hypervisor say 200GB and then making the balance available to Ceph.
For now it won't mean much, but when I start adding the other nodes it will become "valuable".
G
-
@georgelza What is your thinking re Ceph and splitting the drive for Proxmox?
Perhaps you could use that for backups of VMs and stuff, but I'm not so sure that is a good idea? You want to keep those disks alive for a long time, so I'd try to keep the disk activity low.
Perhaps it's a better idea to install a Proxmox Backup Server as a VM on one of the Proxmox machines, and then assign some disk space on your TrueNAS Scale to it for VM backups. You can always store a few backups directly on the Proxmox machines as well, for a somewhat quicker restore.
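Once PBS (or any backup storage) has been added under Datacenter > Storage, a manual backup is a one-liner; something like this, with the storage ID and VM ID being placeholders:
# snapshot-mode backup of VM 101 to a storage named pbs-truenas
vzdump 101 --storage pbs-truenas --mode snapshot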
Wrt the port mappings, I wouldn't necessarily limit any port on Proxmox to a single VLAN. I mean, you can have all this on a single port if you want:
Main management => lan / 172.16.10.0/24
Main client connectivity => lan
Proxmox cluster => vlan30
Storage => vlan40
I have two Proxmox machines with 2.5G ports and one with a 1G port on the mobo, and they all have 10G SFP+ added. I assign ports and VLANs to VMs freely, depending on need. So anything that may require high throughput is assigned one or more 10G ports. And if I want it to be in a particular VLAN, I set that in Proxmox (so it is as if it was hanging off an untagged port on a switch).
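Setting that per VM is just the VLAN Tag field on the VM's network device in the UI, or from the shell something like this (the VM ID is a placeholder):
# put VM 101's first NIC on vmbr0 and let Proxmox tag/untag VLAN 40 for it
# (note: redefining net0 without a MAC address lets Proxmox generate a new one)
qm set 101 --net0 virtio,bridge=vmbr0,tag=40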
I also have some VLANs that only exist on my switches (no interface or DHCP on pfSense). They are used to create switches within the switches, so to say. One thing I use this for is to "tunnel" the LTE router LAN port from our top floor to my failover WAN on pfSense. That is assigned to the 2.5G port on Proxmox, with the same port being used for a couple of other VMs as well... In this case I have two SFP+ ports passed through (IOMMU) to pfSense, so they can't be used for any other VMs.
I mean, the logical connections, or assignments rather, don't have to define or limit the way you physically connect things. And I'm sort of liking 3A more in that regard, as it is cleaner and simpler on the pfSense side, and the trunk between your two switches is 10G...
And I'm wondering about the dual connection to the NAS? What disks are you using? Would you really expect throughput to reach even 10G?
I'm wondering if this discussion should continue in the Virtualization section?
Where perhaps we could also discuss the option of virtualizing pfSense, and the benefits and possible drawbacks of that...
I mean, the i5-1240P seems pretty much on par with my i5-11400, which is simply killing it in my setup on Proxmox... Even the 10400 that I also have gives me similar performance on a 10G fiber connection: 8+ Gbps speedtest with Suricata running in legacy mode.
-
Hi hi.
The idea of splitting the 1TB NVMe into 200/800 is 200GB for the Proxmox hypervisor and then 800GB into a Ceph shared volume that will be expanded across the nodes, with direct OS access for fast IO.
For backups I can push that to my TrueNAS, 100TB should be enough...
Mind you, not a bad idea to also use that for the ISO images as a shared source instead.
As for port mapping, I was actually thinking about just running everything as VLANs over the 10GbE fiber. But with this install and the pfSense work I realised that having two independent ways to get onto a device is not a bad idea, allowing me to reconfigure/change one without disconnecting myself.
I was also liking 3A, but then there is also the adage of don't hang switches off switches. And the traffic on the aggregation switch will 98% be just between the machines on it... but then that also means I could just hang it off the 10GbE SFP+ of the Pro Max. Guess I will play with this once the aggregation switch gets here and see.
As for the dual connection to the NAS, fair comment... these are just 7200rpm spinning rust... probably just doing it because I can... and I won't have a need for that extra port anytime soon anyhow.
I'm definitely not keen on a virtual pfSense; I like the standalone nature of the physical build, separate from the rest of my environment.
My pfSense is built on the U300E processor.
G
-
@georgelza Well, as long as you keep space for the VMs, which will be dependent on the disk size per VM. ISOs typically don't use up that much space... I mean, there are only so many ISOs you would keep... so I don't think I would spend too much energy on setting that up...
I get the thing about the dual connection on the NAS, and I would probably do the same thing, "because you can"... Happens all the time, but I guess I'm thinking about freeing up switch ports. Which, on the other hand, you don't need I suppose... But in reality you have 3 ports connected to the NAS, counting the 2.5G, which seems like a bit of a waste? But then again, because you can...
Aha, so a different CPU, with fewer cores and performance threads. BTW, how did you set this up with pfSense, given that there is a difference between the cores? Can it manage the internal processes across the different types of cores?
-
Hi there
Yes, ISOs don't use much space, but I also don't want multiple copies on all the units; I'd rather have one store where they are shared.
As for the VMs, the idea is to store their "VM images" on the Ceph storage, which is shared across the cluster, which means a VM can be started on any node. With the current setup the VM image needs to be copied over to the target node.
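Roughly the steps as I understand them from the docs, once pmox1 is reinstalled with space left over (the installer's advanced hdsize option is what limits how much it takes); the names, network and partition below are just what I'd use, so treat this as a sketch:
pveceph install                         # install the Ceph packages on the node
pveceph init --network 172.16.40.0/24   # run Ceph traffic over the storage network
pveceph mon create                      # first monitor
# pveceph osd create wants a whole unused disk; for the leftover space on the
# boot NVMe I'd have to drop to ceph-volume instead (partition name is a guess):
ceph-volume lvm create --data /dev/nvme0n1p4
pveceph pool create vm-pool --add_storages   # RBD pool plus a matching Proxmox storage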
The pfSense is running on a dedicated Topton server,
Proxmox is running on a different server.
The pfSense was a new install, then a backup/restore of my config from the original Celeron box, and a lot of trying to get things working.
Sort of how this thread started ;)
As for how pfSense uses the cores, that we would need to ask them; my previous server had 4 low-end Celeron cores. This thing has some very, very beefy new cores in comparison.
G
-
Guessing/hoping someone with more Proxmox experience than me can comment on using a Ceph volume as the VM image storage. The other option is to place the VM images on my NAS, but I think that even with the 10GbE that would not be desirable.
Need to get a good answer on this before I pull the trigger on rebuilding pmox1.
G
-
@georgelza I see your point with the Ceph storage, which sounds like a good idea!
I suppose the question is if Proxmox will safeguard against starting the same VM on two machines, which would create a conflict of course.
I want to think it does, and the one way to be sure is when you have tested it...
Using the NAS for storage instead will probably make things a bit slow compared to the built-in NVMes. Then again, I don't suppose you will be starting and stopping VMs all the time? Except in the beginning when you are setting things up.
-
Yes... the VM is started via the Datacenter view, and that won't allow you to start it twice. You would need to clone it and give it a new name and IP.
I'd prefer to have the VM images on a local mirror via Ceph; that gives me speed, and Ceph will make sure there is a copy on another node.
Would like someone else to chime in here... confirm this works with Proxmox. I know other hypervisors allow this.
G
-
have a look:
https://www.youtube.com/watch?v=a7OMi3bw0pQ
The guy talks about storage.
Guess I need to think: do I use ZFS or Ceph?
G
-
... discovery...
If you've been following this thread, we did not assign a 172.16.40.0/24 address to the physical port of the Topton, the thinking being that it's primarily a passthrough for vlan40 onto the hosted guest VMs...
Forgetting that the pmox# node itself will be mounting an NFS volume to be shared... thus it will itself need a source IP on that port (vmbr40), otherwise it does not know who it is...
This has been the root cause of my NFS mount problems the last 3-4 days. The second I assigned 172.16.40.51 to the first node, NFS stabilised and was working immediately on click... on the node.
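For anyone hitting the same thing, the fix boils down to giving the bridge itself an address in /etc/network/interfaces, roughly like this (the physical port name, NFS server IP and export path are placeholders):
auto vmbr40
iface vmbr40 inet static
    address 172.16.40.51/24   # the node's own IP on the storage network
    bridge-ports enp2s0       # the 10GbE SFP+ port (placeholder name)
    bridge-stp off
    bridge-fd 0

# after which the NFS storage mounts cleanly, e.g.:
pvesm add nfs truenas-nfs --server 172.16.40.10 --export /mnt/tank/proxmox --content images,iso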
G
-
ask...
Other than the official Proxmox forum, which does not seem to have much activity, is anyone aware of an active/responsive Proxmox community?
Otherwise wondering if we can get the admins here to create a Proxmox section ;)
G
-
@georgelza said in pfSense not enabling port:
Yes... the VM is started via the Datacenter view, and that won't allow you to start it twice. You would need to clone it and give it a new name and IP.
I'd prefer to have the VM images on a local mirror via Ceph; that gives me speed, and Ceph will make sure there is a copy on another node.
Would like someone else to chime in here... confirm this works with Proxmox. I know other hypervisors allow this.
G
Yes, that is my understanding as well, although I have not tried it. And I totally agree that using the local NVMes will give you way more speed.
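And once the disks live on the shared Ceph storage, moving a VM between nodes should be little more than a metadata move; something like this, with VM ID and node name as placeholders:
# live-migrate VM 101 to node pmox2 without copying its disk
qm migrate 101 pmox2 --online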
I still suggest creating a PBS VM (Proxmox Backup Server) and perhaps mapping e.g. a disk on your TrueNAS for that. I've had a few instances where I have wanted to "go back in time" and restore something from even a few weeks back. Typically because I messed up and didn't realize it until some time later.
Other than the official Proxmox forum, which does not seem to have much activity, is anyone aware of an active/responsive Proxmox community?
Otherwise wondering if we can get the admins here to create a Proxmox section ;)
There is a Virtualization section already, with plenty of Proxmox activity...
https://forum.netgate.com/category/33/virtualization