Netgate Discussion Forum

    pfSense not enabling port

    General pfSense Questions
    145 Posts 4 Posters 11.0k Views
    • georgelzaG
      georgelza @Gblenn
      last edited by

      @Gblenn

      Hi hi.

      The idea behind splitting the 1TB NVMe into 200/800 is 200GB for the Proxmox hypervisor and 800GB for a Ceph shared volume that will be extended across the nodes, with direct OS access for fast IO.
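
      (Sketch only: if I carve the 800GB out after install, it would be something like the below; the device name and partition number are assumptions for my box.)

      ```
      # assumes /dev/nvme0n1 is the 1TB NVMe and the Proxmox install
      # already sits in the first ~200GB; the rest becomes the Ceph chunk
      sgdisk --new=4:0:+800G --typecode=4:8300 /dev/nvme0n1
      ceph-volume lvm create --data /dev/nvme0n1p4   # hand the partition to Ceph as an OSD
      ```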

      For backup I can push that to my TrueNAS; 100TB should be enough....
      Mind you, not a bad idea to also use that as a shared source for the ISO images.

      As for port mapping, I was actually thinking of just running everything as VLANs over the 10GbE fibre. But with this install and the pfSense work I realised that having two independent ways onto a device is not a bad idea; it lets me reconfigure/change one without disconnecting myself.
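
      (For the VLANs-over-one-fibre idea, a VLAN-aware bridge in Proxmox's /etc/network/interfaces should be enough; the interface name and VLAN range below are assumptions for my box.)

      ```
      auto vmbr0
      iface vmbr0 inet manual
          bridge-ports enp1s0f0      # the 10GbE SFP port
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes
          bridge-vids 2-4094         # VLAN tags passed through to guests
      ```

      Each guest NIC then just gets its VLAN tag set in the VM's hardware settings.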

      I was also liking 3A, but then there is the adage of don't hang switches off switches. And 98% of the traffic on the aggregation switch will be just between the machines on it... but that also means I could just hang it off the 10GbE SFP of the ProMax. Guess I will play with this once the aggregation switch gets here and see.

      As for the dual connection to the NAS, fair comment... these are just 7200rpm spinning rust... I'd probably do it just because I can... and I won't need that extra port anytime soon anyhow.

      I'm definitely not keen on a virtual pfSense; I like the stand-alone nature of the physical build, separate from the rest of my environment.

      My pfSense box is built on the U300E processor.

      G

      • G
        Gblenn @georgelza
        last edited by

        @georgelza Well, as long as you keep space for the VMs, which will depend on the disk size per VM. ISOs typically don't use up that much space... I mean, there are only so many ISOs you would keep... so I don't think I would spend too much energy on setting that up...

        I get the thing about the dual connection on the NAS, and I would probably do the same thing, "because you can"... Happens all the time, but I guess I'm thinking of freeing up switch ports. Which, on the other hand, you don't need I suppose... But in reality you have 3 ports connected to the NAS, counting the 2.5G, which seems like a bit of a waste? But then again, because you can... 😁

        Aha, so a different CPU, with fewer cores and performance threads. BTW, how did you set this up with pfSense, given that there is a difference between the cores? Can it manage the internal processes across different types of cores?

        • georgelzaG
          georgelza @Gblenn
          last edited by

          @Gblenn

          Hi there

          Yes, ISOs don't use much space, but I also don't want multiple copies on all the units; I'd rather have one store where they are shared.

          As for the VMs, the idea is to store their VM images on the Ceph storage, which is shared across the cluster, which means a VM can be started on any node. With the current setup the VM image needs to be copied over to the target node.
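
          If I read the Proxmox docs right, that would be roughly the below (pool/storage names made up by me):

          ```
          pveceph pool create vm-pool     # replicated pool across the nodes
          pvesm add rbd vm-ceph --pool vm-pool --content images,rootdir
          ```

          Any VM whose disk lives on vm-ceph should then be startable (and migratable) from any node.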

          The pfSense is running on a dedicated Topton server;
          Proxmox is running on a different server.
          The pfSense was a new install and then a backup/restore of my config from the original Celeron box, and a lot of trying to get things working.
          Sort of how this thread started ;)
          As for how pfSense uses the cores, that we would need to ask them; my previous server had 4 low-end Celeron cores, and this thing has some very, very beefy new cores by comparison.

          G

          • georgelzaG
            georgelza
            last edited by

            Guessing I'm hoping someone with more Proxmox experience than me can comment on using a Ceph volume as the VM image storage. The other option is to place the VM images on my NAS, but I think that even with the 10GbE that would not be desirable.

            Need to get a good answer on this before I pull the trigger on rebuilding pmox1.

            G

            • G
              Gblenn @georgelza
              last edited by

              @georgelza I see your point with the Ceph storage, which sounds like a good idea!
              I suppose the question is whether Proxmox will safeguard against starting the same VM on two machines, which would of course create a conflict.
              I want to think it does, and the one way to be sure is to test it... 😂

              Using the NAS for storage instead will probably make things a bit slow, compared to the built in nvme's. Then again, I don't suppose you will be starting and stopping VM's all the time? Except in the beginning when you are setting things up.

              • georgelzaG
                georgelza @Gblenn
                last edited by

                @Gblenn

                Yes... the VM is started via the datacenter view, and that won't allow you to start it twice. You would need to clone it and give it a new name and IP.

                I'd prefer to have the VM images on a local mirror via Ceph; that gives me speed, and Ceph will make sure there is a copy on another node.

                Would like someone else to chime in here... confirm this works with Proxmox. I know other hypervisors allow this.

                G

                • georgelzaG
                  georgelza
                  last edited by georgelza

                  Have a look:

                  https://www.youtube.com/watch?v=a7OMi3bw0pQ
                  The guy talks about storage.

                  Guess I need to think about whether to use ZFS or Ceph.

                  G

                  • georgelzaG
                    georgelza
                    last edited by

                    ... discovery...

                    If you've been following this thread, we did not assign a 172.16.40.0/24 address to the physical port of the Topton, the thinking being that it's primarily a passthrough for VLAN 40 onto the hosted guest VMs...

                    Forgetting that the pmox# node itself will be mounting an NFS volume to be shared... thus it will itself need a source IP on that port (vmbr40), otherwise it does not know who it is...

                    This has been the root cause of my NFS mount problems for the last 3-4 days. The second I assigned 172.16.40.51 to the first node, NFS stabilised and was working immediately... on the node.
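
                    For anyone hitting the same thing, the fix was literally just giving the bridge its own address in /etc/network/interfaces (the physical port name below is an assumption for my box):

                    ```
                    auto vmbr40
                    iface vmbr40 inet static
                        address 172.16.40.51/24    # the node's own source IP on VLAN 40
                        bridge-ports enp1s0f1.40
                        bridge-stp off
                        bridge-fd 0
                    ```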

                    G

                    • georgelzaG
                      georgelza
                      last edited by

                      Ask...

                      Other than the official Proxmox forum, which does not seem to have much activity, is anyone aware of an active/responsive Proxmox community...
                      Otherwise wondering if we can get the admins here to create a Proxmox section ;)

                      G

                      • G
                        Gblenn @georgelza
                        last edited by

                        @georgelza said in pfSense not enabling port:

                        @Gblenn

                        Yes... the VM is started via the datacenter view, and that won't allow you to start it twice. You would need to clone it and give it a new name and IP.

                        I'd prefer to have the VM images on a local mirror via Ceph; that gives me speed, and Ceph will make sure there is a copy on another node.

                        Would like someone else to chime in here... confirm this works with Proxmox. I know other hypervisors allow this.

                        G

                        Yes, that is my understanding as well, although I have not tried it. And I totally agree that using the local NVMe's will give you way more speed.

                        I still suggest creating a PBS VM (Proxmox Backup Server) and perhaps mapping e.g. a disk on your TrueNAS for that. I've had a few instances where I have wanted to "go back in time" and restore something from even a few weeks back. Typically because I messed up and didn't realize it until some time later.
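
                        i.e. export a share from the TrueNAS, mount it inside the PBS VM, and point a datastore at it; paths and names below are just examples:

                        ```
                        # inside the PBS VM, assuming TrueNAS exports /mnt/tank/pbs over NFS
                        mount -t nfs truenas.local:/mnt/tank/pbs /mnt/pbs-store
                        proxmox-backup-manager datastore create tank-backups /mnt/pbs-store
                        ```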

                        Other than the official Proxmox forum, which does not seem to have much activity, is anyone aware of an active/responsive Proxmox community...
                        Otherwise wondering if we can get the admins here to create a Proxmox section ;)

                        There is a virtualization section already, with plenty of Proxmox activity...
                        https://forum.netgate.com/category/33/virtualization

                        Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.