Netgate Discussion Forum

    Embedded version on CF card larger than 4gb

    Hardware · 8 Posts · 3 Posters · 2.4k Views
    • otis80085

      I am wondering if it's possible to run the embedded version on a CF card that's larger than 4GB.

      I'm interested in installing the squid package, and I'm wondering if the files + logs will make a 4GB card kind of crowded.  Any help is appreciated.

      Thanks!

      • matguy

        It's less of a crowded issue and more of a wearing out the card issue.  Eventually (maybe a few months, maybe a few years) the card will likely wear out from too many writes/re-writes to the same points.
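
        To put rough numbers on that, here's a back-of-envelope sketch (Python, purely illustrative; the P/E cycle counts, daily write volume, and write-amplification factors below are assumptions, not measurements of any particular card):

        # Rough, illustrative wear estimate for a flash device; all numbers are assumptions.
        def years_until_worn_out(capacity_gb, pe_cycles, daily_writes_gb, write_amplification):
            """Back-of-envelope: total data the cells can absorb before hitting their
            program/erase limit, divided by the effective daily write volume."""
            total_endurance_gb = capacity_gb * pe_cycles            # raw write budget
            effective_daily_gb = daily_writes_gb * write_amplification
            return total_endurance_gb / effective_daily_gb / 365

        # Hypothetical 4GB CF card with poor wear leveling vs. a 32GB SSD with good leveling.
        print(years_until_worn_out(4, 3000, 5, 10))    # ~0.7 years: worn out in months
        print(years_until_worn_out(32, 3000, 5, 1.5))  # ~35 years: effectively a non-issue

        Same workload, very different outcomes depending on capacity and how well the writes get spread around; that's the "maybe a few months, maybe a few years" spread.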

        • otis80085

          That being the case, is the embedded version not recommended?  Are there CF cards I can get that reduce the likelihood of failure?

          I really like the low power consumption and small form factor that the embedded version provides, but I'm not too excited about buying new CF cards throughout the year.

          • matguy

            The Embedded/nano version doesn't inherently take much less power; it's mainly geared towards reducing writes to the disk, for the exact reason I mentioned above.  As a side effect, it might idle a disk more often.  Otherwise, aside from a few packages that write to disk a lot and may not be supported, the two versions are fairly similar.  For your stated purpose, though, a Squid proxy inherently writes a lot to disk, which breaks that model, and it breaks any possible power savings from idle disk(s).

            For the most part, you may be best off with a semi-small modern SSD that supports modern wear leveling (like almost any current standard SandForce-based 32GB SSD).  The power consumption difference is likely very small, quite possibly just a few watts, though we don't know what the rest of your hardware is, either.

            Theoretically, some CF cards do some wear leveling, but likely not to the degree that modern SSDs do.  Maybe a shiny new 64GB CF card would last quite a while, but it's not a known quantity the way modern SSDs are.  While some can theoretically move less-used data to more "worn" blocks to free up more cells for wear leveling, like modern SSDs do, it's rarely clear from the manufacturer whether a given card actually does that.
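
            If it helps, here's a toy sketch of what wear leveling buys you (Python, illustrative only; real controllers are far more sophisticated than this):

            # Toy model: repeatedly rewrite one "hot" logical block and track per-block erase counts.
            PHYSICAL_BLOCKS = 64
            REWRITES = 100_000

            # Without wear leveling, the same physical block absorbs every rewrite.
            no_leveling = [0] * PHYSICAL_BLOCKS
            for _ in range(REWRITES):
                no_leveling[0] += 1

            # With (very naive) wear leveling, each rewrite lands on the least-worn block.
            leveled = [0] * PHYSICAL_BLOCKS
            for _ in range(REWRITES):
                leveled[leveled.index(min(leveled))] += 1

            print(max(no_leveling))  # 100000 erases concentrated on one block
            print(max(leveled))      # ~1563 erases per block: the same wear, spread out

            The difference between a card/controller that does this well and one that doesn't is largely the difference between the few-months and few-years outcomes.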

            But, it's up to you.  Really, it may last years in such a use state, or it may last a few months.  It's hard to say.

            • Ecnerwal

              You can draw your own conclusions from the available information.  What seemed pretty clear to me was that an old-fashioned rotating disk can still be the best bang for the buck, under the cost-benefit analysis that applies to my conditions…

              In particular, "slow" disk reads are far faster than our network access, irrespective of their speed vs. SSD or even in-memory (RAM) caching.  RAM is the clear win for caching speed, and it does not care how many reads and writes are made - but it does not offer the benefit that caching large chunks of data on disk does in terms of reducing network load.  When MS or Apple comes out with a gigabyte update, or several 100 MB updates, and the cache actually works for it, the difference in doing 20 machines is mind-boggling.
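
              To make that concrete (a trivial worked example; the 1 GB update size and 20 machines are just the figures from the paragraph above, and it assumes the cache actually gets hit):

              # WAN traffic for one large vendor update across a small fleet, with and without a cache.
              machines = 20
              update_gb = 1.0

              without_cache = machines * update_gb   # every machine pulls it over the WAN
              with_cache = update_gb                 # one WAN fetch, the other 19 are LAN-speed cache hits
              print(without_cache, with_cache)       # 20.0 GB vs 1.0 GB over the uplink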

              From skimming (I don't claim to have read the whole thing, I'm not all that into SSD anyway) another thread on the forum, it's pretty clear [or, after skimming a bit more of it, "a subject of considerable debate", perhaps] that squid-caching to SSD will kill it young, even if it will last longer than a flash card.  http://forum.pfsense.org/index.php/topic,34381.0.html

              I'm not even close yet to getting the cache right on this pfSense go-round - I had a Smoothwall doing it well for a while, but that's a generation back, and I got sick of their tendency to cripple the free version (I work for a school and have minimal budget for this stuff.)

              pfSense on i5 3470/DQ77MK/16GB/500GB

              • matguy

                @Ecnerwal:

                From skimming (I don't claim to have read the whole thing, I'm not all that into SSD anyway) another thread on the forum, it's pretty clear [or, after skimming a bit more of it, "a subject of considerable debate", perhaps] that squid-caching to SSD will kill it young, even if it will last longer than a flash card.  http://forum.pfsense.org/index.php/topic,34381.0.html

                That thread is chock full of FUD stemming from small sets of empirical data.  There were a few runs of earlier SSDs that had firmware issues leading to early demise, and many of those had been used in pfSense installs.  Since they failed, many people assumed that pfSense kills SSDs in general.

                In reality, even a pfSense install with a fairly heavily hit Squid cache should not be any harsher - and is most likely much less harsh - on an SSD than a well-used Windows machine, be it server or desktop.  A Windows machine generally hits its page file pretty often, even when it has a lot of RAM.  That constant rewriting of a small region of an SSD was hard on early models.  Some people even ran Windows off CF cards and had issues quickly (I've seen people RAID them; it was… interesting.)

                Those issues prompted further research into wear leveling, which led to the modern wear leveling we see now.  While it came in various flavors over time, it really seemed to come into its own once SandForce controllers became a common standard in SSDs.  From then on SSDs have been very reliable, firmware issues aside (mostly on certain brands, coughOCZcough).  But even the firmware situation is much better than it used to be, and some brands had hardly any issues at all.

                Still, of course, you'll hear about DOAs or failures within a month or two, but you hear that about any kind of hardware, even non-computer hardware like microwave ovens.  And you're going to hear about the failures 10x more than the non-failures; people don't complain when something works.

                I would rather use a modern SSD with pfSense than a spinning disk any day.  I would probably rather do a modern SSD than a RAID of spinning disks.  At that point, I'd probably just want to mirror a couple SSDs, if I was really worried about it.

                Of course, there are extreme examples that may prove to be too much for a small SSD, but size is your friend when it comes to wear leveling.  Most SSD lines double in capacity at the next model size, so going from a 32GB SSD to a 64GB SSD doubles the wear-leveling capability.  Still worried?  Go up again.  That doesn't exactly mean double the longevity; it probably means far more than double.

                Another trick you can pull is to make your partitions small: the OS can't write to areas it doesn't have a partition for, but the SSD can still wear level against them.  Now you've got all sorts of space for the drive to swap out nearly worn-out cells.  Want a heavily hit Squid cache?  Get a 128GB SSD, but only make the partition 32GB.  It's a scary Squid cache that wears that out, and SSDs aren't nearly as expensive as they used to be; 120GB - 128GB SSDs are hanging around $100 - $110 these days.  It's almost difficult to find 2 new spinning drives that cheap to even mirror them.
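
                As a concrete illustration of that partitioning trick (Python; just the arithmetic, using the sizes from the example above):

                # Spare area left for wear leveling when only part of the SSD is partitioned.
                def spare_fraction(ssd_gb, partition_gb):
                    """Fraction of the drive the controller can freely rotate worn cells into."""
                    return (ssd_gb - partition_gb) / ssd_gb

                print(spare_fraction(32, 30))    # ~0.06: using nearly the whole drive
                print(spare_fraction(128, 32))   # 0.75: the 128GB-drive, 32GB-partition example

                The more spare area the controller has to rotate writes through, the less any individual cell gets hammered, which is why the oversized-drive/undersized-partition combination holds up so well.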

                • Ecnerwal

                  Not difficult at all.

                  If I wanted to go tiny, $30-40 each will buy new SATA drives in the 160-320GB sizes, and $45 each will buy 5400rpm 500GB drives. Another $10 or so bumps it up to 7200. So SSD is still a good factor of 10X more $ per capacity…it needs to be enough of a win on other factors to make that worthwhile, and for me, it's not there yet. Plus, I get to use the whole thing, which is 5 times bigger than the one you are using 1/4 of. Different priorities win in different places.
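
                  Running the numbers with the prices quoted in this thread:

                  # Cost per GB using the figures mentioned above.
                  hdd_cost_per_gb = 45 / 500     # $45 for a 5400rpm 500GB drive
                  ssd_cost_per_gb = 105 / 128    # ~$105 for a 128GB SSD
                  print(hdd_cost_per_gb)                          # ~$0.09/GB
                  print(ssd_cost_per_gb)                          # ~$0.82/GB
                  print(ssd_cost_per_gb / hdd_cost_per_gb)        # ~9x, the "factor of 10X" above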

                  pfSense on i5 3470/DQ77MK/16GB/500GB

                  • matguy

                    Of course the argument can go all day long, but my 128GB SSD example was the extreme case for a very heavily hit Squid cache.  Obviously, a smaller cache and/or one with lighter read/write load could get by with a smaller, even less expensive drive, which probably covers most pfSense installs.

                    For prices, I went by whatever new hard drives were cheapest on Newegg; I think the cheapest was a 250GB drive, but 500s were cheap too.

                    Obviously, you'll make choices that work for you; my main point was that modern SSDs aren't inherently failure-prone for pfSense.
