Squid loads very slowly



  • I have a pfSense box that has been running for almost a year. For the past few days, however, whenever I reboot the server it takes about 30 minutes before squid loads successfully, so users have to wait 30 minutes before they can browse the net.

    To describe my settings: I have lots of rules in squidGuard. My hard disk cache size is 10000 MB, the cache system is ufs, the memory cache size is 1024 MB, and my physical RAM is 3.5 GB. Minimum object size is 0, maximum object size is 4, and maximum object size in RAM is 32. Could this contribute to the slowness of squid? The current size of the cache is 4.3 GB. Could someone explain how squid loads? If there are websites or links, just let me know.



  • Did you enable softupdates on /var during the pfSense install?



  • I don't remember; everything was default when I installed it. Is there a way to verify it?

    2.0.2-RC3][root@foo.com]/root(1): mount -v
    /dev/da0s1a on / (ufs, local, fsid aac43c4e83a11e3d)
    devfs on /dev (devfs, local, fsid 00ff000101000000)
    /dev/md0 on /var/run (ufs, local, fsid cdb674509202ef8b)
    devfs on /var/dhcpd/dev (devfs, local, fsid 01ff000101000000)
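
    If `mount -v` doesn't list `soft-updates` next to a filesystem, one way to check (assuming the `/dev/da0s1a` root device shown in the output above) is to print the filesystem's tuning with tunefs:

    ```shell
    # Print the current UFS tuning parameters of the root filesystem.
    # /dev/da0s1a is the device reported by "mount -v" above.
    tunefs -p /dev/da0s1a
    # Look for a line like:
    #   soft updates: (-n)  disabled
    ```

    Run it against the device backing /var (or whichever filesystem holds the squid cache) as appropriate.
    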



  • Another thing I notice is that whenever I change or modify something on the Proxy Server page, it takes a long time to save the changes. After I click the Save button, it takes approximately 30 minutes to finish loading. It used to run smoothly. I'm not sure whether upgrading to the latest kernel could be the culprit. I'm thinking of reinstalling squid.



  • How much RAM have you reserved for squid?
    You can try deleting the squid cache folder instead of reinstalling squid. After doing this and restarting squid, it can take some time to reinitialize, but after that it should restart faster.

    When using "ufs", you can read this link:
    http://ivoras.sharanet.org/blog/tree/2010-11-19.ufs-read-ahead.html
    On my pfsense I set it to "128".
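
    Both suggestions can be sketched roughly as follows. The cache path `/var/squid/cache` and the rc script path are assumptions for the pfSense squid package; verify them on your box before running anything:

    ```shell
    # 1) Rebuild the squid cache instead of reinstalling squid.
    squid -k shutdown                 # stop squid cleanly
    rm -rf /var/squid/cache/*         # remove the on-disk cache (path is an assumption)
    squid -z                          # recreate the cache swap directory structure
    /usr/local/etc/rc.d/squid.sh start   # rc script path is an assumption; adjust for your install

    # 2) Raise the UFS cluster read-ahead, per the link above.
    sysctl vfs.read_max=128
    ```

    To make the read-ahead value persist across reboots on pfSense, add `vfs.read_max` as a tunable in the web GUI rather than editing /etc/sysctl.conf by hand.
    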



  • Thanks, Nachtfalke, for the response. Really appreciate it. I'll try a cleanup and will get back to this post. Thanks!



  • I've tested it, but I got the same results running bonnie++:

    read-ahead=32
    Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
    Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    p1.hub.unb.br 3536M   969  99 103497  11 44489   6  1886  98 115350  12 187.0   4
    Latency             10814us     134ms     227ms   22310us   52645us    2767ms

    read-ahead=128
    Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
    Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    p1.hub.unb.br 3536M   985  99 107617  12 42409   6  1903  99 113706  13 172.1   4
    Latency              9913us     162ms     408ms    9821us   55553us    2894ms
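
    For reference, a typical bonnie++ invocation that produces output like the above (the target directory, file size, and user are assumptions; the file size should be roughly twice the machine's RAM so the buffer cache can't mask disk speed) is:

    ```shell
    # -d: directory on the filesystem under test
    # -s: total test file size in MB (~2x RAM, matching the 3536M shown above)
    # -u: user to run as when bonnie++ is started as root
    bonnie++ -d /var/tmp -s 7072 -u root
    ```
    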



  • I forgot to mention that I had already done this as well, before creating this post:

    http://doc.pfsense.org/index.php/Squid_Package_Tuning



  • Here are more results from a virtual machine using the bonnie package v2:

    standard file system with read_ahead=32
                  -------Sequential Output-------- ---Sequential Input--- --Random--
                  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
    Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
              100 102990 89.8 203047 93.1 520031 96.2 198078 99.2 1183719 101.0 38408.0 146.7

    standard file system with read_ahead=128
    Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
              100 166494 99.4 216921 88.6 516720 91.4 202810 99.9 1448189 100.0 43889.5 150.3

    soft-updates file system with read_ahead=32
    Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
              100 134635 90.1 192540 73.1 357890 97.0 204878 99.9 1414560 101.3 41190.4 154.4

    soft-updates file system with read_ahead=128
    Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
              100 165053 99.8 235127 93.7 451421 97.8 188664 100.0 1270644 100.9 44058.7 159.6

