Lots of users on limited bandwidth.



  • I am trying to accomplish a somewhat difficult task…

    I have limited bandwidth (4 Mbps) and am trying to squeeze the best I can out of it for, let's say, 40+ people. Yeah, it's horrible, but what can you do...
    For this I have set up a captive portal which authorizes users via a FreeRADIUS-based solution.
    I have created the usernames and have limited them to 128/128.

    What I would like to accomplish is the most caching possible with Squid while at the same time blocking ads/porn/proxy sites, etc.
    I realize most sites are dynamic, but honestly it's been forever since I've seen msn.com or yahoo.com even change their intro page, so why do they mark their images as dynamic rather than static so I can cache them?

    So far I have installed and set up the captive portal successfully. I have also configured some basic settings in Squid, like so (rough squid.conf equivalents are sketched at the end of this post):

    Memory cache size: 2000 (my machine has 4 GB of RAM)
    Minimum object size: 0
    Maximum object size: 1024000
    Maximum object size in RAM: left at the default of 32... I don't understand this one very well

    I have also chosen "heap LFUDA", thinking it may be best for my situation.

    Now I am going about setting up squidGuard; however, I am curious which of the four blacklists listed here is best to use: http://www.squidguard.org/blacklists.html

    More importantly, how can you deliver full speed to a client who is rate-limited via the captive portal when he pulls content from the cache on the pfSense box? In other words: net = limited speed, but stuff from cache should be fast.

    Any help is greatly appreciated!
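
    For reference, I believe those GUI values end up as squid.conf directives roughly like this (Squid 2.7 names; the exact lines pfSense generates may differ slightly):

    # Rough squid.conf equivalents of the settings above (Squid 2.7)
    cache_mem 2000 MB                      # "memory cache size" - RAM used for hot objects
    minimum_object_size 0 KB               # no lower bound on what gets cached
    maximum_object_size 1024000 KB         # largest object written to the disk cache (= 1000 MB)
    maximum_object_size_in_memory 32 KB    # largest single object kept in RAM; bigger ones live on disk only
    memory_replacement_policy heap LFUDA   # eviction policy for the RAM cache
    cache_replacement_policy heap LFUDA    # LFUDA favors byte hit ratio by keeping popular objects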



  • If everyone has the same limit, there's no need to do it via RADIUS. To accomplish what you're looking for, the easiest option would be to limit people via limiters and to add rules above the limiter rules that allow traffic to the proxy, so only traffic not hitting the proxy is limited (a conceptual sketch follows below). But that has another caveat: outside of Squid you can't differentiate the traffic it pulls from cache from the traffic it pulls from the Internet, so this effectively removes any ability to limit proxied traffic (except from within Squid itself).
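
    Conceptually, the LAN rule ordering looks something like this (pseudo-rules, not literal syntax):

    # LAN rules, evaluated top to bottom, first match wins:
    # 1. Pass traffic from LAN net that is destined for the proxy      <- no limiter attached
    # 2. Pass everything else from LAN net                             <- in/out limiters attached
    #
    # Rule 1 must sit above rule 2; otherwise the limiter also catches the proxied traffic,
    # and per the caveat above, Squid's cache hits and misses are indistinguishable at this layer.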



  • @cylent:

    In other words: net = limited speed, but stuff from cache should be fast.

    Check Squid's Zero Penalty Hit (ZPH) feature.
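
    In Squid 2.7 the ZPH knobs look roughly like this; these are the same zph_* directives that appear in the config posted later in the thread. The idea is that Squid tags responses served from its cache with a TOS value, and the firewall can then match that tag and let those packets skip the limiter:

    # Zero Penalty Hit (Squid 2.7): mark cache hits so the firewall can treat them differently
    zph_mode tos        # tag hits via the TOS/DSCP field of the IP header
    zph_local 0x04      # TOS value stamped on responses served from the local cache
    zph_parent 0        # TOS value for hits coming from a parent cache (0 = leave unmarked)
    zph_option 136      # IP option number, only used when zph_mode is "option"

    The firewall side still needs a rule that matches that TOS value ahead of the limiter; how cleanly that interacts with the captive portal's per-user limits is exactly what the rest of this thread is trying to sort out.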



  • cmb: I understand what you're saying, but they won't all have the same limit. Right now I just have a small group at 128/128; additional groups will be added.

    dhatz:
    From what I read, you need Squid 2.6 or later for that.
    We have Squid Cache: Version 2.7.STABLE9.
    I've actually installed the Pfsense_Lusca version: http://code.google.com/p/pfsense-cacheboy/wiki/Pfsense_Lusca

    My config currently looks like this:

    # Do not edit manually !
    http_port 10.111.63.41:3128 transparent
    http_port 127.0.0.1:80 transparent
    icp_port 0

    pid_filename /var/run/squid.pid
    cache_effective_user proxy
    cache_effective_group proxy
    error_directory /usr/local/etc/squid/errors/English
    icon_directory /usr/local/etc/squid/icons
    visible_hostname localhost
    cache_mgr admin@localhost
    access_log /var/squid/log/access.log
    cache_log /var/squid/log/cache.log
    cache_store_log none
    shutdown_lifetime 0 seconds

    # Allow local network(s) on interface(s)

    acl localnet src  10.111.63.0/255.255.255.0
    via off
    uri_whitespace strip
    dns_nameservers 127.0.0.1

    cache_mem 2000 MB
    maximum_object_size_in_memory 1024000 KB
    memory_replacement_policy heap LFUDA
    cache_replacement_policy heap LFUDA
    cache_dir coss /var/squid/coss 100000 max-size=4096 block-size=8192
    cache_dir aufs /var/squid/cache 10000 16 256 min-size=4096
    minimum_object_size 0 KB
    maximum_object_size 1000 MB
    offline_mode off
    cache_swap_low 90
    cache_swap_high 95

    # No redirector configured

    # Setup some default acls

    acl all src 0.0.0.0/0.0.0.0
    acl localhost src 127.0.0.1/255.255.255.255
    acl safeports port 21 70 80 210 280 443 488 563 591 631 777 901 10777 3128 1025-65535
    acl sslports port 443 563 10777
    acl manager proto cache_object
    acl purge method PURGE
    acl connect method CONNECT
    acl partialcontent_req req_header Range .*
    #acl dynamic urlpath_regex cgi-bin \?
    include /usr/local/etc/squid/include.conf
    #cache deny dynamic
    http_access allow manager localhost

    http_access deny manager
    http_access allow purge localhost
    http_access deny purge
    http_access deny !safeports
    http_access deny CONNECT !sslports

    # Always allow localhost connections

    http_access allow localhost

    quick_abort_min 0 KB
    quick_abort_max 0 KB
    quick_abort_pct 75
    range_offset_limit 0 MB
    request_body_max_size 0 allow all
    reply_body_max_size 0 deny all

    # Custom options

    zph_mode tos
    zph_local 0x04
    zph_parent 0
    zph_option 136

    redirect_program /usr/local/bin/squidGuard -c /usr/local/etc/squidGuard/squidGuard.conf
    redirector_bypass on
    redirect_children 3

    # Allow local network(s) on interface(s)

    http_access allow localnet

    # Default block all to be sure

    http_access deny all

    I am just having trouble figuring out how to bypass the speed limit so that cached objects are delivered at full speed.
    Also, what variables/settings would be best for getting more items cached?
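
    One knob I'm eyeing for the "cache more" part is refresh_pattern under Custom options. A rough, untested sketch for Squid 2.7 (the patterns and numbers are just guesses, and the override/ignore options can serve stale content, so this is a trade-off rather than a recommendation):

    # Hypothetical refresh_pattern tuning; fields are regex, min (minutes), percent, max (minutes)
    refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 override-expire ignore-no-cache ignore-private
    refresh_pattern -i \.(js|css)$                1440 50% 10080 override-expire
    refresh_pattern .                                 0 20%  4320

    I'd test something like this carefully; forcing images and scripts to stay cached past their expiry can break sites that really do change those files.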



  • This is literally becoming a nightmare.

    I've searched up and down and can't figure this out…

    All I want to do is:

    1. Give my captive portal users items from the cache at full speed.
    2. Get Squid to cache as much as possible.

    Can anyone help? Please? (My Squid config is above.)

