Squid SSL/HTTPS: cache PDF files
-
Yes, I know. And yes, I WANT to break that one particular site, because over 200 employees download the same files from it. These PDF files are quite big, about 100 MB.
0/ Perhaps download the file once and stick it on a fileserver on your own network? Why the heck would 200 people be redownloading the same damned 100 MB+ (WTF?!?!) PDF over and over again?
1/ https://www.google.com/?gws_rd=ssl#q=squid+ignore+cache-control
-
No, not exactly the same; it's about 40,000 PDF files on the server, of different sizes, and they will be modified sometimes. I have been googling for two days :) I tried many, many different settings with refresh_pattern, with no luck.
-
they will be modified sometimes
That's probably why they don't want them cached in the first place?!? But you're gonna cache them anyway and let 200 people use wrong files? ::) ::) ::)
Sounds like a wonderful plan altogether.
-
I would be curious to see the raw access.log for those PDF cache misses. A file with a static name and a static size should cache perfectly unless you've explicitly told it not to via squid directives.
-
they will be modified sometimes
That's probably why they don't want them cached in the first place?!? But you're gonna cache them anyway and let 200 people use wrong files? ::) ::) ::)
Sounds like a wonderful plan altogether.
Do you have a better idea? These PDF files are changed every week or month. The major problem is that when those 200 people start their shift, they download the catalogs they need. So 20 people need catalog wkz2348901, 35 need catalog wkzrh23892, 30 need catalog wkz2839uid. It takes extremely long, because the same file is being downloaded by different people at the same time. The next day it is the same again, but the people need other catalogs.
the system is based on this: https://de.atlassian.com/software/confluence
Like I said, it's about 40,000 catalogs. Sure, I could give someone a new job called "PDF manager": he would search every day for changed PDF files, download them and put them on a local server :) :)
What I want is:
A PDF file is downloaded only once and cached for 24 hours, so all the other people during the day get it from the cache…
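In squid.conf terms, I imagine it roughly like this (only a sketch; this is exactly the kind of refresh_pattern variant I have been experimenting with, without success so far):
  # treat anything ending in .pdf as fresh for 24 hours (1440 minutes),
  # even if the server's headers say not to cache it
  refresh_pattern -i \.pdf$ 1440 100% 1440 override-expire override-lastmod ignore-reload ignore-no-store ignore-private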
PS: I tried different website downloaders (WinHTTrack) to create a local "mirror" of the site. It doesn't work, because in total there are over 800 GB of data on the server :(
-
Hahahahaha… Atlassian. Yeah. I've had the "honor" to deal with their JIRA clusterfuck. Yeah, I definitely have a better idea. Run like hell from their supershitty products!!! Get a usable workflow. 200 people downloading 100+ MB PDFs all the time from the internet - over and over again - ain't one of them.
Oh - and considering their wonderful "solution"/"product" comes at a hefty price - you should contact them and ask about solutions to the "workflow" they've designed, instead of posting here. They'll probably suggest using your own server instead of the cloud variant. At that point, you'd better discuss emergency migration plans. Or you can hire an admin who's gonna commit suicide soon. The Atlassian products rank somewhere at Lotus Notes/Domino level among users as far as usability goes. It can only be made worse by deploying SAP. ;D
-
Did you try increasing the debug level and looking for the reason there?
Maybe it would help to understand why these misses happen.
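The knob for that in squid.conf is debug_options; something like the line below (only a suggestion, the section number is from memory) should make cache.log show why an object gets refreshed instead of served from the cache:
  # keep everything at the default level 1, raise section 22 (refresh/staleness calculations) to level 3
  debug_options ALL,1 22,3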
-
What do you want to debug here? The entire workflow is broken. How on earth is a cloud hosting a terabyte worth of insanely huge PDF files which keep changing and that users need to work with locally all the time a sane way to do things?
-
What do you want to debug here? The entire workflow is broken. How on earth is a cloud hosting a terabyte worth of insanely huge PDF files which keep changing and that users need to work with locally all the time a sane way to do things?
I agree ::)
-
This Confluence system is not in my hands; we (with our 200 people) are only one of x locations that access this system. They won't change anything in the near future. So if this ssl-bump PDF caching doesn't work, I'll just leave it as it is. It's okay, in a few months we'll get a 100 Mbit internet line :)
-
This Confluence system is not in my hands; we (with our 200 people) are only one of x locations that access this system. They won't change anything in the near future. So if this ssl-bump PDF caching doesn't work, I'll just leave it as it is. It's okay, in a few months we'll get a 100 Mbit internet line :)
Nevertheless, although it is not in your hands and perhaps not your own design choice, this is the drawback of such a strangely designed solution.
A cloud-based approach may have some added value in certain cases, either as the target design or as a transition solution while designing something else, but its constraints have to be clearly understood. Trying to bypass them, like you do, by introducing an unexpected component in the middle (here, your caching proxy) will just break the way it works, regardless of how the initial design is perceived.
-
I've never heard of that website (probably because I'm in the US). Anyway, I didn't have an account but I did poke around the site until I found a PDF link to test - this cached successfully without a problem on my end but then again it's not really a large file.
https://www.atlassian.com/legal/privacy-policy/pageSections/0/contentFullWidth/00/content_files/file/document/2015-06-23-atlassian-privacy-policy.pdf
Possibility #1 - If your caching currently works and your SSL is set up correctly, there might just be a limitation with the "Maximum object size" under the "Local Cache" tab of Squid. If you want to cache a 100 MB file, this setting should be at least "100000", as the value is in kilobytes. I currently have mine set to 300000.
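For reference, outside the GUI that setting corresponds to plain squid.conf directives, roughly like this (the path and sizes are only placeholders; pfSense writes the real lines for you):
  # allow single cached objects of up to 300000 KB (~300 MB)
  maximum_object_size 300000 KB
  # the on-disk cache itself must also be big enough to hold many such files
  cache_dir ufs /var/squid/cache 100000 16 256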
Possibility #2 - Perhaps you have a proxy exception rule applied to either an IP address or URL which could be linked to a hosted CDN. If you don't use any proxy exception rules then you can ignore this, but
if you do, you might try disabling the rule temporarily and simply retest. I've personally set up two aliases for this specific reason, "Proxy_Bypass_Hosts" and "Proxy_Bypass_Ranges". I use these specifically to whitelist sites, IPs and/or IP ranges using ARIN and Robtex when addressing problem applications or services.
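If such an exception were implemented inside Squid itself rather than in the firewall rules, it would usually boil down to something like this in squid.conf (the domain here is only a placeholder), so it is worth grepping your config for it:
  # destination excluded from caching (and often also excluded from ssl-bump)
  acl bypass_dst dstdomain .cdn.example.com
  cache deny bypass_dst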
-
@JStyleTech:
Possibility #1 - If your caching currently works and your SSL is set up correctly, there might just be a limitation with the "Maximum object size" under the "Local Cache" tab of Squid. If you want to cache a 100 MB file, this setting should be at least "100000", as the value is in kilobytes. I currently have mine set to 300000.
richie1985 already posted his squid.conf, and "maximum_object_size" is set to 512000 KB.
@JStyleTech:
Possibility #2 - Perhaps you have a proxy exception rule applied to either an IP address or URL which could be linked to a hosted CDN. If you don't use any proxy exception rules then you can ignore this, but
if you do, you might try disabling the rule temporarily and simply retest. I've personally set up two aliases for this specific reason, "Proxy_Bypass_Hosts" and "Proxy_Bypass_Ranges". I use these specifically to whitelist sites, IPs and/or IP ranges using ARIN and Robtex when addressing problem applications or services.
Can't find anything that points to an exception in the squid.conf.