From beuc at beuc.net  Wed May 6 11:41:01 2020
From: beuc at beuc.net (Sylvain Beucler)
Date: Wed, 6 May 2020 13:41:01 +0200
Subject: Detecting and fixing VSV00004 in older releases
In-Reply-To: <38565a84-215c-a378-67df-3dbf704dd5a5@beuc.net>
References: <9ecc5065-709e-7bd7-f023-a7e58b885916@beuc.net> <38565a84-215c-a378-67df-3dbf704dd5a5@beuc.net>
Message-ID:

Hi,

On 24/04/2020 13:23, Sylvain Beucler wrote:
> On 23/04/2020 07:40, Dridi Boukelmoune wrote:
>> On Sat, Apr 18, 2020 at 7:18 PM Sylvain Beucler wrote:
>>> I'm part of the Debian LTS (Long Term Support) team, and I'm checking
>>> what Debian varnish packages are affected by CVE-2019-20637, and how
>>> to fix them.
>>>
>>> In particular, we ship 4.0.2 and 5.0.0, where cache_req_fsm.c is too
>>> different to apply the git patch with good confidence.
>>>
>>> I appreciate that these versions are not officially supported anymore
>>> by the Varnish project. Since it is common in GNU/Linux distros to
>>> provide security fixes to users of packaged releases when feasible,
>>> I'm classifying this vulnerability and looking for a fix.
>>
>> EOL series are definitely not a priority and I have other things to
>> look at before I can dive into this. So I will eventually revisit this
>> thread, or maybe someone will beat me to it if you're lucky.
>>
>>> Is there a patch for older Varnish releases, or failing that, a
>>> proof-of-concept that would help me trigger and fix the vulnerability?
>>
>> Not that I'm aware of.
>>
>>> Note: to determine whether the versions are affected, and possibly
>>> backport the patch, I tried to reproduce the issue following the
>>> detailed advisory but without success, including on a vanilla 6.0.4:
>>
>> If the advisory is inaccurate we will definitely want to amend it.
>
> Thanks for your answer.
>
> Do we know in what version Trygve Tønnesland triggered the vulnerability?

To put it differently, how would one make sure that applying
bd7b3d6d47ccbb5e1747126f8e2a297f38e56b8c fixes the issue in a Debian
version not explicitly referenced in VSV00004, such as 6.1.1?

Regards,
Sylvain Beucler
Debian LTS Team

From batanun at hotmail.com  Fri May 8 17:13:02 2020
From: batanun at hotmail.com (Batanun B)
Date: Fri, 8 May 2020 17:13:02 +0000
Subject: Varnish intermittently returns incomplete images
Message-ID:

Our Varnish (test environment) intermittently returns incomplete images.
So the binary content is not complete. When requesting the image from the
backend directly (using curl), the complete image is returned every time
(I tested 1000 times using a script).

This happens intermittently. Sometimes Varnish returns the complete
image, sometimes half of it, sometimes 20% etc... The incomplete image is
returned quickly, so I don't think there is a timeout involved (we have
not configured any specific timeout in varnish).

I see nothing special in varnishlog when this happens. But I don't know
how to troubleshoot this in a good way. Any suggestions?

From guillaume at varnish-software.com  Fri May 8 17:34:17 2020
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Fri, 8 May 2020 10:34:17 -0700
Subject: Varnish intermittently returns incomplete images
In-Reply-To:
References:
Message-ID:

Hi,

Do you have objects that are noticeably smaller than your images in your
cache?

What you are describing sounds like an LRU failure (check nuke_limit in
"varnishadm param.show"): basically, on a miss, Varnish couldn't evict
enough objects to make room for the new object, so it had to truncate it
and throw it away.

If that's the issue, you can increase nuke_limit, or get a bigger cache,
or segregate small and large objects into different storages.

--
Guillaume Quintard

On Fri, May 8, 2020 at 10:14 AM Batanun B wrote:

> Our Varnish (test environment) intermittently returns incomplete images.
> So the binary content is not complete. When requesting the image from the
> backend directly (using curl), the complete image is returned every time (I
> tested 1000 times using a script).
>
> This happens intermittently. Sometimes Varnish returns the complete image,
> sometimes half of it, sometimes 20% etc... The incomplete image is returned
> quickly, so I don't think there is a timeout involved (we have not
> configured any specific timeout in varnish).
>
> I see nothing special in varnishlog when this happens. But I don't know
> how to troubleshoot this in a good way. Any suggestions?
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From batanun at hotmail.com  Fri May 8 18:23:08 2020
From: batanun at hotmail.com (Batanun B)
Date: Fri, 8 May 2020 18:23:08 +0000
Subject: Varnish intermittently returns incomplete images
In-Reply-To:
References: ,
Message-ID:

Hi,

Well, sure there are some objects that are rather big (for a regular web
site, up to maybe 50 MB), but most objects are maybe 10-100 kB. The last
image I tried, that had intermittent problems, was about 700 kB.

Some numbers from varnishstat:

MAIN.uptime: 23+01:28:11
MAIN.n_lru_nuked: 887748
MAIN.n_lru_limited: 459
SMA.s0.c_bytes: 17.59G
SMA.s0.c_freed: 17.49G
SMA.s0.g_bytes: 99.34M
SMA.s0.g_space: 607.18K

"n_lru_nuked" seems high. Would you recommend a bigger cache in this case?

Below is the output from "varnishadm param.show". I suspect that when we
did the initial tweaking (actually only focusing on the VCL logic, not
cache sizes) we glanced at this output when the server had recently
started and didn't have much traffic yet. Now the server has been running
for a while, and the traffic has increased (still testing environment
only though).

------
accept_filter                  -
acceptor_sleep_decay           0.9 (default)
acceptor_sleep_incr            0.000 [seconds] (default)
acceptor_sleep_max             0.050 [seconds] (default)
auto_restart                   on [bool] (default)
backend_idle_timeout           60.000 [seconds] (default)
backend_local_error_holddown   10.000 [seconds] (default)
backend_remote_error_holddown  0.250 [seconds] (default)
ban_cutoff                     0 [bans] (default)
ban_dups                       on [bool] (default)
ban_lurker_age                 60.000 [seconds] (default)
ban_lurker_batch               1000 (default)
ban_lurker_holdoff             0.010 [seconds] (default)
ban_lurker_sleep               0.010 [seconds] (default)
between_bytes_timeout          60.000 [seconds] (default)
cc_command                     exec gcc -g -O2 -fdebug-prefix-map=/build/varnish-ZKkrdt/varnish-6.0.6=. -fstack-protector-strong -Wformat -Werror=format-security -Wall -Werror -Wno-error=unused-result -pthread -fpic -shared -Wl,-x -o %o %s (default)
cli_limit                      48k [bytes] (default)
cli_timeout                    60.000 [seconds] (default)
clock_skew                     10 [seconds] (default)
clock_step                     1.000 [seconds] (default)
connect_timeout                3.500 [seconds] (default)
critbit_cooloff                180.000 [seconds] (default)
debug                          none (default)
default_grace                  10.000 [seconds] (default)
default_keep                   0.000 [seconds] (default)
default_ttl                    120.000 [seconds] (default)
esi_iovs                       10 [struct iovec] (default)
feature                        none (default)
fetch_chunksize                16k [bytes] (default)
fetch_maxchunksize             0.25G [bytes] (default)
first_byte_timeout             60.000 [seconds] (default)
gzip_buffer                    32k [bytes] (default)
gzip_level                     6 (default)
gzip_memlevel                  8 (default)
h2_header_table_size           4k [bytes] (default)
h2_initial_window_size         65535b [bytes] (default)
h2_max_concurrent_streams      100 [streams] (default)
h2_max_frame_size              16k [bytes] (default)
h2_max_header_list_size        2147483647b [bytes] (default)
h2_rx_window_increment         1M [bytes] (default)
h2_rx_window_low_water         10M [bytes] (default)
http_gzip_support              on [bool] (default)
http_max_hdr                   64 [header lines] (default)
http_range_support             on [bool] (default)
http_req_hdr_len               8k [bytes] (default)
http_req_size                  32k [bytes] (default)
http_resp_hdr_len              8k [bytes] (default)
http_resp_size                 32k [bytes] (default)
idle_send_timeout              60.000 [seconds] (default)
listen_depth                   1024 [connections] (default)
lru_interval                   2.000 [seconds] (default)
max_esi_depth                  5 [levels] (default)
max_restarts                   4 [restarts] (default)
max_retries                    4 [retries] (default)
nuke_limit                     50 [allocations] (default)
pcre_match_limit               10000 (default)
pcre_match_limit_recursion     20 (default)
ping_interval                  3 [seconds] (default)
pipe_timeout                   60.000 [seconds] (default)
pool_req                       10,100,10 (default)
pool_sess                      10,100,10 (default)
pool_vbo                       10,100,10 (default)
prefer_ipv6                    off [bool] (default)
rush_exponent                  3 [requests per request] (default)
send_timeout                   600.000 [seconds] (default)
shm_reclen                     255b [bytes] (default)
shortlived                     10.000 [seconds] (default)
sigsegv_handler                on [bool] (default)
syslog_cli_traffic             on [bool] (default)
tcp_fastopen                   off [bool] (default)
tcp_keepalive_intvl            75.000 [seconds] (default)
tcp_keepalive_probes           9 [probes] (default)
tcp_keepalive_time             7200.000 [seconds] (default)
thread_pool_add_delay          0.000 [seconds] (default)
thread_pool_destroy_delay      1.000 [seconds] (default)
thread_pool_fail_delay         0.200 [seconds] (default)
thread_pool_max                5000 [threads] (default)
thread_pool_min                100 [threads] (default)
thread_pool_reserve            0 [threads] (default)
thread_pool_stack              48k [bytes] (default)
thread_pool_timeout            300.000 [seconds] (default)
thread_pool_watchdog           60.000 [seconds] (default)
thread_pools                   2 [pools] (default)
thread_queue_limit             20 (default)
thread_stats_rate              10 [requests] (default)
timeout_idle                   5.000 [seconds] (default)
timeout_linger                 0.050 [seconds] (default)
vcc_allow_inline_c             off [bool] (default)
vcc_err_unref                  on [bool] (default)
vcc_unsafe_path                on [bool] (default)
vcl_cooldown                   600.000 [seconds] (default)
vcl_dir                        /etc/varnish:/usr/share/varnish/vcl (default)
vcl_path                       /etc/varnish:/usr/share/varnish/vcl (default)
vmod_dir                       /usr/lib/varnish/vmods (default)
vmod_path                      /usr/lib/varnish/vmods (default)
vsl_buffer                     4k [bytes] (default)
vsl_mask                       -ObjProtocol,-ObjStatus,-ObjReason,-ObjHeader,-VCL_trace,-WorkThread,-Hash,-VfpAcct,-H2RxHdr,-H2RxBody,-H2TxHdr,-H2TxBody (default)
vsl_reclen                     255b [bytes] (default)
vsl_space                      80M [bytes] (default)
vsm_free_cooldown              60.000 [seconds] (default)
vsm_space                      1M [bytes] (default)
workspace_backend              64k [bytes] (default)
workspace_client               64k [bytes] (default)
workspace_session              0.50k [bytes] (default)
workspace_thread               2k [bytes] (default)
------

________________________________
From: Guillaume Quintard
Sent: Friday, May 8, 2020 7:34 PM
To: Batanun B
Cc: varnish-misc at varnish-cache.org
Subject: Re: Varnish intermittently returns incomplete images

Hi,

Do you have objects that are noticeably smaller than your images in your cache?

What you are describing sounds like an LRU failure (check nuke_limit in "varnishadm param.show"): basically, on a miss, Varnish couldn't evict enough objects to make room for the new object, so it had to truncate it and throw it away.

If that's the issue, you can increase nuke_limit, or get a bigger cache, or segregate small and large objects into different storages.

--
Guillaume Quintard

On Fri, May 8, 2020 at 10:14 AM Batanun B > wrote:
Our Varnish (test environment) intermittently returns incomplete images. So the binary content is not complete. When requesting the image from the backend directly (using curl), the complete image is returned every time (I tested 1000 times using a script).

This happens intermittently. Sometimes Varnish returns the complete image, sometimes half of it, sometimes 20% etc... The incomplete image is returned quickly, so I don't think there is a timeout involved (we have not configured any specific timeout in varnish).
I see nothing special in varnishlog when this happens. But I don't know how to troubleshoot this in a good way. Any suggestions?
_______________________________________________
varnish-misc mailing list
varnish-misc at varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From batanun at hotmail.com  Fri May 8 18:26:34 2020
From: batanun at hotmail.com (Batanun B)
Date: Fri, 8 May 2020 18:26:34 +0000
Subject: Varnish intermittently returns incomplete images
In-Reply-To:
References: ,
Message-ID:

also... could you explain this part for me? "so it had to truncate it and throw it away"
Why does it have to truncate it? Why not avoid caching it, and return it as is, from the backend, untouched?

________________________________
From: Guillaume Quintard
Sent: Friday, May 8, 2020 7:34 PM
To: Batanun B
Cc: varnish-misc at varnish-cache.org
Subject: Re: Varnish intermittently returns incomplete images

Hi,

Do you have objects that are noticeably smaller than your images in your cache?

What you are describing sounds like an LRU failure (check nuke_limit in "varnishadm param.show"): basically, on a miss, Varnish couldn't evict enough objects to make room for the new object, so it had to truncate it and throw it away.

If that's the issue, you can increase nuke_limit, or get a bigger cache, or segregate small and large objects into different storages.

--
Guillaume Quintard

On Fri, May 8, 2020 at 10:14 AM Batanun B > wrote:
Our Varnish (test environment) intermittently returns incomplete images. So the binary content is not complete. When requesting the image from the backend directly (using curl), the complete image is returned every time (I tested 1000 times using a script).

This happens intermittently. Sometimes Varnish returns the complete image, sometimes half of it, sometimes 20% etc... The incomplete image is returned quickly, so I don't think there is a timeout involved (we have not configured any specific timeout in varnish).

I see nothing special in varnishlog when this happens. But I don't know how to troubleshoot this in a good way. Any suggestions?
_______________________________________________
varnish-misc mailing list
varnish-misc at varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From guillaume at varnish-software.com  Fri May 8 18:33:13 2020
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Fri, 8 May 2020 11:33:13 -0700
Subject: Varnish intermittently returns incomplete images
In-Reply-To:
References:
Message-ID:

Good question. This is because by default Varnish streams the response, so
it starts sending what it has, even though it's unsure it can actually
deliver. When the eviction strikes, it just aborts the transaction.

The problem with "just" passing the data to the user is that there may be
more than one user, and things get really complicated.

Getting a bigger cache would help, and segregating the storages (smaller
than 1MB, and bigger than 1MB for example) would too.

--
Guillaume Quintard

On Fri, May 8, 2020 at 11:28 AM Batanun B wrote:

> also... could you explain this part for me? "so it had to truncate it and
> throw it away"
> Why does it have to truncate it? Why not avoid caching it, and return it
> as is, from the backend, untouched?
> ------------------------------
> *From:* Guillaume Quintard
> *Sent:* Friday, May 8, 2020 7:34 PM
> *To:* Batanun B
> *Cc:* varnish-misc at varnish-cache.org
> *Subject:* Re: Varnish intermittently returns incomplete images
>
> Hi,
>
> Do you have objects that are noticeably smaller than your images in your
> cache?
>
> What you are describing sounds like an LRU failure (check nuke_limit in
> "varnishadm param.show"): basically, on a miss, Varnish couldn't evict
> enough objects to make room for the new object, so it had to truncate it
> and throw it away.
>
> If that's the issue, you can increase nuke_limit, or get a bigger cache,
> or segregate small and large objects into different storages.
>
> --
> Guillaume Quintard
>
> On Fri, May 8, 2020 at 10:14 AM Batanun B wrote:
>
> Our Varnish (test environment) intermittently returns incomplete images.
> So the binary content is not complete. When requesting the image from the
> backend directly (using curl), the complete image is returned every time (I
> tested 1000 times using a script).
>
> This happens intermittently. Sometimes Varnish returns the complete image,
> sometimes half of it, sometimes 20% etc... The incomplete image is returned
> quickly, so I don't think there is a timeout involved (we have not
> configured any specific timeout in varnish).
>
> I see nothing special in varnishlog when this happens. But I don't know
> how to troubleshoot this in a good way. Any suggestions?
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
_______________________________________________
varnish-misc mailing list
varnish-misc at varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From batanun at hotmail.com  Fri May 8 18:48:25 2020
From: batanun at hotmail.com (Batanun B)
Date: Fri, 8 May 2020 18:48:25 +0000
Subject: Varnish intermittently returns incomplete images
In-Reply-To:
References: ,
Message-ID:

ok. Interesting. :)

99.99% of the time that this is happening (after adjusting the cache
sizes), I would say that it would be only a single user requesting the
image. So it would make sense if it were possible to configure Varnish to
handle that scenario in a better way. The case where multiple users
request the same resource and this nuke problem happens is less of an
issue, and in those cases a broken response could be acceptable (though
preferably Varnish would then serve each request separately, uncached,
fetching from the backend each time).

I will increase the cache size, and look into splitting it into two
storages. But I'm guessing you mean that the small objects should be
cached in-memory, and the larger ones on disk? It would make much more
sense if it cached less "popular" objects on disk, and more "popular"
objects in memory, and only considered the object size when the in-memory
cache starts to get full. Is it possible to configure Varnish to handle
that in a smart and dynamic way?

________________________________
From: Guillaume Quintard
Sent: Friday, May 8, 2020 8:33 PM
To: Batanun B
Cc: varnish-misc at varnish-cache.org
Subject: Re: Varnish intermittently returns incomplete images

Good question. This is because by default Varnish streams the response, so it starts sending what it has, even though it's unsure it can actually deliver. When the eviction strikes, it just aborts the transaction.

The problem with "just" passing the data to the user is that there may be more than one user, and things get really complicated.

Getting a bigger cache would help, and segregating the storages (smaller than 1MB, and bigger than 1MB for example) would too.

--
Guillaume Quintard

On Fri, May 8, 2020 at 11:28 AM Batanun B > wrote:
also... could you explain this part for me? "so it had to truncate it and throw it away"
Why does it have to truncate it? Why not avoid caching it, and return it as is, from the backend, untouched?

________________________________
From: Guillaume Quintard >
Sent: Friday, May 8, 2020 7:34 PM
To: Batanun B >
Cc: varnish-misc at varnish-cache.org >
Subject: Re: Varnish intermittently returns incomplete images

Hi,

Do you have objects that are noticeably smaller than your images in your cache?

What you are describing sounds like an LRU failure (check nuke_limit in "varnishadm param.show"): basically, on a miss, Varnish couldn't evict enough objects to make room for the new object, so it had to truncate it and throw it away.

If that's the issue, you can increase nuke_limit, or get a bigger cache, or segregate small and large objects into different storages.

--
Guillaume Quintard

On Fri, May 8, 2020 at 10:14 AM Batanun B > wrote:
Our Varnish (test environment) intermittently returns incomplete images. So the binary content is not complete. When requesting the image from the backend directly (using curl), the complete image is returned every time (I tested 1000 times using a script).

This happens intermittently. Sometimes Varnish returns the complete image, sometimes half of it, sometimes 20% etc... The incomplete image is returned quickly, so I don't think there is a timeout involved (we have not configured any specific timeout in varnish).

I see nothing special in varnishlog when this happens. But I don't know how to troubleshoot this in a good way. Any suggestions?
_______________________________________________
varnish-misc mailing list
varnish-misc at varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From guillaume at varnish-software.com  Fri May 8 19:12:35 2020
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Fri, 8 May 2020 12:12:35 -0700
Subject: Varnish intermittently returns incomplete images
In-Reply-To:
References:
Message-ID:

No no, with Varnish open-source you really want all your stuff in memory
anyway. What I meant is really to have two storages based on object size.

Imagine you have a 50MB object that needs to be stored, and your
nuke_limit is 50. If you only have one storage, then you could easily
evict 50 100KB objects and still not have made enough room, forcing you
to fail the transaction. But if you have a pool that is dedicated to
bigger objects, you know that each object you evict from it is at least
1MB big, so you cannot fail the transaction due to nuke_limit.

--
Guillaume Quintard

On Fri, May 8, 2020 at 11:50 AM Batanun B wrote:

> ok. Interesting. :)
>
> 99.99% of the time that this is happening (after adjusting the cache
> sizes), I would say that it would be only a single user requesting the
> image. So it would make sense if it were possible to configure Varnish to
> handle that scenario in a better way. The case where multiple users
> request the same resource and this nuke problem happens is less of an
> issue, and in those cases a broken response could be acceptable (though
> preferably Varnish would then serve each request separately, uncached,
> fetching from the backend each time).
>
> I will increase the cache size, and look into splitting it into two
> storages. But I'm guessing you mean that the small objects should be
> cached in-memory, and the larger ones on disk? It would make much more
> sense if it cached less "popular" objects on disk, and more "popular"
> objects in memory, and only considered the object size when the in-memory
> cache starts to get full. Is it possible to configure Varnish to handle
> that in a smart and dynamic way?
> ------------------------------
> *From:* Guillaume Quintard
> *Sent:* Friday, May 8, 2020 8:33 PM
> *To:* Batanun B
> *Cc:* varnish-misc at varnish-cache.org
> *Subject:* Re: Varnish intermittently returns incomplete images
>
> Good question. This is because by default Varnish streams the response, so
> it starts sending what it has, even though it's unsure it can actually
> deliver. When the eviction strikes, it just aborts the transaction.
>
> The problem with "just" passing the data to the user is that there may be
> more than one user, and things get really complicated.
>
> Getting a bigger cache would help, and segregating the storages (smaller
> than 1MB, and bigger than 1MB for example) would too.
> --
> Guillaume Quintard
>
> On Fri, May 8, 2020 at 11:28 AM Batanun B wrote:
>
> also... could you explain this part for me? "so it had to truncate it and
> throw it away"
> Why does it have to truncate it? Why not avoid caching it, and return it
> as is, from the backend, untouched?
> ------------------------------
> *From:* Guillaume Quintard
> *Sent:* Friday, May 8, 2020 7:34 PM
> *To:* Batanun B
> *Cc:* varnish-misc at varnish-cache.org
> *Subject:* Re: Varnish intermittently returns incomplete images
>
> Hi,
>
> Do you have objects that are noticeably smaller than your images in your
> cache?
>
> What you are describing sounds like an LRU failure (check nuke_limit in
> "varnishadm param.show"): basically, on a miss, Varnish couldn't evict
> enough objects to make room for the new object, so it had to truncate it
> and throw it away.
>
> If that's the issue, you can increase nuke_limit, or get a bigger cache,
> or segregate small and large objects into different storages.
>
> --
> Guillaume Quintard
>
> On Fri, May 8, 2020 at 10:14 AM Batanun B wrote:
>
> Our Varnish (test environment) intermittently returns incomplete images.
> So the binary content is not complete. When requesting the image from the
> backend directly (using curl), the complete image is returned every time (I
> tested 1000 times using a script).
>
> This happens intermittently. Sometimes Varnish returns the complete image,
> sometimes half of it, sometimes 20% etc... The incomplete image is returned
> quickly, so I don't think there is a timeout involved (we have not
> configured any specific timeout in varnish).
>
> I see nothing special in varnishlog when this happens. But I don't know
> how to troubleshoot this in a good way. Any suggestions?
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
> _______________________________________________
> varnish-misc mailing list
> varnish-misc at varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

From beuc at beuc.net  Tue May 12 16:27:18 2020
From: beuc at beuc.net (Sylvain Beucler)
Date: Tue, 12 May 2020 18:27:18 +0200
Subject: Detecting and fixing VSV00004 in older releases
In-Reply-To:
References: <9ecc5065-709e-7bd7-f023-a7e58b885916@beuc.net> <38565a84-215c-a378-67df-3dbf704dd5a5@beuc.net>
Message-ID: <5b1af69d-5d94-e942-b9c0-b668e8092d97@beuc.net>

Hi,

(adding security contact in Cc:)

On 06/05/2020 13:41, Sylvain Beucler wrote:
> On 24/04/2020 13:23, Sylvain Beucler wrote:
>> On 23/04/2020 07:40, Dridi Boukelmoune wrote:
>>> On Sat, Apr 18, 2020 at 7:18 PM Sylvain Beucler wrote:
>>>> I'm part of the Debian LTS (Long Term Support) team, and I'm checking
>>>> what Debian varnish packages are affected by CVE-2019-20637, and how
>>>> to fix them.
>>>>
>>>> In particular, we ship 4.0.2 and 5.0.0, where cache_req_fsm.c is too
>>>> different to apply the git patch with good confidence.
>>>>
>>>> I appreciate that these versions are not officially supported anymore
>>>> by the Varnish project. Since it is common in GNU/Linux distros to
>>>> provide security fixes to users of packaged releases when feasible,
>>>> I'm classifying this vulnerability and looking for a fix.
>>>
>>> EOL series are definitely not a priority and I have other things to
>>> look at before I can dive into this. So I will eventually revisit this
>>> thread, or maybe someone will beat me to it if you're lucky.
>>>
>>>> Is there a patch for older Varnish releases, or failing that, a
>>>> proof-of-concept that would help me trigger and fix the vulnerability?
>>>
>>> Not that I'm aware of.
>>>
>>>> Note: to determine whether the versions are affected, and possibly
>>>> backport the patch, I tried to reproduce the issue following the
>>>> detailed advisory but without success, including on a vanilla 6.0.4:
>>>
>>> If the advisory is inaccurate we will definitely want to amend it.
>>
>> Thanks for your answer.
>>
>> Do we know in what version Trygve Tønnesland triggered the vulnerability?
>
> To put it differently, how would one make sure that applying
> bd7b3d6d47ccbb5e1747126f8e2a297f38e56b8c fixes the issue in a Debian
> version not explicitly referenced in VSV00004, such as 6.1.1?

AFAICS no GNU/Linux distribution has been able to fix their stable
releases so far.

We'd greatly appreciate information on reproducing the issue (such as a
configuration file and curl request), to determine if our packages are
affected and whether we properly fixed them when attempting to backport
the fix.
Cf. the start of the thread for my current attempt:
https://varnish-cache.org/lists/pipermail/varnish-misc/2020-April/026854.html

In case you currently don't have the resources, would you mind
(privately) sharing the finder's contact with me so I can gather more
information?

Regards,
Sylvain Beucler
Debian LTS Team

From dridi at varni.sh  Tue May 12 17:00:45 2020
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Tue, 12 May 2020 17:00:45 +0000
Subject: Detecting and fixing VSV00004 in older releases
In-Reply-To: <5b1af69d-5d94-e942-b9c0-b668e8092d97@beuc.net>
References: <9ecc5065-709e-7bd7-f023-a7e58b885916@beuc.net> <38565a84-215c-a378-67df-3dbf704dd5a5@beuc.net> <5b1af69d-5d94-e942-b9c0-b668e8092d97@beuc.net>
Message-ID:

Hello Sylvain,

> >> Do we know in what version Trygve Tønnesland triggered the vulnerability?

It was first discovered on Varnish Enterprise, and once the origin of
the leak was identified we surveyed older and newer releases and fixed
the ones listed in the advisory.

> > To put it differently, how would one make sure that applying
> > bd7b3d6d47ccbb5e1747126f8e2a297f38e56b8c fixes the issue in a Debian
> > version not explicitly referenced in VSV00004, such as 6.1.1?

I tried to reproduce it myself today and I wasn't able to trigger the
leak on the master branch's commit prior to the fix. I asked internally
whether we have a reliable reproducer or if it's something that needs a
consequential workload to be observable.

> AFAICS no GNU/Linux distribution has been able to fix their stable
> releases so far.

That's not too bad: there is a workaround, and it is overall a niche
case. If I remember correctly, when it was brought to us it wasn't a
security problem for the reporter, but we recognized the bug as such.

Please note that in 2 of the 3 scenarios your VCL is incorrect in the
first place, so you have other problems to deal with more pressing than
the information leak.

Dridi

From dridi at varni.sh  Wed May 13 09:03:30 2020
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Wed, 13 May 2020 09:03:30 +0000
Subject: Detecting and fixing VSV00004 in older releases
In-Reply-To:
References: <9ecc5065-709e-7bd7-f023-a7e58b885916@beuc.net> <38565a84-215c-a378-67df-3dbf704dd5a5@beuc.net> <5b1af69d-5d94-e942-b9c0-b668e8092d97@beuc.net>
Message-ID:

> I tried to reproduce it myself today and I wasn't able to trigger the
> leak on the master branch's commit prior to the fix.
> I asked internally whether we have a reliable reproducer or if it's
> something that needs a consequential workload to be observable.

The step I was missing trying to reproduce this on my own was ensuring
that the error reason is far enough in the client workspace to be
leakable.

It turns out we had a test case covering all 3 scenarios that was
supposed to be pushed a while after the disclosure, but was forgotten.

You can use this test case now before and after applying the patch:

https://github.com/varnishcache/varnish-cache/commit/0c9c38513bdb7730ac886eba7563f2d87894d734

Dridi

From beuc at beuc.net  Wed May 13 13:25:01 2020
From: beuc at beuc.net (Sylvain Beucler)
Date: Wed, 13 May 2020 15:25:01 +0200
Subject: Detecting and fixing VSV00004 in older releases
In-Reply-To:
References: <9ecc5065-709e-7bd7-f023-a7e58b885916@beuc.net> <38565a84-215c-a378-67df-3dbf704dd5a5@beuc.net> <5b1af69d-5d94-e942-b9c0-b668e8092d97@beuc.net>
Message-ID: <7d2af31c-e7c3-58c9-bfb6-6e29748a3a2a@beuc.net>

Hi,

On 13/05/2020 11:03, Dridi Boukelmoune wrote:
>> I tried to reproduce it myself today and I wasn't able to trigger the
>> leak on the master branch's commit prior to the fix. I asked internally
>> whether we have a reliable reproducer or if it's something that needs a
>> consequential workload to be observable.
>
> The step I was missing trying to reproduce this on my own was ensuring
> that the error reason is far enough in the client workspace to be
> leakable.
>
> It turns out we had a test case covering all 3 scenarios that was
> supposed to be pushed a while after the disclosure, but was forgotten.
>
> You can use this test case now before and after applying the patch:
>
> https://github.com/varnishcache/varnish-cache/commit/0c9c38513bdb7730ac886eba7563f2d87894d734

Thanks a lot!
I was able to check and fix one version (6.1.1); I'll now check the others.

Regards,
Sylvain Beucler
Debian LTS Team

From guillaume at varnish-software.com  Thu May 14 02:00:44 2020
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Wed, 13 May 2020 19:00:44 -0700
Subject: Let's look at another build system
Message-ID:

Hello everyone,

We have been ranting at autotools for years now, and it's been a couple
of years since we tried to get rid of them, so let's try again, this
time with the "new" kid on the block: meson (https://mesonbuild.com/)

Some expectation management first:
- I'm not touching varnish-cache at the moment (but yes, eventually,
  that'd be the plan)
- instead, I tried my hand at vmod_digest and varnish-modules to collect
  feedback first
- I'm not here to take away your beloved build system; the two options
  can coexist

For the eager and curious, here's what it would entail code-wise:
- https://github.com/varnish/varnish-modules/compare/master...meson
- https://github.com/varnish/libvmod-digest/compare/master...meson

So, let's start with the cons I found first:
- the syntax is not amazing, and feels a bit clunky at times
- it has some weird opinions about some apparently inconsequential
  stuff, but you can work around it if you really need to
- it's "not like autotools", but you don't make progress without
  breaking a few habits, or something like that

And that's about it.
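
To give you a taste of the syntax, here is a trimmed-down sketch of what
a vmod build can look like in meson. This is illustrative only, not the
exact contents of the branches above: the file names, the vmodtool flags
and the pkg-config variable names are assumptions from memory, so
double-check them against the real thing.

    project('vmod-example', 'c', version: '0.1')

    # everything we need comes out of varnishapi.pc
    # (the 'vmoddir'/'vmodtool' variable names are assumptions)
    varnishapi = dependency('varnishapi', version: '>=6.0')
    vmoddir = varnishapi.get_pkgconfig_variable('vmoddir')
    vmodtool = find_program(varnishapi.get_pkgconfig_variable('vmodtool'))

    # one recipe, two outputs: the multi-output case make cannot express
    vcc_if = custom_target('vcc_if',
        input: 'src/vmod_example.vcc',
        output: ['vcc_if.c', 'vcc_if.h'],
        command: [vmodtool, '-w', '@OUTDIR@', '@INPUT@'])

    # build the vmod itself and install it where varnishd will look
    shared_module('example',
        'src/vmod_example.c', vcc_if,
        name_prefix: 'libvmod_',
        dependencies: varnishapi,
        install: true,
        install_dir: vmoddir)

Note the two outputs on a single custom_target; I'll come back to that in
the list below.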

On the other hand, I saw quite a few benefits:
- it's super boring, with very few install targets (build, test,
  install, clean, and that's about it)
- about tests: they can be tagged so you can run a subset of them
- it's fast: varnish-modules goes from zero to everything built in
  seconds
- it's pretty complete and I was able to easily implement the vsc/vcc
  processing
- on that note, it supports recipes with multiple outputs, unlike make
- while we can use subdirs, it understands the full project as a whole,
  which would speed things up in varnish-cache
- there are only two dependencies: python, which we need anyway, and
  ninja, which doesn't need anything else
- the amount of "code" to write is way smaller than the
  autogen+configure+Makefile combo from autotools
- out-of-tree builds are the default and only option, keeping the source
  tree pristine
- no need for "dist" tarballs, we can just do "git archive" and be done
  with it
- it's terse and doesn't bore you with pages of logs
- but the "-t graph" option creates a graphviz dependency graph, nice!
- it's only two commands: "meson yourdir" and "ninja -C yourdir"
- if you touch the meson.build file, there's no need to re-run the meson
  command; it's the same thing as if you had edited a source file

Honestly, I'm sold, and I'm possibly seeing this with rosy glasses, so
feel free to bring me back down to earth.

I plan on merging the two PRs above soon, and to try to get some mileage
from users (*cough* OpenSolaris *cough*) and maintainers both, but
keeping autotools around for now. If there's not too much outcry, I'd
like to have a go at varnish-cache some time later (and if you are
interested in helping with that fun little project, let me know).

Cheers!

--
Guillaume Quintard

From phk at phk.freebsd.dk  Thu May 14 14:48:52 2020
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Thu, 14 May 2020 14:48:52 +0000
Subject: Let's look at another build system
In-Reply-To:
References:
Message-ID: <31910.1589467732@critter.freebsd.dk>

--------
In message , Guillaume Quintard writes:

>We have been ranting at autotools for years now, and it's been a couple of
>years since we tried to get rid of them, so let's try again, this time
>with the "new" kid on the block: meson (https://mesonbuild.com/)

I'm all for improving our build system, but it has to be an improvement.

One thing that would worry me a bit about meson is the sheer number
of dependencies involved.

To build it from scratch seems to involve a total of 238 ports on FreeBSD
(list below), including both python2, python3, tcl86, mercurial and as far
as I can tell anything anybody ever spotted on github.

Meson is probably not directly to blame for all 238 ports, but blame
or not: they sit at the top of a very, *very* tall tower.

Considering how seldom we do nontrivial changes to our build instructions,
I would be far more tempted towards a python script which produces a bunch
of powerful but tedious Makefiles with full dependency tracking.
Poul-Henning /freebsd/ports/archivers/libarchive /freebsd/ports/archivers/liblz4 /freebsd/ports/archivers/lzo2 /freebsd/ports/converters/libiconv /freebsd/ports/converters/p5-Text-Unidecode /freebsd/ports/converters/py-webencodings /freebsd/ports/databases/db5 /freebsd/ports/databases/gdbm /freebsd/ports/databases/py-sqlite3 /freebsd/ports/databases/sqlite3 /freebsd/ports/devel/apr1 /freebsd/ports/devel/atf /freebsd/ports/devel/autoconf /freebsd/ports/devel/autoconf-wrapper /freebsd/ports/devel/automake /freebsd/ports/devel/bison /freebsd/ports/devel/boehm-gc /freebsd/ports/devel/bzr /freebsd/ports/devel/check /freebsd/ports/devel/cmake /freebsd/ports/devel/cvsps /freebsd/ports/devel/dbus /freebsd/ports/devel/dbus-glib /freebsd/ports/devel/gettext-runtime /freebsd/ports/devel/gettext-tools /freebsd/ports/devel/git /freebsd/ports/devel/glib20 /freebsd/ports/devel/gmake /freebsd/ports/devel/jsoncpp /freebsd/ports/devel/kyua /freebsd/ports/devel/libatomic_ops /freebsd/ports/devel/libedit /freebsd/ports/devel/libffi /freebsd/ports/devel/libltdl /freebsd/ports/devel/libpthread-stubs /freebsd/ports/devel/libtextstyle /freebsd/ports/devel/libtool /freebsd/ports/devel/libunistring /freebsd/ports/devel/libuv /freebsd/ports/devel/lutok /freebsd/ports/devel/m4 /freebsd/ports/devel/mercurial /freebsd/ports/devel/meson /freebsd/ports/devel/ninja /freebsd/ports/devel/npth /freebsd/ports/devel/p5-Locale-libintl /freebsd/ports/devel/p5-Sub-Uplevel /freebsd/ports/devel/p5-Term-ReadKey /freebsd/ports/devel/p5-Test-Deep /freebsd/ports/devel/p5-Test-Exception /freebsd/ports/devel/p5-Test-NoWarnings /freebsd/ports/devel/p5-Test-Warn /freebsd/ports/devel/p5-subversion /freebsd/ports/devel/pcre /freebsd/ports/devel/pkgconf /freebsd/ports/devel/py-Jinja2 /freebsd/ports/devel/py-apipkg /freebsd/ports/devel/py-asn1crypto /freebsd/ports/devel/py-atomicwrites /freebsd/ports/devel/py-attrs /freebsd/ports/devel/py-babel /freebsd/ports/devel/py-cffi /freebsd/ports/devel/py-click /freebsd/ports/devel/py-coverage /freebsd/ports/devel/py-dateutil /freebsd/ports/devel/py-dbus /freebsd/ports/devel/py-entrypoints /freebsd/ports/devel/py-extras /freebsd/ports/devel/py-flaky /freebsd/ports/devel/py-freezegun /freebsd/ports/devel/py-fs /freebsd/ports/devel/py-genty /freebsd/ports/devel/py-hypothesis /freebsd/ports/devel/py-importlib-metadata /freebsd/ports/devel/py-incremental /freebsd/ports/devel/py-invoke /freebsd/ports/devel/py-iso8601 /freebsd/ports/devel/py-linecache2 /freebsd/ports/devel/py-mock /freebsd/ports/devel/py-more-itertools /freebsd/ports/devel/py-nose /freebsd/ports/devel/py-pbr /freebsd/ports/devel/py-pip /freebsd/ports/devel/py-pluggy /freebsd/ports/devel/py-pretend /freebsd/ports/devel/py-py /freebsd/ports/devel/py-pyasn1 /freebsd/ports/devel/py-pycparser /freebsd/ports/devel/py-pympler /freebsd/ports/devel/py-pytest /freebsd/ports/devel/py-pytest-capturelog /freebsd/ports/devel/py-pytest-cov /freebsd/ports/devel/py-pytest-forked /freebsd/ports/devel/py-pytest-mock /freebsd/ports/devel/py-pytest-rerunfailures /freebsd/ports/devel/py-pytest-runner /freebsd/ports/devel/py-pytest-timeout /freebsd/ports/devel/py-pytest-xdist /freebsd/ports/devel/py-python-mimeparse /freebsd/ports/devel/py-pytz /freebsd/ports/devel/py-readme_renderer /freebsd/ports/devel/py-scripttest /freebsd/ports/devel/py-semantic_version /freebsd/ports/devel/py-setuptools /freebsd/ports/devel/py-setuptools_scm /freebsd/ports/devel/py-simplejson /freebsd/ports/devel/py-six /freebsd/ports/devel/py-sortedcontainers 
/freebsd/ports/devel/py-testtools /freebsd/ports/devel/py-tox /freebsd/ports/devel/py-traceback2 /freebsd/ports/devel/py-twine /freebsd/ports/devel/py-unittest2 /freebsd/ports/devel/py-virtualenv /freebsd/ports/devel/py-wcwidth /freebsd/ports/devel/py-wheel /freebsd/ports/devel/py-yaml /freebsd/ports/devel/py-zipp /freebsd/ports/devel/py-zope.interface /freebsd/ports/devel/pydbus-common /freebsd/ports/devel/readline /freebsd/ports/devel/scons /freebsd/ports/devel/subversion /freebsd/ports/devel/xorg-macros /freebsd/ports/dns/libidn2 /freebsd/ports/dns/py-idna /freebsd/ports/emulators/tpm-emulator /freebsd/ports/ftp/curl /freebsd/ports/graphics/py-imagesize /freebsd/ports/lang/cython /freebsd/ports/lang/expect /freebsd/ports/lang/lua52 /freebsd/ports/lang/p5-Error /freebsd/ports/lang/perl5.30 /freebsd/ports/lang/python27 /freebsd/ports/lang/python37 /freebsd/ports/lang/tcl86 /freebsd/ports/math/gmp /freebsd/ports/misc/dejagnu /freebsd/ports/misc/getopt /freebsd/ports/misc/help2man /freebsd/ports/misc/py-pexpect /freebsd/ports/misc/py-tqdm /freebsd/ports/net/p5-IO-Socket-INET6 /freebsd/ports/net/p5-Socket6 /freebsd/ports/net/py-pysocks /freebsd/ports/net/py-urllib3 /freebsd/ports/ports-mgmt/pkg /freebsd/ports/print/indexinfo /freebsd/ports/print/libpaper /freebsd/ports/print/texinfo /freebsd/ports/security/ca_root_nss /freebsd/ports/security/gnupg /freebsd/ports/security/gnutls /freebsd/ports/security/libassuan /freebsd/ports/security/libgcrypt /freebsd/ports/security/libgpg-error /freebsd/ports/security/libksba /freebsd/ports/security/libsodium /freebsd/ports/security/libtasn1 /freebsd/ports/security/nettle /freebsd/ports/security/p11-kit /freebsd/ports/security/p5-Authen-SASL /freebsd/ports/security/p5-Digest-HMAC /freebsd/ports/security/p5-GSSAPI /freebsd/ports/security/p5-IO-Socket-SSL /freebsd/ports/security/p5-Net-SSLeay /freebsd/ports/security/pinentry /freebsd/ports/security/pinentry-tty /freebsd/ports/security/py-SecretStorage /freebsd/ports/security/py-bcrypt /freebsd/ports/security/py-certifi /freebsd/ports/security/py-cryptography /freebsd/ports/security/py-cryptography-vectors /freebsd/ports/security/py-keyring /freebsd/ports/security/py-keyrings.alt /freebsd/ports/security/py-openssl /freebsd/ports/security/py-paramiko /freebsd/ports/security/py-pycrypto /freebsd/ports/security/py-pynacl /freebsd/ports/security/rhash /freebsd/ports/security/trousers /freebsd/ports/shells/bash /freebsd/ports/sysutils/py-execnet /freebsd/ports/sysutils/py-filelock /freebsd/ports/sysutils/py-pkginfo /freebsd/ports/sysutils/py-ptyprocess /freebsd/ports/textproc/asciidoc /freebsd/ports/textproc/docbook /freebsd/ports/textproc/docbook-sgml /freebsd/ports/textproc/docbook-xml /freebsd/ports/textproc/docbook-xsl /freebsd/ports/textproc/expat2 /freebsd/ports/textproc/html2text /freebsd/ports/textproc/iso8879 /freebsd/ports/textproc/libxml2 /freebsd/ports/textproc/libxslt /freebsd/ports/textproc/minixmlto /freebsd/ports/textproc/p5-Unicode-EastAsianWidth /freebsd/ports/textproc/py-MarkupSafe /freebsd/ports/textproc/py-alabaster /freebsd/ports/textproc/py-chardet /freebsd/ports/textproc/py-docutils /freebsd/ports/textproc/py-pygments /freebsd/ports/textproc/py-pystemmer /freebsd/ports/textproc/py-snowballstemmer /freebsd/ports/textproc/py-sphinx /freebsd/ports/textproc/py-sphinx_rtd_theme /freebsd/ports/textproc/py-sphinxcontrib-websupport /freebsd/ports/textproc/py-toml /freebsd/ports/textproc/py-towncrier /freebsd/ports/textproc/sdocbook-xml /freebsd/ports/textproc/utf8proc 
/freebsd/ports/textproc/xmlcatmgr /freebsd/ports/textproc/xmlcharent /freebsd/ports/textproc/xmlto /freebsd/ports/www/libnghttp2 /freebsd/ports/www/p5-CGI /freebsd/ports/www/p5-HTML-Parser /freebsd/ports/www/p5-HTML-Tagset /freebsd/ports/www/p5-Mozilla-CA /freebsd/ports/www/py-bleach /freebsd/ports/www/py-django111 /freebsd/ports/www/py-html5lib /freebsd/ports/www/py-requests /freebsd/ports/www/py-requests-toolbelt /freebsd/ports/www/py-tornado /freebsd/ports/www/serf /freebsd/ports/www/w3m /freebsd/ports/x11/libICE /freebsd/ports/x11/libSM /freebsd/ports/x11/libX11 /freebsd/ports/x11/libXau /freebsd/ports/x11/libXdmcp /freebsd/ports/x11/libxcb /freebsd/ports/x11/xcb-proto /freebsd/ports/x11/xorgproto /freebsd/ports/x11/xtrans

--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From guillaume at varnish-software.com  Thu May 14 16:00:50 2020
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Thu, 14 May 2020 09:00:50 -0700
Subject: Let's look at another build system
In-Reply-To: <31910.1589467732@critter.freebsd.dk>
References: <31910.1589467732@critter.freebsd.dk>
Message-ID:

> One thing that would worry me a bit about meson is the sheer number
> of dependencies involved.

I'm going to vote for "that FreeBSD port is absolutely bonkers and
probably deserves the behind-the-barn treatment", especially considering
the Debian dependency graph: https://ibb.co/YdXMHRZ

I'm not fully against rolling out our own system, but for the love of
all that is holy, let's not base it on Make.

--
Guillaume Quintard

On Thu, May 14, 2020 at 7:48 AM Poul-Henning Kamp wrote:

> --------
> In message gjsFy3Zy_U-wcbeGD63HsVkZ+3Zw at mail.gmail.com>
> , Guillaume Quintard writes:
>
> >We have been ranting at autotools for years now, and it's been a couple of
> >years since we tried to get rid of them, so let's try again, this time
> >with the "new" kid on the block: meson (https://mesonbuild.com/)
>
> I'm all for improving our build system, but it has to be an improvement.
>
> One thing that would worry me a bit about meson is the sheer number
> of dependencies involved.
>
> To build it from scratch seems to involve a total of 238 ports on FreeBSD
> (list below), including both python2, python3, tcl86, mercurial and as far
> as I can tell anything anybody ever spotted on github.
>
> Meson is probably not directly to blame for all 238 ports, but blame
> or not: they sit at the top of a very, *very* tall tower.
>
> Considering how seldom we do nontrivial changes to our build instructions,
> I would be far more tempted towards a python script which produces a bunch
> of powerful but tedious Makefiles with full dependency tracking.
>
> Poul-Henning
>
> [238-port dependency list snipped; see the list in the previous message]
>
> --
> Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
> phk at FreeBSD.ORG | TCP/IP since RFC 956
> FreeBSD committer | BSD since 4.3-tahoe
> Never attribute to malice what can adequately be explained by incompetence.

From phk at phk.freebsd.dk  Thu May 14 16:25:27 2020
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Thu, 14 May 2020 16:25:27 +0000
Subject: Let's look at another build system
In-Reply-To:
References: <31910.1589467732@critter.freebsd.dk>
Message-ID: <32115.1589473527@critter.freebsd.dk>

--------
In message , Guillaume Quintard writes:

>I'm going to vote for "that FreeBSD port is absolutely bonkers and
>probably deserves the behind-the-barn treatment", especially considering
>the Debian dependency graph: https://ibb.co/YdXMHRZ

The FreeBSD graph is a fully recursive "all dependencies" graph, both for
building and running; the Debian graph seems to be truncated somehow,
because otherwise that would be a very handicapped py3 instance.

And as I said: meson is probably not to blame, I just wanted to illustrate
my concern.

>I'm not fully against rolling out our own system, but for the love of
>all that is holy, let's not base it on Make.

If all you use make(1) for is running processes, it's as good as anything
(except maybe jam(1)). The hard part about make(1) is getting _all_
your dependencies recorded _correctly_ in the makefile.

--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From guillaume at varnish-software.com  Thu May 14 23:19:42 2020
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Thu, 14 May 2020 16:19:42 -0700
Subject: Let's look at another build system
In-Reply-To: <32115.1589473527@critter.freebsd.dk>
References: <31910.1589467732@critter.freebsd.dk> <32115.1589473527@critter.freebsd.dk>
Message-ID:

> The FreeBSD graph is a fully recursive "all dependencies" graph, both for
> building and running; the Debian graph seems to be truncated somehow,
> because otherwise that would be a very handicapped py3 instance.

My bad, I was only considering the run dependencies; the build+run
dependency list is bigger.
BUT! I dug a bit, and the insanity is really only introduced by the test
dependency on devel/py-pytest-xdist; snip it and meson+ninja only need 33
ports, which isn't that crazy when compared with the 20 items in the
automake+autotools+autoconf+libtool case. Now, I have no idea if that
detail matters or not, as I'm not a FreeBSD user, but hopefully that
number isn't as shocking anymore.

> If all you use make(1) for is running processes, it's as good as anything
> (except maybe jam(1)).  The hard part about make(1) is getting _all_
> your dependencies recorded _correctly_ in the makefile.

That's the biggest footgun of make, because it doesn't know about recipes
producing multiple files (and we do love those), so you have to bend over
backwards to teach it how to do that correctly. Add to this the dumb
recursive mode, its lack of dependency tracking on the build commands
themselves, and all the dark magic it *tries* to accomplish to handle C
compilation correctly (I'd rather have something truly dumb that doesn't
get in the way) and you get a nice recipe (pun intended) for disaster.

I don't know jam and so have no problem with it. Since you talked about
the generator+builder pattern, I feel like I need to link to this very
recent post: http://neugierig.org/software/blog/2020/05/ninja.html

It's from the ninja creator, where he writes about it a bit (and also
apologizes for the terrible name).

--
Guillaume Quintard

On Thu, May 14, 2020 at 9:25 AM Poul-Henning Kamp wrote:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
Don't get me started: I know perfectly well what the problems are,
which is why I only said I was "tempted" :-)

>I don't know jam [...]

Jam(1) was make(1) done right. Unfortunately there was absolutely no way
to migrate, short of starting from scratch, not even for highly stylized
Makefiles like the FreeBSD tree, so it never caught on and is sadly no
longer of relevance.

But as I said: By all means let's look at this.

--
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG      | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From sid at arista.com Sun May 17 06:42:37 2020
From: sid at arista.com (Sidhartha Agrawal)
Date: Sat, 16 May 2020 23:42:37 -0700
Subject: Fwd: Arista Networks: Caching the BODY of a POST request for subsequent GETs

Hi Varnish Community,

I am a software developer at Arista Networks in our Tools and
Infrastructure Group. We use Varnish (6.3.1) to cache our build objects
such as RPMs and images, and Varnish is an integral part of our internal
build system.

Some of these objects are of the order of ~500 MB, so a GET of multiple
such objects can put a significant load on the backend (OpenSwift). Since
there is adequate temporal locality (within a day) between the POST of a
large object and the subsequent GET, we would like to cache the object
when the POST happens, so that the subsequent GET is guaranteed to find
it in the cache (unless evicted or expired).

As I understand it, Varnish bypasses the cache for POST and passes it
straight to the backend. We would like to change this behavior so that a
POST also adds an entry for the object in the cache if the write to the
backend was successful.

A cursory Google search for "Caching POST requests on Varnish" led me to
this article. Though I do not think this is exactly what we need: the
article talks about avoiding a write to the backend if another POST
request is made with the same body. That is different from what we are
trying to do.

I would appreciate any insights you can provide into:

- What changes do I need to make to the VCL to achieve our desired behavior
- Comments on whether this is a bad idea for some reason

I have been following the VCL state machine from this website.
I appreciate the help.

-Sid

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guillaume at varnish-software.com Sun May 17 14:47:24 2020
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Sun, 17 May 2020 07:47:24 -0700
Subject: Arista Networks: Caching the BODY of a POST request for subsequent GETs

Hi Sidhartha,

I'm not sure why the tutorial you linked to isn't adequate. Do you need
the POST to also create an object in cache?
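For context, the pattern in that article boils down to buffering the
request body, mixing it into the hash, and letting the POST itself
populate the cache. Roughly this (an untested sketch assuming vmod
bodyaccess from varnish-modules; the size cap and the X-Method header
are illustrative):

    import std;
    import bodyaccess;

    sub vcl_recv {
        if (req.method == "POST") {
            # Remember the method: a miss would otherwise turn the
            # backend fetch into a GET.
            set req.http.X-Method = req.method;
            # Buffer the body so it can be hashed; fall back to pass
            # if it does not fit.
            if (!std.cache_req_body(10MB)) {
                return (pass);
            }
            return (hash);
        }
    }

    sub vcl_hash {
        if (req.http.X-Method == "POST") {
            # Make the request body part of the cache key.
            bodyaccess.hash_req_body();
        }
    }

    sub vcl_backend_fetch {
        if (bereq.http.X-Method) {
            # Replay the original method (and buffered body) to the backend.
            set bereq.method = bereq.http.X-Method;
        }
    }

The response then gets cached under a key that includes the body, so
only an identical POST will hit it.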
Kind regards,

--
Guillaume Quintard

On Sat, May 16, 2020 at 11:44 PM Sidhartha Agrawal wrote:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sid at arista.com Sun May 17 18:10:14 2020
From: sid at arista.com (Sidhartha Agrawal)
Date: Sun, 17 May 2020 11:10:14 -0700
Subject: Arista Networks: Caching the BODY of a POST request for subsequent GETs

Hi Guillaume,

Thank you for responding. Yes, we want the POST request to also create an
object in the cache, i.e. cache the body of the POST request. This way
the subsequent GET can find the object in the cache.

From what I could gather (and I could be wrong here), the tutorial was
using the body of the POST request to compute the HASH, but was not
actually caching the BODY of the POST request: it was caching the
response to the POST so that a subsequent POST would not hit the backend.

Kind regards,
-Sid

On Sun, May 17, 2020 at 7:47 AM Guillaume Quintard
<guillaume at varnish-software.com> wrote:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From reza at varnish-software.com Mon May 18 13:06:21 2020
From: reza at varnish-software.com (Reza Naghibi)
Date: Mon, 18 May 2020 09:06:21 -0400
Subject: Arista Networks: Caching the BODY of a POST request for subsequent GETs

I think what you are looking for is write-through caching behavior. This
is not possible with open source Varnish. However, it's fully supported
in the enterprise version.

---
Reza Naghibi
VP of Technology
Varnish Software

On Sun, May 17, 2020 at 10:48 AM Guillaume Quintard
<guillaume at varnish-software.com> wrote:
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From info+varnish at shee.org Sat May 23 23:29:09 2020
From: info+varnish at shee.org (info+varnish at shee.org)
Date: Sun, 24 May 2020 01:29:09 +0200
Subject: Transparent hugepages on RHEL8

The platform notes at

https://varnish-cache.org/docs/trunk/installation/platformnotes.html

have a comment about "Transparent hugepages".

Does this still apply to EL8? On an EL8 system:

# find /sys/ | grep transparent_hugepage/enabled
/sys/kernel/mm/transparent_hugepage/enabled
# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
# uname -r
4.18.0-147.8.1.el8_1.x86_64
# rpm -q varnish
varnish-6.0.2-1.module_el8.0.0+19+b2cdb875

Any suggestions would be greatly appreciated! Thanks!

From geoff at uplex.de Sun May 24 08:09:57 2020
From: geoff at uplex.de (Geoff Simmons)
Date: Sun, 24 May 2020 10:09:57 +0200
Subject: Transparent hugepages on RHEL8

On 5/24/20 01:29, info+varnish at shee.org wrote:
> [...]
> Does this still apply to EL8?

That's a good heads-up that those docs need to be updated -- they refer
to RHEL6 and Linux kernel 3.2. If I'm not mistaken, enabling THP by
default was fairly new at the time, but it's still the default and that's
old news now, as your settings confirmed (just checked that it's also the
default on my Debian stretch laptop).

The issue is not really the distro or kernel version, but the use of the
THP feature, and it's still a problem, probably always will be. AFAICT
THP does nothing good for Varnish. It's harmless if you're lucky, but it
can be very disruptive.

I haven't tried it with RHEL8. The doc says that it "is known to cause
sporadic crashes of Varnish", but while I haven't seen crashes, on RHEL7
I've seen that the memory usage of the cache process bloats up
enormously, orders of magnitude larger than the actual size of the cache
and anything else in Varnish that occupies memory. After disabling THP
for Varnish (as detailed below), I saw memory usage become much smaller,
more like what you'd expect from the cache size and other overhead.

There's an explanation for why THP causes that, but suffice it to say
that THP creates trouble for a variety of apps that manage a lot of
memory. MongoDB, Oracle, redis and many other projects advise you to
turn it off. THP is inevitably a problem for the jemalloc memory
allocator, which is invariably used with Varnish.

You can turn off THP system-wide with:

$ echo never > /sys/kernel/mm/transparent_hugepage/enabled

Or, I believe this may work in /etc/grub.conf:

transparent_hugepage=never

Since you're on RHEL8, you also have the option of disabling THP for
jemalloc when used by Varnish, so you don't have to turn it off for
everything if you prefer to leave on the default for other processes on
your system. The option is: thp:never

One way to do that is to start Varnish with that setting in MALLOC_CONF
in its environment:

$ MALLOC_CONF=thp:never /usr/sbin/varnishd -a :80 ...
Or you could set thp:never in /etc/malloc.conf, in which case the setting
holds for any app that uses jemalloc. The jemalloc man page has all the
details.

This is possible in RHEL8 because el8 supports versions of jemalloc that
have the option. Earlier versions of jemalloc didn't have it, in
particular 3.6.0, on which the world was stuck for a very long time, and
which was the latest available in el7.

For readers who are using el7/RHEL7 -- I patched up an RPM that installs
more recent jemalloc on el7. Git repo here:

https://code.uplex.de/uplex-varnish/libjemalloc2-el7-rpm
(Still no README there, my bad.)

There's an el7 package repo with the RPM at https://pkg.uplex.de/:

$ yum-config-manager --add-repo https://pkg.uplex.de/rpm/7/uplex-varnish/x86_64/

Newer versions of jemalloc have a different SO name, libjemalloc.so.2,
whereas most software built for el7 that links to jemalloc expects
libjemalloc.so.1. That includes the el7 Varnish RPMs from packagecloud.
So if you have that in your Varnish binary (if ldd points to
libjemalloc.so.1), and you want to use the el7 RPM for newer jemalloc,
you can do this:

$ patchelf --replace-needed libjemalloc.so.1 libjemalloc.so.2 /path/to/varnishd

Then check ldd, you should see it now pointing to libjemalloc.so.2. I
have that working in production, which made the thp:never setting
possible, and that got rid of the memory bloat.

Sorry for the long-winded response, I had a big fight with this problem a
few months ago, and am still a little miffed at transparent hugepages.

HTH,
Geoff

PS while I'm on the subject, shout-out to Ingvar Hagelund, who does the
epel/fedora packaging for Varnish, jemalloc, and a variety of other
things. He did the hard work packaging jemalloc, I just changed a version
number.

--
** * * UPLEX - Nils Goroll Systemoptimierung

Scheffelstraße 32
22301 Hamburg

Tel +49 40 2880 5731
Mob +49 176 636 90917
Fax +49 40 42949753

http://uplex.de

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From AlexWakefield at fastmail.com.au Mon May 25 06:26:17 2020
From: AlexWakefield at fastmail.com.au (Alex Wakefield)
Date: Mon, 25 May 2020 16:26:17 +1000
Subject: Varnish over memory allocation
Message-ID: <70d69a29-7c65-43bf-a0d8-2579e9da3916@www.fastmail.com>

Hi all,

I currently have multiple Varnish 6.0.5 (varnish-6.0.5 revision
3065ccaacc4bb537fb976a524bd808db42c5fe40) instances deployed to
production, with varnish-modules (0.15.0) installed, that consistently go
over their memory allocation by several gigabytes.

These instances live on 32GB VMs (Ubuntu 18.04) but are tuned to only use
24GB of memory, to allow enough overhead for fragmentation and other
processes on the machine. If left alone they grow until the OOM killer
kicks in and kills them off. Currently they're sitting at 30GB of memory
used with ~1,942,200 objects in cache according to MAIN.n_object.

In terms of traffic, we serve standard HTTP traffic for several quite
large websites, but images and other binary objects are not stored in
cache.

I understand that per 100,000 objects there is ~100MB of
fragmentation/overhead, so is this amount of memory over-usage to be
expected? Is there any tuning I can do to try to reduce this overhead, or
is the answer to reduce the number of objects in memory?

We purge assets via the XKey tags on our pages, plus some standard bans
via purge. All our bans are lurker-friendly, though, and we hover around
30-50 active bans at any time. XKey purges are a combination of soft and
hard purges.
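For reference, the invalidation endpoint is more or less the example from
the xkey README (a simplified sketch; the soft-purge and xkey-purge
header names are illustrative, and the ACL guard is omitted here):

    import xkey;

    sub vcl_recv {
        if (req.method == "PURGE") {
            if (req.http.soft-purge) {
                # Soft purge: objects go stale but can still be served
                # within grace while they are refreshed.
                set req.http.n-gone = xkey.softpurge(req.http.xkey-purge);
            } else {
                # Hard purge: objects are dropped immediately.
                set req.http.n-gone = xkey.purge(req.http.xkey-purge);
            }
            return (synth(200, "Invalidated " + req.http.n-gone + " objects"));
        }
    }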
Any advice would be fantastic! Let me know if there is any further
information I can provide.

Regards,
Alex

From cosimo at streppone.it Mon May 25 08:03:29 2020
From: cosimo at streppone.it (Cosimo Streppone)
Date: Mon, 25 May 2020 10:03:29 +0200
Subject: Varnish over memory allocation

On Mon, May 25, 2020, at 08:26, Alex Wakefield wrote:
>
> These instances live on 32GB VMs (Ubuntu 18.04) but are tuned to only
> use 24GB of memory, to allow enough overhead for fragmentation and other
> processes on the machine. If left alone they grow until the OOM killer
> kicks in and kills them off.

What storage module are you using?

--
Cosimo

From AlexWakefield at fastmail.com.au Mon May 25 08:06:15 2020
From: AlexWakefield at fastmail.com.au (Alex Wakefield)
Date: Mon, 25 May 2020 18:06:15 +1000
Subject: Varnish over memory allocation

Whoops, knew I forgot to specify something!

We're using malloc. The command line switch is specifically
`-s malloc,24GB`. I think this is what you mean?

Cheers,
Alex

On Mon, 25 May 2020, at 6:03 PM, Cosimo Streppone wrote:
> [...]
> What storage module are you using?

From cosimo at streppone.it Mon May 25 08:20:23 2020
From: cosimo at streppone.it (Cosimo Streppone)
Date: Mon, 25 May 2020 10:20:23 +0200
Subject: Varnish over memory allocation
Message-ID: <81087ba9-0106-4686-a5be-5258a84ef11b@www.fastmail.com>

On Mon, May 25, 2020, at 10:06, Alex Wakefield wrote:
>
> We're using malloc. The command line switch is specifically
> `-s malloc,24GB`. I think this is what you mean?

Yes. Unfortunately (or rather fortunately) I never had problems like
those with malloc storage.

--
Cosimo

From info+varnish at shee.org Mon May 25 09:26:40 2020
From: info+varnish at shee.org (info+varnish at shee.org)
Date: Mon, 25 May 2020 11:26:40 +0200
Subject: Transparent hugepages on RHEL8
Message-ID: <7f2c17f5b8c1a63e2c436c64a3b6c85a0f23b9b0.camel@shee.org>

On Sunday, 24.05.2020 at 10:09 +0200, Geoff Simmons wrote:
> [...]
Thanks, Geoff, for your explanatory answer! It gives some insights that
motivate going further than just disabling a system option.
I will take a deeper dive into the current context (EL8), check what is
still relevant and what is not, and then come back ...

--
Leon

From dridi at varni.sh Mon May 25 10:02:03 2020
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Mon, 25 May 2020 10:02:03 +0000
Subject: Varnish over memory allocation

On Mon, May 25, 2020 at 8:07 AM Alex Wakefield wrote:
>
> Whoops, knew I forgot to specify something!
>
> We're using malloc. The command line switch is specifically
> `-s malloc,24GB`.

The -s option only specifies the storage size (HTTP responses with some
metadata). The rest of Varnish's memory footprint goes on top: things
like loaded VCLs, ongoing VCL transactions, all kinds of data structures.
VMODs like XKey may add their own footprint on top, the list goes on.

Even on the storage side, if you only declare a malloc storage like you
did, you will get an unlimited Transient storage by default for
short-lived or uncacheable responses.

The only way today to tell a Varnish instance to limit itself to 24GB
(and still on a best-effort basis) is with Varnish Enterprise's memory
governor.

Dridi

From dridi at varni.sh Mon May 25 10:33:21 2020
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Mon, 25 May 2020 10:33:21 +0000
Subject: Transparent hugepages on RHEL8

On Sun, May 24, 2020 at 8:12 AM Geoff Simmons wrote:
> [...]
> THP is inevitably a problem for the jemalloc memory
> allocator, which is invariably used with Varnish.

I just wanted to react to the "invariably" word here. This is not
accurate, it should read "by default" instead.

See ./configure --help:

> --with-jemalloc use jemalloc memory allocator. Default is yes on
> Linux, no elsewhere

And considering that jemalloc is not available in el8, but only in epel8,
I suspect Red Hat ships a varnish package that uses glibc with no custom
allocator.
Cheers,
Dridi

From info+varnish at shee.org Mon May 25 12:28:51 2020
From: info+varnish at shee.org (info+varnish at shee.org)
Date: Mon, 25 May 2020 14:28:51 +0200
Subject: Transparent hugepages on RHEL8
Message-ID: <90e18294e340251ac2032a310ffc82ff132ab758.camel@shee.org>

On Monday, 25.05.2020 at 10:33 +0000, Dridi Boukelmoune wrote:
> [...]
> And considering that jemalloc is not available in el8, but only in
> epel8, I suspect Red Hat ships a varnish package that uses glibc with
> no custom allocator.

Seems so:

# strings /usr/sbin/varnishd | grep -i jemalloc
(no output)

and comparing with redis, which is also in the distribution base repo:

# strings /usr/bin/redis-server | grep -i jemalloc | head -4
je_jemalloc_prefork
je_jemalloc_postfork_child
je_jemalloc_postfork_parent
jemalloc-5.1.0

Does that imply that a varnish built this way is more resilient with
hugepages enabled in this context?

--
Leon

From AlexWakefield at fastmail.com.au Mon May 25 12:38:10 2020
From: AlexWakefield at fastmail.com.au (Alex Wakefield)
Date: Mon, 25 May 2020 22:38:10 +1000
Subject: Varnish over memory allocation

Hey Dridi,

Thanks for the reply. I originally thought that it might've been
transient storage as well; however, checking that shows that only 287MB
of storage has been allocated from it.
It's allocated and released 1.06TB though; is that perhaps the cause of
the issue?

SMA.s0.g_bytes shows the correct size of 23.99GB allocated, which is what
I expect; however, a pmap of the varnish process shows 29977920K used,
6GB over what was set.

Cheers,
Alex

On Mon, 25 May 2020, at 8:02 PM, Dridi Boukelmoune wrote:
> [...]
> Even on the storage side, if you only declare a malloc storage like
> you did, you will get an unlimited Transient storage by default for
> short-lived or uncacheable responses.

From info+varnish at shee.org Mon May 25 13:06:05 2020
From: info+varnish at shee.org (info+varnish at shee.org)
Date: Mon, 25 May 2020 15:06:05 +0200
Subject: Varnish over memory allocation
Message-ID: <6d77854b6e45fb9321bbc186ef608c1413b035a5.camel@shee.org>

On Monday, 25.05.2020 at 22:38 +1000, Alex Wakefield wrote:
> [...]
> SMA.s0.g_bytes shows the correct size of 23.99GB allocated, which is
> what I expect; however, a pmap of the varnish process shows 29977920K
> used, 6GB over what was set.

What's your /proc/meminfo output?

--
Leon

From vlad.rusu at lola.tech Mon May 25 13:17:25 2020
From: vlad.rusu at lola.tech (Vlad Rusu)
Date: Mon, 25 May 2020 16:17:25 +0300
Subject: vmod_header (varnish-modules) on varnish cache 4.1
Message-ID: <0C817AA6-1446-4834-8D16-F8696FC89C8C@lola.tech>

Hi all,

A question for the maintainers of https://github.com/varnish/varnish-modules

I need to use vmod_header to get control over the Set-Cookie response
headers. Feels like there is no other way in Varnish Cache.

Looking at the branches I see version 6 only. Any reason to believe this
won't work with varnish cache version 4.1.x?

I know... I should upgrade. But till then...

Appreciate your support.

Thanks,

--
Vlad Rusu
Cell: +40758066019
Lola Tech | lola.tech

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From AlexWakefield at fastmail.com.au Mon May 25 13:20:03 2020
From: AlexWakefield at fastmail.com.au (Alex Wakefield)
Date: Mon, 25 May 2020 23:20:03 +1000
Subject: Varnish over memory allocation

Hey Leon,

$ cat /proc/meminfo
MemTotal:       32940300 kB
MemFree:          309836 kB
MemAvailable:    2083696 kB
Buffers:          127768 kB
Cached:          1999212 kB
SwapCached:            0 kB
Active:         30551008 kB
Inactive:        1561588 kB
Active(anon):   30067316 kB
Inactive(anon):      304 kB
Active(file):     483692 kB
Inactive(file):  1561284 kB
Unevictable:       82180 kB
Mlocked:           82180 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:              3856 kB
Writeback:             0 kB
AnonPages:      30066172 kB
Mapped:           243516 kB
Shmem:             10936 kB
Slab:             272788 kB
SReclaimable:     186140 kB
SUnreclaim:        86648 kB
KernelStack:        6960 kB
PageTables:        66996 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    16470148 kB
Committed_AS:   31298244 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:      890860 kB
DirectMap2M:    31614976 kB
DirectMap1G:     1048576 kB

Regards,
Alex

On Mon, 25 May 2020, at 11:06 PM, info+varnish at shee.org wrote:
> [...]
> What's your /proc/meminfo output?

From dridi at varni.sh Mon May 25 13:25:45 2020
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Mon, 25 May 2020 13:25:45 +0000
Subject: Transparent hugepages on RHEL8

> Does that imply that a varnish built this way is more resilient with
> hugepages enabled in this context?

I have no idea, we didn't package the el8 varnish DNF module!

From dridi at varni.sh Mon May 25 13:34:36 2020
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Mon, 25 May 2020 13:34:36 +0000
Subject: vmod_header (varnish-modules) on varnish cache 4.1

On Mon, May 25, 2020 at 1:19 PM Vlad Rusu wrote:
> [...]
> Any reason to believe this won't work with varnish cache version 4.1.x?

Hi Vlad,

Grab the 0.15.0 release from the download site:

https://download.varnish-software.com/varnish-modules/

It should work with Varnish 4.1 (otherwise try an older release).
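Usage for the Set-Cookie case would look roughly like this (untested
sketch; the regex and the cookie value are only examples):

    import header;

    sub vcl_backend_response {
        # Remove only the Set-Cookie headers matching the regex; a plain
        # unset beresp.http.Set-Cookie would drop all of them at once.
        header.remove(beresp.http.Set-Cookie, "^_tracking=");

        # Append an extra cookie without clobbering the existing ones.
        header.append(beresp.http.Set-Cookie, "served-by=varnish; Path=/");
    }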
Dridi

From vlad.rusu at lola.tech Mon May 25 13:36:36 2020
From: vlad.rusu at lola.tech (Vlad Rusu)
Date: Mon, 25 May 2020 16:36:36 +0300
Subject: vmod_header (varnish-modules) on varnish cache 4.1

Thank you Dridi

--
Vlad Rusu
Cell: +40758066019
Lola Tech | lola.tech

> On 25 May 2020, at 16:34, Dridi Boukelmoune wrote:
> [...]
> It should work with Varnish 4.1 (otherwise try an older release).

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 