<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>Hello!</p>
<p>Many thanks for your answers! <span class="moz-smiley-s1"><span>:-)</span></span></p>
<p>@Dridi:</p>
<p>You are right, writing a specific VMOD would be the ideal
solution, but unfortunately I am not qualified for the job. ^^<br>
</p>
<p>By the way, I would like to thank all the people who are working
hard to enhance and maintain Varnish.</p>
<p>This software is absolutely awesome!<br>
</p>
<p>@Xavier:</p>
<p>Before considering HAProxy, I looked into whether quick and dirty
hacks were possible with iptables (to limit simultaneous connections)
and tc (to shape the traffic).</p>
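<p>For the record, this is roughly the kind of hack I had in mind. It
is only an untested sketch: the interface name, port, connection limit
and rate are illustrative, and connlimit simply rejects extra
connections instead of queuing them, which is one reason I dropped the
idea.</p>
<pre>
# Rough idea: reject new outgoing connections to the tile server once
# two are already open (grouped by destination address).
iptables -A OUTPUT -p tcp --dport 80 -d a.tile.openstreetmap.org \
         -m connlimit --connlimit-above 2 --connlimit-daddr -j REJECT

# Rough idea: shape all outgoing traffic on eth0 with a token bucket filter.
tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms
</pre>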
<p>But after a quick read of the HAProxy documentation, it became
clear that, as you said, it is a reliable solution.</p>
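<p>In case it helps someone who finds this thread later, this is how I
understood the setup from the article. It is a minimal, untested
sketch: the 127.0.0.1:8080 listen address, the timeouts and the
maxconn value are only examples.</p>
<pre>
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    # Requests above maxconn wait in HAProxy's queue for up to this
    # long instead of being refused.
    timeout queue   30s

frontend osm_tiles_in
    bind 127.0.0.1:8080
    default_backend osm_tiles

backend osm_tiles
    # At most two concurrent connections to the tile server; extra
    # requests are queued rather than rejected.
    server a_tile a.tile.openstreetmap.org:80 maxconn 2
</pre>
<p>Varnish would then use 127.0.0.1:8080 as its backend and would no
longer need ".max_connections" itself, since HAProxy does the queuing.</p>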
<p>So, thanks for the hint!</p>
<p>Have a great day!<br>
</p>
<div class="moz-cite-prefix">Le 16/06/2020 à 00:17, Xavier Leune a
écrit :<br>
</div>
<blockquote type="cite"
cite="mid:CAMJmMZPsxZS=nNSW70yiCkWGPsVGxKDCX9FW4PON4TccYcXMug@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">
<div dir="ltr">Hello,
<div><br>
<div>@tranxene50 while implementing a vmod can be very
challenging, using haproxy can be a great solution here.
Please refer to this blog post: <a
href="https://www.haproxy.com/fr/blog/four-examples-of-haproxy-rate-limiting/"
moz-do-not-send="true">https://www.haproxy.com/fr/blog/four-examples-of-haproxy-rate-limiting/</a> (or
in French ;) <a
href="https://www.haproxy.com/fr/blog/four-examples-of-haproxy-rate-limiting/"
moz-do-not-send="true">https://www.haproxy.com/fr/blog/four-examples-of-haproxy-rate-limiting/</a> ).
The very first step is about setting a maximum number of
connections and queuing. Using haproxy as your backend would
require little engineering and minimal overhead.</div>
</div>
</div>
<div><br>
</div>
<div>Regards,</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">Le lun. 15 juin 2020
à 20:02, Dridi Boukelmoune <a class="moz-txt-link-rfc2396E" href="mailto:dridi@varni.sh"><dridi@varni.sh></a> a écrit :<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex"><br>
Good evening,<br>
<br>
Unfortunately we don't have any sort of queuing on the
backend side,<br>
so besides implementing your own backend transport from
scratch in a<br>
VMOD there is currently no solution.<br>
<br>
Regards,<br>
Dridi<br>
<br>
On Sun, Jun 14, 2020 at 2:32 AM tranxene50<br>
<<a href="mailto:tranxene50@openvz-diff-backups.fr"
target="_blank" moz-do-not-send="true">tranxene50@openvz-diff-backups.fr</a>>
wrote:<br>
><br>
> Hello!<br>
><br>
> Please forgive my bad English, I live in France.<br>
><br>
> Summary: how to cache, with Varnish, OpenStreetMap PNG
images without overloading OSM tile servers?<br>
><br>
> The question seems related to Varnish backends and
".max_connections" parameter.<br>
><br>
> As far as I know, if ".max_connections" is reached for a
backend, Varnish sends 503 HTTP errors.<br>
><br>
> I understand the logic, but would it be possible to
queue these incoming requests and wait until the selected
backend is really available?<br>
><br>
> backend a_tile {<br>
> .host = "<a
href="http://a.tile.openstreetmap.org" rel="noreferrer"
target="_blank" moz-do-not-send="true">a.tile.openstreetmap.org</a>";<br>
> .port = "80";<br>
> .max_connections = 2;<br>
> }<br>
><br>
> If Varnish receives, let's say, 100 incoming requests in
1 second, how can I handle this "spike" without overloading
the backend?<br>
><br>
> All my Google searches were "dead ends", so I think the
question is poorly formulated.<br>
><br>
> Note 1: using [random|round_robin] directors could be
a temporary solution<br>
> Note 2: libvmod-dynamic is great but does not limit
simultaneous backend connections<br>
><br>
> Many thanks for your help!<br>
</blockquote>
</div>
</div>
</blockquote>
</body>
</html>