Varnish swallowing 4xx responses from POSTs
twbecker at gmail.com
Fri Sep 28 01:08:00 UTC 2018
Thanks for the response. I’m curious what specifically you believe to be in violation of the spec here. There’s plenty of ambiguity by my read, but the option to send a response at any point seems pretty clear. From RFC 7230, Section 6.5:
A client sending a message body SHOULD monitor the network connection
for an error response while it is transmitting the request. If the
client sees a response that indicates the server does not wish to
receive the message body and is closing the connection, the client
SHOULD immediately cease transmitting the body and close its side of
the connection.
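The client behavior the RFC describes can be sketched with plain sockets. This is a minimal, self-contained illustration, not Varnish's actual code: the toy server stands in for a backend that (like Jetty here) answers 400 before the body has been fully received, and the client polls the socket between body chunks so it can stop transmitting as soon as an early response shows up.

```python
import select
import socket
import threading

def early_reject_server(srv):
    # Toy stand-in for a backend that answers 400 before the request
    # body has been fully received (illustrative only).
    conn, _ = srv.accept()
    conn.recv(65536)  # read the headers (and possibly some body)
    conn.sendall(b"HTTP/1.1 400 Bad Request\r\n"
                 b"Content-Length: 0\r\nConnection: close\r\n\r\n")
    conn.close()

def post_streaming(host, port, chunks):
    # Stream the body chunk by chunk, watching the socket for an early
    # error response between chunks, per RFC 7230 section 6.5.
    body_len = sum(len(c) for c in chunks)
    s = socket.create_connection((host, port))
    s.sendall(("POST /upload HTTP/1.1\r\nHost: test\r\n"
               "Content-Length: %d\r\n\r\n" % body_len).encode())
    try:
        for chunk in chunks:
            readable, _, _ = select.select([s], [], [], 0.5)
            if readable:
                break  # early response arrived: stop transmitting
            s.sendall(chunk)
        return s.recv(65536).split(b"\r\n", 1)[0]  # status line
    except (BrokenPipeError, ConnectionResetError):
        return None  # peer reset mid-body before we saw a response
    finally:
        s.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
t = threading.Thread(target=early_reject_server, args=(srv,))
t.start()
status = post_streaming("127.0.0.1", srv.getsockname()[1], [b"x" * 1024] * 4)
t.join()
srv.close()
print(status)
```

A proxy that monitored the backend socket this way would see the 400 and could relay it instead of treating the aborted upload as a backend failure.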
I should point out that I started a thread on the Jetty mailing list on this same topic prior to this one, and they (perhaps unsurprisingly) defend this behavior. Greg Wilkins of the Jetty team asked me to relay this message in particular: https://www.eclipse.org/lists/jetty-users/msg08611.html
As I mentioned in that thread, I have no horse in this race and just want to solve my problem and perhaps spare others from this same issue, which was rather tough to debug.
> On Sep 27, 2018, at 9:40 AM, Dridi Boukelmoune <dridi at varni.sh> wrote:
> On Wed, Sep 26, 2018 at 2:44 AM Tommy Becker <twbecker at gmail.com> wrote:
>> We have an application that we front with Varnish 4.0.5. Recently, after an application upgrade in which we migrated from Jetty 9.2 to 9.4, we began noticing a lot of 503s being returned from Varnish on POST requests. We have an endpoint that takes a potentially large JSON payload and validates it as it is being read. What we have discovered is that if there is a problem with the content, we correctly return a 400 Bad Request from Jetty; notably, this can happen before the entire content is received. When this happens, Varnish continues to send the remainder of the data, despite having already seen the response.
>>
>> After our upgrade, Jetty's behavior is to send a TCP RST in this situation (since the data is unwanted by the application). Unfortunately, Varnish interprets the RST as a backend error and goes to vcl_backend_error, having never sent the original response returned from Jetty to the client. So instead of seeing a 400 Bad Request with a helpful message, clients simply get 503 Service Unavailable.
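The validate-as-you-read pattern described above can be sketched as a fail-fast check over a streamed body. Everything here is a hypothetical stand-in for the application's real validator (newline-delimited JSON records are assumed purely for illustration); the point is that the 400 decision is made before the rest of the input is ever read.

```python
import json

def validate_ndjson_stream(records):
    # Check each newline-delimited JSON record as it arrives and fail
    # fast on the first bad one, without reading the remaining input.
    # (Hypothetical stand-in for the application's real validator.)
    for n, record in enumerate(records, 1):
        try:
            json.loads(record)
        except ValueError:
            return 400, "invalid JSON on record %d" % n
    return 200, "ok"

def incoming_body():
    # Simulated request body; the validator must stop at record 2.
    yield b'{"ok": true}'
    yield b'not json'
    raise AssertionError("validator read past the bad record")

status, msg = validate_ndjson_stream(incoming_body())
print(status, msg)
```

Because the generator raises if it is consumed past the bad record, a 400 result demonstrates that the validator really did stop early, which is exactly the situation that triggers the early response and RST described above.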
> I'm pretty certain that this optimization does not comply with the
> HTTP/1 specs. Even though Jetty is trying to improve the latency by
> replying early, as far as Varnish is concerned it failed to send the
> full request and won't bother reading the response.
>> I found this issue, which seems similar: https://github.com/varnishcache/varnish-cache/issues/2332 Can someone help here? Is there any way to work around this behavior?
> This case is different because the backend is inspecting the body as
> it arrives, not rejecting the request based on the size alone. So I'm
> afraid there's no way to work around this behavior.
> As this is not a bug, we could introduce either a feature flag or a
> VCL variable turned off by default to tolerate an early reset of the
> request side of an HTTP/1 socket.
> You could join the next bugwash on Monday to bring this to the team's
> attention.