Any way to invoke code from VCL after a request has been serviced?

Guillaume Quintard guillaume at varnish-software.com
Mon May 15 23:19:50 CEST 2017


Thanks Ryan, I think I have a clearer picture now.

So, indeed, I think you only need a light vmod and a log analyzer.

If I understand correctly, you need a vmod call in vcl_recv to create a
trace if one doesn't exist yet, and another one in vcl_backend_fetch to
create a span. Then you can simply look at the log, and you'll have all
the metadata you need (including IPs, ports, timings and such).

You may want a specific function to set the component name per request,
but that can easily be done through std.log, so I wouldn't worry about
it at first.
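
Roughly, the VCL side could look like this. It's only a sketch: the
"tracer" vmod name, its functions, and the log tag are made up here,
just to illustrate the shape.

import std;
import tracer;  # hypothetical "light vmod"

sub vcl_recv {
  # create a trace for this request if one doesn't exist yet
  tracer.trace_request();
  # component name: just log it and let the log analyzer pick it up
  std.log("ot.component: front-cache");
}

sub vcl_backend_fetch {
  # create a span covering this backend fetch
  tracer.start_span();
}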

Am I completely off, or is that at least remotely sensible?


-- 
Guillaume Quintard

On Thu, May 11, 2017 at 10:20 PM, Ryan Burn <rnickb731 at gmail.com> wrote:

> Sure. The intention with OpenTracing is to provide a common API that
> can be used to instrument frameworks and libraries. What happens when,
> for example, a span is created or its context is injected into a
> request header isn't specified by the standard. It's up to the
> particular tracing implementation used (e.g. LightStep, Zipkin,
> Jaeger, etc.) to decide what specifically to do.
>
> So, if a user wants to enable Varnish for OpenTracing, I'd expect them
> to do something like the following in their VCL:
>
> ###
> # This is distributed as part of the varnish-opentracing project. It
> # imports a varnish module that exposes VCL functions to interact with
> # the C++ OpenTracing API
> #         https://github.com/opentracing/opentracing-cpp
> # and adds commands to the VCL built-in subroutines so that the
> # module's functions will be invoked when certain events occur.
> include "opentracing.vcl";
>
>
> # A user also needs to select a tracing implementation to use. This is
> # done by importing the implementation's module and initializing the
> # tracer in vcl_init. For example, if they're using LightStep they
> # might do something like this:
> import lightstep;
>
> sub vcl_init {
>   lightstep.collector_host("...");
>   lightstep.collector_port("...");
>   lightstep.init_tracer();
> }
>
>
> # Tracing is then explicitly turned on for a particular request with
> # logic in the vcl_recv subroutine. This means that a span will be
> # created for the request and any backend requests that result from
> # it. The trace's context will also be propagated into the backend
> # request headers, so that any tracing done by the backend server can
> # be linked to it.
> sub vcl_recv {
>   # This turns on tracing for all requests.
>   opentracing.trace_request();
> }
> ###
>
>
> Though all the pieces aren’t together, I have an example set up here
>
> https://github.com/rnburn/varnish-opentracing/blob/master/example/library/varnish/library.vcl
>
> To go back to the questions:
>
> - what data do you inject, and how do you create it?
> You would be injecting a list of key-value pairs that represent the
> context of the active span
> (https://github.com/opentracing/specification/blob/master/specification.md#spancontext).
> Specifically what that means is up to the tracing implementation, but
> it would look something like this:
>
>
> 'ot-tracer-traceid': 'f84de504f0287bbc'   // An ID used to identify the Trace
> 'ot-tracer-spanid':  'e34088878e7f0ce8'   // An ID used to identify the active span within the Trace
> 'ot-tracer-sampled': 'true'               // A sampling heuristic used by the Tracer
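>
> To make the injection concrete at the VCL level, it amounts to setting
> those headers on the backend request, something like the sketch below
> (the values above are reused purely for illustration; in practice the
> tracer generates them):
>
> sub vcl_backend_fetch {
>   set bereq.http.ot-tracer-traceid = "f84de504f0287bbc";
>   set bereq.http.ot-tracer-spanid  = "e34088878e7f0ce8";
>   set bereq.http.ot-tracer-sampled = "true";
> }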
>
> - what do you need to know about the req/resp/bereq/beresp?
> Knowing whether the request resulted in an error is pretty important
> to record. Other data usually added are the URI, the HTTP method, and
> the IP addresses of the client, the server, and the backend servers.
> Some of the guidelines on what to include are documented here:
>
> https://github.com/opentracing/specification/blob/master/semantic_conventions.md
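>
> Most of that is already visible from VCL, so it could be recorded with
> std.log and matched up by a log analyzer. A rough sketch (the "ot."
> tag prefix is made up here, not part of any spec):
>
> import std;
>
> sub vcl_deliver {
>   std.log("ot.http_method: " + req.method);
>   std.log("ot.http_url: " + req.url);
>   std.log("ot.peer_address: " + client.ip);
>   std.log("ot.status: " + resp.status);
> }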
>
>
> A concrete trace might make this clearer. This shows the breakdown of
> a trace representing the action of a user submitting a profile form:
>
> https://github.com/rnburn/nginx-opentracing/blob/master/doc/data/nginx-upload-trace5.png
>
> The server (described in more detail here
> https://github.com/rnburn/nginx-opentracing/blob/master/doc/Tutorial.md)
> uses nginx as a reverse proxy in front of Node.js servers that update
> a database and perform image manipulation. You can see spans created
> on the nginx side to track the duration of the request and how long
> it spends in various location blocks, as well as spans created by the
> Node.js server to represent the database activity and image
> manipulation. Injecting context into the request headers is what
> allows the spans to be linked together so that the entire trace can be
> formed.
>
> On Thu, May 11, 2017 at 9:38 PM, Guillaume Quintard
> <guillaume at varnish-software.com> wrote:
> > You can put anything in the priv field of the task, but the issue is
> > that you have to put that data in there, meaning a call to your vmod
> > from the VCL.
> >
> > The VUT.dispatch_f function isn't meant to be called from a vmod, and
> > I don't think you need to.
> >
> > Maybe it's time to take a step back. Can you fill us in on the whole
> > workflow, notably:
> > - what data do you inject, and how do you create it?
> > - what do you need to know about the req/resp/bereq/beresp?
> >
> > I almost have the feeling that this could be solved through pure
> > VCL + shell.
> >
> > --
> > Guillaume Quintard
> >
> > On Thu, May 11, 2017 at 5:33 PM, Ryan Burn <rnickb731 at gmail.com> wrote:
> >>
> >> From the free function, is there any way to get the status code or
> >> other properties of the request? I tried using VRT_r_obj_status with a
> >> stored reference to the context, but that doesn't seem to work since
> >> some of the request's resources have already been reclaimed:
> >>
> >>
> >> https://github.com/rnburn/varnish-opentracing/blob/master/opentracing/src/trace.cpp#L22
> >>
> >> Is there any other place something like the status would be stored?
> >>
> >>
> >> On Mon, May 8, 2017 at 11:13 AM, Reza Naghibi <reza at varnish-software.com> wrote:
> >> > Sorry, email misfire.
> >> >
> >> > You can do this in a VMOD via PRIV_TASK:
> >> >
> >> >
> >> > https://varnish-cache.org/docs/trunk/reference/vmod.html#private-pointers
> >> >
> >> > It might make sense to track this stuff in some kind of struct, in
> >> > which case, put it into *priv and then register a *free callback.
> >> > Otherwise, just put a dummy value into the *priv. *free will get
> >> > called after the request is done and you can put your custom code
> >> > in there.
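> >> >
> >> > On the VCL side that stays minimal. A sketch, assuming a
> >> > hypothetical vmod named "tracer" whose function is declared with a
> >> > PRIV_TASK argument, stores its span state in priv->priv, and sets
> >> > priv->free to a function that finishes the span:
> >> >
> >> > import tracer;
> >> >
> >> > sub vcl_recv {
> >> >   # The vmod receives a priv pointer scoped to this request
> >> >   # (task). Varnish calls the registered *free callback once the
> >> >   # request is done, which is where the span gets finished and
> >> >   # timed.
> >> >   tracer.start_span();
> >> > }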
> >> >
> >> > --
> >> > Reza Naghibi
> >> > Varnish Software
> >> >
> >> > On Mon, May 8, 2017 at 11:10 AM, Reza Naghibi <reza at varnish-software.com> wrote:
> >> >>
> >> >> You can do this in a VMOD via PRIV_TASK:
> >> >>
> >> >>
> >> >> --
> >> >> Reza Naghibi
> >> >> Varnish Software
> >> >>
> >> >> On Fri, May 5, 2017 at 10:15 PM, Ryan Burn <rnickb731 at gmail.com> wrote:
> >> >>>
> >> >>> Hello,
> >> >>> From VCL, is it possible to execute code that runs after a
> >> >>> request has been processed?
> >> >>>
> >> >>> I'm looking into writing a module that enables Varnish for
> >> >>> distributed tracing using the OpenTracing project
> >> >>> [opentracing.io]. This requires invoking code at the beginning
> >> >>> of a request to start a span and insert tracing context into the
> >> >>> request's headers, and invoking code after a request's been
> >> >>> processed to finish the span and measure how long it took to
> >> >>> process.
> >> >>>
> >> >>> I recently did a similar project for nginx
> >> >>> [github.com/rnburn/nginx-opentracing]. Nginx provides an
> >> >>> NGX_HTTP_LOG_PHASE [www.nginxguts.com/2011/01/phases/] that
> >> >>> allows you to set up handlers that run after requests are
> >> >>> serviced. Can anything equivalent be done using VCL?
> >> >>>
> >> >>> I imagine you could accomplish this by subscribing to and
> >> >>> regularly reading from Varnish's shared memory log, but I'd much
> >> >>> rather do it directly if possible.
> >> >>>
> >> >>> Thanks, Ryan
> >> >>>
> >> >>
> >> >>
> >> >
> >>
> >
> >
>