[master] f66d9ae Start updating vcl(7) for 4.0

Federico G. Schwindt fgsch at lodoss.net
Mon Feb 17 09:07:24 CET 2014


commit f66d9ae6c2688007345586a07c7107511761cc4c
Author: Federico G. Schwindt <fgsch at lodoss.net>
Date:   Mon Feb 17 08:02:48 2014 +0000

    Start updating vcl(7) for 4.0

diff --git a/doc/sphinx/reference/vcl.rst b/doc/sphinx/reference/vcl.rst
index 9aea743..7e00e43 100644
--- a/doc/sphinx/reference/vcl.rst
+++ b/doc/sphinx/reference/vcl.rst
@@ -33,20 +33,18 @@ For an overview of the VCL syntax please see the users guide.
 Backend declarations
 --------------------
 
-A backend declaration creates and initializes a named backend object:
-::
+A backend declaration creates and initializes a named backend object::
 
-  backend www {
-    .host = "www.example.com";
-    .port = "http";
-  }
+    backend www {
+        .host = "www.example.com";
+        .port = "http";
+    }
 
-The backend object can later be used to select a backend at request time:
-::
+The backend object can later be used to select a backend at request time::
 
-  if (req.http.host ~ "(?i)^(www.)?example.com$") {
-    set req.backend = www;
-  }
+    if (req.http.host ~ "(?i)^(www.)?example.com$") {
+        set req.backend = www;
+    }
 
 To avoid overloading backend servers, .max_connections can be set to
 limit the maximum number of concurrent backend connections.
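+
+A hedged sketch of how this might look in a declaration (the limit of 20
+is purely illustrative)::
+
+    backend www {
+        .host = "www.example.com";
+        .port = "http";
+        .max_connections = 20;
+    }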
@@ -57,24 +55,15 @@ backend connection, .first_byte_timeout for the time to wait for the
 first byte from the backend and .between_bytes_timeout for time to
 wait between each received byte.
 
-These can be set in the declaration like this:
-::
-
-  backend www {
-    .host = "www.example.com";
-    .port = "http";
-    .connect_timeout = 1s;
-    .first_byte_timeout = 5s;
-    .between_bytes_timeout = 2s;
-  }
-
-To mark a backend as unhealthy after number of items have been added
-to its saintmode list ``.saintmode_threshold`` can be set to the maximum
-list size. Setting a value of 0 disables saint mode checking entirely
-for that backend.  The value in the backend declaration overrides the
-parameter.
-
+These can be set in the declaration like this::
 
+    backend www {
+        .host = "www.example.com";
+        .port = "http";
+        .connect_timeout = 1s;
+        .first_byte_timeout = 5s;
+        .between_bytes_timeout = 2s;
+    }
 
 .. _ref-vcl-director:
 
@@ -89,24 +78,23 @@ be used.
 There are several types of directors. The different director types
 use different algorithms to choose which backend to use.
 
-Configuring a director may look like this:
-::
-
-  director b2 random {
-    .retries = 5;
-    {
-      // We can refer to named backends
-      .backend = b1;
-      .weight  = 7;
-    }
-    {
-      // Or define them inline 
-      .backend  = {
-        .host = "fs2";
-      }
-    .weight         = 3;
+Configuring a director may look like this::
+
+    director b2 random {
+        .retries = 5;
+        {
+            # We can refer to named backends
+            .backend = b1;
+            .weight = 7;
+        }
+        {
+            # Or define them inline
+            .backend  = {
+                .host = "fs2";
+            }
+            .weight = 3;
+        }
     }
-  } 
 
 The family of random directors
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -171,8 +159,7 @@ The DNS director
 ~~~~~~~~~~~~~~~~
 
 The DNS director can use backends in two different ways. Either like the
-random or round-robin director or using .list:
-::
+random or round-robin director or using .list::
 
   director directorname dns {
           .list = {
@@ -207,27 +194,26 @@ round-robin manner.
 The fallback director
 ~~~~~~~~~~~~~~~~~~~~~
 
-The fallback director will pick the first backend that is healthy. It 
+The fallback director will pick the first backend that is healthy. It
 considers them in the order in which they are listed in its definition.
 
 The fallback director does not take any options.
 
-An example of a fallback director:
-::
+An example of a fallback director::
 
-  director b3 fallback {
-    { .backend = www1; }
-    { .backend = www2; } // will only be used if www1 is unhealthy.
-    { .backend = www3; } // will only be used if both www1 and www2
-                         // are unhealthy.
-  }
+    director b3 fallback {
+        { .backend = www1; }
+        { .backend = www2; } # will only be used if www1 is unhealthy.
+        { .backend = www3; } # will only be used if both www1 and www2
+                             # are unhealthy.
+    }
 
 Backend probes
 --------------
 
 Backends can be probed to see whether they should be considered
-healthy or not.  The return status can also be checked by using
-req.backend.healthy.
+healthy or not.  The health status can also be queried from VCL by
+using the healthy function from the std vmod.
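+
+A hedged sketch of such a check (this assumes ``import std;`` and a
+``std.healthy()`` function taking a backend)::
+
+    import std;
+
+    sub vcl_recv {
+        if (!std.healthy(req.backend_hint)) {
+            # Assumed handling: refuse the request while the chosen
+            # backend is considered sick.
+            return (error(503, "Backend unhealthy"));
+        }
+    }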
 
 Probes take the following parameters:
 
@@ -241,7 +227,7 @@ Probes take the following parameters:
 .window
   How many of the latest polls we examine to determine backend health.
   Defaults to 8.
-.threshold 
+.threshold
   How many of the polls in .window must have succeeded for us to
   consider the backend healthy. If this is set to more than or equal
   to the threshold, the backend starts as healthy. Defaults to the
@@ -264,66 +250,61 @@ Probes take the following parameters:
   Default is 2 seconds.
 
 A backend with a probe can be defined like this, together with the
-backend or director:
-::
-
-  backend www {
-    .host = "www.example.com";
-    .port = "http";
-    .probe = {
-      .url = "/test.jpg";
-      .timeout = 0.3 s;
-      .window = 8;
-      .threshold = 3;
-      .initial = 3;
+backend or director::
+
+    backend www {
+        .host = "www.example.com";
+        .port = "http";
+        .probe = {
+            .url = "/test.jpg";
+            .timeout = 0.3s;
+            .window = 8;
+            .threshold = 3;
+            .initial = 3;
+        }
     }
-  }
 
-Or it can be defined separately and then referenced:
-::
+Or it can be defined separately and then referenced::
 
-  probe healthcheck {
-     .url = "/status.cgi";
-     .interval = 60s;     
-     .timeout = 0.3 s;
-     .window = 8;
-     .threshold = 3;
-     .initial = 3;
-     .expected_response = 200;
-  }	
-
-  backend www {
-    .host = "www.example.com";
-    .port = "http";
-    .probe = healthcheck;
-  }
+    probe healthcheck {
+        .url = "/status.cgi";
+        .interval = 60s;
+        .timeout = 0.3s;
+        .window = 8;
+        .threshold = 3;
+        .initial = 3;
+        .expected_response = 200;
+    }
 
-If you have many backends this can simplify the config a lot.
+    backend www {
+        .host = "www.example.com";
+        .port = "http";
+        .probe = healthcheck;
+    }
 
+If you have many backends this can simplify the config a lot.
 
-It is also possible to specify the raw HTTP request:
-::
+It is also possible to specify the raw HTTP request::
 
-  probe rawprobe {
-      # NB: \r\n automatically inserted after each string!
-      .request =
-        "GET / HTTP/1.1"
-        "Host: www.foo.bar"
-        "Connection: close";
-  }
+    probe rawprobe {
+        # NB: \r\n automatically inserted after each string!
+        .request =
+          "GET / HTTP/1.1"
+          "Host: www.foo.bar"
+          "Connection: close";
+    }
 
 ACLs
 ----
 
 An ACL declaration creates and initializes a named access control list
-which can later be used to match client addresses:
-::
+which can later be used to match client addresses::
 
-  acl local {
-    "localhost";         // myself
-    "192.0.2.0"/24;      // and everyone on the local network
-    ! "192.0.2.23";      // except for the dialin router 
-  }
+    acl local {
+        "localhost";    # myself
+        "192.0.2.0"/24; # and everyone on the local network
+        ! "192.0.2.23"; # except for the dialin router
+    }
 
 If an ACL entry specifies a host name which Varnish is unable to
 resolve, it will match any address it is compared to.  Consequently,
@@ -331,12 +312,11 @@ if it is preceded by a negation mark, it will reject any address it is
 compared to, which may not be what you intended.  If the entry is
 enclosed in parentheses, however, it will simply be ignored.
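+
+As a sketch, an entry that should simply be skipped rather than matched
+when it does not resolve could be written like this (the host name is
+illustrative)::
+
+    acl local {
+        "localhost";
+        ("unresolvable.example.net"); # ignored if it cannot be resolved
+    }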
 
-To match an IP address against an ACL, simply use the match operator:
-::
+To match an IP address against an ACL, simply use the match operator::
 
-  if (client.ip ~ local) {
-    return (pipe);
-  }
+    if (client.ip ~ local) {
+        return (pipe);
+    }
 
 Regular Expressions
 -------------------
@@ -346,54 +326,51 @@ complete description of PCRE please see the pcre(3) man page.
 
 To send flags to the PCRE engine, such as to turn on *case
 insensitivity*, add the flag within parentheses following a question mark,
-like this:
-::
-
-  # If host is NOT example dot com..
-  if (req.http.host !~ "(?i)example.com$") {
-          ...
-  }
+like this::
 
+    # If host is NOT example dot com..
+    if (req.http.host !~ "(?i)example.com$") {
+        ...
+    }
 
 Functions
 ---------
 
 The following built-in functions are available:
 
+ban(expression)
+  Bans all objects in cache that match the expression.
+
 hash_data(str)
-  Adds a string to the hash input. In default.vcl hash_data() is
-  called on the host and URL of the *request*.
+  Adds a string to the hash input. In the built-in VCL hash_data()
+  is called on the host and URL of the *request*.
 
 regsub(str, regex, sub)
-  Returns a copy of str with the first occurrence of the regular 
+  Returns a copy of str with the first occurrence of the regular
   expression regex replaced with sub. Within sub, \\0 (which can
   also be spelled \\&) is replaced with the entire matched string,
-  and \\n is replaced with the contents of subgroup n in the 
+  and \\n is replaced with the contents of subgroup n in the
   matched string.
 
 regsuball(str, regex, sub)
   As regsub() but this replaces all occurrences.
 
-ban(ban expression)
-  Bans all objects in cache that match the expression.
-
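+A possible use of regsub(), shown only as a sketch (the header rewrite is
+illustrative)::
+
+    sub vcl_recv {
+        # Strip a leading "www." so the cache sees one canonical host.
+        set req.http.host = regsub(req.http.host, "(?i)^www\.", "");
+    }
+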
 Subroutines
 ~~~~~~~~~~~
 
-A subroutine is used to group code for legibility or reusability:
-::
-  
-  sub pipe_if_local {
-    if (client.ip ~ local) {
-      return (pipe);
+A subroutine is used to group code for legibility or reusability::
+
+    sub pipe_if_local {
+        if (client.ip ~ local) {
+            return (pipe);
+        }
     }
-  }
 
 Subroutines in VCL do not take arguments, nor do they return values.
 
-To call a subroutine, use the call keyword followed by the subroutine's name:
+To call a subroutine, use the call keyword followed by the subroutine's name::
 
-call pipe_if_local;
+    call pipe_if_local;
 
 There are a number of special subroutines which hook into the Varnish
 workflow.  These subroutines may inspect and manipulate HTTP headers
@@ -416,72 +393,87 @@ vcl_recv
   been received and parsed.  Its purpose is to decide whether or not
   to serve the request, how to do it, and, if applicable, which backend
   to use.
-  
+
   The vcl_recv subroutine may terminate by calling return() with one of
   the following keywords:
 
-  error code [reason]
+  error(code [, reason])
     Return the specified error code to the client and abandon the request.
 
-  pass    
-    Switch to pass mode.  Control will eventually pass to vcl_pass.
-
-  pipe    
-    Switch to pipe mode.  Control will eventually pass to vcl_pipe.
-
-  lookup  
+  hash
     Look up the requested object in the cache.  Control will
     eventually pass to vcl_hit or vcl_miss, depending on whether the
     object is in the cache.  The ``bereq.method`` value will be set
     to ``GET`` regardless of the value of ``req.method``.
 
-vcl_pipe
-  Called upon entering pipe mode.  In this mode, the request is passed
-  on to the backend, and any further data from either client or
-  backend is passed on unaltered until either end closes the
-  connection.
-  
-  The vcl_pipe subroutine may terminate with calling return() with one of
-  the following keywords:
-
-  error code [reason]
-    Return the specified error code to the client and abandon the request.
+  pass
+    Switch to pass mode.  Control will eventually pass to vcl_pass.
 
   pipe
-    Proceed with pipe mode.
+    Switch to pipe mode.  Control will eventually pass to vcl_pipe.
+
+  purge
+    ?
 
 vcl_pass
   Called upon entering pass mode.  In this mode, the request is passed
   on to the backend, and the backend's response is passed on to the
   client, but is not entered into the cache.  Subsequent requests
   submitted over the same client connection are handled normally.
-  
+
   The vcl_pass subroutine may terminate by calling return() with one of
   the following keywords:
-  
-  error code [reason]
+
+  error(code [, reason])
     Return the specified error code to the client and abandon the request.
 
-  pass
+  fetch
     Proceed with pass mode.
 
   restart
-    Restart the transaction. Increases the restart counter. If the number 
-    of restarts is higher than *max_restarts* Varnish emits a guru meditation 
+    Restart the transaction. Increases the restart counter. If the number
+    of restarts is higher than *max_restarts* Varnish emits a guru meditation
     error.
 
+vcl_pipe
+  Called upon entering pipe mode.  In this mode, the request is passed
+  on to the backend, and any further data from either client or
+  backend is passed on unaltered until either end closes the
+  connection.
+
+  The vcl_pipe subroutine may terminate by calling return() with one of
+  the following keywords:
+
+  error(code [, reason])
+    Return the specified error code to the client and abandon the request.
+
+  pipe
+    Proceed with pipe mode.
+
+vcl_purge
+  ?
+
+  The vcl_purge subroutine may terminate by calling return() with one of
+  the following keywords:
+
+  error(code [, reason])
+    Return the specified error code to the client and abandon the request.
+
+..  fetch
+..    Not implemented.
+
 vcl_hash
   You may call hash_data() on the data you would like to add to the hash.
-  
+
   The vcl_hash subroutine may terminate by calling return() with one of
   the following keywords:
 
-  hash
+  lookup
     Proceed.
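+
+  A hedged sketch of a custom vcl_hash, mirroring the usual host and URL
+  hashing (adjust the hashed data to your needs)::
+
+      sub vcl_hash {
+          hash_data(req.url);
+          if (req.http.host) {
+              hash_data(req.http.host);
+          }
+          return (lookup);
+      }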
 
 vcl_hit
   Called after a cache lookup if the requested document was found in the cache.
-  
+
   The vcl_hit subroutine may terminate by calling return() with one of
   the following keywords:
 
@@ -489,69 +481,73 @@ vcl_hit
     Deliver the cached object to the client.  Control will eventually
     pass to vcl_deliver.
 
-  error code [reason]
+  error(code [, reason])
     Return the specified error code to the client and abandon the request.
 
+  fetch
+    ?
+
   pass
     Switch to pass mode.  Control will eventually pass to vcl_pass.
 
   restart
-    Restart the transaction. Increases the restart counter. If the number 
-    of restarts is higher than *max_restarts* Varnish emits a guru meditation 
+    Restart the transaction. Increases the restart counter. If the number
+    of restarts is higher than *max_restarts* Varnish emits a guru meditation
     error.
 
 vcl_miss
   Called after a cache lookup if the requested document was not found
   in the cache.  Its purpose is to decide whether or not to attempt to
   retrieve the document from the backend, and which backend to use.
-  
+
   The vcl_miss subroutine may terminate by calling return() with one of
   the following keywords:
 
-  error code [reason]
+  error(code [, reason])
     Return the specified error code to the client and abandon the request.
 
+  fetch
+    Retrieve the requested object from the backend.  Control will
+    eventually pass to vcl_backend_fetch.
+
   pass
     Switch to pass mode.  Control will eventually pass to vcl_pass.
 
+  restart
+    ?
+
+vcl_backend_fetch
+  ?
+
+  The vcl_backend_fetch subroutine may terminate by calling return()
+  with one of the following keywords:
+
+  abandon
+    ?
+
   fetch
     Retrieve the requested object from the backend.  Control will
-    eventually pass to vcl_fetch.
+    eventually pass to vcl_backend_response.
 
-vcl_fetch
+vcl_backend_response
   Called after a document has been successfully retrieved from the backend.
-  
-  The vcl_fetch subroutine may terminate with calling return() with
-  one of the following keywords:
+
+  The vcl_backend_response subroutine may terminate by calling return()
+  with one of the following keywords:
+
+  abandon
+    ?
 
   deliver
     Possibly insert the object into the cache, then deliver it to the
     client.  Control will eventually pass to vcl_deliver.
 
-  error code [reason]
-    Return the specified error code to the client and abandon the request.
-
-  hit_for_pass 
-    Pass in fetch. Passes the object without caching it. This will
-    create a so-called hit_for_pass object which has the side effect
-    that the decision not to cache will be cached. This is to allow
-    would-be uncachable requests to be passed to the backend at the
-    same time. The same logic is not necessary in vcl_recv because
-    this happens before any potential queueing for an object takes
-    place.  Note that the TTL for the hit_for_pass object will be set
-    to what the current value of beresp.ttl is. Control will be
-    handled to vcl_deliver on the current request, but subsequent
-    requests will go directly to vcl_pass based on the hit_for_pass
-    object.
-
-  restart
-    Restart the transaction. Increases the restart counter. If the number 
-    of restarts is higher than *max_restarts* Varnish emits a guru meditation 
-    error.
+  retry
+    ?
 
 vcl_deliver
   Called before a cached object is delivered to the client.
-  
+
   The vcl_deliver subroutine may terminate with one of the following
   keywords:
 
@@ -559,23 +555,23 @@ vcl_deliver
     Deliver the object to the client.
 
   restart
-    Restart the transaction. Increases the restart counter. If the number 
-    of restarts is higher than *max_restarts* Varnish emits a guru meditation 
+    Restart the transaction. Increases the restart counter. If the number
+    of restarts is higher than *max_restarts* Varnish emits a guru meditation
     error.
 
 vcl_error
-  Called when we hit an error, either explicitly or implicitly due to 
+  Called when we hit an error, either explicitly or implicitly due to
   backend or internal errors.
 
   The vcl_error subroutine may terminate by calling return with one of
   the following keywords:
- 
+
   deliver
     Deliver the error object to the client.
 
   restart
-    Restart the transaction. Increases the restart counter. If the number 
-    of restarts is higher than *max_restarts* Varnish emits a guru meditation 
+    Restart the transaction. Increases the restart counter. If the number
+    of restarts is higher than *max_restarts* Varnish emits a guru meditation
     error.
 
 vcl_fini
@@ -587,7 +583,6 @@ vcl_fini
   ok
     Normal return, VCL will be discarded.
 
-
 If one of these subroutines is left undefined or terminates without
 reaching a handling decision, control will be handed over to the
 builtin default.  See the EXAMPLES section for a listing of the
@@ -601,30 +596,29 @@ appear in the source.
 The default versions distributed with Varnish will be implicitly
 concatenated as a last resort at the end.
 
-Example:
-::
+Example::
 
-	# in file "main.vcl"
-	include "backends.vcl";
-	include "ban.vcl";
-
-	# in file "backends.vcl"
-	sub vcl_recv {
-	  if (req.http.host ~ "(?i)example.com") {
-	    set req.backend = foo;
-	  } elsif (req.http.host ~ "(?i)example.org") {
-	    set req.backend = bar;
-	  }
+    # in file "main.vcl"
+    include "backends.vcl";
+    include "ban.vcl";
+
+    # in file "backends.vcl"
+    sub vcl_recv {
+        if (req.http.host ~ "(?i)example.com") {
+            set req.backend_hint = foo;
+        } elsif (req.http.host ~ "(?i)example.org") {
+            set req.backend_hint = bar;
+        }
+    }
 
-	# in file "ban.vcl"
-	sub vcl_recv {
-	  if (client.ip ~ admin_network) {
+    # in file "ban.vcl"
+    sub vcl_recv {
+        if (client.ip ~ admin_network) {
             if (req.http.Cache-Control ~ "no-cache") {
-	      ban("req.url ~ " + req.url);
-	    }
-	  }
-	}
+                ban("req.url ~ " + req.url);
+            }
+        }
+    }
 
 Variables
 ~~~~~~~~~
@@ -634,7 +628,7 @@ made available to the handler subroutines through global variables.
 
 The following variables are always available:
 
-now   
+now
   The current time, in seconds since the epoch. When used in string context
   it returns a formatted string.
 
@@ -654,26 +648,22 @@ client.ip
 client.identity
   Identification of the client, used to load balance in the client director.
 
-server.hostname
-  The host name of the server.
-
-server.identity 
-  The identity of the server, as set by the -i
-  parameter.  If the -i parameter is not passed to varnishd,
-  server.identity will be set to the name of the instance, as
-  specified by the -n parameter.
-
 server.ip
   The IP address of the socket on which the client connection was received.
 
-server.port
-  The port number of the socket on which the client connection was received.
+server.hostname
+  The host name of the server.
+
+server.identity
+  The identity of the server, as set by the -i parameter.
+  If the -i parameter is not passed to varnishd, server.identity will be
+  set to the name of the instance, as specified by the -n parameter.
 
 req.method
   The request type (e.g. "GET", "HEAD").
 
 req.request
-  Outdated way to spell req.method.
+  For backward compatibility. Same as req.method.
 
 req.url
   The requested URL.
@@ -681,31 +671,17 @@ req.url
 req.proto
   The HTTP protocol version used by the client.
 
-req.backend
-  The backend to use to service the request.
-
-req.backend.healthy
-  Whether the backend is healthy or not. Requires an active probe to be set
-  on the backend.
-
 req.http.header
   The corresponding HTTP header.
 
-req.hash_always_miss
-  Force a cache miss for this request. If set to true Varnish will disregard
-  any existing objects and always (re)fetch from the backend.
-
-req.hash_ignore_busy
-  Ignore any busy object during cache lookup. You would want to do 
-  this if you have two server looking up content from each other to 
-  avoid potential deadlocks.
+req.backend_hint
+  Set bereq.backend to this if we attempt to fetch. This variable is
+  a convenience so the overall policy can be set up once and for all
+  in vcl_recv.
 
 req.can_gzip
   Whether the client accepts the gzip transfer encoding.
 
-req.restarts
-  A count of how many times this request has been restarted.
-
 req.esi
   Boolean. Set to false to disable ESI processing regardless of any
   value in beresp.do_esi. Defaults to true. This variable is subject
@@ -714,8 +690,20 @@ req.esi
 req.esi_level
   A count of how many levels of ESI requests we're currently at.
 
-req.grace
-  Set to a period to enable grace.
+req.hash_always_miss
+  Force a cache miss for this request. If set to true Varnish will disregard
+  any existing objects and always (re)fetch from the backend.
+
+req.hash_ignore_busy
+  Ignore any busy object during cache lookup. You would want to do
+  this if you have two servers looking up content from each other to
+  avoid potential deadlocks.
+
+req.restarts
+  A count of how many times this request has been restarted.
+
+req.ttl
+  ?
 
 req.xid
   Unique ID of this request.
@@ -727,7 +715,7 @@ bereq.method
   The request type (e.g. "GET", "HEAD").
 
 bereq.request
-  Outdated way to spell bereq.method.
+  For backward compatibility. Same as bereq.method.
 
 bereq.url
   The requested URL.
@@ -738,43 +726,29 @@ bereq.proto
 bereq.http.header
   The corresponding HTTP header.
 
+bereq.backend
+  ?
+
+bereq.between_bytes_timeout
+  The time in seconds to wait between each received byte from the
+  backend. Not available in pipe mode.
+
 bereq.connect_timeout
   The time in seconds to wait for a backend connection.
 
 bereq.first_byte_timeout
-  The time in seconds to wait for the first byte from the backend.  Not
+  The time in seconds to wait for the first byte from the backend. Not
   available in pipe mode.
 
-bereq.between_bytes_timeout
-  The time in seconds to wait between each received byte from the
-  backend.  Not available in pipe mode.
+bereq.retries
+  ?
+
+bereq.uncacheable
+  ?
 
 The following variables are available after the requested object has
 been retrieved from the backend, before it is entered into the cache. In
-other words, they are available in vcl_fetch:
-
-beresp.do_stream 
-  Deliver the object to the client directly without fetching the whole
-  object into Varnish. If this request is pass'ed it will not be
-  stored in memory. As of Varnish Cache 3.0 the object will marked as busy
-  as it is delivered so only client can access the object.
-
-beresp.do_esi
-  Boolean. ESI-process the object after fetching it. Defaults to
-  false. Set it to true to parse the object for ESI directives. Will
-  only be honored if req.esi is true.
-
-beresp.do_gzip 
-  Boolean. Gzip the object before storing it. Defaults to false. When
-  http_gzip_support is on Varnish will request already compressed
-  content from the backend and as such compression in Varnish is not needed.
-
-beresp.do_gunzip
-  Boolean. Unzip the object before storing it in the cache.  Defaults
-  to false.
-
-beresp.http.header
-  The corresponding HTTP header.
+other words, they are available in vcl_backend_response:
 
 beresp.proto
   The HTTP protocol version the backend replied with.
@@ -782,31 +756,57 @@ beresp.proto
 beresp.status
   The HTTP status code returned by the server.
 
-beresp.response
+beresp.reason
   The HTTP status message returned by the server.
 
-beresp.ttl
-  The object's remaining time to live, in seconds. beresp.ttl is writable.
+beresp.response
+  For backward compatibility. Same as beresp.reason.
 
-beresp.grace
-  Set to a period to enable grace.
+beresp.http.header
+  The corresponding HTTP header.
 
-beresp.saintmode
-  Set to a period to enable saint mode.
+beresp.backend.ip
+  IP of the backend this response was fetched from.
 
 beresp.backend.name
   Name of the backend this response was fetched from.
 
-beresp.backend.ip
-  IP of the backend this response was fetched from.
+beresp.do_esi
+  Boolean. ESI-process the object after fetching it. Defaults to
+  false. Set it to true to parse the object for ESI directives. Will
+  only be honored if req.esi is true.
+
+beresp.do_gunzip
+  Boolean. Unzip the object before storing it in the cache. Defaults to
+  false.
+
+beresp.do_gzip
+  Boolean. Gzip the object before storing it. Defaults to false. When
+  http_gzip_support is on, Varnish will request already compressed
+  content from the backend, so compressing it again in Varnish is not
+  needed.
+
+beresp.do_stream
+  Deliver the object to the client directly without fetching the whole
+  object into Varnish. If this request is passed it will not be
+  stored in memory. As of Varnish Cache 3.0 the object will be marked as
+  busy as it is delivered, so only this client can access the object.
+
+beresp.grace
+  Set to a period to enable grace.
 
-beresp.backend.port
-  Port of the backend this response was fetched from.
+beresp.keep
+  ?
 
-beresp.storage
-  Set to force Varnish to save this object to a particular storage
+beresp.storage_hint
+  Hint to Varnish that you want to save this object to a particular storage
   backend.
 
+beresp.ttl
+  The object's remaining time to live, in seconds. beresp.ttl is writable.
+
+beresp.uncacheable
+  ?
+
 After the object is entered into the cache, the following (mostly
 read-only) variables are available when the object has been located in
 cache, typically in vcl_hit, or when constructing a synthetic reply in
@@ -818,33 +818,35 @@ obj.proto
 obj.status
   The HTTP status code returned by the server.
 
-obj.response
+obj.reason
   The HTTP status message returned by the server.
 
-obj.ttl
-  The object's remaining time to live, in seconds. obj.ttl is writable.
+obj.response
+  For backward compatibility. Same as obj.reason.
 
-obj.last_use
-  The approximate time elapsed since the object was last requests, in
-  seconds. This variable is also available in vcl_deliver.
+obj.http.header
+  The corresponding HTTP header.
+
+obj.grace
+  The object's grace period in seconds. obj.grace is writable.
 
 obj.hits
-  The approximate number of times the object has been delivered. A value 
+  The approximate number of times the object has been delivered. A value
   of 0 indicates a cache miss. This variable is also available in
   vcl_deliver.
 
-obj.grace
-  The object's grace period in seconds. obj.grace is writable.
+obj.keep
+  ?
 
-obj.http.header
-  The corresponding HTTP header.
+obj.last_use
+  The approximate time elapsed since the object was last requested, in
+  seconds. This variable is also available in vcl_deliver.
 
-The following variables are available while determining the hash key
-of an object:
+obj.ttl
+  The object's remaining time to live, in seconds. obj.ttl is writable.
 
-req.hash
-  The hash key used to refer to an object in the cache.  Used when
-  both reading from and writing to the cache.
+obj.uncacheable
+  ?
 
 The following variables are available while preparing a response to the client:
 
@@ -854,32 +856,33 @@ resp.proto
 resp.status
   The HTTP status code that will be returned.
 
-resp.response
+resp.reason
   The HTTP status message that will be returned.
 
+resp.response
+  For backward compatibility. Same as resp.reason.
+
 resp.http.header
   The corresponding HTTP header.
 
-Values may be assigned to variables using the set keyword:
-::
+Values may be assigned to variables using the set keyword::
 
-  sub vcl_recv {
-    # Normalize the Host: header
-    if (req.http.host ~ "(?i)^(www.)?example.com$") {
-      set req.http.host = "www.example.com";
+    sub vcl_recv {
+        # Normalize the Host: header
+        if (req.http.host ~ "(?i)^(www.)?example.com$") {
+            set req.http.host = "www.example.com";
+        }
     }
-  }
 
-HTTP headers can be removed entirely using the remove keyword:
-::
+HTTP headers can be removed entirely using the unset keyword::
 
-  sub vcl_fetch {
-    # Don't cache cookies
-    remove beresp.http.Set-Cookie;
-  }
+    sub vcl_backend_response {
+        # Don't cache cookies
+        unset beresp.http.Set-Cookie;
+    }
 
-Grace and saint mode
---------------------
+Grace
+-----
 
 If the backend takes a long time to generate an object there is a risk
 of a thread pile up.  In order to prevent this you can enable *grace*.
@@ -891,134 +894,116 @@ object will be kept up to two minutes past their expiration time or a
 fresh object is generated.
 ::
 
-  sub vcl_recv {
-    set req.grace = 2m;
-  }
-  sub vcl_fetch {
-    set beresp.grace = 2m;
-  }
-
-Saint mode is similar to grace mode and relies on the same
-infrastructure but functions differently. You can add VCL code to
-vcl_fetch to see whether or not you *like* the response coming from
-the backend. If you find that the response is not appropriate you can
-set beresp.saintmode to a time limit and call *restart*. Varnish will
-then retry other backends to try to fetch the object again. 
-
-If there are no more backends or if you hit *max_restarts* and we have
-an object that is younger than what you set beresp.saintmode to be
-Varnish will serve the object, even if it is stale.
+    sub vcl_backend_response {
+        set beresp.grace = 2m;
+    }
 
 EXAMPLES
 ========
 
 The following code is the equivalent of the default configuration with
 the backend address set to "backend.example.com" and no backend port
-specified:
-::
+specified::
 
-  backend default {
-   .host = "backend.example.com";
-   .port = "http";
-  }
+    backend default {
+        .host = "backend.example.com";
+        .port = "http";
+    }
 
 .. include:: ../../../bin/varnishd/builtin.vcl
   :literal:
 
 The following example shows how to support multiple sites running on
 separate backends in the same Varnish instance, by selecting backends
-based on the request URL:
-::
+based on the request URL::
 
-  backend www {
-    .host = "www.example.com";
-    .port = "80";
-  }
-  
-  backend images {
-    .host = "images.example.com";
-    .port = "80";
-  }
-  
-  sub vcl_recv {
-    if (req.http.host ~ "(?i)^(www.)?example.com$") {
-      set req.http.host = "www.example.com";
-      set req.backend = www;
-    } elsif (req.http.host ~ "(?i)^images.example.com$") {
-      set req.backend = images;
-    } else {
-      error 404 "Unknown virtual host";
+    backend www {
+        .host = "www.example.com";
+        .port = "80";
+    }
+
+    backend images {
+        .host = "images.example.com";
+        .port = "80";
+    }
+
+    sub vcl_recv {
+        if (req.http.host ~ "(?i)^(www.)?example.com$") {
+            set req.http.host = "www.example.com";
+            set req.backend_hint = www;
+        } elsif (req.http.host ~ "(?i)^images.example.com$") {
+            set req.backend_hint = images;
+        } else {
+            return (error(404, "Unknown virtual host"));
+        }
     }
-  }
 
 The following snippet demonstrates how to force a minimum TTL for
 all documents.  Note that this is not the same as setting the
 default_ttl run-time parameter, as that only affects documents for
-which the backend did not specify a TTL:
-::
-  
-  import std; # needed for std.log
+which the backend did not specify a TTL::
+
+    import std; # needed for std.log
 
-  sub vcl_fetch {
-    if (beresp.ttl < 120s) {
-      std.log("Adjusting TTL");
-      set beresp.ttl = 120s;
+    sub vcl_backend_response {
+        if (beresp.ttl < 120s) {
+            std.log("Adjusting TTL");
+            set beresp.ttl = 120s;
+        }
     }
-  }
 
 The following snippet demonstrates how to force Varnish to cache
-documents even when cookies are present:
-::
+documents even when cookies are present::
 
-  sub vcl_recv {
-    if (req.method == "GET" && req.http.cookie) {
-       return(lookup);
+    sub vcl_recv {
+        if (req.method == "GET" && req.http.cookie) {
+            return (hash);
+        }
+    }
+
+    sub vcl_backend_response {
+        if (beresp.http.Set-Cookie) {
+            return (deliver);
+        }
     }
-  }
-  
-  sub vcl_fetch {
-    if (beresp.http.Set-Cookie) {
-       return(deliver);
-   }
-  }
 
 The following code implements the HTTP PURGE method as used by Squid
-for object invalidation:
-::
+for object invalidation::
 
-  acl purge {
-    "localhost";
-    "192.0.2.1"/24;
-  }
+    acl purge {
+        "localhost";
+        "192.0.2.1"/24;
+    }
 
-  sub vcl_recv {
-    if (req.method == "PURGE") {
-      if (!client.ip ~ purge) {
-        error 405 "Not allowed.";
-      }
-      return(lookup);
+    sub vcl_recv {
+        if (req.method == "PURGE") {
+            if (!client.ip ~ purge) {
+                return (error(405, "Not allowed."));
+            }
+            return (hash);
+        }
     }
-  }
 
-  sub vcl_hit {
-    if (req.method == "PURGE") {
-      purge;
-      error 200 "Purged.";
+    sub vcl_hit {
+        if (req.method == "PURGE") {
+            purge;
+            return (error(200, "Purged."));
+        }
     }
-  }
 
-  sub vcl_miss {
-    if (req.method == "PURGE") {
-      purge;
-      error 200 "Purged.";
+    sub vcl_miss {
+        if (req.method == "PURGE") {
+            purge;
+            return (error(200, "Purged."));
+        }
     }
-  }
 
 SEE ALSO
 ========
 
 * varnishd(1)
-* vmod_std(7)
+* vmod_directors(3)
+* vmod_std(3)
 
 HISTORY
 =======


