[master] c1799a7 rename tutorial -> users guide

Per Buer perbu at varnish-cache.org
Mon Sep 3 21:41:42 CEST 2012


commit c1799a76c6bad27c3a2e5b9fe9338c86faf7cdb5
Author: Per Buer <per.buer at gmail.com>
Date:   Mon Sep 3 21:40:56 2012 +0200

    rename tutorial -> users guide

diff --git a/doc/sphinx/tutorial/advanced_backend_servers.rst b/doc/sphinx/tutorial/advanced_backend_servers.rst
deleted file mode 100644
index b4206d9..0000000
--- a/doc/sphinx/tutorial/advanced_backend_servers.rst
+++ /dev/null
@@ -1,157 +0,0 @@
-Advanced Backend configuration
-------------------------------
-
-At some point you might need Varnish to cache content from several
-servers. You might want Varnish to map all the URLs onto one single
-host, or not. There are lots of options.
-
-Let's say we need to introduce a Java application into our PHP web
-site, and that the Java application should handle URLs beginning with
-/java/.
-
-We manage to get the thing up and running on port 8000. Now, let's have
-a look at default.vcl::
-
-  backend default {
-      .host = "127.0.0.1";
-      .port = "8080";
-  }
-
-We add a new backend::
-
-  backend java {
-      .host = "127.0.0.1";
-      .port = "8000";
-  }
-
-Now we need to tell Varnish where to send the different URLs. Let's look at vcl_recv::
-
-  sub vcl_recv {
-      if (req.url ~ "^/java/") {
-          set req.backend = java;
-      } else {
-          set req.backend = default;
-      }
-  }
-
-It's quite simple, really. Let's stop and think about this for a
-moment. As you can see, you can choose backends based on pretty much
-arbitrary data. Do you want to send mobile devices to a different
-backend? No problem: ``if (req.http.User-Agent ~ "mobile")`` should do
-the trick.
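As a sketch of that idea (the ``mobile`` backend name, port and pattern below are assumptions for illustration, not part of the original configuration), routing by User-Agent might look like::

```vcl
# Hypothetical second backend for mobile clients.
backend mobile {
    .host = "127.0.0.1";
    .port = "8001";
}

sub vcl_recv {
    # Send anything that self-identifies as mobile to the mobile backend.
    if (req.http.User-Agent ~ "(?i)mobile") {
        set req.backend = mobile;
    } else {
        set req.backend = default;
    }
}
```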
-
-.. _tutorial-advanced_backend_servers-directors:
-
-Directors
----------
-
-You can also group several backends into a group of backends. These
-groups are called directors. This will give you increased performance
-and resilience. You can define several backends and group them
-together in a director::
-
-     backend server1 {
-         .host = "192.168.0.10";
-     }
-     backend server2 {
-         .host = "192.168.0.11";
-     }
-
-Now we create the director::
-
-       director example_director round-robin {
-               {
-                       .backend = server1;
-               }
-               # server2
-               {
-                       .backend = server2;
-               }
-       }
-
-
-This director is a round-robin director. This means the director will
-distribute the incoming requests on a round-robin basis. There is
-also a *random* director which distributes requests in a, you guessed
-it, random fashion.
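For comparison, a *random* director can be declared in the same style, optionally weighting its members (the weights and ``.retries`` value here are made up for illustration)::

```vcl
director example_random random {
    .retries = 5;            # how many times to retry finding a backend
    {
        .backend = server1;
        .weight  = 2;        # gets roughly twice the traffic of server2
    }
    {
        .backend = server2;
        .weight  = 1;
    }
}
```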
-
-But what if one of your servers goes down? Can Varnish direct all the
-requests to the healthy server? Sure it can. This is where the Health
-Checks come into play.
-
-.. _tutorial-advanced_backend_servers-health:
-
-Health checks
--------------
-
-Let's set up a director with two backends and health checks. First let's
-define the backends::
-
-       backend server1 {
-           .host = "server1.example.com";
-           .probe = {
-                .url = "/";
-                .interval = 5s;
-                .timeout = 1s;
-                .window = 5;
-                .threshold = 3;
-           }
-       }
-       backend server2 {
-           .host = "server2.example.com";
-           .probe = {
-                .url = "/";
-                .interval = 5s;
-                .timeout = 1s;
-                .window = 5;
-                .threshold = 3;
-           }
-       }
-
-What's new here is the probe. Varnish will check the health of each
-backend with a probe. The options are:
-
-url
- What URL Varnish should request.
-
-interval
- How often Varnish should poll.
-
-timeout
- How long to wait before the probe times out.
-
-window
- Varnish will maintain a *sliding window* of the results. Here the
- window has five checks.
-
-threshold
- How many of the .window last polls must be good for the backend to be
- declared healthy.
-
-initial
- How many of the probes are considered good when Varnish starts -
- defaults to the same amount as the threshold.
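As a hedged illustration of these options, a probe can also be written with an explicit ``.initial`` value, or with a raw ``.request`` instead of ``.url`` (the host name and health-check path below are assumptions)::

```vcl
backend server3 {
    .host = "server3.example.com";
    .probe = {
        # A full request can be given instead of .url.
        .request =
            "GET /healthcheck HTTP/1.1"
            "Host: server3.example.com"
            "Connection: close";
        .interval  = 5s;
        .timeout   = 1s;
        .window    = 5;
        .threshold = 3;
        .initial   = 3;   # considered healthy right after startup
    }
}
```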
-
-Now we define the director::
-
-  director example_director round-robin {
-        {
-                .backend = server1;
-        }
-        # server2
-        {
-                .backend = server2;
-        }
-  }
-
-You use this director just as you would use any other director or
-backend. Varnish will not send traffic to hosts that are marked as
-unhealthy. Varnish can also serve stale content if all the backends are
-down. See :ref:`tutorial-handling_misbehaving_servers` for more
-information on how to enable this.
-
-Please note that Varnish will keep probes active for all loaded
-VCLs. Varnish will coalesce probes that seem identical - so be careful
-not to change the probe config if you do a lot of VCL
-loading. Unloading the VCL will discard the probes.
diff --git a/doc/sphinx/tutorial/advanced_topics.rst b/doc/sphinx/tutorial/advanced_topics.rst
deleted file mode 100644
index 1045de9..0000000
--- a/doc/sphinx/tutorial/advanced_topics.rst
+++ /dev/null
@@ -1,63 +0,0 @@
-.. _tutorial-advanced_topics:
-
-Advanced topics
----------------
-
-This tutorial has covered the basics in Varnish. If you read through
-it all you should now have the skills to run Varnish.
-
-Here is a short overview of topics that we haven't covered in the tutorial. 
-
-More VCL
-~~~~~~~~
-
-VCL is a bit more complex than what we've covered so far. There are a
-few more subroutines available and there are a few actions that we haven't
-discussed. For a complete(ish) guide to VCL have a look at the VCL man
-page - :ref:`reference-vcl`.
-
-Using In-line C to extend Varnish
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-You can use *in-line C* to extend Varnish. Please note that you can
-seriously mess up Varnish this way. The C code runs within the Varnish
-Cache process so if your code generates a segfault the cache will crash.
-
-One of the first uses I saw of in-line C was logging to syslog::
-
-	# The include statements must be outside the subroutines.
-	C{
-		#include <syslog.h>
-	}C
-
-	sub vcl_something {
-		C{
-			syslog(LOG_INFO, "Something happened at VCL line XX.");
-		}C
-	}
-
-
-Edge Side Includes
-~~~~~~~~~~~~~~~~~~
-
-Varnish can create web pages by putting different pages
-together. These *fragments* can have individual cache policies. If you
-have a web site with a list showing the 5 most popular articles on
-your site, this list can probably be cached as a fragment and included
-in all the other pages. Used properly this can dramatically increase
-your hit rate and reduce the load on your servers. ESI looks like this::
-
-  <HTML>
-  <BODY>
-  The time is: <esi:include src="/cgi-bin/date.cgi"/>
-  at this very moment.
-  </BODY>
-  </HTML>
-
-ESI is processed in vcl_fetch by setting *do_esi* to true::
-
-  sub vcl_fetch {
-      if (req.url == "/test.html") {
-          set beresp.do_esi = true;  /* Do ESI processing */
-      }
-  }
diff --git a/doc/sphinx/tutorial/backend_servers.rst b/doc/sphinx/tutorial/backend_servers.rst
deleted file mode 100644
index 1b1aaf2..0000000
--- a/doc/sphinx/tutorial/backend_servers.rst
+++ /dev/null
@@ -1,39 +0,0 @@
-.. _tutorial-backend_servers:
-
-Backend servers
----------------
-
-Varnish has a concept of "backend" or "origin" servers. A backend
-server is the server providing the content Varnish will accelerate.
-
-Our first task is to tell Varnish where it can find its content. Start
-your favorite text editor and open the varnish default configuration
-file. If you installed from source this is
-/usr/local/etc/varnish/default.vcl, if you installed from a package it
-is probably /etc/varnish/default.vcl.
-
-Near the top there will be a section that looks a bit like this::
-
-	  # backend default {
-	  #     .host = "127.0.0.1";
-	  #     .port = "8080";
-	  # }
-
-We uncomment this bit of text and change the port setting from 8080
-to 80, making the text look like::
-
-          backend default {
-                .host = "127.0.0.1";
-                .port = "80";
-          }
-
-Now, this piece of configuration defines a backend in Varnish called
-*default*. When Varnish needs to get content from this backend it will
-connect to port 80 on localhost (127.0.0.1).
-
-Varnish can have several backends defined and you can even join
-several backends together into clusters of backends for load balancing
-purposes.
-
-Now that we have the basic Varnish configuration done, let us start up
-Varnish on port 8080 so we can do some fundamental testing on it.
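A minimal sketch of such a test invocation (the file path, storage size and listen address are assumptions - adjust them to your installation)::

```
varnishd -f /etc/varnish/default.vcl -s malloc,64m -a :8080
```

Here ``-f`` points at the VCL file, ``-s`` selects the storage backend and size, and ``-a`` sets the listen address.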
diff --git a/doc/sphinx/tutorial/compression.rst b/doc/sphinx/tutorial/compression.rst
deleted file mode 100644
index 0b8d1e8..0000000
--- a/doc/sphinx/tutorial/compression.rst
+++ /dev/null
@@ -1,75 +0,0 @@
-.. _tutorial-compression:
-
-Compression
-~~~~~~~~~~~
-
-New in Varnish 3.0 was native support for compression, using gzip
-encoding. *Before* 3.0, Varnish would never compress objects. 
-
-In Varnish 3.0 compression defaults to "on", meaning that it tries to
-be smart and do the sensible thing.
-
-If you don't want Varnish tampering with the encoding you can disable
-compression altogether by setting the parameter http_gzip_support to
-*false*. Please see man :ref:`ref-varnishd` for details.
-
-
-Default behaviour
-~~~~~~~~~~~~~~~~~
-
-The default for Varnish is to check if the client supports our
-compression scheme (gzip) and if it does it will override the
-Accept-Encoding header and set it to "gzip".
-
-When Varnish then issues a backend request the Accept-Encoding will
-then only consist of "gzip". If the server responds with gzip'ed
-content it will be stored in memory in its compressed form. If the
-backend sends content in clear text it will be stored like that.
-
-You can make Varnish compress content before storing it in cache in
-vcl_fetch by setting do_gzip to true, like this::
-
-   sub vcl_fetch {
-        if (beresp.http.content-type ~ "text") {
-                set beresp.do_gzip = true;
-        }
-   }
-
-Please make sure that you don't try to compress content that is
-incompressible, like JPEG, GIF and MP3 files. You'll only waste CPU
-cycles. You can also uncompress objects before storing them in memory by
-setting do_gunzip to *true* but I have no idea why anybody would want
-to do that.
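A sketch along those lines, compressing only text-like types (the content-type pattern is an assumption - tune it for your site)::

```vcl
sub vcl_fetch {
    # Only gzip responses that are likely to compress well.
    if (beresp.http.content-type ~ "text|javascript|json|xml") {
        set beresp.do_gzip = true;
    }
}
```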
-
-Generally, Varnish doesn't use much CPU so it might make more sense to
-have Varnish spend CPU cycles compressing content than doing it in
-your web- or application servers, which are more likely to be
-CPU-bound.
-
-GZIP and ESI
-~~~~~~~~~~~~
-
-If you are using Edge Side Includes you'll be happy to note that ESI
-and GZIP work together really well. Varnish will magically decompress
-the content to do the ESI-processing, then recompress it for efficient
-storage and delivery. 
-
-
-Clients that don't support gzip
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-If the client does not support gzip the Accept-Encoding header is left
-alone and we'll end up serving whatever we get from the backend
-server. Remember that the backend might tell Varnish to *Vary* on the
-Accept-Encoding.
-
-If the client does not support gzip but we've already got a compressed
-version of the page in memory Varnish will automatically decompress
-the page while delivering it.
-
-
-A random outburst
-~~~~~~~~~~~~~~~~~
-
-Poul has written :ref:`phk_gzip` which talks a bit more about how the
-implementation works.
diff --git a/doc/sphinx/tutorial/cookies.rst b/doc/sphinx/tutorial/cookies.rst
deleted file mode 100644
index c75171f..0000000
--- a/doc/sphinx/tutorial/cookies.rst
+++ /dev/null
@@ -1,95 +0,0 @@
-.. _tutorial-cookies:
-
-Cookies
--------
-
-Varnish will, in the default configuration, not cache an object coming
-from the backend with a Set-Cookie header present. Also, if the client
-sends a Cookie header, Varnish will bypass the cache and go directly to
-the backend.
-
-This can be overly conservative. A lot of sites use Google Analytics
-(GA) to analyze their traffic. GA sets a cookie to track you. This
-cookie is used by the client side javascript and is therefore of no
-interest to the server. 
-
-Cookies from the client
-~~~~~~~~~~~~~~~~~~~~~~~
-
-For a lot of web applications it makes sense to completely disregard the
-cookies unless you are accessing a special part of the web site. This
-VCL snippet in vcl_recv will disregard cookies unless you are
-accessing /admin/::
-
-  if (!(req.url ~ "^/admin/")) {
-    unset req.http.Cookie;
-  }
-
-Quite simple. If, however, you need to do something more complicated,
-like removing one out of several cookies, things get
-difficult. Unfortunately Varnish doesn't have good tools for
-manipulating the Cookies. We have to use regular expressions to do the
-work. If you are familiar with regular expressions you'll understand
-what's going on. If you don't, I suggest you either pick up a book on
-the subject, read through the *pcrepattern* man page or read through
-one of many online guides.
-
-Let me show you what Varnish Software uses. We use some cookies for
-Google Analytics tracking and similar tools. The cookies are all set
-and used by JavaScript. Varnish and Drupal don't need to see those
-cookies, and since Varnish will cease caching pages when the client
-sends cookies, we discard these unnecessary cookies in VCL.
-
-In the following VCL we discard all cookies that start with an
-underscore::
-
-  // Remove has_js and Google Analytics __* cookies.
-  set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(_[_a-z]+|has_js)=[^;]*", "");
-  // Remove a ";" prefix, if present.
-  set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
-
-Let me show you an example where we remove everything except the
-cookies named COOKIE1 and COOKIE2 and you can marvel at it::
-
-  sub vcl_recv {
-    if (req.http.Cookie) {
-      set req.http.Cookie = ";" + req.http.Cookie;
-      set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
-      set req.http.Cookie = regsuball(req.http.Cookie, ";(COOKIE1|COOKIE2)=", "; \1=");
-      set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
-      set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");
-
-      if (req.http.Cookie == "") {
-          remove req.http.Cookie;
-      }
-    }
-  }
-
-A somewhat simpler example that can accomplish almost the same can be
-found below. Instead of filtering out the other cookies it picks out
-the one cookie that is needed, copies it to another header and then
-copies it back, deleting the original cookie header.::
-
-  sub vcl_recv {
-         # save the original cookie header so we can mangle it
-        set req.http.X-Varnish-PHP_SID = req.http.Cookie;
-        # using a capturing sub pattern, extract the continuous string of 
-        # alphanumerics that immediately follows "PHPSESSID="
-        set req.http.X-Varnish-PHP_SID = 
-           regsuball(req.http.X-Varnish-PHP_SID, ";? ?PHPSESSID=([a-zA-Z0-9]+)( |;| ;).*","\1");
-        set req.http.Cookie = req.http.X-Varnish-PHP_SID;
-        remove req.http.X-Varnish-PHP_SID;
-   }   
-
-There are other scary examples of what can be done in VCL in the
-Varnish Cache Wiki.
-
-
-Cookies coming from the backend
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-If your backend server sets a cookie using the Set-Cookie header
-Varnish will not cache the page in the default configuration.  A
-hit-for-pass object (see :ref:`tutorial-vcl_fetch_actions`) is created.
-So, if the backend server acts silly and sets unwanted cookies just unset
-the Set-Cookie header and all should be fine. 
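A minimal sketch of that last point (the /static/ prefix is a made-up example of a path where cookies are known to be unwanted)::

```vcl
sub vcl_fetch {
    if (req.url ~ "^/static/") {
        # The backend has no business setting cookies on static objects.
        unset beresp.http.Set-Cookie;
    }
}
```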
diff --git a/doc/sphinx/tutorial/devicedetection.rst b/doc/sphinx/tutorial/devicedetection.rst
deleted file mode 100644
index 6bfc4c9..0000000
--- a/doc/sphinx/tutorial/devicedetection.rst
+++ /dev/null
@@ -1,268 +0,0 @@
-.. _tutorial-devicedetect:
-
-Device detection
-~~~~~~~~~~~~~~~~
-
-Device detection is figuring out what kind of content to serve to a
-client based on the User-Agent string supplied in a request.
-
-Use cases for this are for example to send size reduced files to mobile
-clients with small screens and on high latency networks, or to 
-provide a streaming video codec that the client understands.
-
-There are a couple of strategies on what to do with such clients:
-
-1) Redirect them to another URL.
-2) Use a different backend for the special clients.
-3) Change the backend requests so the usual backend sends tailored content.
-
-To make the examples easier to understand, this text assumes that the
-req.http.X-UA-Device header is present, and that it is unique per
-client class that content is to be served to.
-
-Setting this header can be as simple as::
-
-   sub vcl_recv { 
-       if (req.http.User-Agent ~ "(?i)iphone") {
-           set req.http.X-UA-Device = "mobile-iphone";
-       }
-   }
-
-There are various commercial and free offerings for grouping and
-identifying clients in further detail than this. For a basic and
-community-based regular expression set, see
-https://github.com/varnish/varnish-devicedetect/ .
-
-
-Serve the different content on the same URL
--------------------------------------------
-
-The tricks involved are:
-
-1. Detect the client (pretty simple, just include devicedetect.vcl and
-   call it).
-2. Figure out how to signal the backend what client class this is. This
-   includes for example setting a header, changing a header or even
-   changing the backend request URL.
-3. Modify any response from the backend to add missing Vary headers, so
-   Varnish' internal handling of this kicks in.
-4. Modify output sent to the client so any caches outside our control
-   don't serve the wrong content.
-
-All this while still making sure that we only get 1 cached object per URL per
-device class.
-
-
-Example 1: Send HTTP header to backend
-''''''''''''''''''''''''''''''''''''''
-
-The basic case is that Varnish adds the X-UA-Device HTTP header to the backend
-requests, and the backend mentions in the response Vary header that the content
-is dependent on this header.
-
-Everything works out of the box from Varnish' perspective.
-
-.. 071-example1-start
-
-VCL::
-
-    sub vcl_recv { 
-        # call some detection engine that set req.http.X-UA-Device
-    }
-    # req.http.X-UA-Device is copied by Varnish into bereq.http.X-UA-Device
-
-    # so, this is a bit counterintuitive. The backend creates content based on
-    # the normalized User-Agent, but we use Vary on X-UA-Device so Varnish will
-    # use the same cached object for all U-As that map to the same X-UA-Device.
-    #
-    # If the backend does not mention in Vary that it has crafted special
-    # content based on the User-Agent (==X-UA-Device), add it. 
-    # If your backend does set Vary: User-Agent, you may have to remove that here.
-    sub vcl_fetch {
-        if (req.http.X-UA-Device) {
-            if (!beresp.http.Vary) { # no Vary at all
-                set beresp.http.Vary = "X-UA-Device"; 
-            } elseif (beresp.http.Vary !~ "X-UA-Device") { # add to existing Vary
-                set beresp.http.Vary = beresp.http.Vary + ", X-UA-Device"; 
-            } 
-        }
-        # comment this out if you don't want the client to know your
-        # classification
-        set beresp.http.X-UA-Device = req.http.X-UA-Device;
-    }
-
-    # to keep any caches in the wild from serving wrong content to client #2
-    # behind them, we need to transform the Vary on the way out.
-    sub vcl_deliver {
-        if ((req.http.X-UA-Device) && (resp.http.Vary)) {
-            set resp.http.Vary = regsub(resp.http.Vary, "X-UA-Device", "User-Agent");
-        }
-    }
-
-.. 071-example1-end
-
-Example 2: Normalize the User-Agent string
-''''''''''''''''''''''''''''''''''''''''''
-
-Another way of signaling the device type is to override or normalize the
-User-Agent header sent to the backend.
-
-For example::
-
-    User-Agent: Mozilla/5.0 (Linux; U; Android 2.2; nb-no; HTC Desire Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1
-
-becomes::
-
-    User-Agent: mobile-android
-
-when seen by the backend.
-
-This works if you don't need the original header for anything on the backend.
-A possible use for this is for CGI scripts where only a small set of predefined
-headers are (by default) available for the script.
-
-.. 072-example2-start
-
-VCL::
-
-    sub vcl_recv { 
-        # call some detection engine that set req.http.X-UA-Device
-    }
-
-    # override the header before it is sent to the backend
-    sub vcl_miss { if (req.http.X-UA-Device) { set bereq.http.User-Agent = req.http.X-UA-Device; } }
-    sub vcl_pass { if (req.http.X-UA-Device) { set bereq.http.User-Agent = req.http.X-UA-Device; } }
-
-    # standard Vary handling code from previous examples.
-    sub vcl_fetch {
-        if (req.http.X-UA-Device) {
-            if (!beresp.http.Vary) { # no Vary at all
-                set beresp.http.Vary = "X-UA-Device";
-            } elseif (beresp.http.Vary !~ "X-UA-Device") { # add to existing Vary
-                set beresp.http.Vary = beresp.http.Vary + ", X-UA-Device";
-            }
-        }
-        set beresp.http.X-UA-Device = req.http.X-UA-Device;
-    }
-    sub vcl_deliver {
-        if ((req.http.X-UA-Device) && (resp.http.Vary)) {
-            set resp.http.Vary = regsub(resp.http.Vary, "X-UA-Device", "User-Agent");
-        }
-    }
-
-.. 072-example2-end
-
-Example 3: Add the device class as a GET query parameter
-''''''''''''''''''''''''''''''''''''''''''''''''''''''''
-
-If everything else fails, you can add the device type as a GET argument::
-
-    http://example.com/article/1234.html --> http://example.com/article/1234.html?devicetype=mobile-iphone
-
-The client itself does not see this classification, only the backend request
-is changed.
-
-.. 073-example3-start
-
-VCL::
-
-    sub vcl_recv { 
-        # call some detection engine that set req.http.X-UA-Device
-    }
-
-    sub append_ua {
-        if ((req.http.X-UA-Device) && (req.request == "GET")) {
-            # if there are existing GET arguments;
-            if (req.url ~ "\?") {
-                set req.http.X-get-devicetype = "&devicetype=" + req.http.X-UA-Device;
-            } else { 
-                set req.http.X-get-devicetype = "?devicetype=" + req.http.X-UA-Device;
-            }
-            set req.url = req.url + req.http.X-get-devicetype;
-            unset req.http.X-get-devicetype;
-        }
-    }
-
-    # do this after vcl_hash, so all Vary-ants can be purged in one go. (avoid ban()ing)
-    sub vcl_miss { call append_ua; }
-    sub vcl_pass { call append_ua; }
-
-    # Handle redirects, otherwise standard Vary handling code from previous
-    # examples.
-    sub vcl_fetch {
-        if (req.http.X-UA-Device) {
-            if (!beresp.http.Vary) { # no Vary at all
-                set beresp.http.Vary = "X-UA-Device";
-            } elseif (beresp.http.Vary !~ "X-UA-Device") { # add to existing Vary
-                set beresp.http.Vary = beresp.http.Vary + ", X-UA-Device";
-            }
-
-            # if the backend returns a redirect (think missing trailing slash),
-            # we will potentially show the extra address to the client. we
-            # don't want that.  if the backend reorders the get parameters, you
-            # may need to be smarter here. (? and & ordering)
-
-            if (beresp.status == 301 || beresp.status == 302 || beresp.status == 303) {
-                set beresp.http.location = regsub(beresp.http.location, "[?&]devicetype=.*$", "");
-            }
-        }
-        set beresp.http.X-UA-Device = req.http.X-UA-Device;
-    }
-    sub vcl_deliver {
-        if ((req.http.X-UA-Device) && (resp.http.Vary)) {
-            set resp.http.Vary = regsub(resp.http.Vary, "X-UA-Device", "User-Agent");
-        }
-    }
-
-.. 073-example3-end
-
-Different backend for mobile clients
-------------------------------------
-
-If you have a different backend that serves pages for mobile clients, or any
-special needs in VCL, you can use the X-UA-Device header like this::
-
-    backend mobile {
-        .host = "10.0.0.1";
-        .port = "80";
-    }
-
-    sub vcl_recv {
-        # call some detection engine
-
-        if (req.http.X-UA-Device ~ "^mobile" || req.http.X-UA-device ~ "^tablet") {
-            set req.backend = mobile;
-        }
-    }
-    sub vcl_hash {
-        if (req.http.X-UA-Device) {
-            hash_data(req.http.X-UA-Device);
-        }
-    }
-
-Redirecting mobile clients
---------------------------
-
-If you want to redirect mobile clients you can use the following snippet.
-
-.. 065-redir-mobile-start
-
-VCL::
-
-    sub vcl_recv {
-        # call some detection engine
-
-        if (req.http.X-UA-Device ~ "^mobile" || req.http.X-UA-device ~ "^tablet") {
-            error 750 "Moved Temporarily";
-        }
-    }
-     
-    sub vcl_error {
-        if (obj.status == 750) {
-            set obj.http.Location = "http://m.example.com" + req.url;
-            set obj.status = 302;
-            return(deliver);
-        }
-    }
-
-.. 065-redir-mobile-end
-
-
diff --git a/doc/sphinx/tutorial/esi.rst b/doc/sphinx/tutorial/esi.rst
deleted file mode 100644
index 720b790..0000000
--- a/doc/sphinx/tutorial/esi.rst
+++ /dev/null
@@ -1,79 +0,0 @@
-.. _tutorial-esi:
-
-Edge Side Includes
-------------------
-
-*Edge Side Includes* is a language for including *fragments* of web pages
-in other web pages. Think of it as an HTML include statement that works
-over HTTP.
-
-On most web sites a lot of content is shared between
-pages. Regenerating this content for every page view is wasteful and
-ESI tries to address that by letting you decide the cache policy for
-each fragment individually.
-
-In Varnish we've only implemented a small subset of ESI. As of 2.1 we
-have three ESI statements:
-
- * esi:include 
- * esi:remove
- * <!--esi ...-->
-
-Content substitution based on variables and cookies is not implemented
-but is on the roadmap. 
-
-Varnish will not process ESI instructions in HTML comments.
-
-Example: esi:include
-~~~~~~~~~~~~~~~~~~~~
-
-Let's see an example of how this could be used. This simple CGI script
-outputs the date::
-
-     #!/bin/sh
-     
-     echo 'Content-type: text/html'
-     echo ''
-     date "+%Y-%m-%d %H:%M"
-
-Now, let's have an HTML file that has an ESI include statement::
-
-     <HTML>
-     <BODY>
-     The time is: <esi:include src="/cgi-bin/date.cgi"/>
-     at this very moment.
-     </BODY>
-     </HTML>
-
-For ESI to work you need to activate ESI processing in VCL, like this::
-
-    sub vcl_fetch {
-        if (req.url == "/test.html") {
-            set beresp.do_esi = true; /* Do ESI processing              */
-            set beresp.ttl = 24h;     /* Sets the TTL on the HTML above */
-        } elseif (req.url == "/cgi-bin/date.cgi") {
-            set beresp.ttl = 1m;      /* Sets a one minute TTL on       */
-                                      /* the included object            */
-        }
-    }
-
-Example: esi:remove and <!--esi ... -->
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The <esi:remove> and <!--esi ... --> constructs can be used to present
-appropriate content whether or not ESI is available, for example you can
-include content when ESI is available or link to it when it is not.
-ESI processors will remove the start ("<!--esi") and end ("-->") when
-the page is processed, while still processing the contents. If the page
-is not processed, it will remain, becoming an HTML/XML comment tag.
-ESI processors will remove <esi:remove> tags and all content contained
-in them, allowing you to only render the content when the page is not
-being ESI-processed.
-For example::
-
-  <esi:remove> 
-    <a href="http://www.example.com/LICENSE">The license</a>
-  </esi:remove>
-  <!--esi  
-  <p>The full text of the license:</p>
-  <esi:include src="http://example.com/LICENSE" />
-  -->
diff --git a/doc/sphinx/tutorial/handling_misbehaving_servers.rst b/doc/sphinx/tutorial/handling_misbehaving_servers.rst
deleted file mode 100644
index 406b4b3..0000000
--- a/doc/sphinx/tutorial/handling_misbehaving_servers.rst
+++ /dev/null
@@ -1,103 +0,0 @@
-.. _tutorial-handling_misbehaving_servers:
-
-Misbehaving servers
--------------------
-
-A key feature of Varnish is its ability to shield you from misbehaving
-web- and application servers.
-
-
-
-Grace mode
-~~~~~~~~~~
-
-When several clients are requesting the same page Varnish will send
-one request to the backend and place the others on hold while fetching
-one copy from the backend. In some products this is called request
-coalescing and Varnish does this automatically.
-
-If you are serving thousands of hits per second the queue of waiting
-requests can get huge. There are two potential problems - one is a
-thundering herd problem - suddenly releasing a thousand threads to
-serve content might send the load sky high. Secondly - nobody likes to
-wait. To deal with this we can instruct Varnish to keep
-the objects in cache beyond their TTL and to serve the waiting
-requests somewhat stale content.
-
-In order to serve stale content we must first have some content to
-serve. To make Varnish keep all objects for 30 minutes beyond their
-TTL, use the following VCL::
-
-  sub vcl_fetch {
-    set beresp.grace = 30m;
-  }
-
-Varnish still won't serve the stale objects. In order to enable
-Varnish to actually serve the stale object we must enable this on the
-request. Let's say that we accept serving a 15 second old object::
-
-  sub vcl_recv {
-    set req.grace = 15s;
-  }
-
-You might wonder why we should keep the objects in the cache for 30
-minutes if we are unable to serve them. Well, if you have enabled
-:ref:`tutorial-advanced_backend_servers-health` you can check if the
-backend is sick, and if it is we can serve the stale content for a bit
-longer::
-
-   if (! req.backend.healthy) {
-      set req.grace = 5m;
-   } else {
-      set req.grace = 15s;
-   }
-
-So, to sum up, grace mode solves two problems:
- * it serves stale content to avoid request pile-up.
- * it serves stale content if the backend is not healthy.
-
-Saint mode
-~~~~~~~~~~
-
-Sometimes servers get flaky. They start throwing out random
-errors. You can instruct Varnish to try to handle this in a
-more-than-graceful way - enter *Saint mode*. Saint mode enables you to
-discard a certain page from one backend server and either try another
-server or serve stale content from cache. Let's have a look at how this
-can be enabled in VCL::
-
-  sub vcl_fetch {
-    if (beresp.status == 500) { 
-      set beresp.saintmode = 10s;
-      return(restart);
-    }
-    set beresp.grace = 5m;
-  } 
-
-When we set beresp.saintmode to 10 seconds, Varnish will not ask *that*
-server for that URL for 10 seconds. A blacklist, more or less. A
-restart is also performed, so if you have other backends capable of
-serving that content Varnish will try those. When you are out of
-backends Varnish will serve the content from its stale cache.
-
-This can really be a life saver.
-
-Known limitations on grace- and saint mode
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-If your request fails while it is being fetched you're thrown into
-vcl_error. vcl_error has access to a rather limited set of data so you
-can't enable saint mode or grace mode here. This will be addressed in a
-future release, but a work-around is available:
-
-* Declare a backend that is always sick.
-* Set a magic marker in vcl_error.
-* Restart the transaction.
-* Note the magic marker in vcl_recv and set the backend to the one mentioned.
-* Varnish will now serve stale data if any is available.
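A rough sketch of that work-around (the backend name, port, probe values and marker header below are invented for illustration, not a canned recipe)::

```vcl
# A backend we expect to be sick: the probe polls a port where
# (by assumption) nothing listens, so Varnish marks it unhealthy
# and falls back to stale cache via grace.
backend always_sick {
    .host = "127.0.0.1";
    .port = "9999";          # assumed: nothing listens here
    .probe = {
        .url = "/";
        .interval = 5m;
        .timeout = 1s;
        .window = 1;
        .threshold = 1;
        .initial = 0;        # start out sick
    }
}

sub vcl_error {
    if (req.restarts == 0) {
        # set the magic marker and restart
        set req.http.X-Use-Stale = "1";
        return (restart);
    }
}

sub vcl_recv {
    if (req.http.X-Use-Stale) {
        set req.backend = always_sick;
        set req.grace = 1h;  # serve stale data if any is available
    }
}
```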
-
-
-God mode
-~~~~~~~~
-Not implemented yet. :-)
-
diff --git a/doc/sphinx/tutorial/increasing_your_hitrate.rst b/doc/sphinx/tutorial/increasing_your_hitrate.rst
deleted file mode 100644
index b9fa7e6..0000000
--- a/doc/sphinx/tutorial/increasing_your_hitrate.rst
+++ /dev/null
@@ -1,213 +0,0 @@
-.. _tutorial-increasing_your_hitrate:
-
-Achieving a high hitrate
-------------------------
-
-Now Varnish is up and running and you can access your web
-application through it. Unless your application is specifically
-written to work behind a web accelerator you'll probably need to make
-some changes to either the configuration or the application in order
-to get a high hit rate in Varnish.
-
-Varnish will not cache your data unless it's absolutely sure it is
-safe to do so. So, for you to understand how Varnish decides if and
-how to cache a page, I'll guide you through a couple of tools that you
-will find useful.
-
-Note that you need a tool to see what HTTP headers fly between you and
-the web server. On the Varnish server, the easiest is to use
-varnishlog and varnishtop but sometimes a client-side tool makes
-sense. Here are the ones I use.
-
-Tool: varnishtop
-~~~~~~~~~~~~~~~~
-
-You can use varnishtop to identify what URLs are hitting the backend
-the most. ``varnishtop -i txurl`` is an essential command. You can see
-some other examples of varnishtop usage in :ref:`tutorial-statistics`.
-
-
-Tool: varnishlog
-~~~~~~~~~~~~~~~~
-
-When you have identified an URL which is frequently sent to the
-backend you can use varnishlog to have a look at the request.
-``varnishlog -c -m 'RxURL:^/foo/bar'`` will show you the requests
-coming from the client (-c) matching /foo/bar.
-
-For more information on how varnishlog works please see
-:ref:`tutorial-logging` or man :ref:`ref-varnishlog`.
-
-For extended diagnostics headers, see
-http://www.varnish-cache.org/trac/wiki/VCLExampleHitMissHeader
-
-
-Tool: lwp-request
-~~~~~~~~~~~~~~~~~
-
-lwp-request is part of The World-Wide Web library for Perl. It's a
-couple of really basic programs that can execute an HTTP request and
-give you the result. I mostly use two programs, GET and HEAD.
-
-vg.no was the first site to use Varnish and the people running Varnish
-there are quite clueful. So it's interesting to look at their HTTP
-Headers. Let's send a GET request for their home page::
-
-  $ GET -H 'Host: www.vg.no' -Used http://vg.no/
-  GET http://vg.no/
-  Host: www.vg.no
-  User-Agent: lwp-request/5.834 libwww-perl/5.834
-  
-  200 OK
-  Cache-Control: must-revalidate
-  Refresh: 600
-  Title: VG Nett - Forsiden - VG Nett
-  X-Age: 463
-  X-Cache: HIT
-  X-Rick-Would-Never: Let you down
-  X-VG-Jobb: http://www.finn.no/finn/job/fulltime/result?keyword=vg+multimedia Merk:HeaderNinja
-  X-VG-Korken: http://www.youtube.com/watch?v=Fcj8CnD5188
-  X-VG-WebCache: joanie
-  X-VG-WebServer: leon
-
-OK. Let me explain what it does. GET usually sends off HTTP 0.9
-requests, which lack the Host header, so I add a Host header with the
--H option. -U prints request headers, -s prints the response status, -e
-prints response headers and -d discards the actual content. We don't
-really care about the content, only the headers.
-
-As you can see, VG adds quite a bit of information in their
-headers. Some of the headers, like X-Rick-Would-Never, are specific
-to vg.no and their somewhat odd sense of humour. Others, like
-X-VG-WebCache, are for debugging purposes.
-
-So, to check whether a site sets cookies for a specific URL, just do::
-
-  GET -Used http://example.com/ |grep ^Set-Cookie
-
-Tool: Live HTTP Headers
-~~~~~~~~~~~~~~~~~~~~~~~
-
-There is also a plugin for Firefox. *Live HTTP Headers* can show you
-what headers are being sent and received. Live HTTP Headers can be
-found at https://addons.mozilla.org/en-US/firefox/addon/3829/ or by
-googling "Live HTTP Headers".
-
-
-The role of HTTP Headers
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-Along with each HTTP request and response comes a bunch of headers
-carrying metadata. Varnish will look at these headers to determine if
-it is appropriate to cache the contents and how long Varnish can keep
-the content.
-
-Please note that when considering these headers Varnish actually
-considers itself *part of* the actual webserver. The rationale being
-that both are under your control. 
-
-The term *surrogate origin cache* is not really well defined in RFC
-2616, so the way Varnish works here might differ from your
-expectations.
-
-Let's take a look at the important headers you should be aware of:
-
-Cache-Control
-~~~~~~~~~~~~~
-
-The Cache-Control header instructs caches how to handle the content.
-Varnish cares about the *max-age* parameter and uses it to calculate
-the TTL for an object.
-
-"Cache-Control: nocache" is ignored but if you need this you can
-easily add support for it.
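-
-If you do want Varnish to honour it, one hedged way is to check the
-backend response in vcl_fetch. This sketch is an assumption, not
-built-in behaviour::
-
-  sub vcl_fetch {
-      if (beresp.http.Cache-Control ~ "no-cache") {
-          # Don't cache, and remember the decision for a while.
-          return(hit_for_pass);
-      }
-  }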
-
-So make sure you issue a Cache-Control header with a max-age
-parameter. You can have a look at what Varnish Software's Drupal
-server issues::
-
-  $ GET -Used http://www.varnish-software.com/|grep ^Cache-Control
-  Cache-Control: public, max-age=600
-
-Age
-~~~
-
-Varnish adds an Age header to indicate how long the object has been
-kept inside Varnish. You can grep out Age from varnishlog like this::
-
-  varnishlog -i TxHeader -I ^Age
-
-Pragma
-~~~~~~
-
-An HTTP 1.0 server might send "Pragma: no-cache". Varnish ignores this
-header. You could easily add support for this header in VCL.
-
-In vcl_fetch::
-
-  if (beresp.http.Pragma ~ "no-cache") {
-     return(hit_for_pass);
-  }
-
-Authorization
-~~~~~~~~~~~~~
-
-If Varnish sees an Authorization header it will pass the request. If
-this is not what you want you can unset the header.
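-
-For instance, if some URLs carry an Authorization header that is
-irrelevant for caching, you could unset it in vcl_recv. The URL
-pattern below is a made-up example::
-
-  sub vcl_recv {
-      # Assumption: authorization is not needed for static assets.
-      if (req.url ~ "^/static/" && req.http.Authorization) {
-          unset req.http.Authorization;
-      }
-  }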
-
-Overriding the time-to-live (ttl)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Sometimes your backend will misbehave. It might, depending on your
-setup, be easier to override the ttl in Varnish than to fix your
-somewhat cumbersome backend. 
-
-You need VCL to identify the objects you want and then you set the
-beresp.ttl to whatever you want::
-
-  sub vcl_fetch {
-      if (req.url ~ "^/legacy_broken_cms/") {
-          set beresp.ttl = 5d;
-      }
-  }
-
-The example will set the TTL to 5 days for the old legacy stuff on
-your site.
-
-Forcing caching for certain requests and certain responses
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Since you still have this cumbersome backend that isn't very friendly
-to work with, you might want to override more stuff in Varnish. We
-recommend that you rely as much as you can on the default caching
-rules. It is perfectly easy to force Varnish to look up an object in
-the cache, but it isn't really recommended.
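-
-If you decide to force a lookup anyway, a hedged sketch looks like
-this. The URL pattern is an assumption; stripping the Cookie header is
-what actually makes the default rules cache the response::
-
-  sub vcl_recv {
-      if (req.url ~ "^/legacy_broken_cms/") {
-          # Assumption: these pages don't depend on cookies.
-          unset req.http.Cookie;
-          return(lookup);
-      }
-  }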
-
-
-Normalizing your namespace
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Some sites are accessed via lots of
-hostnames. http://www.varnish-software.com/,
-http://varnish-software.com/ and http://varnishsoftware.com/ all point
-at the same site. Since Varnish doesn't know they are the same,
-Varnish will cache a different version of every page for every
-hostname. You can mitigate this in your web server configuration by
-setting up redirects or by using the following VCL::
-
-  if (req.http.host ~ "(?i)^(www\.)?varnish-?software\.com") {
-    set req.http.host = "varnish-software.com";
-  }
-
-
-Ways of increasing your hitrate even more
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The following chapters should give you ways of further increasing
-your hit rate, especially the chapter on Cookies.
-
- * :ref:`tutorial-cookies`
- * :ref:`tutorial-vary`
- * :ref:`tutorial-purging`
- * :ref:`tutorial-esi`
-
diff --git a/doc/sphinx/tutorial/index.rst b/doc/sphinx/tutorial/index.rst
deleted file mode 100644
index 91fdb17..0000000
--- a/doc/sphinx/tutorial/index.rst
+++ /dev/null
@@ -1,38 +0,0 @@
-.. _tutorial-index:
-
-%%%%%%%%%%%%%
-Using Varnish
-%%%%%%%%%%%%%
-
-This tutorial is intended for system administrators managing Varnish
-cache. The reader should know how to configure her web- or application
-server and have basic knowledge of the HTTP protocol. The reader
-should have Varnish up and running with the default configuration. 
-
-The tutorial is split into short chapters, each chapter taking on a
-separate topic. Good luck.
-
-.. toctree::
-   :maxdepth: 1
-
-   introduction
-   backend_servers
-   starting_varnish
-   logging
-   sizing_your_cache
-   putting_varnish_on_port_80
-   vcl
-   statistics
-   increasing_your_hitrate
-   cookies
-   vary
-   purging
-   compression
-   esi
-   virtualized
-   websockets
-   devicedetection
-   advanced_backend_servers
-   handling_misbehaving_servers
-   advanced_topics
-   troubleshooting
-
diff --git a/doc/sphinx/tutorial/introduction.rst b/doc/sphinx/tutorial/introduction.rst
deleted file mode 100644
index 0d43623..0000000
--- a/doc/sphinx/tutorial/introduction.rst
+++ /dev/null
@@ -1,37 +0,0 @@
-.. _tutorial-intro:
-
-What is Varnish?
-----------------
-
-Varnish Cache is a web application accelerator also known as a caching
-HTTP reverse proxy. You install it in front of any server that speaks
-HTTP and configure it to cache the contents. Varnish Cache is really,
-really fast. It typically speeds up delivery by a factor of 300 to
-1000x, depending on your architecture.
-
-
-Performance
-~~~~~~~~~~~
-
-Varnish performs really, really well. It is usually bound by the speed
-of the network, effectively turning performance into a non-issue. We've
-seen Varnish delivering 20 Gbps on regular off-the-shelf hardware.
-
-Flexibility
-~~~~~~~~~~~
-
-One of the key features of Varnish Cache, in addition to its
-performance, is the flexibility of its configuration language,
-VCL. VCL enables you to write policies on how incoming requests should
-be handled. In such a policy you can decide what content you want to
-serve, from where you want to get the content and how the request or
-response should be altered. You can read more about this in our
-tutorial.
-
-
-Supported platforms
-~~~~~~~~~~~~~~~~~~~
-
-Varnish is written to run on modern versions of Linux and FreeBSD and
-the best experience is had on those platforms. Thanks to our
-contributors it also runs on NetBSD, OpenBSD and OS X.
diff --git a/doc/sphinx/tutorial/logging.rst b/doc/sphinx/tutorial/logging.rst
deleted file mode 100644
index 1f0bc18..0000000
--- a/doc/sphinx/tutorial/logging.rst
+++ /dev/null
@@ -1,68 +0,0 @@
-.. _tutorial-logging:
-
-Logging in Varnish
-------------------
-
-One of the really nice features in Varnish is how logging works.
-Instead of logging to a normal log file Varnish logs to a shared
-memory segment. When the end of the segment is reached we start over,
-overwriting old data. This is much, much faster than logging to a file
-and it doesn't require disk space.
-
-The flip side is that if you forget to have a program actually write the
-logs to disk they will disappear.
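-
-The usual program for persisting the logs is varnishncsa, which writes
-the shared memory log to disk in Apache/NCSA combined format. A typical
-invocation might look like this (the file paths are examples)::
-
-  # varnishncsa -a -w /var/log/varnish/access.log -D -P /var/run/varnishncsa.pid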
-
-varnishlog is one of the programs you can use to look at what Varnish
-is logging. Varnishlog gives you the raw logs, everything that is
-written to the logs. There are other clients as well, we'll show you
-these later.
-
-In the terminal window where you started Varnish, type *varnishlog*
-and press enter.
-
-You'll see lines like these scrolling slowly by::
-
-    0 CLI          - Rd ping
-    0 CLI          - Wr 200 PONG 1273698726 1.0
-
-This is the Varnish master process checking up on the caching process
-to see that everything is OK.
-
-Now go to the browser and reload the page displaying your web
-app. You'll see lines like these::
-
-   11 SessionOpen  c 127.0.0.1 58912 0.0.0.0:8080
-   11 ReqStart     c 127.0.0.1 58912 595005213
-   11 RxRequest    c GET
-   11 RxURL        c /
-   11 RxProtocol   c HTTP/1.1
-   11 RxHeader     c Host: localhost:8080
-   11 RxHeader     c Connection: keep-alive
-
-The first column is an arbitrary number, it defines the request. Lines
-with the same number are part of the same HTTP transaction. The second
-column is the *tag* of the log message. All log entries are tagged
-with a tag indicating what sort of activity is being logged. Tags
-starting with Rx indicate Varnish is receiving data and Tx indicates
-sending data.
-
-The third column tells us whether the data is coming from or going to
-the client (c), or to/from the backend (b). The fourth column is the
-data being logged.
-
-Now, you can filter quite a bit with varnishlog. The basic options you
-want to know are:
-
--b
- Only show log lines from traffic going between Varnish and the backend 
- servers. This will be useful when we want to optimize cache hit rates.
-
--c 
- Same as -b but for client side traffic.
-
--m tag:regex
- Only list transactions where the tag matches a regular expression. If
- it matches you will get the whole transaction.
-
-Now that Varnish seems to work OK it's time to put Varnish on port 80
-while we tune it.
diff --git a/doc/sphinx/tutorial/purging.rst b/doc/sphinx/tutorial/purging.rst
deleted file mode 100644
index 422f9f4..0000000
--- a/doc/sphinx/tutorial/purging.rst
+++ /dev/null
@@ -1,175 +0,0 @@
-.. _tutorial-purging:
-
-=====================
- Purging and banning
-=====================
-
-One of the most effective ways of increasing your hit ratio is to
-increase the time-to-live (ttl) of your objects. But, as you're aware,
-in this twitterific day and age serving content that is outdated is
-bad for business.
-
-The solution is to notify Varnish when there is fresh content
-available. This can be done through three mechanisms: HTTP purging,
-banning and forced cache misses. First, let me explain HTTP purges.
-
-
-HTTP Purges
-===========
-
-A *purge* is what happens when you pick out an object from the cache
-and discard it along with its variants. Usually a purge is invoked
-through HTTP with the method PURGE.
-
-An HTTP purge is similar to an HTTP GET request, except that the
-*method* is PURGE. Actually you can call the method whatever you'd
-like, but most people refer to this as purging. Squid supports the
-same mechanism. In order to support purging in Varnish you need the
-following VCL in place::
-
-  acl purge {
-	  "localhost";
-	  "192.168.55.0"/24;
-  }
-  
-  sub vcl_recv {
-      	  # allow PURGE from localhost and 192.168.55...
-
-	  if (req.request == "PURGE") {
-		  if (!client.ip ~ purge) {
-			  error 405 "Not allowed.";
-		  }
-		  return (lookup);
-	  }
-  }
-  
-  sub vcl_hit {
-	  if (req.request == "PURGE") {
-	          purge;
-		  error 200 "Purged.";
-	  }
-  }
-  
-  sub vcl_miss {
-	  if (req.request == "PURGE") {
-	          purge;
-		  error 200 "Purged.";
-	  }
-  }
-
-As you can see we have used two new VCL subroutines, vcl_hit and
-vcl_miss. When we call lookup, Varnish will try to look up the object
-in its cache. It will either hit an object or miss it, and the
-corresponding subroutine is called. In vcl_hit the object that is
-stored in cache is available and we can set its TTL. The purge in
-vcl_miss is necessary to purge all variants in the cases where you hit
-an object, but miss a particular variant.
-
-So for example.com to invalidate their front page they would call out
-to Varnish like this::
-
-  PURGE / HTTP/1.0
-  Host: example.com
-
-And Varnish would then discard the front page. This will remove all
-variants as defined by Vary.
-
-Bans
-====
-
-There is another way to invalidate content: Bans. You can think of
-bans as a sort of a filter on objects already in the cache. You *ban*
-certain content from being served from your cache. You can ban
-content based on any metadata we have.
-A ban will only work on objects already in the cache, it does not
-prevent new content from entering the cache or being served.
-
-Support for bans is built into Varnish and available in the CLI
-interface. To ban every png object belonging to example.com, issue
-the following command::
-
-  ban req.http.host == "example.com" && req.url ~ "\.png$"
-
-Quite powerful, really.
-
-Bans are checked when we hit an object in the cache, but before we
-deliver it. *An object is only checked against newer bans*. 
-
-Bans that only match against obj.* are also processed by a background
-worker thread called the *ban lurker*. The ban lurker will walk the
-heap, try to match objects, and evict the matching objects. How
-aggressive the ban lurker is can be controlled by the parameter
-ban_lurker_sleep. The ban lurker can be disabled by setting
-ban_lurker_sleep to 0.
-
-Bans that are older than the oldest object in the cache are discarded
-without evaluation. If you have a lot of objects with long TTLs that
-are seldom accessed, you might accumulate a lot of bans. This might
-impact CPU usage and thereby performance.
-
-You can also add bans to Varnish via HTTP. Doing so requires a bit of VCL::
-
-  sub vcl_recv {
-	  if (req.request == "BAN") {
-                  # Same ACL check as above:
-		  if (!client.ip ~ purge) {
-			  error 405 "Not allowed.";
-		  }
-		  ban("req.http.host == " + req.http.host +
-		        "&& req.url == " + req.url);
-
-		  # Throw a synthetic page so the
-                  # request won't go to the backend.
-		  error 200 "Ban added";
-	  }
-  }
-
-This VCL snippet enables Varnish to handle an HTTP BAN method, adding
-a ban on the URL, including the host part.
-
-The ban lurker can help you keep the ban list at a manageable size, so
-we recommend that you avoid using req.* in your bans, as the request
-object is not available in the ban lurker thread.
-
-You can use the following template to write ban lurker friendly bans::
-
-  sub vcl_fetch {
-    set beresp.http.x-url = req.url;
-  }
-
-  sub vcl_deliver {
-    unset resp.http.x-url; # Optional
-  }
-
-  sub vcl_recv {
-    if (req.request == "PURGE") {
-      if (client.ip !~ purge) {
-        error 401 "Not allowed";
-      }
-      ban("obj.http.x-url ~ " + req.url); # Assumes req.url is a regex. This might be a bit too simple
-    }
-  }
-
-To inspect the current ban list, issue the ban.list command in CLI. This
-will produce a status of all current bans::
-
-  0xb75096d0 1318329475.377475    10      obj.http.x-url ~ test
-  0xb7509610 1318329470.785875    20G     obj.http.x-url ~ test
-
-The ban list contains the ID of the ban and the timestamp when the ban
-entered the ban list, followed by a count of the objects that have
-reached this point in the ban list, optionally postfixed with a 'G'
-for "Gone" if the ban is no longer valid. Finally, the ban expression
-is listed. A ban can be marked as Gone if it is a duplicate ban, but
-it is still kept in the list for optimization purposes.
-
-Forcing a cache miss
-====================
-
-The final way to invalidate an object is a method that allows you to
-refresh an object by forcing a hash miss for a single request. If you
-set req.hash_always_miss to true, Varnish will miss the current object
-in the cache, thus forcing a fetch from the backend. This can in turn
-add the freshly fetched object to the cache, thus overriding the
-current one. The old object will stay in the cache until its TTL
-expires or it is evicted by some other means.
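-
-A hedged sketch of how this could be wired up in vcl_recv. The trigger
-header X-Force-Refresh is a made-up name, and reusing the purge ACL
-for access control is an assumption::
-
-  sub vcl_recv {
-      # Assumption: only trusted clients may force a refresh.
-      if (req.http.X-Force-Refresh && client.ip ~ purge) {
-          set req.hash_always_miss = true;
-      }
-  }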
diff --git a/doc/sphinx/tutorial/putting_varnish_on_port_80.rst b/doc/sphinx/tutorial/putting_varnish_on_port_80.rst
deleted file mode 100644
index 73a80ff..0000000
--- a/doc/sphinx/tutorial/putting_varnish_on_port_80.rst
+++ /dev/null
@@ -1,25 +0,0 @@
-
-Put Varnish on port 80
-----------------------
-
-Until now we've been running with Varnish on a high port, for testing
-purposes. You should test your application and if it works OK we can
-switch, so Varnish will be running on port 80 and your web server on a
-high port.
-
-First we kill off varnishd::
-
-     # pkill varnishd
-
-and stop your web server. Edit the configuration for your web server
-and make it bind to port 8080 instead of 80. Now open the Varnish
-default.vcl and change the port of the *default* backend to 8080.
-
-Start up your web server and then start varnish::
-
-      # varnishd -f /usr/local/etc/varnish/default.vcl -s malloc,1G -T 127.0.0.1:2000
-
-Note that we've removed the -a option. Now Varnish, as its default
-setting dictates, will bind to the http port (80). Now everyone
-accessing your site will be accessing through Varnish.
-
diff --git a/doc/sphinx/tutorial/sizing_your_cache.rst b/doc/sphinx/tutorial/sizing_your_cache.rst
deleted file mode 100644
index c19647c..0000000
--- a/doc/sphinx/tutorial/sizing_your_cache.rst
+++ /dev/null
@@ -1,25 +0,0 @@
-
-Sizing your cache
------------------
-
-Picking how much memory you should give Varnish can be a tricky
-task. A few things to consider:
-
 * How big is your *hot* data set? For a portal or news site that
-   would be the size of the front page with all the stuff on it, and
-   the size of all the pages and objects linked from the first page. 
- * How expensive is it to generate an object? Sometimes it makes sense
-   to only cache images a little while or not to cache them at all if
-   they are cheap to serve from the backend and you have a limited
-   amount of memory.
- * Watch the n_lru_nuked counter with :ref:`reference-varnishstat` or some other
-   tool. If you have a lot of LRU activity then your cache is evicting
-   objects due to space constraints and you should consider increasing
-   the size of the cache.
-
-Be aware that every object that is stored also carries overhead that
-is kept outside the actual storage area. So, even if you specify -s
-malloc,16G Varnish might actually use **double** that. Varnish has an
-overhead of about 1k per object. So, if you have lots of small objects
-in your cache the overhead might be significant.
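-
-As a rough back-of-the-envelope example (the object count here is made
-up)::
-
-  10 million objects x ~1 kB overhead  =  ~10 GB of overhead
-  -s malloc,10G + ~10 GB of overhead   =  ~20 GB actually used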
-
diff --git a/doc/sphinx/tutorial/starting_varnish.rst b/doc/sphinx/tutorial/starting_varnish.rst
deleted file mode 100644
index 6c89f54..0000000
--- a/doc/sphinx/tutorial/starting_varnish.rst
+++ /dev/null
@@ -1,51 +0,0 @@
-.. _tutorial-starting_varnish:
-
-Starting Varnish
-----------------
-
-I assume varnishd is in your path. You might want to run ``pkill
-varnishd`` to make sure varnishd isn't running. Become root and type:
-
-``# varnishd -f /usr/local/etc/varnish/default.vcl -s malloc,1G -T 127.0.0.1:2000 -a 0.0.0.0:8080``
-
-I added a few options, let's go through them:
-
-``-f /usr/local/etc/varnish/default.vcl``
- The -f option specifies which configuration varnishd should use.
-
-``-s malloc,1G``
- The -s option chooses the storage type Varnish should use for
- storing its content. I used the type *malloc*, which just uses memory
- for storage. There are other backends as well, described in
- :ref:`tutorial-storage`. 1G specifies how much memory should be
- allocated - one gigabyte.
-
-``-T 127.0.0.1:2000``
- Varnish has a built-in text-based administration
 interface. Activating the interface makes Varnish manageable without
- stopping it. You can specify what interface the management interface
- should listen to. Make sure you don't expose the management interface
- to the world as you can easily gain root access to a system via the
 Varnish management interface. I recommend tying it to localhost. If
- you have users on your system that you don't fully trust, use firewall
- rules to restrict access to the interface to root only.
-
-``-a 0.0.0.0:8080``
 I specify that I want Varnish to listen on port 8080 for incoming
- HTTP requests. For a production environment you would probably make
- Varnish listen on port 80, which is the default.
-
-Now you have Varnish running. Let us make sure that it works
-properly. Use your browser to go to http://192.168.2.2:8080/
-(obviously, you should replace the IP address with one on your own
-system) - you should now see your web application running there.
-
-Whether or not the application actually goes faster when run through
-Varnish depends on a few factors. If your application uses cookies for
-every session (a lot of PHP and Java applications seem to send a
-session cookie whether it is needed or not) or if it uses
-authentication, chances are Varnish won't do much caching. Ignore that
-for the moment; we come back to it in
-:ref:`tutorial-increasing_your_hitrate`.
-
-Let's make sure that Varnish really does do something to your web
-site. To do that we'll take a look at the logs.
diff --git a/doc/sphinx/tutorial/statistics.rst b/doc/sphinx/tutorial/statistics.rst
deleted file mode 100644
index 4386111..0000000
--- a/doc/sphinx/tutorial/statistics.rst
+++ /dev/null
@@ -1,57 +0,0 @@
-.. _tutorial-statistics:
-
-
-Statistics
-----------
-
-Now that your Varnish is up and running let's have a look at how it is
-doing. There are several tools that can help.
-
-varnishtop
-~~~~~~~~~~
-
-The varnishtop utility reads the shared memory logs and presents a
-continuously updated list of the most commonly occurring log entries.
-
-With suitable filtering using the -I, -i, -X and -x options, it can be
-used to display a ranking of requested documents, clients, user
-agents, or any other information which is recorded in the log.
-
-``varnishtop -i rxurl`` will show you what URLs are being asked for
-by the clients. ``varnishtop -i txurl`` will show you what your backend
-is being asked for the most. ``varnishtop -i RxHeader -I
-Accept-Encoding`` will show the most popular Accept-Encoding headers
-the clients are sending you.
-
-varnishhist
-~~~~~~~~~~~
-
-The varnishhist utility reads varnishd(1) shared memory logs and
-presents a continuously updated histogram showing the distribution of
-the last N requests by their processing time.  The value of N and the
-vertical scale are displayed in the top left corner.  The horizontal
-scale is logarithmic.  Hits are marked with a pipe character ("|"),
-and misses are marked with a hash character ("#").
-
-
-varnishsizes
-~~~~~~~~~~~~
-
-Varnishsizes does the same as varnishhist, except it shows the size of
-the objects and not the time taken to complete the request. This gives
-you a good overview of how big the objects you are serving are.
-
-
-varnishstat
-~~~~~~~~~~~
-
-Varnish has lots of counters. We count misses, hits, information about
-the storage, threads created, deleted objects and just about
-everything else. varnishstat will dump these counters. This is useful
-when tuning Varnish.
-
-There are programs that can poll varnishstat regularly and make nice
-graphs of these counters. One such program is Munin. Munin can be
-found at http://munin-monitoring.org/ . There is a plugin for munin in
-the varnish source code.
-
diff --git a/doc/sphinx/tutorial/troubleshooting.rst b/doc/sphinx/tutorial/troubleshooting.rst
deleted file mode 100644
index 5bbcf6c..0000000
--- a/doc/sphinx/tutorial/troubleshooting.rst
+++ /dev/null
@@ -1,99 +0,0 @@
-Troubleshooting Varnish
------------------------
-
-Sometimes Varnish misbehaves. In order for you to understand what's
-going on there are a couple of places you can check. varnishlog,
-/var/log/syslog and /var/log/messages are all places where Varnish
-might leave clues about what's going on.
-
-
-When Varnish won't start
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-Sometimes Varnish won't start. There is a plethora of reasons why
-Varnish won't start on your machine. We've seen everything from wrong
-permissions on /dev/null to other processes blocking the ports.
-
-Start Varnish in debug mode to see what is going on.
-
-Try to start varnish by::
-
-    # varnishd -f /usr/local/etc/varnish/default.vcl -s malloc,1G -T 127.0.0.1:2000  -a 0.0.0.0:8080 -d
-
-Notice the -d option. It will give you some more information on what
-is going on. Let us see how Varnish will react to something else
-listening on its port::
-
-    # varnishd -n foo -f /usr/local/etc/varnish/default.vcl -s malloc,1G -T 127.0.0.1:2000  -a 0.0.0.0:8080 -d
-    storage_malloc: max size 1024 MB.
-    Using old SHMFILE
-    Platform: Linux,2.6.32-21-generic,i686,-smalloc,-hcritbit
-    200 193     
-    -----------------------------
-    Varnish Cache CLI.
-    -----------------------------
-    Type 'help' for command list.
-    Type 'quit' to close CLI session.
-    Type 'start' to launch worker process.
-
-Now Varnish is running. Only the master process is running; in debug
-mode the cache process does not start automatically. Now you're on the
-console. You can instruct the master process to start the cache by
-issuing "start"::
-
-	 start
-	 bind(): Address already in use
-	 300 22      
-	 Could not open sockets
-
-And here we have our problem. Something else is bound to the HTTP port
-of Varnish. If this doesn't help try strace or truss or come find us
-on IRC.
-
-
-Varnish is crashing
-~~~~~~~~~~~~~~~~~~~
-
-When Varnish goes bust the child process crashes. Usually the mother
-process will manage this by restarting the child process. Any
-errors will be logged in syslog. It might look like this::
-
-       Mar  8 13:23:38 smoke varnishd[15670]: Child (15671) not responding to CLI, killing it.
-       Mar  8 13:23:43 smoke varnishd[15670]: last message repeated 2 times
-       Mar  8 13:23:43 smoke varnishd[15670]: Child (15671) died signal=3
-       Mar  8 13:23:43 smoke varnishd[15670]: Child cleanup complete
-       Mar  8 13:23:43 smoke varnishd[15670]: child (15697) Started
-
-Specifically if you see the "Error in munmap" error on Linux you might
-want to increase the number of maps available. Linux is limited to a
-maximum of 64k maps. Setting vm.max_map_count in sysctl.conf will
-enable you to increase this limit. You can inspect the number of maps
-your program is consuming by counting the lines in /proc/$PID/maps.
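-
-For instance (the limit value below is an example, not a
-recommendation)::
-
-  # wc -l /proc/$(pgrep -o varnishd)/maps    # count current maps
-  # sysctl vm.max_map_count                  # inspect the limit
-  # sysctl -w vm.max_map_count=262144        # raise it (persist in sysctl.conf)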
-
-This is a rather odd thing to document here - but hopefully Google
-will serve you this page if you ever encounter this error. 
-
-Varnish gives me Guru meditation
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-First find the relevant log entries in varnishlog. That will probably
-give you a clue. Since varnishlog logs so much data it might be hard
-to track the entries down. You can set varnishlog to log all your 503
-errors by issuing the following command::
-
-   $ varnishlog -c -m TxStatus:503
-
-If the error happened just a short time ago the transaction might still
-be in the shared memory log segment. To get varnishlog to process the
-whole shared memory log just add the -d option::
-
-   $ varnishlog -d -c -m TxStatus:503
-
-Please see the varnishlog man page for elaborations on further
-filtering capabilities and explanation of the various options.
-
-
-Varnish doesn't cache
-~~~~~~~~~~~~~~~~~~~~~
-
-See :ref:`tutorial-increasing_your_hitrate`.
-
diff --git a/doc/sphinx/tutorial/vary.rst b/doc/sphinx/tutorial/vary.rst
deleted file mode 100644
index ad7b48d..0000000
--- a/doc/sphinx/tutorial/vary.rst
+++ /dev/null
@@ -1,58 +0,0 @@
-.. _tutorial-vary:
-
-Vary
-~~~~
-
-The Vary header is sent by the web server to indicate what makes an
-HTTP object Vary. This makes a lot of sense with headers like
-Accept-Encoding. When a server issues a "Vary: Accept-Encoding" it
-tells Varnish that it needs to cache a separate version for every
-different Accept-Encoding that is coming from the clients. So, if a
-client only accepts gzip encoding Varnish won't serve the version of
-the page encoded with the deflate encoding.
-
-The problem is that the Accept-Encoding field contains a lot of
-different encodings. If one browser sends::
-
-  Accept-Encoding: gzip,deflate
-
-And another one sends::
-
-  Accept-Encoding: deflate,gzip
-
-Varnish will keep two variants of the page requested due to the
-different Accept-Encoding headers. Normalizing the Accept-Encoding
-header will make sure that you have as few variants as possible. The
-following VCL code will normalize the Accept-Encoding headers::
-
-    if (req.http.Accept-Encoding) {
-        if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
-            # No point in compressing these
-            remove req.http.Accept-Encoding;
-        } elsif (req.http.Accept-Encoding ~ "gzip") {
-            set req.http.Accept-Encoding = "gzip";
-        } elsif (req.http.Accept-Encoding ~ "deflate") {
-            set req.http.Accept-Encoding = "deflate";
-        } else {
-            # unknown algorithm
-            remove req.http.Accept-Encoding;
-        }
-    }
-
-The code sets the Accept-Encoding header from the client to either
-gzip or deflate, with a preference for gzip.
-
-Pitfall - Vary: User-Agent
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Some applications or application servers send *Vary: User-Agent* along
-with their content. This instructs Varnish to cache a separate copy
-for every variation of User-Agent there is. And there are plenty. Even
-a single patchlevel of the same browser will generate at least 10
-different User-Agent headers, based just on what operating system it
-is running on.
-
-So if you *really* need to Vary based on User-Agent be sure to
-normalize the header or your hit rate will suffer badly. Use the above
-code as a template.
-
diff --git a/doc/sphinx/tutorial/vcl.rst b/doc/sphinx/tutorial/vcl.rst
deleted file mode 100644
index 0601468..0000000
--- a/doc/sphinx/tutorial/vcl.rst
+++ /dev/null
@@ -1,200 +0,0 @@
-Varnish Configuration Language - VCL
--------------------------------------
-
-Varnish has a great configuration system. Most other systems use
-configuration directives, where you basically turn on and off lots of
-switches. Varnish uses a domain specific language called Varnish
-Configuration Language, or VCL for short. Varnish translates this
-configuration into binary code which is then executed when requests
-arrive.
-
-The VCL files are divided into subroutines. The different subroutines
-are executed at different times. One is executed when we get the
-request, another when files are fetched from the backend server.
-
-Varnish will execute these subroutines of code at different stages of
-its work. Because it is code it is execute line by line precedence
-isn't a problem. At some point you call an action in this subroutine
-and then the execution of the subroutine stops.
-
-If you don't call an action in your subroutine and it reaches the end
-Varnish will execute some built in VCL code. You will see this VCL
-code commented out in default.vcl.
-
-99% of all the changes you'll need to do will be done in two of these
-subroutines. *vcl_recv* and *vcl_fetch*.
-
-vcl_recv
-~~~~~~~~
-
-vcl_recv (yes, we're skimpy with characters, it's Unix) is called at
-the beginning of a request, after the complete request has been
-received and parsed.  Its purpose is to decide whether or not to serve
-the request, how to do it, and, if applicable, which backend to use.
-
-In vcl_recv you can also alter the request. Typically you can alter
-the cookies and add and remove request headers.
-
-Note that in vcl_recv only the request object, req is available.
-
-vcl_fetch
-~~~~~~~~~
-
-vcl_fetch is called *after* a document has been successfully retrieved
-from the backend. Normal tasks her are to alter the response headers,
-trigger ESI processing, try alternate backend servers in case the
-request failed.
-
-In vcl_fetch you still have the request object, req, available. There
-is also a *backend response*, beresp. beresp will contain the HTTP
-headers from the backend.
-
-.. _tutorial-vcl_fetch_actions:
-
-actions
-~~~~~~~
-
-The most common actions to return are these:
-
-*pass*
- When you return pass the request and subsequent response will be passed to
- and from the backend server. It won't be cached. pass can be returned from
- vcl_recv
-
-*hit_for_pass*
-  Similar to pass, but accessible from vcl_fetch. Unlike pass, hit_for_pass
-  will create a hitforpass object in the cache. This has the side-effect of
-  caching the decision not to cache. This is to allow would-be uncachable
-  requests to be passed to the backend at the same time. The same logic is
-  not necessary in vcl_recv because this happens before any potential
-  queueing for an object takes place.
-
-*lookup*
-  When you return lookup from vcl_recv you tell Varnish to deliver content 
-  from cache even if the request othervise indicates that the request 
-  should be passed. You can't return lookup from vcl_fetch.
-
-*pipe*
-  Pipe can be returned from vcl_recv as well. Pipe short circuits the
-  client and the backend connections and Varnish will just sit there
-  and shuffle bytes back and forth. Varnish will not look at the data being 
-  send back and forth - so your logs will be incomplete. 
-  Beware that with HTTP 1.1 a client can send several requests on the same 
-  connection and so you should instruct Varnish to add a "Connection: close"
-  header before actually returning pipe. 
-
-*deliver*
- Deliver the cached object to the client.  Usually returned from vcl_fetch. 
-
-Requests, responses and objects
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-In VCL, there are three important data structures. The request, coming
-from the client, the response coming from the backend server and the
-object, stored in cache.
-
-In VCL you should know the following structures.
-
-*req*
- The request object. When Varnish has received the request the req object is 
- created and populated. Most of the work you do in vcl_recv you 
- do on or with the req object.
-
-*beresp*
- The backend respons object. It contains the headers of the object 
- comming from the backend. Most of the work you do in vcl_fetch you 
- do on the beresp object.
-
-*obj*
- The cached object. Mostly a read only object that resides in memory. 
- obj.ttl is writable, the rest is read only.
-
-Operators
-~~~~~~~~~
-
-The following operators are available in VCL. See the examples further
-down for, uhm, examples.
-
-= 
- Assignment operator.
-
-== 
- Comparison.
-
-~
- Match. Can either be used with regular expressions or ACLs.
-
-!
- Negation.
-
-&&
- Logical *and*
-
-||
- Logical *or*
-
-Example 1 - manipulating headers
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Lets say we want to remove the cookie for all objects in the /images
-directory of our web server::
-
-  sub vcl_recv {
-    if (req.url ~ "^/images") {
-      unset req.http.cookie;
-    }
-  }
-
-Now, when the request is handled to the backend server there will be
-no cookie header. The interesting line is the one with the
-if-statement. It matches the URL, taken from the request object, and
-matches it against the regular expression. Note the match operator. If
-it matches the Cookie: header of the request is unset (deleted). 
-
-Example 2 - manipulating beresp
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Here we override the TTL of a object comming from the backend if it
-matches certain criteria::
-
-  sub vcl_fetch {
-     if (req.url ~ "\.(png|gif|jpg)$") {
-       unset beresp.http.set-cookie;
-       set beresp.ttl = 1h;
-    }
-  }
-
-Example 3 - ACLs
-~~~~~~~~~~~~~~~~
-
-You create a named access control list with the *acl* keyword. You can match
-the IP address of the client against an ACL with the match operator.::
-
-  # Who is allowed to purge....
-  acl local {
-      "localhost";
-      "192.168.1.0"/24; /* and everyone on the local network */
-      ! "192.168.1.23"; /* except for the dialin router */
-  }
-  
-  sub vcl_recv {
-    if (req.request == "PURGE") {
-      if (client.ip ~ local) {
-         return(lookup);
-      }
-    } 
-  }
-  
-  sub vcl_hit {
-     if (req.request == "PURGE") {
-       set obj.ttl = 0s;
-       error 200 "Purged.";
-      }
-  }
-
-  sub vcl_miss {
-    if (req.request == "PURGE") {
-      error 404 "Not in cache.";
-    }
-  }
-
diff --git a/doc/sphinx/tutorial/virtualized.rst b/doc/sphinx/tutorial/virtualized.rst
deleted file mode 100644
index 317d3e2..0000000
--- a/doc/sphinx/tutorial/virtualized.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-
-Running Varnish in a virtualized environment
---------------------------------------------
-
-It is possible, but not recommended for high performance, to run
-Varnish on virtualized hardware. Reduced disk- and network performance
-will reduce the performance a bit so make sure your system has good IO
-performance.
-
-OpenVZ
-~~~~~~
-
-If you are running on 64bit OpenVZ (or Parallels VPS), you must reduce
-the maximum stack size before starting Varnish. The default allocates
-to much memory per thread, which will make varnish fail as soon as the
-number of threads (==traffic) increases.
-
-Reduce the maximum stack size by running::
-
-    ulimit -s 256
-
-in the startup script.
-
diff --git a/doc/sphinx/tutorial/websockets.rst b/doc/sphinx/tutorial/websockets.rst
deleted file mode 100644
index a74353e..0000000
--- a/doc/sphinx/tutorial/websockets.rst
+++ /dev/null
@@ -1,20 +0,0 @@
-
-Using Websockets 
-----------------
-
-Websockets is a technology for creating a bidirectional stream-based channel over HTTP.
-
-To run websockets through Varnish you need to pipe it, and copy the Upgrade header. Use the following
-VCL config to do so::
-
-    sub vcl_pipe {
-         if (req.http.upgrade) {
-             set bereq.http.upgrade = req.http.upgrade;
-         }
-    }
-    sub vcl_recv {
-         if (req.http.Upgrade ~ "(?i)websocket") {
-             return (pipe);
-         }
-    }
-
diff --git a/doc/sphinx/users-guide/advanced_backend_servers.rst b/doc/sphinx/users-guide/advanced_backend_servers.rst
new file mode 100644
index 0000000..b4206d9
--- /dev/null
+++ b/doc/sphinx/users-guide/advanced_backend_servers.rst
@@ -0,0 +1,157 @@
+Advanced Backend configuration
+------------------------------
+
+At some point you might need Varnish to cache content from several
+servers. You might want Varnish to map all the URLs onto one single
+host, or you might not. There are a lot of options.
+
+Let's say we need to introduce a Java application into our PHP web
+site, and that the Java application should handle all URLs beginning
+with /java/.
+
+We manage to get the thing up and running on port 8000. Now, let's have
+a look at default.vcl::
+
+  backend default {
+      .host = "127.0.0.1";
+      .port = "8080";
+  }
+
+We add a new backend::
+
+  backend java {
+      .host = "127.0.0.1";
+      .port = "8000";
+  }
+
+Now we need to tell Varnish where to send the different URLs. Let's look at vcl_recv::
+
+  sub vcl_recv {
+      if (req.url ~ "^/java/") {
+          set req.backend = java;
+      } else {
+          set req.backend = default;
+      }
+  }
+
+It's quite simple, really. Let's stop and think about this for a
+moment. As you can see, you can choose backends based on virtually
+arbitrary data. You want to send mobile devices to a different
+backend? No problem: ``if (req.http.User-Agent ~ "(?i)mobile")``
+should do the trick.
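+
+For example, a sketch of routing mobile clients to a separate backend;
+the *mobile* backend, its port and the User-Agent pattern are made up
+for illustration and should be adjusted to your setup::
+
+  backend mobile {
+      .host = "127.0.0.1";
+      .port = "8001";
+  }
+
+  sub vcl_recv {
+      if (req.http.User-Agent ~ "(?i)mobile") {
+          set req.backend = mobile;
+      }
+  }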
+
+.. _tutorial-advanced_backend_servers-directors:
+
+Directors
+---------
+
+You can also group several backends into a group of backends. These
+groups are called directors. Using a director gives you increased
+performance and resilience. You can define several backends and group
+them together in a director::
+
+  backend server1 {
+      .host = "192.168.0.10";
+  }
+  backend server2 {
+      .host = "192.168.0.11";
+  }
+
+Now we create the director.::
+
+  director example_director round-robin {
+      {
+          .backend = server1;
+      }
+      {
+          .backend = server2;
+      }
+  }
+
+
+This director is a round-robin director. This means the director will
+distribute the incoming requests on a round-robin basis. There is
+also a *random* director which distributes requests in a, you guessed
+it, random fashion.
+
+But what if one of your servers goes down? Can Varnish direct all the
+requests to the healthy server? Sure it can. This is where the Health
+Checks come into play.
+
+.. _tutorial-advanced_backend_servers-health:
+
+Health checks
+-------------
+
+Let's set up a director with two backends and health checks. First let's
+define the backends::
+
+  backend server1 {
+      .host = "server1.example.com";
+      .probe = {
+          .url = "/";
+          .interval = 5s;
+          .timeout = 1s;
+          .window = 5;
+          .threshold = 3;
+      }
+  }
+  backend server2 {
+      .host = "server2.example.com";
+      .probe = {
+          .url = "/";
+          .interval = 5s;
+          .timeout = 1s;
+          .window = 5;
+          .threshold = 3;
+      }
+  }
+
+What's new here is the probe. Varnish will check the health of each
+backend with a probe. The options are:
+
+url
+ What URL Varnish should request.
+
+interval
+ How often Varnish should poll the backend.
+
+timeout
+ The timeout of the probe.
+
+window
+ Varnish will maintain a *sliding window* of the results. Here the
+ window has five checks.
+
+threshold
+ How many of the .window last polls must be good for the backend to be
+ declared healthy.
+
+initial
+ How many of the probes are considered good when Varnish starts --
+ defaults to the same amount as the threshold.
+
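+A probe can also be defined separately, under a name, and shared
+between several backends; a sketch, where the probe name *healthcheck*
+is purely illustrative::
+
+  probe healthcheck {
+      .url = "/";
+      .interval = 5s;
+      .timeout = 1s;
+      .window = 5;
+      .threshold = 3;
+  }
+
+  backend server1 {
+      .host = "server1.example.com";
+      .probe = healthcheck;
+  }
+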
+Now we define the director::
+
+  director example_director round-robin {
+      {
+          .backend = server1;
+      }
+      {
+          .backend = server2;
+      }
+  }
+
+You use this director just as you would use any other director or
+backend. Varnish will not send traffic to hosts that are marked as
+unhealthy. Varnish can also serve stale content if all the backends are
+down. See :ref:`tutorial-handling_misbehaving_servers` for more
+information on how to enable this.
+
+Please note that Varnish will keep probes active for all loaded
+VCLs. Varnish will coalesce probes that seem identical - so be careful
+not to change the probe config if you do a lot of VCL
+loading. Unloading the VCL will discard the probes.
diff --git a/doc/sphinx/users-guide/advanced_topics.rst b/doc/sphinx/users-guide/advanced_topics.rst
new file mode 100644
index 0000000..1045de9
--- /dev/null
+++ b/doc/sphinx/users-guide/advanced_topics.rst
@@ -0,0 +1,63 @@
+.. _tutorial-advanced_topics:
+
+Advanced topics
+---------------
+
+This users guide has covered the basics of Varnish. If you have read
+through it all you should now have the skills to run Varnish.
+
+Here is a short overview of topics that we haven't covered in this guide.
+
+More VCL
+~~~~~~~~
+
+VCL is a bit more complex than what we've covered so far. There are a
+few more subroutines available and there are a few actions that we
+haven't discussed. For a complete(ish) guide to VCL have a look at the
+VCL man page - :ref:`reference-vcl`.
+
+Using In-line C to extend Varnish
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can use *in-line C* to extend Varnish. Please note that you can
+seriously mess up Varnish this way. The C code runs within the Varnish
+Cache process so if your code generates a segfault the cache will crash.
+
+One of the first uses I saw of in-line C was logging to syslog::
+
+        # The include statements must be outside the subroutines.
+        C{
+                #include <syslog.h>
+        }C
+
+        sub vcl_something {
+                C{
+                        syslog(LOG_INFO, "Something happened at VCL line XX.");
+                }C
+        }
+
+
+Edge Side Includes
+~~~~~~~~~~~~~~~~~~
+
+Varnish can create web pages by putting different fragments
+together. These *fragments* can have individual cache policies. If you
+have a web site with a list showing the 5 most popular articles on
+your site, this list can probably be cached as a fragment and included
+in all the other pages. Used properly this can dramatically increase
+your hit rate and reduce the load on your servers. ESI looks like this::
+
+  <HTML>
+  <BODY>
+  The time is: <esi:include src="/cgi-bin/date.cgi"/>
+  at this very moment.
+  </BODY>
+  </HTML>
+
+ESI is processed in vcl_fetch by setting *do_esi* to true::
+
+  sub vcl_fetch {
+      if (req.url == "/test.html") {
+	  set beresp.do_esi = true;  /* Do ESI processing */
+      }
+  }
diff --git a/doc/sphinx/users-guide/backend_servers.rst b/doc/sphinx/users-guide/backend_servers.rst
new file mode 100644
index 0000000..1b1aaf2
--- /dev/null
+++ b/doc/sphinx/users-guide/backend_servers.rst
@@ -0,0 +1,39 @@
+.. _tutorial-backend_servers:
+
+Backend servers
+---------------
+
+Varnish has a concept of "backend" or "origin" servers. A backend
+server is the server providing the content Varnish will accelerate.
+
+Our first task is to tell Varnish where it can find its content. Start
+your favorite text editor and open the Varnish default configuration
+file. If you installed from source this is
+/usr/local/etc/varnish/default.vcl, if you installed from a package it
+is probably /etc/varnish/default.vcl.
+
+Somewhere near the top there will be a section that looks a bit like this::
+
+	  # backend default {
+	  #     .host = "127.0.0.1";
+	  #     .port = "8080";
+	  # }
+
+We uncomment this bit of text and change the port setting from 8080
+to 80, making the text look like this::
+
+  backend default {
+      .host = "127.0.0.1";
+      .port = "80";
+  }
+
+Now, this piece of configuration defines a backend in Varnish called
+*default*. When Varnish needs to get content from this backend it will
+connect to port 80 on localhost (127.0.0.1).
+
+Varnish can have several backends defined, and you can even join
+several backends together into clusters of backends for load balancing
+purposes.
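+
+As a small taste of what is to come, a second backend can be declared
+alongside *default* and selected per request; the *static* name, port
+and URL pattern below are made up for illustration::
+
+  backend static {
+      .host = "127.0.0.1";
+      .port = "8088";
+  }
+
+  sub vcl_recv {
+      if (req.url ~ "^/static/") {
+          set req.backend = static;
+      }
+  }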
+
+Now that we have the basic Varnish configuration done, let us start up
+Varnish on port 8080 so we can do some fundamental testing on it.
diff --git a/doc/sphinx/users-guide/compression.rst b/doc/sphinx/users-guide/compression.rst
new file mode 100644
index 0000000..0b8d1e8
--- /dev/null
+++ b/doc/sphinx/users-guide/compression.rst
@@ -0,0 +1,75 @@
+.. _tutorial-compression:
+
+Compression
+~~~~~~~~~~~
+
+New in Varnish 3.0 was native support for compression, using gzip
+encoding. *Before* 3.0, Varnish would never compress objects. 
+
+In Varnish 3.0 compression defaults to "on", meaning that it tries to
+be smart and do the sensible thing.
+
+If you don't want Varnish tampering with the encoding you can disable
+compression altogether by setting the parameter http_gzip_support to
+*false*. Please see man :ref:`ref-varnishd` for details.
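+
+For example, assuming a locally running Varnish instance whose
+management interface is reachable, the parameter can be changed at
+runtime with varnishadm (a sketch; it can also be set at startup with
+``varnishd -p``)::
+
+  varnishadm param.set http_gzip_support off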
+
+
+Default behaviour
+~~~~~~~~~~~~~~~~~
+
+The default for Varnish is to check if the client supports our
+compression scheme (gzip) and if it does it will override the
+Accept-Encoding header and set it to "gzip".
+
+When Varnish then issues a backend request, the Accept-Encoding header
+will only contain "gzip". If the server responds with gzip'ed content
+it will be stored in memory in its compressed form. If the backend
+sends content in clear text it will be stored like that.
+
+You can make Varnish compress content before storing it in cache in
+vcl_fetch by setting do_gzip to true, like this::
+
+  sub vcl_fetch {
+      if (beresp.http.content-type ~ "text") {
+          set beresp.do_gzip = true;
+      }
+  }
+
+Please make sure that you don't try to compress content that is
+incompressible, like jpegs, gifs and mp3s. You'll only waste CPU
+cycles. You can also uncompress objects before storing them in memory
+by setting do_gunzip to *true*, but I have no idea why anybody would
+want to do that.
+
+Generally, Varnish doesn't use much CPU so it might make more sense to
+have Varnish spend CPU cycles compressing content than doing it in
+your web- or application servers, which are more likely to be
+CPU-bound.
+
+GZIP and ESI
+~~~~~~~~~~~~
+
+If you are using Edge Side Includes you'll be happy to note that ESI
+and GZIP work together really well. Varnish will magically decompress
+the content to do the ESI-processing, then recompress it for efficient
+storage and delivery. 
+
+
+Clients that don't support gzip
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If the client does not support gzip the Accept-Encoding header is left
+alone and we'll end up serving whatever we get from the backend
+server. Remember that the backend might tell Varnish to *Vary* on
+Accept-Encoding.
+
+If the client does not support gzip but we've already got a compressed
+version of the page in memory Varnish will automatically decompress
+the page while delivering it.
+
+
+A random outburst
+~~~~~~~~~~~~~~~~~
+
+Poul has written :ref:`phk_gzip` which talks a bit more about how the
+implementation works.
diff --git a/doc/sphinx/users-guide/cookies.rst b/doc/sphinx/users-guide/cookies.rst
new file mode 100644
index 0000000..c75171f
--- /dev/null
+++ b/doc/sphinx/users-guide/cookies.rst
@@ -0,0 +1,95 @@
+.. _tutorial-cookies:
+
+Cookies
+-------
+
+Varnish will, in the default configuration, not cache an object coming
+from the backend with a Set-Cookie header present. Also, if the client
+sends a Cookie header, Varnish will bypass the cache and go directly to
+the backend.
+
+This can be overly conservative. A lot of sites use Google Analytics
+(GA) to analyze their traffic. GA sets a cookie to track you. This
+cookie is used by the client side javascript and is therefore of no
+interest to the server. 
+
+Cookies from the client
+~~~~~~~~~~~~~~~~~~~~~~~
+
+For a lot of web applications it makes sense to completely disregard
+the cookies unless you are accessing a special part of the web
+site. This VCL snippet in vcl_recv will disregard cookies unless you
+are accessing /admin/::
+
+  if (!(req.url ~ "^/admin/")) {
+    unset req.http.Cookie;
+  }
+
+Quite simple. If, however, you need to do something more complicated,
+like removing one out of several cookies, things get
+difficult. Unfortunately Varnish doesn't have good tools for
+manipulating cookies, so we have to use regular expressions to do the
+work. If you are familiar with regular expressions you'll understand
+what's going on. If you aren't, I suggest you either pick up a book on
+the subject, read through the *pcrepattern* man page, or read through
+one of the many online guides.
+
+Let me show you what Varnish Software uses. We use some cookies for
+Google Analytics tracking and similar tools. The cookies are all set
+and used by Javascript. Varnish and Drupal don't need to see those
+cookies, and since Varnish will cease caching pages when the client
+sends cookies, we discard these unnecessary cookies in VCL.
+
+In the following VCL we discard all cookies that start with an
+underscore::
+
+  // Remove has_js and Google Analytics __* cookies.
+  set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(_[_a-z]+|has_js)=[^;]*", "");
+  // Remove a ";" prefix, if present.
+  set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
+
+Let me show you an example where we remove everything except the
+cookies named COOKIE1 and COOKIE2 and you can marvel at it::
+
+  sub vcl_recv {
+    if (req.http.Cookie) {
+      set req.http.Cookie = ";" + req.http.Cookie;
+      set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
+      set req.http.Cookie = regsuball(req.http.Cookie, ";(COOKIE1|COOKIE2)=", "; \1=");
+      set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
+      set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");
+
+      if (req.http.Cookie == "") {
+          remove req.http.Cookie;
+      }
+    }
+  }
+
+A somewhat simpler example that accomplishes almost the same thing can
+be found below. Instead of filtering out the other cookies it picks
+out the one cookie that is needed, copies it to another header and
+then copies it back, deleting the original cookie header::
+
+  sub vcl_recv {
+        # save the original cookie header so we can mangle it
+        set req.http.X-Varnish-PHP_SID = req.http.Cookie;
+        # using a capturing sub pattern, extract the continuous string of
+        # alphanumerics that immediately follows "PHPSESSID="
+        set req.http.X-Varnish-PHP_SID =
+           regsuball(req.http.X-Varnish-PHP_SID, ";? ?PHPSESSID=([a-zA-Z0-9]+)( |;| ;).*","\1");
+        set req.http.Cookie = req.http.X-Varnish-PHP_SID;
+        remove req.http.X-Varnish-PHP_SID;
+  }
+
+There are other scary examples of what can be done in VCL in the
+Varnish Cache Wiki.
+
+
+Cookies coming from the backend
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If your backend server sets a cookie using the Set-Cookie header
+Varnish will not cache the page in the default configuration. Instead,
+a hit-for-pass object (see :ref:`tutorial-vcl_fetch_actions`) is
+created. So, if the backend server acts silly and sets unwanted
+cookies, just unset the Set-Cookie header and all should be fine.
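+
+A minimal sketch of that last point, assuming objects under /images
+should never carry cookies (the URL pattern is illustrative)::
+
+  sub vcl_fetch {
+      if (req.url ~ "^/images") {
+          unset beresp.http.Set-Cookie;
+      }
+  }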
diff --git a/doc/sphinx/users-guide/devicedetection.rst b/doc/sphinx/users-guide/devicedetection.rst
new file mode 100644
index 0000000..6bfc4c9
--- /dev/null
+++ b/doc/sphinx/users-guide/devicedetection.rst
@@ -0,0 +1,268 @@
+.. _tutorial-devicedetect:
+
+Device detection
+~~~~~~~~~~~~~~~~
+
+Device detection is figuring out what kind of content to serve to a
+client based on the User-Agent string supplied in a request.
+
+Use cases include sending size-reduced files to mobile clients with
+small screens on high latency networks, or providing a streaming video
+codec that the client understands.
+
+There are a couple of strategies for what to do with such clients:
+
+1) Redirect them to another URL.
+2) Use a different backend for the special clients.
+3) Change the backend requests so the usual backend sends tailored content.
+
+To make the examples easier to understand, this text assumes that the
+req.http.X-UA-Device header is present and unique per client class
+that content is to be served to.
+
+Setting this header can be as simple as::
+
+   sub vcl_recv { 
+       if (req.http.User-Agent ~ "(?i)iphone") {
+           set req.http.X-UA-Device = "mobile-iphone";
+       }
+   }
+
+There are various commercial and free offerings for grouping and
+identifying clients in further detail than this. For a basic,
+community based regular expression set, see
+https://github.com/varnish/varnish-devicedetect/ .
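+
+Assuming you have fetched devicedetect.vcl from that repository, and
+assuming it defines a *devicedetect* subroutine that sets
+req.http.X-UA-Device (an assumption about the file, which is not shown
+here), using it is a matter of including it and calling it early::
+
+  include "devicedetect.vcl";
+
+  sub vcl_recv {
+      call devicedetect;
+  }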
+
+
+Serve the different content on the same URL
+-------------------------------------------
+
+The tricks involved are:
+
+1. Detect the client (pretty simple, just include devicedetect.vcl
+   and call it).
+2. Figure out how to signal the backend what client class this
+   is. This includes for example setting a header, changing a header
+   or even changing the backend request URL.
+3. Modify any response from the backend to add missing Vary headers,
+   so Varnish's internal handling of this kicks in.
+4. Modify the output sent to the client so any caches outside our
+   control don't serve the wrong content.
+
+All this while still making sure that we only get one cached object
+per URL per device class.
+
+
+Example 1: Send HTTP header to backend
+''''''''''''''''''''''''''''''''''''''
+
+The basic case is that Varnish adds the X-UA-Device HTTP header to the
+backend requests, and the backend mentions in the response Vary header
+that the content is dependent on this header.
+
+Everything works out of the box from Varnish' perspective.
+
+.. 071-example1-start
+
+VCL::
+
+    sub vcl_recv { 
+        # call some detection engine that set req.http.X-UA-Device
+    }
+    # req.http.X-UA-Device is copied by Varnish into bereq.http.X-UA-Device
+
+    # so, this is a bit counterintuitive. The backend creates content based on
+    # the normalized User-Agent, but we use Vary on X-UA-Device so Varnish will
+    # use the same cached object for all U-As that map to the same X-UA-Device.
+    #
+    # If the backend does not mention in Vary that it has crafted special
+    # content based on the User-Agent (==X-UA-Device), add it. 
+    # If your backend does set Vary: User-Agent, you may have to remove that here.
+    sub vcl_fetch {
+        if (req.http.X-UA-Device) {
+            if (!beresp.http.Vary) { # no Vary at all
+                set beresp.http.Vary = "X-UA-Device"; 
+            } elseif (beresp.http.Vary !~ "X-UA-Device") { # add to existing Vary
+                set beresp.http.Vary = beresp.http.Vary + ", X-UA-Device"; 
+            } 
+        }
+        # comment this out if you don't want the client to know your
+        # classification
+        set beresp.http.X-UA-Device = req.http.X-UA-Device;
+    }
+
+    # to keep any caches in the wild from serving wrong content to client #2
+    # behind them, we need to transform the Vary on the way out.
+    sub vcl_deliver {
+        if ((req.http.X-UA-Device) && (resp.http.Vary)) {
+            set resp.http.Vary = regsub(resp.http.Vary, "X-UA-Device", "User-Agent");
+        }
+    }
+
+.. 071-example1-end
+
+Example 2: Normalize the User-Agent string
+''''''''''''''''''''''''''''''''''''''''''
+
+Another way of signaling the device type is to override or normalize the
+User-Agent header sent to the backend.
+
+For example::
+
+    User-Agent: Mozilla/5.0 (Linux; U; Android 2.2; nb-no; HTC Desire Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1
+
+becomes::
+
+    User-Agent: mobile-android
+
+when seen by the backend.
+
+This works if you don't need the original header for anything on the
+backend. A possible use for this is CGI scripts, where only a small
+set of predefined headers is (by default) available to the script.
+
+.. 072-example2-start
+
+VCL::
+
+    sub vcl_recv { 
+        # call some detection engine that set req.http.X-UA-Device
+    }
+
+    # override the header before it is sent to the backend
+    sub vcl_miss { if (req.http.X-UA-Device) { set bereq.http.User-Agent = req.http.X-UA-Device; } }
+    sub vcl_pass { if (req.http.X-UA-Device) { set bereq.http.User-Agent = req.http.X-UA-Device; } }
+
+    # standard Vary handling code from previous examples.
+    sub vcl_fetch {
+        if (req.http.X-UA-Device) {
+            if (!beresp.http.Vary) { # no Vary at all
+                set beresp.http.Vary = "X-UA-Device";
+            } elseif (beresp.http.Vary !~ "X-UA-Device") { # add to existing Vary
+                set beresp.http.Vary = beresp.http.Vary + ", X-UA-Device";
+            }
+        }
+        set beresp.http.X-UA-Device = req.http.X-UA-Device;
+    }
+    sub vcl_deliver {
+        if ((req.http.X-UA-Device) && (resp.http.Vary)) {
+            set resp.http.Vary = regsub(resp.http.Vary, "X-UA-Device", "User-Agent");
+        }
+    }
+
+.. 072-example2-end
+
+Example 3: Add the device class as a GET query parameter
+''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+If everything else fails, you can add the device type as a GET argument::
+
+    http://example.com/article/1234.html --> http://example.com/article/1234.html?devicetype=mobile-iphone
+
+The client itself does not see this classification, only the backend request
+is changed.
+
+.. 073-example3-start
+
+VCL::
+
+    sub vcl_recv { 
+        # call some detection engine that set req.http.X-UA-Device
+    }
+
+    sub append_ua {
+        if ((req.http.X-UA-Device) && (req.request == "GET")) {
+            # if there are existing GET arguments;
+            if (req.url ~ "\?") {
+                set req.http.X-get-devicetype = "&devicetype=" + req.http.X-UA-Device;
+            } else { 
+                set req.http.X-get-devicetype = "?devicetype=" + req.http.X-UA-Device;
+            }
+            set req.url = req.url + req.http.X-get-devicetype;
+            unset req.http.X-get-devicetype;
+        }
+    }
+
+    # do this after vcl_hash, so all Vary-ants can be purged in one go. (avoid ban()ing)
+    sub vcl_miss { call append_ua; }
+    sub vcl_pass { call append_ua; }
+
+    # Handle redirects, otherwise standard Vary handling code from previous
+    # examples.
+    sub vcl_fetch {
+        if (req.http.X-UA-Device) {
+            if (!beresp.http.Vary) { # no Vary at all
+                set beresp.http.Vary = "X-UA-Device";
+            } elseif (beresp.http.Vary !~ "X-UA-Device") { # add to existing Vary
+                set beresp.http.Vary = beresp.http.Vary + ", X-UA-Device";
+            }
+
+            # if the backend returns a redirect (think missing trailing slash),
+            # we will potentially show the extra address to the client. we
+            # don't want that.  if the backend reorders the get parameters, you
+            # may need to be smarter here. (? and & ordering)
+
+            if (beresp.status == 301 || beresp.status == 302 || beresp.status == 303) {
+                set beresp.http.location = regsub(beresp.http.location, "[?&]devicetype=.*$", "");
+            }
+        }
+        set beresp.http.X-UA-Device = req.http.X-UA-Device;
+    }
+    sub vcl_deliver {
+        if ((req.http.X-UA-Device) && (resp.http.Vary)) {
+            set resp.http.Vary = regsub(resp.http.Vary, "X-UA-Device", "User-Agent");
+        }
+    }
+
+.. 073-example3-end
+
+Different backend for mobile clients
+------------------------------------
+
+If you have a different backend that serves pages for mobile clients, or any
+special needs in VCL, you can use the X-UA-Device header like this::
+
+    backend mobile {
+        .host = "10.0.0.1";
+        .port = "80";
+    }
+
+    sub vcl_recv {
+        # call some detection engine
+
+        if (req.http.X-UA-Device ~ "^mobile" || req.http.X-UA-device ~ "^tablet") {
+            set req.backend = mobile;
+        }
+    }
+    sub vcl_hash {
+        if (req.http.X-UA-Device) {
+            hash_data(req.http.X-UA-Device);
+        }
+    }
+
+Redirecting mobile clients
+--------------------------
+
+If you want to redirect mobile clients you can use the following snippet.
+
+.. 065-redir-mobile-start
+
+VCL::
+
+    sub vcl_recv {
+        # call some detection engine
+
+        if (req.http.X-UA-Device ~ "^mobile" || req.http.X-UA-device ~ "^tablet") {
+            error 750 "Moved Temporarily";
+        }
+    }
+     
+    sub vcl_error {
+        if (obj.status == 750) {
+            set obj.http.Location = "http://m.example.com" + req.url;
+            set obj.status = 302;
+            return(deliver);
+        }
+    }
+
+.. 065-redir-mobile-end
+
+
diff --git a/doc/sphinx/users-guide/esi.rst b/doc/sphinx/users-guide/esi.rst
new file mode 100644
index 0000000..720b790
--- /dev/null
+++ b/doc/sphinx/users-guide/esi.rst
@@ -0,0 +1,79 @@
+.. _tutorial-esi:
+
+Edge Side Includes
+------------------
+
+*Edge Side Includes* is a language to include *fragments* of web pages
+in other web pages. Think of it as an HTML include statement that
+works over HTTP.
+
+On most web sites a lot of content is shared between
+pages. Regenerating this content for every page view is wasteful and
+ESI tries to address that by letting you decide the cache policy for
+each fragment individually.
+
+In Varnish we've only implemented a small subset of ESI. As of 2.1 we
+have three ESI statements:
+
+ * esi:include 
+ * esi:remove
+ * <!--esi ...-->
+
+Content substitution based on variables and cookies is not implemented
+but is on the roadmap. 
+
+Varnish will not process ESI instructions in HTML comments.
+
+Example: esi:include
+~~~~~~~~~~~~~~~~~~~~
+
+Let's see an example of how this could be used. This simple CGI script
+outputs the date::
+
+     #!/bin/sh
+     
+     echo 'Content-type: text/html'
+     echo ''
+     date "+%Y-%m-%d %H:%M"
+
+Now, let's have an HTML file that has an ESI include statement::
+
+     <HTML>
+     <BODY>
+     The time is: <esi:include src="/cgi-bin/date.cgi"/>
+     at this very moment.
+     </BODY>
+     </HTML>
+
+For ESI to work you need to activate ESI processing in VCL, like this::
+
+    sub vcl_fetch {
+        if (req.url == "/test.html") {
+            set beresp.do_esi = true; /* Do ESI processing              */
+            set beresp.ttl = 24h;     /* Sets the TTL on the HTML above */
+        } elseif (req.url == "/cgi-bin/date.cgi") {
+            set beresp.ttl = 1m;      /* Sets a one minute TTL on      */
+                                      /* the included object           */
+        }
+    }
+
+Example: esi:remove and <!--esi ... -->
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The <esi:remove> and <!--esi ... --> constructs can be used to present
+appropriate content whether or not ESI is available. For example, you
+can include content when ESI is available and link to it when it is
+not.
+
+ESI processors will remove the start tag ("<!--esi") and the end tag
+("-->") when the page is processed, while still processing the
+contents. If the page is not processed, the construct remains,
+becoming an HTML/XML comment. ESI processors will remove <esi:remove>
+tags and all content contained in them, allowing you to only render
+the content when the page is *not* being ESI-processed.
+
+For example::
+
+  <esi:remove> 
+    <a href="http://www.example.com/LICENSE">The license</a>
+  </esi:remove>
+  <!--esi  
+  <p>The full text of the license:</p>
+  <esi:include src="http://example.com/LICENSE" />
+  -->
diff --git a/doc/sphinx/users-guide/handling_misbehaving_servers.rst b/doc/sphinx/users-guide/handling_misbehaving_servers.rst
new file mode 100644
index 0000000..406b4b3
--- /dev/null
+++ b/doc/sphinx/users-guide/handling_misbehaving_servers.rst
@@ -0,0 +1,103 @@
+.. _tutorial-handling_misbehaving_servers:
+
+Misbehaving servers
+-------------------
+
+A key feature of Varnish is its ability to shield you from misbehaving
+web- and application servers.
+
+
+
+Grace mode
+~~~~~~~~~~
+
+When several clients are requesting the same page Varnish will send
+one request to the backend and place the others on hold while fetching
+one copy from the backend. In some products this is called request
+coalescing; Varnish does it automatically.
+
+If you are serving thousands of hits per second the queue of waiting
+requests can get huge. There are two potential problems. One is the
+thundering herd problem - suddenly releasing a thousand threads to
+serve content might send the load sky high. The other is that nobody
+likes to wait. To deal with this we can instruct Varnish to keep
+the objects in cache beyond their TTL and to serve the waiting
+requests somewhat stale content.
+
+In order to serve stale content we must first have some content to
+serve. To make Varnish keep all objects for 30 minutes beyond their
+TTL, use the following VCL::
+
+  sub vcl_fetch {
+    set beresp.grace = 30m;
+  }
+
+Varnish still won't serve the stale objects. In order to enable
+Varnish to actually serve a stale object we must enable this on the
+request. Let us say that we accept serving objects that are up to 15
+seconds past their TTL::
+
+  sub vcl_recv {
+    set req.grace = 15s;
+  }
+
+You might wonder why we should keep the objects in the cache for 30
+minutes if we are unable to serve them. Well, if you have enabled
+:ref:`tutorial-advanced_backend_servers-health` you can check if the
+backend is sick, and if it is we can serve the stale content for a bit
+longer::
+
+   if (! req.backend.healthy) {
+      set req.grace = 5m;
+   } else {
+      set req.grace = 15s;
+   }
+
+So, to sum up, grace mode solves two problems:
+ * it serves stale content to avoid request pile-up.
+ * it serves stale content if the backend is not healthy.
+
+Saint mode
+~~~~~~~~~~
+
+Sometimes servers get flaky. They start throwing out random
+errors. You can instruct Varnish to try to handle this in a
+more-than-graceful way - enter *Saint mode*. Saint mode enables you to
+discard a certain page from one backend server and either try another
+server or serve stale content from cache. Lets have a look at how this
+can be enabled in VCL::
+
+  sub vcl_fetch {
+    if (beresp.status == 500) { 
+      set beresp.saintmode = 10s;
+      return(restart);
+    }
+    set beresp.grace = 5m;
+  } 
+
+When we set beresp.saintmode to 10 seconds Varnish will not ask *that*
+server for *that* URL for the next 10 seconds. A blacklist, more or
+less. A restart is also performed, so if you have other backends
+capable of serving that content Varnish will try those. When you are
+out of backends Varnish will serve the content from its stale cache.
+
+This can really be a life saver.
+
+Known limitations on grace- and saint mode
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If your request fails while it is being fetched you're thrown into
+vcl_error. vcl_error has access to a rather limited set of data so you
+can't enable saint mode or grace mode here. This will be addressed in
+a future release, but a work-around is available:
+
+* Declare a backend that is always sick.
+* Set a magic marker in vcl_error
+* Restart the transaction
+* Note the magic marker in vcl_recv and set the backend to the one mentioned
+* Varnish will now serve stale data if any is available
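+
+A minimal sketch of this work-around, assuming hypothetical names (the
+"failover" backend, the X-Varnish-Error marker and port 9999 are
+illustrations, not part of any standard configuration)::
+
+  backend failover {
+      # Nothing should listen on this port, so the probe always
+      # fails and the backend stays sick.
+      .host = "127.0.0.1";
+      .port = "9999";
+      .probe = {
+          .url = "/";
+          .interval = 11h;
+      }
+  }
+
+  sub vcl_recv {
+      if (req.http.X-Varnish-Error == "1") {
+          # The magic marker was set in vcl_error; point the request
+          # at the always-sick backend so grace logic can serve stale
+          # data.
+          set req.backend = failover;
+          unset req.http.X-Varnish-Error;
+      }
+  }
+
+  sub vcl_error {
+      if (obj.status == 503 && req.restarts == 0) {
+          # Set the magic marker and restart the transaction.
+          set req.http.X-Varnish-Error = "1";
+          return(restart);
+      }
+  }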
+
+
+God mode
+~~~~~~~~
+Not implemented yet. :-)
+
diff --git a/doc/sphinx/users-guide/increasing_your_hitrate.rst b/doc/sphinx/users-guide/increasing_your_hitrate.rst
new file mode 100644
index 0000000..b9fa7e6
--- /dev/null
+++ b/doc/sphinx/users-guide/increasing_your_hitrate.rst
@@ -0,0 +1,213 @@
+.. _tutorial-increasing_your_hitrate:
+
+Achieving a high hitrate
+------------------------
+
+Now Varnish is up and running and you can access your web application
+through it. Unless your application is specifically written to work
+behind a web accelerator you'll probably need to make some changes to
+either the configuration or the application in order to get a high hit
+rate in Varnish.
+
+Varnish will not cache your data unless it's absolutely sure it is
+safe to do so. So, for you to understand how Varnish decides if and
+how to cache a page, I'll guide you through a couple of tools that you
+will find useful.
+
+Note that you need a tool to see what HTTP headers fly between you and
+the web server. On the Varnish server, the easiest is to use
+varnishlog and varnishtop but sometimes a client-side tool makes
+sense. Here are the ones I use.
+
+Tool: varnishtop
+~~~~~~~~~~~~~~~~
+
+You can use varnishtop to identify what URLs are hitting the backend
+the most. ``varnishtop -i txurl`` is an essential command. You can see
+some other examples of varnishtop usage in :ref:`tutorial-statistics`.
+
+
+Tool: varnishlog
+~~~~~~~~~~~~~~~~
+
+When you have identified a URL which is frequently sent to the
+backend you can use varnishlog to have a look at the request.
+``varnishlog -c -m 'RxURL:^/foo/bar'`` will show you the requests
+coming from the client (-c) matching /foo/bar.
+
+For more information on how varnishlog works please see
+:ref:`tutorial-logging` or man :ref:`ref-varnishlog`.
+
+For extended diagnostics headers, see
+http://www.varnish-cache.org/trac/wiki/VCLExampleHitMissHeader
+
+
+Tool: lwp-request
+~~~~~~~~~~~~~~~~~
+
+lwp-request is part of The World-Wide Web library for Perl. It's a
+couple of really basic programs that can execute an HTTP request and
+give you the result. I mostly use two programs, GET and HEAD.
+
+vg.no was the first site to use Varnish and the people running Varnish
+there are quite clueful. So it's interesting to look at their HTTP
+Headers. Let's send a GET request for their home page::
+
+  $ GET -H 'Host: www.vg.no' -Used http://vg.no/
+  GET http://vg.no/
+  Host: www.vg.no
+  User-Agent: lwp-request/5.834 libwww-perl/5.834
+  
+  200 OK
+  Cache-Control: must-revalidate
+  Refresh: 600
+  Title: VG Nett - Forsiden - VG Nett
+  X-Age: 463
+  X-Cache: HIT
+  X-Rick-Would-Never: Let you down
+  X-VG-Jobb: http://www.finn.no/finn/job/fulltime/result?keyword=vg+multimedia Merk:HeaderNinja
+  X-VG-Korken: http://www.youtube.com/watch?v=Fcj8CnD5188
+  X-VG-WebCache: joanie
+  X-VG-WebServer: leon
+
+OK. Let me explain what it does. GET usually sends off HTTP 0.9
+requests, which lack the Host header. So I add a Host header with the
+-H option. -U prints the request headers, -s prints the response
+status, -e prints the response headers and -d discards the actual
+content. We don't really care about the content, only the headers.
+
+As you can see, VG adds quite a bit of information in their
+headers. Some of the headers, like the X-Rick-Would-Never are specific
+to vg.no and their somewhat odd sense of humour. Others, like the
+X-VG-Webcache are for debugging purposes. 
+
+So, to check whether a site sets cookies for a specific URL, just do::
+
+  GET -Used http://example.com/ |grep ^Set-Cookie
+
+Tool: Live HTTP Headers
+~~~~~~~~~~~~~~~~~~~~~~~
+
+There is also a plugin for Firefox. *Live HTTP Headers* can show you
+what headers are being sent and received. Live HTTP Headers can be
+found at https://addons.mozilla.org/en-US/firefox/addon/3829/ or by
+googling "Live HTTP Headers".
+
+
+The role of HTTP Headers
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Along with each HTTP request and response comes a bunch of headers
+carrying metadata. Varnish will look at these headers to determine if
+it is appropriate to cache the contents and how long Varnish can keep
+the content.
+
+Please note that when considering these headers Varnish actually
+considers itself *part of* the actual webserver. The rationale is
+that both are under your control.
+
+The term *surrogate origin cache* is not really well defined by the
+IETF or in RFC 2616, so the way Varnish works here might differ from
+your expectations.
+
+Let's take a look at the important headers you should be aware of:
+
+Cache-Control
+~~~~~~~~~~~~~
+
+The Cache-Control header instructs caches how to handle the content.
+Varnish cares about the *max-age* parameter and uses it to calculate
+the TTL for an object.
+
+"Cache-Control: no-cache" is ignored, but if you need this you can
+easily add support for it.
+
+So make sure you issue a Cache-Control header with a max-age
+parameter. You can have a look at what Varnish Software's Drupal
+server issues::
+
+  $ GET -Used http://www.varnish-software.com/|grep ^Cache-Control
+  Cache-Control: public, max-age=600
+
+Age
+~~~
+
+Varnish adds an Age header to indicate how long the object has been
+kept inside Varnish. You can grep out Age from varnishlog like this::
+
+  varnishlog -i TxHeader -I ^Age
+
+Pragma
+~~~~~~
+
+An HTTP 1.0 server might send "Pragma: no-cache". Varnish ignores this
+header. You could easily add support for this header in VCL.
+
+In vcl_fetch::
+
+  if (beresp.http.Pragma ~ "no-cache") {
+     return(hit_for_pass);
+  }
+
+Authorization
+~~~~~~~~~~~~~
+
+If Varnish sees an Authorization header it will pass the request. If
+this is not what you want you can unset the header.
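+
+As a sketch (the /public/ prefix is an assumption - only strip the
+header for content you know is world-readable)::
+
+  sub vcl_recv {
+      # Hypothetical path that never really requires authorization.
+      if (req.url ~ "^/public/") {
+          unset req.http.Authorization;
+      }
+  }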
+
+Overriding the time-to-live (ttl)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Sometimes your backend will misbehave. It might, depending on your
+setup, be easier to override the ttl in Varnish than to fix your
+somewhat cumbersome backend. 
+
+You need VCL to identify the objects you want and then you set the
+beresp.ttl to whatever you want::
+
+  sub vcl_fetch {
+      if (req.url ~ "^/legacy_broken_cms/") {
+          set beresp.ttl = 5d;
+      }
+  }
+
+The example will set the TTL to 5 days for the old legacy stuff on
+your site.
+
+Forcing caching for certain requests and certain responses
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Since you still have this cumbersome backend that isn't very friendly
+to work with you might want to override more stuff in Varnish. We
+recommend that you rely as much as you can on the default caching
+rules. It is perfectly easy to force Varnish to lookup an object in
+the cache but it isn't really recommended.
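+
+If you do need to force it, a minimal sketch might strip the cookies
+that block caching (the /images/ prefix and the one-hour TTL are
+assumptions - only do this for content you know is identical for all
+users)::
+
+  sub vcl_fetch {
+      if (req.url ~ "^/images/") {
+          # Drop backend cookies so the object becomes cacheable,
+          # and force a one hour TTL.
+          unset beresp.http.Set-Cookie;
+          set beresp.ttl = 1h;
+      }
+  }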
+
+
+Normalizing your namespace
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Some sites are accessed via lots of
+hostnames. http://www.varnish-software.com/,
+http://varnish-software.com/ and http://varnishsoftware.com/ all point
+at the same site. Since Varnish doesn't know they are different,
+Varnish will cache different versions of every page for every
+hostname. You can mitigate this in your web server configuration by
+setting up redirects or by using the following VCL::
+
+  if (req.http.host ~ "(?i)^(www\.)?varnish-?software\.com") {
+    set req.http.host = "varnish-software.com";
+  }
+
+
+Ways of increasing your hitrate even more
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following chapters should give you ways of further increasing
+your hitrate, especially the chapter on Cookies.
+
+ * :ref:`tutorial-cookies`
+ * :ref:`tutorial-vary`
+ * :ref:`tutorial-purging`
+ * :ref:`tutorial-esi`
+
diff --git a/doc/sphinx/users-guide/index.rst b/doc/sphinx/users-guide/index.rst
new file mode 100644
index 0000000..6cf20a2
--- /dev/null
+++ b/doc/sphinx/users-guide/index.rst
@@ -0,0 +1,39 @@
+.. _users-guide-index:
+
+%%%%%%%%%%%%%
+Using Varnish
+%%%%%%%%%%%%%
+
+This guide is intended for system administrators managing Varnish
+Cache. The reader should know how to configure her web- or application
+server and have basic knowledge of the HTTP protocol. The reader
+should have Varnish Cache up and running with the default
+configuration.
+
+The guide is split into short chapters, each chapter taking on a
+separate topic. Good luck.
+
+.. toctree::
+        :maxdepth: 1
+
+        introduction
+        backend_servers
+        starting_varnish
+        logging
+        sizing_your_cache
+        putting_varnish_on_port_80
+        vcl
+        statistics
+        increasing_your_hitrate
+        cookies
+        vary
+        purging
+        compression
+        esi
+        virtualized
+        websockets
+        devicedetection
+        advanced_backend_servers
+        handling_misbehaving_servers
+        advanced_topics
+        troubleshooting
+
diff --git a/doc/sphinx/users-guide/introduction.rst b/doc/sphinx/users-guide/introduction.rst
new file mode 100644
index 0000000..0d43623
--- /dev/null
+++ b/doc/sphinx/users-guide/introduction.rst
@@ -0,0 +1,37 @@
+.. _tutorial-intro:
+
+What is Varnish?
+----------------
+
+Varnish Cache is a web application accelerator also known as a caching
+HTTP reverse proxy. You install it in front of any server that speaks
+HTTP and configure it to cache the contents. Varnish Cache is really,
+really fast. It typically speeds up delivery by a factor of 300 -
+1000x, depending on your architecture.
+
+
+Performance
+~~~~~~~~~~~
+
+Varnish performs really, really well. It is usually bound by the speed
+of the network, effectively turning performance into a non-issue. We've
+seen Varnish delivering 20 Gbps on regular off-the-shelf hardware.
+
+Flexibility
+~~~~~~~~~~~
+
+One of the key features of Varnish Cache, in addition to its
+performance, is the flexibility of its configuration language,
+VCL. VCL enables you to write policies on how incoming requests should
+be handled. In such a policy you can decide what content you want to
+serve, from where you want to get the content and how the request or
+response should be altered. You can read more about this in our
+tutorial.
+
+
+Supported platforms
+~~~~~~~~~~~~~~~~~~~~
+
+Varnish is written to run on modern versions of Linux and FreeBSD and
+the best experience is had on those platforms. Thanks to our
+contributors it also runs on NetBSD, OpenBSD and OS X.
diff --git a/doc/sphinx/users-guide/logging.rst b/doc/sphinx/users-guide/logging.rst
new file mode 100644
index 0000000..1f0bc18
--- /dev/null
+++ b/doc/sphinx/users-guide/logging.rst
@@ -0,0 +1,68 @@
+.. _tutorial-logging:
+
+Logging in Varnish
+------------------
+
+One of the really nice features in Varnish is how logging
+works. Instead of logging to a normal log file Varnish logs to a
+shared memory segment. When the end of the segment is reached we start
+over, overwriting old data. This is much, much faster than logging to
+a file and it doesn't require disk space.
+
+The flip side is that if you forget to have a program actually write the
+logs to disk they will disappear.
+
+varnishlog is one of the programs you can use to look at what Varnish
+is logging. Varnishlog gives you the raw logs, everything that is
+written to the logs. There are other clients as well, we'll show you
+these later.
+
+In the terminal window where you started Varnish, type *varnishlog*
+and press enter.
+
+You'll see lines like these scrolling slowly by::
+
+    0 CLI          - Rd ping
+    0 CLI          - Wr 200 PONG 1273698726 1.0
+
+This is the Varnish master process checking up on the caching process
+to see that everything is OK.
+
+Now go to the browser and reload the page displaying your web
+app. You'll see lines like these::
+
+   11 SessionOpen  c 127.0.0.1 58912 0.0.0.0:8080
+   11 ReqStart     c 127.0.0.1 58912 595005213
+   11 RxRequest    c GET
+   11 RxURL        c /
+   11 RxProtocol   c HTTP/1.1
+   11 RxHeader     c Host: localhost:8080
+   11 RxHeader     c Connection: keep-alive
+
+The first column is an arbitrary number which identifies the
+transaction. Lines with the same number are part of the same HTTP
+transaction. The second column is the *tag* of the log message. All
+log entries are tagged to indicate what sort of activity is being
+logged. Tags starting with Rx indicate Varnish is receiving data and
+Tx indicates sending data.
+
+The third column tells us whether the data is coming from or going to
+the client (c), or to/from the backend (b). The fourth column is the
+data being logged.
+
+Now, you can filter quite a bit with varnishlog. The basic options you
+want to know are:
+
+-b
+ Only show log lines from traffic going between Varnish and the backend 
+ servers. This will be useful when we want to optimize cache hit rates.
+
+-c 
+ Same as -b but for client side traffic.
+
+-m tag:regex
+ Only list transactions where the tag matches a regular expression. If
+ it matches you will get the whole transaction.
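+
+For example, to see the complete backend transactions for requests
+under a hypothetical /foo/ path (TxURL is the URL Varnish sends to
+the backend)::
+
+  varnishlog -b -m 'TxURL:^/foo/'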
+
+Now that Varnish seems to work OK it's time to put Varnish on port 80
+while we tune it.
diff --git a/doc/sphinx/users-guide/purging.rst b/doc/sphinx/users-guide/purging.rst
new file mode 100644
index 0000000..422f9f4
--- /dev/null
+++ b/doc/sphinx/users-guide/purging.rst
@@ -0,0 +1,175 @@
+.. _tutorial-purging:
+
+=====================
+ Purging and banning
+=====================
+
+One of the most effective ways of increasing your hit ratio is to
+increase the time-to-live (ttl) of your objects. But, as you're
+aware, in this twitterific day and age serving content that is
+outdated is bad for business.
+
+The solution is to notify Varnish when there is fresh content
+available. This can be done through three mechanisms: HTTP purging,
+banning and forced cache misses. First, let me explain HTTP purging.
+
+
+HTTP Purges
+===========
+
+A *purge* is what happens when you pick out an object from the cache
+and discard it along with its variants. Usually a purge is invoked
+through HTTP with the method PURGE.
+
+An HTTP purge is similar to an HTTP GET request, except that the
+*method* is PURGE. Actually you can call the method whatever you'd
+like, but most people refer to this as purging. Squid supports the
+same mechanism. In order to support purging in Varnish you need the
+following VCL in place::
+
+  acl purge {
+	  "localhost";
+	  "192.168.55.0"/24;
+  }
+  
+  sub vcl_recv {
+      	  # allow PURGE from localhost and 192.168.55...
+
+	  if (req.request == "PURGE") {
+		  if (!client.ip ~ purge) {
+			  error 405 "Not allowed.";
+		  }
+		  return (lookup);
+	  }
+  }
+  
+  sub vcl_hit {
+	  if (req.request == "PURGE") {
+	          purge;
+		  error 200 "Purged.";
+	  }
+  }
+  
+  sub vcl_miss {
+	  if (req.request == "PURGE") {
+	          purge;
+		  error 200 "Purged.";
+	  }
+  }
+
+As you can see we have used two new VCL subroutines, vcl_hit and
+vcl_miss. When we call lookup, Varnish will try to look up the object
+in its cache. It will either hit an object or miss it, and the
+corresponding subroutine is called. In vcl_hit the object stored in
+the cache is available and we can purge it. The purge in vcl_miss is
+necessary to purge all variants in the cases where you hit an object,
+but miss the particular variant you asked for.
+
+So for example.com to invalidate their front page they would call out
+to Varnish like this::
+
+  PURGE / HTTP/1.0
+  Host: example.com
+
+And Varnish would then discard the front page. This will remove all
+variants as defined by Vary.
+
+Bans
+====
+
+There is another way to invalidate content: Bans. You can think of
+bans as a sort of a filter on objects already in the cache. You *ban*
+certain content from being served from your cache. You can ban
+content based on any metadata we have.
+A ban will only work on objects already in the cache, it does not
+prevent new content from entering the cache or being served.
+
+Support for bans is built into Varnish and available in the CLI
+interface. To ban every png object belonging to example.com, issue
+the following command::
+
+  ban req.http.host == "example.com" && req.url ~ "\.png$"
+
+Quite powerful, really.
+
+Bans are checked when we hit an object in the cache, but before we
+deliver it. *An object is only checked against newer bans*. 
+
+Bans that only match against obj.* are also processed by a background
+worker thread called the *ban lurker*. The ban lurker will walk the
+heap, try to match objects against the bans, and evict the matching
+objects. How aggressive the ban lurker is can be controlled by the
+parameter ban_lurker_sleep. The ban lurker can be disabled by setting
+ban_lurker_sleep to 0.
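+
+You can change the parameter at runtime through the management
+interface; the value below is just an illustration, not a
+recommendation::
+
+  param.set ban_lurker_sleep 0.1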
+
+Bans that are older than the oldest object in the cache are discarded
+without evaluation. If you have a lot of objects with long TTLs that
+are seldom accessed, you might accumulate a lot of bans. This might
+impact CPU usage and thereby performance.
+
+You can also add bans to Varnish via HTTP. Doing so requires a bit of VCL::
+
+  sub vcl_recv {
+          if (req.request == "BAN") {
+                  # Same ACL check as above:
+                  if (!client.ip ~ purge) {
+                          error 405 "Not allowed.";
+                  }
+                  ban("req.http.host == " + req.http.host +
+                      " && req.url == " + req.url);
+
+                  # Throw a synthetic page so the
+                  # request won't go to the backend.
+                  error 200 "Ban added";
+          }
+  }
+
+This VCL snippet enables Varnish to handle an HTTP BAN method, adding
+a ban on the URL, including the host part.
+
+The ban lurker can help you keep the ban list at a manageable size, so
+we recommend that you avoid using req.* in your bans, as the request
+object is not available in the ban lurker thread.
+
+You can use the following template to write ban lurker friendly bans::
+
+  sub vcl_fetch {
+    set beresp.http.x-url = req.url;
+  }
+
+  sub vcl_deliver {
+    unset resp.http.x-url; # Optional
+  }
+
+  sub vcl_recv {
+    if (req.request == "PURGE") {
+      if (client.ip !~ purge) {
+        error 401 "Not allowed";
+      }
+      ban("obj.http.x-url ~ " + req.url); # Assumes req.url is a regex. This might be a bit too simple
+    }
+  }
+
+To inspect the current ban list, issue the ban.list command in CLI. This
+will produce a status of all current bans::
+
+  0xb75096d0 1318329475.377475    10      obj.http.x-url ~ test
+  0xb7509610 1318329470.785875    20G     obj.http.x-url ~ test
+
+The ban list contains the ID of the ban, the timestamp when the ban
+entered the ban list, and a count of the objects that have reached
+this point in the ban list, optionally postfixed with a 'G' for
+"Gone" if the ban is no longer valid. Finally, the ban expression is
+listed. A ban can be marked as Gone if it is a duplicate ban, but it
+is still kept in the list for optimization purposes.
+
+Forcing a cache miss
+====================
+
+The final way to invalidate an object is a method that allows you to
+refresh an object by forcing a hash miss for a single request. If you
+set req.hash_always_miss to true, Varnish will miss the current object
+in the cache, thus forcing a fetch from the backend. This can in turn
+add the freshly fetched object to the cache, thus overriding the
+current one. The old object will stay in the cache until its TTL
+expires or it is evicted by some other means.
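+
+A sketch of how this could be wired up in VCL (the X-Refresh header
+is a made-up convention, and the purge ACL is assumed to be defined
+as in the purging examples above)::
+
+  sub vcl_recv {
+      # Let trusted clients force a refresh of a single object.
+      if (req.http.X-Refresh == "yes" && client.ip ~ purge) {
+          set req.hash_always_miss = true;
+      }
+  }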
diff --git a/doc/sphinx/users-guide/putting_varnish_on_port_80.rst b/doc/sphinx/users-guide/putting_varnish_on_port_80.rst
new file mode 100644
index 0000000..73a80ff
--- /dev/null
+++ b/doc/sphinx/users-guide/putting_varnish_on_port_80.rst
@@ -0,0 +1,25 @@
+
+Put Varnish on port 80
+----------------------
+
+Until now we've been running with Varnish on a high port for testing
+purposes. You should test your application, and if it works OK we can
+switch, so that Varnish runs on port 80 and your web server on a
+high port.
+
+First we kill off varnishd::
+
+     # pkill varnishd
+
+and stop your web server. Edit the configuration for your web server
+and make it bind to port 8080 instead of 80. Now open the Varnish
+default.vcl and change the port of the *default* backend to 8080.
+
+Start up your web server and then start varnish::
+
+      # varnishd -f /usr/local/etc/varnish/default.vcl -s malloc,1G -T 127.0.0.1:2000
+
+Note that we've removed the -a option. Now Varnish, as its default
+setting dictates, will bind to the HTTP port (80). Now everyone
+accessing your site will be accessing it through Varnish.
+
diff --git a/doc/sphinx/users-guide/sizing_your_cache.rst b/doc/sphinx/users-guide/sizing_your_cache.rst
new file mode 100644
index 0000000..c19647c
--- /dev/null
+++ b/doc/sphinx/users-guide/sizing_your_cache.rst
@@ -0,0 +1,25 @@
+
+Sizing your cache
+-----------------
+
+Picking how much memory you should give Varnish can be a tricky
+task. A few things to consider:
+
+ * How big is your *hot* data set? For a portal or news site that
+   would be the size of the front page with all the stuff on it, and
+   the size of all the pages and objects linked from the first page.
+ * How expensive is it to generate an object? Sometimes it makes sense
+   to only cache images a little while or not to cache them at all if
+   they are cheap to serve from the backend and you have a limited
+   amount of memory.
+ * Watch the n_lru_nuked counter with :ref:`reference-varnishstat` or some other
+   tool. If you have a lot of LRU activity then your cache is evicting
+   objects due to space constraints and you should consider increasing
+   the size of the cache.
+
+Be aware that every object that is stored also carries overhead that
+is kept outside the actual storage area. So, even if you specify -s
+malloc,16G Varnish might actually use **double** that. Varnish has an
+overhead of about 1k per object, so if you have lots of small objects
+in your cache the overhead might be significant.
+
diff --git a/doc/sphinx/users-guide/starting_varnish.rst b/doc/sphinx/users-guide/starting_varnish.rst
new file mode 100644
index 0000000..6c89f54
--- /dev/null
+++ b/doc/sphinx/users-guide/starting_varnish.rst
@@ -0,0 +1,51 @@
+.. _tutorial-starting_varnish:
+
+Starting Varnish
+----------------
+
+I assume varnishd is in your path. You might want to run ``pkill
+varnishd`` to make sure varnishd isn't running. Become root and type:
+
+``# varnishd -f /usr/local/etc/varnish/default.vcl -s malloc,1G -T 127.0.0.1:2000 -a 0.0.0.0:8080``
+
+I added a few options, let's go through them:
+
+``-f /usr/local/etc/varnish/default.vcl``
+ The -f option specifies what configuration varnishd should use.
+
+``-s malloc,1G``
+ The -s option chooses the storage type Varnish should use for
+ storing its content. I used the type *malloc*, which just uses memory
+ for storage. There are other backends as well, described in
+ :ref:`tutorial-storage`. 1G specifies how much memory should be
+ allocated - one gigabyte.
+
+``-T 127.0.0.1:2000``
+ Varnish has a built-in text-based administration
+ interface. Activating the interface makes Varnish manageable without
+ stopping it. You can specify what interface the management interface
+ should listen to. Make sure you don't expose the management interface
+ to the world as you can easily gain root access to a system via the
+ Varnish management interface. I recommend tying it to localhost. If
+ you have users on your system that you don't fully trust, use firewall
+ rules to restrict access to the interface to root only.
+
+``-a 0.0.0.0:8080``
+ I specify that I want Varnish to listen on port 8080 for incoming
+ HTTP requests. For a production environment you would probably make
+ Varnish listen on port 80, which is the default.
+
+Now you have Varnish running. Let us make sure that it works
+properly. Use your browser to go to http://192.168.2.2:8080/
+(obviously, you should replace the IP address with one on your own
+system) - you should now see your web application running there.
+
+Whether or not the application actually goes faster when run through
+Varnish depends on a few factors. If your application uses cookies
+for every session (a lot of PHP and Java applications seem to send a
+session cookie whether it is needed or not) or if it uses
+authentication, chances are Varnish won't do much caching. Ignore
+that for the moment; we come back to it in
+:ref:`tutorial-increasing_your_hitrate`.
+
+Let's make sure that Varnish really does do something to your web
+site. To do that we'll take a look at the logs.
diff --git a/doc/sphinx/users-guide/statistics.rst b/doc/sphinx/users-guide/statistics.rst
new file mode 100644
index 0000000..4386111
--- /dev/null
+++ b/doc/sphinx/users-guide/statistics.rst
@@ -0,0 +1,57 @@
+.. _tutorial-statistics:
+
+
+Statistics
+----------
+
+Now that your Varnish is up and running let's have a look at how it is
+doing. There are several tools that can help.
+
+varnishtop
+~~~~~~~~~~
+
+The varnishtop utility reads the shared memory logs and presents a
+continuously updated list of the most commonly occurring log entries.
+
+With suitable filtering using the -I, -i, -X and -x options, it can be
+used to display a ranking of requested documents, clients, user
+agents, or any other information which is recorded in the log.
+
+``varnishtop -i rxurl`` will show you what URLs are being asked for
+by the clients. ``varnishtop -i txurl`` will show you what your backend
+is being asked for the most. ``varnishtop -i RxHeader -I
+Accept-Encoding`` will show the most popular Accept-Encoding headers
+the clients are sending you.
+
+varnishhist
+~~~~~~~~~~~
+
+The varnishhist utility reads varnishd(1) shared memory logs and
+presents a continuously updated histogram showing the distribution of
+the last N requests by their processing time.  The value of N and the
+vertical scale are displayed in the top left corner.  The horizontal
+scale is logarithmic.  Hits are marked with a pipe character ("|"),
+and misses are marked with a hash character ("#").
+
+
+varnishsizes
+~~~~~~~~~~~~
+
+Varnishsizes does the same as varnishhist, except it shows the size of
+the objects and not the time taken to complete the request. This gives
+you a good overview of how big the objects you are serving are.
+
+
+varnishstat
+~~~~~~~~~~~
+
+Varnish has lots of counters: misses, hits, information about
+the storage, threads created, deleted objects - just about
+everything. varnishstat will dump these counters. This is useful when
+tuning Varnish.
+
+There are programs that can poll varnishstat regularly and make nice
+graphs of these counters. One such program is Munin. Munin can be
+found at http://munin-monitoring.org/ . There is a plugin for munin in
+the varnish source code.
+
diff --git a/doc/sphinx/users-guide/troubleshooting.rst b/doc/sphinx/users-guide/troubleshooting.rst
new file mode 100644
index 0000000..5bbcf6c
--- /dev/null
+++ b/doc/sphinx/users-guide/troubleshooting.rst
@@ -0,0 +1,99 @@
+Troubleshooting Varnish
+-----------------------
+
+Sometimes Varnish misbehaves. In order for you to understand what's
+going on there are a couple of places you can check: varnishlog,
+/var/log/syslog and /var/log/messages are all places where Varnish
+might leave clues about what's going on.
+
+
+When Varnish won't start
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Sometimes Varnish won't start. There is a plethora of reasons why
+Varnish won't start on your machine. We've seen everything from wrong
+permissions on /dev/null to other processes blocking the ports.
+
+Start Varnish in debug mode to see what is going on.
+
+Try to start varnish by::
+
+    # varnishd -f /usr/local/etc/varnish/default.vcl -s malloc,1G -T 127.0.0.1:2000  -a 0.0.0.0:8080 -d
+
+Notice the -d option. It will give you some more information on what
+is going on. Let us see how Varnish will react to something else
+listening on its port::
+
+    # varnishd -n foo -f /usr/local/etc/varnish/default.vcl -s malloc,1G -T 127.0.0.1:2000  -a 0.0.0.0:8080 -d
+    storage_malloc: max size 1024 MB.
+    Using old SHMFILE
+    Platform: Linux,2.6.32-21-generic,i686,-smalloc,-hcritbit
+    200 193     
+    -----------------------------
+    Varnish Cache CLI.
+    -----------------------------
+    Type 'help' for command list.
+    Type 'quit' to close CLI session.
+    Type 'start' to launch worker process.
+
+Now Varnish is running. Only the master process is running; in debug
+mode the cache does not start. Now you're on the console. You can
+instruct the master process to start the cache by issuing "start"::
+
+	 start
+	 bind(): Address already in use
+	 300 22      
+	 Could not open sockets
+
+And here we have our problem. Something else is bound to the HTTP port
+of Varnish. If this doesn't help, try strace or truss, or come find us
+on IRC.
+
+
+Varnish is crashing
+~~~~~~~~~~~~~~~~~~~
+
+When Varnish goes bust it is the child process that crashes. Usually
+the master process will manage this by restarting the child process. Any
+errors will be logged in syslog. It might look like this::
+
+       Mar  8 13:23:38 smoke varnishd[15670]: Child (15671) not responding to CLI, killing it.
+       Mar  8 13:23:43 smoke varnishd[15670]: last message repeated 2 times
+       Mar  8 13:23:43 smoke varnishd[15670]: Child (15671) died signal=3
+       Mar  8 13:23:43 smoke varnishd[15670]: Child cleanup complete
+       Mar  8 13:23:43 smoke varnishd[15670]: child (15697) Started
+
+Specifically, if you see the "Error in munmap" error on Linux you might
+want to increase the number of maps available. Linux is by default
+limited to a maximum of 64k maps. Setting vm.max_map_count in
+sysctl.conf will enable you to increase this limit. You can inspect the
+number of maps your program is consuming by counting the lines in
+/proc/$PID/maps.
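+
+A quick sketch of checking and raising the limit; the PID is that of
+the running varnishd child, and 131072 is just an example value::
+
+    # count the maps the process is using
+    wc -l /proc/$PID/maps
+    # raise the limit to 128k maps
+    sysctl vm.max_map_count=131072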
+
+This is a rather odd thing to document here - but hopefully Google
+will serve you this page if you ever encounter this error. 
+
+Varnish gives me Guru meditation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+First find the relevant log entries in varnishlog. That will probably
+give you a clue. Since varnishlog logs so much data it might be hard
+to track the entries down. You can set varnishlog to log all your 503
+errors by issuing the following command::
+
+   $ varnishlog -c -m TxStatus:503
+
+If the error happened just a short time ago the transaction might still
+be in the shared memory log segment. To get varnishlog to process the
+whole shared memory log just add the -d option::
+
+   $ varnishlog -d -c -m TxStatus:503
+
+Please see the varnishlog man page for elaborations on further
+filtering capabilities and explanation of the various options.
+
+
+Varnish doesn't cache
+~~~~~~~~~~~~~~~~~~~~~
+
+See :ref:`tutorial-increasing_your_hitrate`.
+
diff --git a/doc/sphinx/users-guide/vary.rst b/doc/sphinx/users-guide/vary.rst
new file mode 100644
index 0000000..ad7b48d
--- /dev/null
+++ b/doc/sphinx/users-guide/vary.rst
@@ -0,0 +1,58 @@
+.. _tutorial-vary:
+
+Vary
+~~~~
+
+The Vary header is sent by the web server to indicate what makes an
+HTTP object Vary. This makes a lot of sense with headers like
+Accept-Encoding. When a server issues a "Vary: Accept-Encoding" it
+tells Varnish that it needs to cache a separate version for every
+different Accept-Encoding that is coming from the clients. So, if a
+client only accepts gzip encoding Varnish won't serve the version of
+the page encoded with the deflate encoding.
+
+The problem is that the Accept-Encoding field contains a lot of
+different encodings. If one browser sends::
+
+  Accept-Encoding: gzip,deflate
+
+And another one sends::
+
+  Accept-Encoding: deflate,gzip
+
+Varnish will keep two variants of the page requested due to the
+different Accept-Encoding headers. Normalizing the Accept-Encoding
+header will make sure that you have as few variants as possible. The
+following VCL code will normalize the Accept-Encoding headers::
+
+    if (req.http.Accept-Encoding) {
+        if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
+            # No point in compressing these
+            remove req.http.Accept-Encoding;
+        } elsif (req.http.Accept-Encoding ~ "gzip") {
+            set req.http.Accept-Encoding = "gzip";
+        } elsif (req.http.Accept-Encoding ~ "deflate") {
+            set req.http.Accept-Encoding = "deflate";
+        } else {
+            # unknown algorithm
+            remove req.http.Accept-Encoding;
+        }
+    }
+
+The code sets the Accept-Encoding header from the client to either
+gzip or deflate, with a preference for gzip.
+
+Pitfall - Vary: User-Agent
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Some applications or application servers send *Vary: User-Agent* along
+with their content. This instructs Varnish to cache a separate copy
+for every variation of User-Agent there is. There are plenty. Even a
+single patchlevel of the same browser will generate at least 10
+different User-Agent headers based just on what operating system they
+are running. 
+
+So if you *really* need to Vary based on User-Agent be sure to
+normalize the header or your hit rate will suffer badly. Use the above
+code as a template.
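+
+A minimal sketch of such normalization, in the same spirit as the
+Accept-Encoding example above. The two device classes and the patterns
+to match are only examples - pick ones that matter to your
+application::
+
+    sub vcl_recv {
+        if (req.http.User-Agent ~ "(?i)(mobile|android|iphone)") {
+            set req.http.User-Agent = "mobile";
+        } else {
+            set req.http.User-Agent = "desktop";
+        }
+    }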
+
diff --git a/doc/sphinx/users-guide/vcl.rst b/doc/sphinx/users-guide/vcl.rst
new file mode 100644
index 0000000..0601468
--- /dev/null
+++ b/doc/sphinx/users-guide/vcl.rst
@@ -0,0 +1,200 @@
+Varnish Configuration Language - VCL
+-------------------------------------
+
+Varnish has a great configuration system. Most other systems use
+configuration directives, where you basically turn on and off lots of
+switches. Varnish uses a domain specific language called Varnish
+Configuration Language, or VCL for short. Varnish translates this
+configuration into binary code which is then executed when requests
+arrive.
+
+The VCL files are divided into subroutines. The different subroutines
+are executed at different times. One is executed when we get the
+request, another when files are fetched from the backend server.
+
+Varnish will execute these subroutines of code at different stages of
+its work. Because it is code, executed line by line, precedence
+isn't a problem. At some point you call an action in this subroutine
+and then the execution of the subroutine stops.
+
+If you don't call an action in your subroutine and it reaches the end
+Varnish will execute some built in VCL code. You will see this VCL
+code commented out in default.vcl.
+
+99% of all the changes you'll need to make will be done in two of these
+subroutines: *vcl_recv* and *vcl_fetch*.
+
+vcl_recv
+~~~~~~~~
+
+vcl_recv (yes, we're skimpy with characters, it's Unix) is called at
+the beginning of a request, after the complete request has been
+received and parsed.  Its purpose is to decide whether or not to serve
+the request, how to do it, and, if applicable, which backend to use.
+
+In vcl_recv you can also alter the request. Typically you can alter
+the cookies and add and remove request headers.
+
+Note that in vcl_recv only the request object, req, is available.
+
+vcl_fetch
+~~~~~~~~~
+
+vcl_fetch is called *after* a document has been successfully retrieved
+from the backend. Normal tasks here are to alter the response headers,
+trigger ESI processing, or try alternate backend servers in case the
+request failed.
+
+In vcl_fetch you still have the request object, req, available. There
+is also a *backend response*, beresp. beresp will contain the HTTP
+headers from the backend.
+
+.. _tutorial-vcl_fetch_actions:
+
+actions
+~~~~~~~
+
+The most common actions to return are these:
+
+*pass*
+ When you return pass the request and subsequent response will be passed to
+ and from the backend server. It won't be cached. pass can be returned from
+ vcl_recv.
+
+*hit_for_pass*
+  Similar to pass, but accessible from vcl_fetch. Unlike pass, hit_for_pass
+  will create a hitforpass object in the cache. This has the side-effect of
+  caching the decision not to cache. This is to allow would-be uncacheable
+  requests to be passed to the backend at the same time. The same logic is
+  not necessary in vcl_recv because this happens before any potential
+  queueing for an object takes place.
+
+*lookup*
+  When you return lookup from vcl_recv you tell Varnish to deliver content 
+  from cache even if the request otherwise indicates that the request
+  should be passed. You can't return lookup from vcl_fetch.
+
+*pipe*
+  Pipe can be returned from vcl_recv as well. Pipe short-circuits the
+  client and the backend connections and Varnish will just sit there
+  and shuffle bytes back and forth. Varnish will not look at the data
+  being sent back and forth - so your logs will be incomplete.
+  Beware that with HTTP 1.1 a client can send several requests on the
+  same connection, so you should instruct Varnish to add a
+  "Connection: close" header before actually returning pipe.
+
+*deliver*
+ Deliver the cached object to the client.  Usually returned from vcl_fetch. 
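+
+Following the advice above about pipe, a sketch of adding a
+"Connection: close" header before piping; the URL pattern is just an
+example::
+
+  sub vcl_recv {
+      if (req.url ~ "^/stream") {
+          return (pipe);
+      }
+  }
+
+  sub vcl_pipe {
+      # Keep the client from reusing this connection for further requests
+      set bereq.http.Connection = "close";
+  }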
+
+Requests, responses and objects
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In VCL, there are three important data structures: the request, coming
+from the client; the response, coming from the backend server; and the
+object, stored in cache.
+
+In VCL you should know the following structures.
+
+*req*
+ The request object. When Varnish has received the request the req object is 
+ created and populated. Most of the work you do in vcl_recv you 
+ do on or with the req object.
+
+*beresp*
+ The backend response object. It contains the headers of the object
+ coming from the backend. Most of the work you do in vcl_fetch you
+ do on the beresp object.
+
+*obj*
+ The cached object. Mostly a read-only object that resides in memory.
+ obj.ttl is writable, the rest is read-only.
+
+Operators
+~~~~~~~~~
+
+The following operators are available in VCL. See the examples further
+down for, uhm, examples.
+
+= 
+ Assignment operator.
+
+== 
+ Comparison.
+
+~
+ Match. Can either be used with regular expressions or ACLs.
+
+!
+ Negation.
+
+&&
+ Logical *and*
+
+||
+ Logical *or*
+
+Example 1 - manipulating headers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Lets say we want to remove the cookie for all objects in the /images
+directory of our web server::
+
+  sub vcl_recv {
+    if (req.url ~ "^/images") {
+      unset req.http.cookie;
+    }
+  }
+
+Now, when the request is handed to the backend server there will be
+no cookie header. The interesting line is the one with the
+if-statement. It takes the URL from the request object and
+matches it against the regular expression. Note the match operator. If
+it matches, the Cookie: header of the request is unset (deleted).
+
+Example 2 - manipulating beresp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Here we override the TTL of an object coming from the backend if it
+matches certain criteria::
+
+  sub vcl_fetch {
+      if (req.url ~ "\.(png|gif|jpg)$") {
+          unset beresp.http.set-cookie;
+          set beresp.ttl = 1h;
+      }
+  }
+
+Example 3 - ACLs
+~~~~~~~~~~~~~~~~
+
+You create a named access control list with the *acl* keyword. You can match
+the IP address of the client against an ACL with the match operator::
+
+  # Who is allowed to purge....
+  acl local {
+      "localhost";
+      "192.168.1.0"/24; /* and everyone on the local network */
+      ! "192.168.1.23"; /* except for the dialin router */
+  }
+  
+  sub vcl_recv {
+    if (req.request == "PURGE") {
+      if (client.ip ~ local) {
+         return(lookup);
+      }
+    } 
+  }
+  
+  sub vcl_hit {
+    if (req.request == "PURGE") {
+      set obj.ttl = 0s;
+      error 200 "Purged.";
+    }
+  }
+
+  sub vcl_miss {
+    if (req.request == "PURGE") {
+      error 404 "Not in cache.";
+    }
+  }
+
diff --git a/doc/sphinx/users-guide/virtualized.rst b/doc/sphinx/users-guide/virtualized.rst
new file mode 100644
index 0000000..317d3e2
--- /dev/null
+++ b/doc/sphinx/users-guide/virtualized.rst
@@ -0,0 +1,23 @@
+
+Running Varnish in a virtualized environment
+--------------------------------------------
+
+It is possible, but not recommended for high performance, to run
+Varnish on virtualized hardware. Reduced disk and network performance
+will reduce performance somewhat, so make sure your system has good IO
+performance.
+
+OpenVZ
+~~~~~~
+
+If you are running on 64-bit OpenVZ (or Parallels VPS), you must reduce
+the maximum stack size before starting Varnish. The default allocates
+too much memory per thread, which will make Varnish fail as soon as the
+number of threads (== traffic) increases.
+
+Reduce the maximum stack size by running::
+
+    ulimit -s 256
+
+in the startup script.
+
diff --git a/doc/sphinx/users-guide/websockets.rst b/doc/sphinx/users-guide/websockets.rst
new file mode 100644
index 0000000..a74353e
--- /dev/null
+++ b/doc/sphinx/users-guide/websockets.rst
@@ -0,0 +1,20 @@
+
+Using Websockets 
+----------------
+
+Websockets is a technology for creating a bidirectional stream-based channel over HTTP.
+
+To run websockets through Varnish you need to pipe it, and copy the Upgrade header. Use the following
+VCL config to do so::
+
+    sub vcl_pipe {
+         if (req.http.upgrade) {
+             set bereq.http.upgrade = req.http.upgrade;
+         }
+    }
+    sub vcl_recv {
+         if (req.http.Upgrade ~ "(?i)websocket") {
+             return (pipe);
+         }
+    }
+


