[master] 88b33f6 New docs structure according to the scoof ToC

Per Buer perbu at varnish-cache.org
Sat Sep 8 10:36:06 CEST 2012


commit 88b33f6987008506c583d8757764dc534a25c678
Author: Per Buer <per.buer at gmail.com>
Date:   Sat Sep 8 10:36:08 2012 +0200

    New docs structure according to the scoof ToC

diff --git a/doc/sphinx/index.rst b/doc/sphinx/index.rst
index 9f367c3..a663170 100644
--- a/doc/sphinx/index.rst
+++ b/doc/sphinx/index.rst
@@ -16,7 +16,7 @@ our tutorial - :ref:`tutorial-index`.
 Contents:
 
 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 2
 
    installation/index.rst
    tutorial/index.rst
diff --git a/doc/sphinx/reference/varnishd.rst b/doc/sphinx/reference/varnishd.rst
index e6857a8..87141bf 100644
--- a/doc/sphinx/reference/varnishd.rst
+++ b/doc/sphinx/reference/varnishd.rst
@@ -31,6 +31,8 @@ DESCRIPTION
 The varnishd daemon accepts HTTP requests from clients, passes them on to a backend server and caches the
 returned documents to better satisfy future requests for the same document.
 
+.. _ref-varnishd-options:
+
 OPTIONS
 =======
 
diff --git a/doc/sphinx/tutorial/introduction.rst b/doc/sphinx/tutorial/introduction.rst
index cdbab1d..3968811 100644
--- a/doc/sphinx/tutorial/introduction.rst
+++ b/doc/sphinx/tutorial/introduction.rst
@@ -1,7 +1,7 @@
 .. _tutorial-intro:
 
 What is Varnish?
--------------
+----------------
 
 Varnish Cache is a web application accelerator. It can also be called
 a HTTP reverse proxy. The next chapter :ref:`tutorial-web-accelerator`
@@ -27,14 +27,14 @@ where you want to get the content and how the request or response
 should be altered. 
 
 Supported platforms
------------------
+--------------------
 
 Varnish is written to run on modern versions of Linux and FreeBSD and
 the best experience is had on those platforms. Thanks to our
 contributors it also runs on NetBSD, OpenBSD and OS X.
 
 About the Varnish development process
--------------------------------
+-------------------------------------
 
 Varnish is a community driven project. The development is overseen by
 the Varnish Governing Board which currently consists of Poul-Henning
@@ -42,7 +42,7 @@ Kamp (Architect), Rogier Mulhuijzen (Fastly) and Kristian Lyngstøl
 (Varnish Software).
 
 Getting in touch
--------------
+----------------
 
 You can get in touch with us through many channels. For real time chat
 you can reach us on IRC through the server irc.linpro.net on the
diff --git a/doc/sphinx/tutorial/web_accelerator.rst b/doc/sphinx/tutorial/web_accelerator.rst
index 4d455ae..28f8365 100644
--- a/doc/sphinx/tutorial/web_accelerator.rst
+++ b/doc/sphinx/tutorial/web_accelerator.rst
@@ -3,6 +3,8 @@
 What is a web accelerator
 -------------------------
 
+Really.XXX.
+
 
 The problem
 -----------
diff --git a/doc/sphinx/users-guide/advanced_topics.rst b/doc/sphinx/users-guide/advanced_topics.rst
deleted file mode 100644
index 6368d6e..0000000
--- a/doc/sphinx/users-guide/advanced_topics.rst
+++ /dev/null
@@ -1,41 +0,0 @@
-.. _users-guide-advanced_topics:
-
-Advanced topics
----------------
-
-This guide has covered the basics in Varnish. If you read through
-it all you should now have the skills to run Varnish.
-
-Here is a short overview of topics that we haven't covered in the guide. 
-
-More VCL
-~~~~~~~~
-
-VCL is a bit more complex than what we've covered so far. There are a
-few more subroutines available and there are a few actions that we
-haven't discussed. For a complete(ish) guide to VCL have a look at the
-VCL man page - :ref:`reference-vcl`.
-
-Using In-line C to extend Varnish
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-(Here there be dragons)
-
-You can use *in-line C* to extend Varnish. Please note that you can
-seriously mess up Varnish this way. The C code runs within the Varnish
-Cache process so if your code generates a segfault the cache will crash.
-
-One of the first uses I saw of In-line C was logging to syslog.::
-
-        # The include statements must be outside the subroutines.
-        C{
-                #include <syslog.h>
-        }C
-
-        sub vcl_something {
-                C{
-                        syslog(LOG_INFO, "Something happened at VCL line XX.");
-                }C
-        }
-
-
diff --git a/doc/sphinx/users-guide/backend_servers.rst b/doc/sphinx/users-guide/backend_servers.rst
deleted file mode 100644
index 4f721aa..0000000
--- a/doc/sphinx/users-guide/backend_servers.rst
+++ /dev/null
@@ -1,197 +0,0 @@
-.. _users-guide-backend_servers:
-
-Backend servers
----------------
-
-Varnish has a concept of "backend" or "origin" servers. A backend
-server is the server providing the content Varnish will accelerate.
-
-Our first task is to tell Varnish where it can find its content. Start
-your favorite text editor and open the varnish default configuration
-file. If you installed from source this is
-/usr/local/etc/varnish/default.vcl, if you installed from a package it
-is probably /etc/varnish/default.vcl.
-
-Near the top there will be a section that looks a bit like this.::
-
-	  # backend default {
-	  #     .host = "127.0.0.1";
-	  #     .port = "8080";
-	  # }
-
-We comment this bit of text back in, making it look like.::
-
-          backend default {
-                .host = "127.0.0.1";
-                .port = "8080";
-          }
-
-Now, this piece of configuration defines a backend in Varnish called
-*default*. When Varnish needs to get content from this backend it will
-connect to port 8080 on localhost (127.0.0.1).
-
-Varnish can have several backends defined and you can even join
-several backends together into clusters of backends for load balancing
-purposes. 
-
-Multiple backends
------------------
-
-At some point you might need Varnish to cache content from several
-servers. You might want Varnish to map all the URLs into one single
-host or not. There are lots of options.
-
-Let's say we need to introduce a Java application into our PHP web
-site. Let's say our Java application should handle URLs beginning with
-/java/.
-
-We manage to get the thing up and running on port 8000. Now, let's have
-a look at the default.vcl.::
-
-  backend default {
-      .host = "127.0.0.1";
-      .port = "8080";
-  }
-
-We add a new backend.::
-
-  backend java {
-      .host = "127.0.0.1";
-      .port = "8000";
-  }
-
-Now we need to tell Varnish where to send the different URLs. Let's look at vcl_recv.::
-
-  sub vcl_recv {
-      if (req.url ~ "^/java/") {
-          set req.backend = java;
-      } else {
-          set req.backend = default;
-      }
-  }
-
-It's quite simple, really. Let's stop and think about this for a
-moment. As you can see you can define how you choose backends based on
-almost arbitrary data. You want to send mobile devices to a different
-backend? No problem. ``if (req.http.User-Agent ~ "mobile")`` should do
-the trick.
-
-.. _users-guide-advanced_backend_servers-directors:
-
-Directors
----------
-
-You can also group several backends into a cluster of backends. These
-groups are called directors. This will give you increased performance
-and resilience. You can define several backends and group them
-together in a director.::
-
-	 backend server1 {
-	     .host = "192.168.0.10";
-	 }
-	 backend server2 {
-	     .host = "192.168.0.11";
-	 }
-
-Now we create the director.::
-
-        director example_director round-robin {
-                {
-                        .backend = server1;
-                }
-                # server2
-                {
-                        .backend = server2;
-                }
-        }
-
-
-This director is a round-robin director. This means the director will
-distribute the incoming requests on a round-robin basis. There is
-also a *random* director which distributes requests in a, you guessed
-it, random fashion.
-
-But what if one of your servers goes down? Can Varnish direct all the
-requests to the healthy server? Sure it can. This is where the Health
-Checks come into play.
-
-.. _users-guide-advanced_backend_servers-health:
-
-Health checks
--------------
-
-Let's set up a director with two backends and health checks. First let's
-define the backends.::
-
-       backend server1 {
-           .host = "server1.example.com";
-           .probe = {
-               .url = "/";
-               .interval = 5s;
-               .timeout = 1s;
-               .window = 5;
-               .threshold = 3;
-           }
-       }
-       backend server2 {
-           .host = "server2.example.com";
-           .probe = {
-               .url = "/";
-               .interval = 5s;
-               .timeout = 1s;
-               .window = 5;
-               .threshold = 3;
-           }
-       }
-
-What's new here is the probe. Varnish will check the health of each
-backend with a probe. The options are:
-
-url
- What URL Varnish should request.
-
-interval
- How often Varnish should poll the backend.
-
-timeout
- The timeout of the probe.
-
-window
- Varnish will maintain a *sliding window* of the results. Here the
- window has five checks.
-
-threshold 
- How many of the .window last polls must be good for the backend to be declared healthy.
-
-initial
- How many of the probes need to be good when Varnish starts - defaults
- to the same number as the threshold.
-
-Now we define the director.::
-
-  director example_director round-robin {
-          {
-                  .backend = server1;
-          }
-          # server2
-          {
-                  .backend = server2;
-          }
-  }
-
-You use this director just as you would use any other director or
-backend. Varnish will not send traffic to hosts that are marked as
-unhealthy. Varnish can also serve stale content if all the backends are
-down. See :ref:`users-guide-handling_misbehaving_servers` for more
-information on how to enable this.
-
-Please note that Varnish will keep probes active for all loaded
-VCLs. Varnish will coalesce probes that seem identical - so be careful
-not to change the probe config if you do a lot of VCL
-loading. Unloading the VCL will discard the probes.
-
-For more information on how to do this please see
-:ref:`reference-vcl-director`.
-
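The User-Agent-based routing hinted at above can be written out in full
VCL. This is a minimal, untested sketch; it assumes a backend named
*mobile* has been defined elsewhere, and the regex is illustrative only.::

  sub vcl_recv {
      # Send requests that look like they come from mobile devices
      # to the hypothetical "mobile" backend, everything else to default.
      if (req.http.User-Agent ~ "(?i)mobile|android|iphone") {
          set req.backend = mobile;
      } else {
          set req.backend = default;
      }
  }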
diff --git a/doc/sphinx/users-guide/command-line.rst b/doc/sphinx/users-guide/command-line.rst
new file mode 100644
index 0000000..b91e4f0
--- /dev/null
+++ b/doc/sphinx/users-guide/command-line.rst
@@ -0,0 +1,44 @@
+.. _users-guide-command-line:
+
+XXX: Total rewrite of this
+
+Command Line options
+--------------------
+
+I assume varnishd is in your path. You might want to run ``pkill
+varnishd`` to make sure varnishd isn't running. 
+
+Become root and type:
+
+``# varnishd -f /usr/local/etc/varnish/default.vcl -s malloc,1G -T 127.0.0.1:2000 -a 0.0.0.0:8080``
+
+I added a few options, let's go through them:
+
+``-f /usr/local/etc/varnish/default.vcl``
+ The -f option specifies what configuration varnishd should use.
+
+``-s malloc,1G``
+ The -s option chooses the storage type Varnish should use for
+ storing its content. I used the type *malloc*, which just uses memory
+ for storage. There are other backends as well, described in
+ :ref:`users-guide-storage`. 1G specifies how much memory should be
+ allocated - one gigabyte.
+
+``-T 127.0.0.1:2000``
+ Varnish has a built-in text-based administration
+ interface. Activating the interface makes Varnish manageable without
+ stopping it. You can specify what interface the management interface
+ should listen to. Make sure you don't expose the management interface
+ to the world as you can easily gain root access to a system via the
+ Varnish management interface. I recommend tying it to localhost. If
+ you have users on your system that you don't fully trust, use firewall
+ rules to restrict access to the interface to root only.
+
+``-a 0.0.0.0:8080``
+ I specify that I want Varnish to listen on port 8080 for incoming
+ HTTP requests. For a production environment you would probably make
+ Varnish listen on port 80, which is the default.
+
+For a complete list of the command line parameters please see
+:ref:`ref-varnishd-options`.
+
diff --git a/doc/sphinx/users-guide/command_line.rst b/doc/sphinx/users-guide/command_line.rst
deleted file mode 100644
index 318868f..0000000
--- a/doc/sphinx/users-guide/command_line.rst
+++ /dev/null
@@ -1,55 +0,0 @@
-.. _users-guide-command-line:
-
-XXX: Total rewrite of this
-
-Starting Varnish
-----------------
-
-I assume varnishd is in your path. You might want to run ``pkill
-varnishd`` to make sure varnishd isn't running. 
-
-Become root and type:
-
-``# varnishd -f /usr/local/etc/varnish/default.vcl -s malloc,1G -T 127.0.0.1:2000 -a 0.0.0.0:8080``
-
-I added a few options, let's go through them:
-
-``-f /usr/local/etc/varnish/default.vcl``
- The -f option specifies what configuration varnishd should use.
-
-``-s malloc,1G``
- The -s option chooses the storage type Varnish should use for
- storing its content. I used the type *malloc*, which just uses memory
- for storage. There are other backends as well, described in
- :ref:`users-guide-storage`. 1G specifies how much memory should be
- allocated - one gigabyte.
-
-``-T 127.0.0.1:2000``
- Varnish has a built-in text-based administration
- interface. Activating the interface makes Varnish manageable without
- stopping it. You can specify what interface the management interface
- should listen to. Make sure you don't expose the management interface
- to the world as you can easily gain root access to a system via the
- Varnish management interface. I recommend tying it to localhost. If
- you have users on your system that you don't fully trust, use firewall
- rules to restrict access to the interface to root only.
-
-``-a 0.0.0.0:8080``
- I specify that I want Varnish to listen on port 8080 for incoming
- HTTP requests. For a production environment you would probably make
- Varnish listen on port 80, which is the default.
-
-Now you have Varnish running. Let us make sure that it works
-properly. Use your browser to go to http://192.168.2.2:8080/
-(obviously, you should replace the IP address with one on your own
-system) - you should now see your web application running there.
-
-Whether or not the application actually goes faster when run through
-Varnish depends on a few factors. If your application uses cookies for
-every session (a lot of PHP and Java applications seem to send a session
-cookie whether it is needed or not) or if it uses authentication,
-chances are Varnish won't do much caching. Ignore that for the moment,
-we come back to that in :ref:`users-guide-increasing_your_hitrate`.
-
-Let's make sure that Varnish really does something to your web
-site. To do that we'll take a look at the logs.
diff --git a/doc/sphinx/users-guide/configuration.rst b/doc/sphinx/users-guide/configuration.rst
new file mode 100644
index 0000000..e4f3e00
--- /dev/null
+++ b/doc/sphinx/users-guide/configuration.rst
@@ -0,0 +1,12 @@
+Configuration
+=============
+
+This section deals with configuring Varnish: command line options, storage backends and runtime parameters.
+
+.. toctree::
+   :maxdepth: 2
+
+   command-line
+   storage-backends
+   params
+
diff --git a/doc/sphinx/users-guide/handling_misbehaving_servers.rst b/doc/sphinx/users-guide/handling_misbehaving_servers.rst
deleted file mode 100644
index bb5fd35..0000000
--- a/doc/sphinx/users-guide/handling_misbehaving_servers.rst
+++ /dev/null
@@ -1,103 +0,0 @@
-.. _users-guide-handling_misbehaving_servers:
-
-Misbehaving servers
--------------------
-
-A key feature of Varnish is its ability to shield you from misbehaving
-web- and application servers.
-
-
-
-Grace mode
-~~~~~~~~~~
-
-When several clients are requesting the same page Varnish will send
-one request to the backend and place the others on hold while fetching
-one copy from the back end. In some products this is called request
-coalescing and Varnish does this automatically.
-
-If you are serving thousands of hits per second the queue of waiting
-requests can get huge. There are two potential problems - one is a
-thundering herd problem - suddenly releasing a thousand threads to
-serve content might send the load sky high. Secondly - nobody likes to
-wait. To deal with this we can instruct Varnish to keep
-the objects in cache beyond their TTL and to serve the waiting
-requests somewhat stale content.
-
-So, in order to serve stale content we must first have some content to
-serve. So to make Varnish keep all objects for 30 minutes beyond their
-TTL use the following VCL::
-
-  sub vcl_fetch {
-    set beresp.grace = 30m;
-  }
-
-Varnish still won't serve the stale objects. In order to enable
-Varnish to actually serve the stale object we must enable this on the
-request. Let us say that we accept serving a 15s old object.::
-
-  sub vcl_recv {
-    set req.grace = 15s;
-  }
-
-You might wonder why we should keep the objects in the cache for 30
-minutes if we are unable to serve them? Well, if you have enabled
-:ref:`users-guide-advanced_backend_servers-health` you can check if the
-backend is sick and if it is we can serve the stale content for a bit
-longer.::
-
-   if (! req.backend.healthy) {
-      set req.grace = 5m;
-   } else {
-      set req.grace = 15s;
-   }
-
-So, to sum up, grace mode solves two problems:
- * it serves stale content to avoid request pile-up.
- * it serves stale content if the backend is not healthy.
-
-Saint mode
-~~~~~~~~~~
-
-Sometimes servers get flaky. They start throwing out random
-errors. You can instruct Varnish to try to handle this in a
-more-than-graceful way - enter *Saint mode*. Saint mode enables you to
-discard a certain page from one backend server and either try another
-server or serve stale content from cache. Lets have a look at how this
-can be enabled in VCL::
-
-  sub vcl_fetch {
-    if (beresp.status == 500) { 
-      set beresp.saintmode = 10s;
-      return(restart);
-    }
-    set beresp.grace = 5m;
-  } 
-
-When we set beresp.saintmode to 10 seconds Varnish will not ask *that*
-server for that URL for 10 seconds. A blacklist, more or less. Also a
-restart is performed so if you have other backends capable of serving
-that content Varnish will try those. When you are out of backends
-Varnish will serve the content from its stale cache.
-
-This can really be a life saver.
-
-Known limitations on grace- and saint mode
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-If your request fails while it is being fetched you're thrown into
-vcl_error. vcl_error has access to a rather limited set of data so you
-can't enable saint mode or grace mode here. This will be addressed in a
-future release but a work-around is available:
-
-* Declare a backend that is always sick.
-* Set a magic marker in vcl_error
-* Restart the transaction
-* Note the magic marker in vcl_recv and set the backend to the one mentioned
-* Varnish will now serve stale data if any is available
-
-
-God mode
-~~~~~~~~
-Not implemented yet. :-)
-
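The saint-mode work-around steps listed above could be sketched in VCL
roughly as follows. This is an untested sketch: the *always_sick*
backend, the unreachable port and the X-Use-Stale marker header are all
hypothetical names chosen for illustration.::

  # A backend whose probe can never succeed, so it stays marked sick.
  backend always_sick {
      .host = "127.0.0.1";
      .port = "9999";
      .probe = {
          .url = "/";
          .initial = 0;
          .interval = 365d;
      }
  }

  sub vcl_error {
      # Set the magic marker and restart the transaction.
      set req.http.X-Use-Stale = "1";
      return (restart);
  }

  sub vcl_recv {
      if (req.http.X-Use-Stale) {
          # Direct the restarted request at the sick backend; with a
          # generous grace Varnish will serve stale data if any is
          # available.
          set req.backend = always_sick;
          set req.grace = 1h;
      }
  }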
diff --git a/doc/sphinx/users-guide/hashing.rst b/doc/sphinx/users-guide/hashing.rst
deleted file mode 100644
index b7bbe62..0000000
--- a/doc/sphinx/users-guide/hashing.rst
+++ /dev/null
@@ -1,51 +0,0 @@
-
-Hashing
--------
-
-Internally, when Varnish stores content in its store it uses a hash
-key to find the object again. In the default setup this key is
-calculated based on the content of the *Host* header or the IP address
-of the server and the URL.
-
-Behold the default vcl.::
-
- sub vcl_hash {
-     hash_data(req.url);
-     if (req.http.host) {
-         hash_data(req.http.host);
-     } else {
-         hash_data(server.ip);
-     }
-     return (hash);
- }
-
-As you can see it first chucks in req.url, then req.http.host if it
-exists. It is worth pointing out that Varnish doesn't lowercase the
-hostname or the URL before hashing it, so in theory having Varnish.org/
-and varnish.org/ would result in different cache entries. Browsers,
-however, tend to lowercase hostnames.
-
-You can change what goes into the hash. This way you can make Varnish
-serve up different content to different clients based on arbitrary
-criteria.
-
-Let's say you want to serve pages in different languages to your users
-based on where their IP address is located. You would need some Vmod
-to get a country code and then put it into the hash. It might look
-like this.
-
-In vcl_recv::
-
-  set req.http.X-Country-Code = geoip.lookup(client.ip);
-
-And then add to vcl_hash::
-
-  sub vcl_hash {
-      hash_data(req.http.X-Country-Code);
-  }
-
-As the default VCL will take care of adding the host and URL to the
-hash we don't have to do anything else. Be careful calling
-return(hash) as this will abort the execution of the default VCL and
-thereby you can end up with a Varnish that will return data based on
-more or less random inputs.
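Since Varnish doesn't lowercase the hostname before hashing, one way to
avoid the Varnish.org/ vs. varnish.org/ split described above is to
normalize the Host header yourself. A small, untested sketch, assuming
Varnish 3.0 or newer with the std vmod available.::

  import std;

  sub vcl_recv {
      if (req.http.host) {
          # Normalize the Host header so differently-cased hostnames
          # hash to the same cache entry.
          set req.http.host = std.tolower(req.http.host);
      }
  }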
diff --git a/doc/sphinx/users-guide/increasing-your-hitrate.rst b/doc/sphinx/users-guide/increasing-your-hitrate.rst
new file mode 100644
index 0000000..462a781
--- /dev/null
+++ b/doc/sphinx/users-guide/increasing-your-hitrate.rst
@@ -0,0 +1,213 @@
+.. _users-guide-increasing_your_hitrate:
+
+Achieving a high hitrate
+------------------------
+
+Varnish is now up and running and you can access your web application
+through it. Unless your application is specifically written to work
+behind a web accelerator you'll probably need to make some changes to
+either the configuration or the application in order to get a high hit
+rate in Varnish.
+
+Varnish will not cache your data unless it's absolutely sure it is
+safe to do so. So, for you to understand how Varnish decides if and
+how to cache a page, I'll guide you through a couple of tools that you
+will find useful.
+
+Note that you need a tool to see what HTTP headers fly between you and
+the web server. On the Varnish server, the easiest is to use
+varnishlog and varnishtop but sometimes a client-side tool makes
+sense. Here are the ones I use.
+
+Tool: varnishtop
+~~~~~~~~~~~~~~~~
+
+You can use varnishtop to identify what URLs are hitting the backend
+the most. ``varnishtop -i txurl`` is an essential command. You can see
+some other examples of varnishtop usage in :ref:`users-guide-statistics`.
+
+
+Tool: varnishlog
+~~~~~~~~~~~~~~~~
+
+When you have identified a URL which is frequently sent to the
+backend you can use varnishlog to have a look at the request.
+``varnishlog -c -m 'RxURL:^/foo/bar'`` will show you the requests
+coming from the client (-c) matching /foo/bar.
+
+For more information on how varnishlog works please see
+:ref:`users-guide-logging` or man :ref:`ref-varnishlog`.
+
+For extended diagnostics headers, see
+http://www.varnish-cache.org/trac/wiki/VCLExampleHitMissHeader
+
+
+Tool: lwp-request
+~~~~~~~~~~~~~~~~~
+
+lwp-request is part of The World-Wide Web library for Perl. It's a
+couple of really basic programs that can execute an HTTP request and
+give you the result. I mostly use two programs, GET and HEAD.
+
+vg.no was the first site to use Varnish and the people running Varnish
+there are quite clueful. So it's interesting to look at their HTTP
+Headers. Let's send a GET request for their home page::
+
+  $ GET -H 'Host: www.vg.no' -Used http://vg.no/
+  GET http://vg.no/
+  Host: www.vg.no
+  User-Agent: lwp-request/5.834 libwww-perl/5.834
+  
+  200 OK
+  Cache-Control: must-revalidate
+  Refresh: 600
+  Title: VG Nett - Forsiden - VG Nett
+  X-Age: 463
+  X-Cache: HIT
+  X-Rick-Would-Never: Let you down
+  X-VG-Jobb: http://www.finn.no/finn/job/fulltime/result?keyword=vg+multimedia Merk:HeaderNinja
+  X-VG-Korken: http://www.youtube.com/watch?v=Fcj8CnD5188
+  X-VG-WebCache: joanie
+  X-VG-WebServer: leon
+
+OK. Let me explain what it does. GET usually sends off HTTP 0.9
+requests, which lack the Host header. So I add a Host header with the
+-H option. -U prints request headers, -s prints response status, -e
+prints response headers and -d discards the actual content. We don't
+really care about the content, only the headers.
+
+As you can see, VG adds quite a bit of information in their
+headers. Some of the headers, like X-Rick-Would-Never, are specific
+to vg.no and their somewhat odd sense of humour. Others, like
+X-VG-WebCache, are for debugging purposes.
+
+So, to check whether a site sets cookies for a specific URL, just do::
+
+  GET -Used http://example.com/ |grep ^Set-Cookie
+
+Tool: Live HTTP Headers
+~~~~~~~~~~~~~~~~~~~~~~~
+
+There is also a plugin for Firefox. *Live HTTP Headers* can show you
+what headers are being sent and received. Live HTTP Headers can be
+found at https://addons.mozilla.org/en-US/firefox/addon/3829/ or by
+googling "Live HTTP Headers".
+
+
+The role of HTTP Headers
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Along with each HTTP request and response comes a bunch of headers
+carrying metadata. Varnish will look at these headers to determine if
+it is appropriate to cache the contents and how long Varnish can keep
+the content.
+
+Please note that when considering these headers Varnish actually
+considers itself *part of* the actual webserver. The rationale being
+that both are under your control. 
+
+The term *surrogate origin cache* is not really well defined by the
+IETF or in RFC 2616, so the way Varnish works here might differ from
+your expectations.
+
+Let's take a look at the important headers you should be aware of:
+
+Cache-Control
+~~~~~~~~~~~~~
+
+The Cache-Control header instructs caches how to handle the content. Varnish
+cares about the *max-age* parameter and uses it to calculate the TTL
+for an object. 
+
+"Cache-Control: no-cache" is ignored but if you need this you can
+easily add support for it.
+
+So make sure you issue a Cache-Control header with a max-age
+parameter. You can have a look at what Varnish Software's Drupal server
+issues::
+
+  $ GET -Used http://www.varnish-software.com/|grep ^Cache-Control
+  Cache-Control: public, max-age=600
+
+Age
+~~~
+
+Varnish adds an Age header to indicate how long the object has been
+kept inside Varnish. You can grep out Age from varnishlog like this::
+
+  varnishlog -i TxHeader -I ^Age
+
+Pragma
+~~~~~~
+
+An HTTP 1.0 server might send "Pragma: no-cache". Varnish ignores this
+header. You could easily add support for this header in VCL.
+
+In vcl_fetch::
+
+  if (beresp.http.Pragma ~ "no-cache") {
+     return(hit_for_pass);
+  }
+
+Authorization
+~~~~~~~~~~~~~
+
+If Varnish sees an Authorization header it will pass the request. If
+this is not what you want you can unset the header.
+
+Overriding the time-to-live (ttl)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Sometimes your backend will misbehave. It might, depending on your
+setup, be easier to override the ttl in Varnish than to fix your
+somewhat cumbersome backend. 
+
+You need VCL to identify the objects you want and then you set the
+beresp.ttl to whatever you want::
+
+  sub vcl_fetch {
+      if (req.url ~ "^/legacy_broken_cms/") {
+          set beresp.ttl = 5d;
+      }
+  }
+
+The example will set the TTL to 5 days for the old legacy stuff on
+your site.
+
+Forcing caching for certain requests and certain responses
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Since you still have this cumbersome backend that isn't very friendly
+to work with you might want to override more stuff in Varnish. We
+recommend that you rely as much as you can on the default caching
+rules. It is perfectly easy to force Varnish to look up an object in
+the cache but it isn't really recommended.
+
+
+Normalizing your namespace
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Some sites are accessed via lots of
+hostnames. http://www.varnish-software.com/,
+http://varnish-software.com/ and http://varnishsoftware.com/ all point
+at the same site. Since Varnish doesn't know they are the same,
+Varnish will cache a separate version of every page for every
+hostname. You can mitigate this in your web server configuration by
+setting up redirects or by using the following VCL::
+
+  if (req.http.host ~ "(?i)^(www\.)?varnish-?software\.com") {
+    set req.http.host = "varnish-software.com";
+  }
+
+
+Ways of increasing your hitrate even more
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following chapters should give you ways of further increasing
+your hitrate, especially the chapter on Cookies.
+
+ * :ref:`users-guide-cookies`
+ * :ref:`users-guide-vary`
+ * :ref:`users-guide-purging`
+ * :ref:`users-guide-esi`
+
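The section on forcing caching above gives no example, so here is a
minimal, untested sketch of what such an override might look like. The
/static/ prefix is an assumption chosen for illustration; adapt it to
paths you know are safe to cache.::

  sub vcl_recv {
      if (req.url ~ "^/static/") {
          # Client cookies would otherwise prevent a cache lookup.
          unset req.http.Cookie;
      }
  }

  sub vcl_fetch {
      if (req.url ~ "^/static/") {
          # Ignore any Set-Cookie from the backend and force a TTL.
          unset beresp.http.Set-Cookie;
          set beresp.ttl = 1h;
      }
  }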
diff --git a/doc/sphinx/users-guide/increasing_your_hitrate.rst b/doc/sphinx/users-guide/increasing_your_hitrate.rst
deleted file mode 100644
index 462a781..0000000
--- a/doc/sphinx/users-guide/increasing_your_hitrate.rst
+++ /dev/null
@@ -1,213 +0,0 @@
-.. _users-guide-increasing_your_hitrate:
-
-Achieving a high hitrate
-------------------------
-
-Varnish is now up and running and you can access your web application
-through it. Unless your application is specifically written to work
-behind a web accelerator you'll probably need to make some changes to
-either the configuration or the application in order to get a high hit
-rate in Varnish.
-
-Varnish will not cache your data unless it's absolutely sure it is
-safe to do so. So, for you to understand how Varnish decides if and
-how to cache a page, I'll guide you through a couple of tools that you
-will find useful.
-
-Note that you need a tool to see what HTTP headers fly between you and
-the web server. On the Varnish server, the easiest is to use
-varnishlog and varnishtop but sometimes a client-side tool makes
-sense. Here are the ones I use.
-
-Tool: varnishtop
-~~~~~~~~~~~~~~~~
-
-You can use varnishtop to identify what URLs are hitting the backend
-the most. ``varnishtop -i txurl`` is an essential command. You can see
-some other examples of varnishtop usage in :ref:`users-guide-statistics`.
-
-
-Tool: varnishlog
-~~~~~~~~~~~~~~~~
-
-When you have identified a URL which is frequently sent to the
-backend you can use varnishlog to have a look at the request.
-``varnishlog -c -m 'RxURL:^/foo/bar'`` will show you the requests
-coming from the client (-c) matching /foo/bar.
-
-For more information on how varnishlog works please see
-:ref:`users-guide-logging` or man :ref:`ref-varnishlog`.
-
-For extended diagnostics headers, see
-http://www.varnish-cache.org/trac/wiki/VCLExampleHitMissHeader
-
-
-Tool: lwp-request
-~~~~~~~~~~~~~~~~~
-
-lwp-request is part of The World-Wide Web library for Perl. It's a
-couple of really basic programs that can execute an HTTP request and
-give you the result. I mostly use two programs, GET and HEAD.
-
-vg.no was the first site to use Varnish and the people running Varnish
-there are quite clueful. So it's interesting to look at their HTTP
-Headers. Let's send a GET request for their home page::
-
-  $ GET -H 'Host: www.vg.no' -Used http://vg.no/
-  GET http://vg.no/
-  Host: www.vg.no
-  User-Agent: lwp-request/5.834 libwww-perl/5.834
-  
-  200 OK
-  Cache-Control: must-revalidate
-  Refresh: 600
-  Title: VG Nett - Forsiden - VG Nett
-  X-Age: 463
-  X-Cache: HIT
-  X-Rick-Would-Never: Let you down
-  X-VG-Jobb: http://www.finn.no/finn/job/fulltime/result?keyword=vg+multimedia Merk:HeaderNinja
-  X-VG-Korken: http://www.youtube.com/watch?v=Fcj8CnD5188
-  X-VG-WebCache: joanie
-  X-VG-WebServer: leon
-
-OK. Let me explain what it does. GET usually sends off HTTP 0.9
-requests, which lack the Host header. So I add a Host header with the
--H option. -U prints request headers, -s prints response status, -e
-prints response headers and -d discards the actual content. We don't
-really care about the content, only the headers.
-
-As you can see, VG adds quite a bit of information in their
-headers. Some of the headers, like X-Rick-Would-Never, are specific
-to vg.no and their somewhat odd sense of humour. Others, like
-X-VG-WebCache, are for debugging purposes.
-
-So, to check whether a site sets cookies for a specific URL, just do::
-
-  GET -Used http://example.com/ |grep ^Set-Cookie
-
-Tool: Live HTTP Headers
-~~~~~~~~~~~~~~~~~~~~~~~
-
-There is also a plugin for Firefox. *Live HTTP Headers* can show you
-what headers are being sent and received. Live HTTP Headers can be
-found at https://addons.mozilla.org/en-US/firefox/addon/3829/ or by
-googling "Live HTTP Headers".
-
-
-The role of HTTP Headers
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-Along with each HTTP request and response comes a bunch of headers
-carrying metadata. Varnish will look at these headers to determine if
-it is appropriate to cache the contents and how long Varnish can keep
-the content.
-
-Please note that when considering these headers Varnish actually
-considers itself *part of* the actual webserver. The rationale being
-that both are under your control. 
-
-The term *surrogate origin cache* is not really well defined by the
-IETF so RFC 2616 so the various ways Varnish works might differ from
-your expectations.
-
-Let's take a look at the important headers you should be aware of:
-
-Cache-Control
-~~~~~~~~~~~~~
-
-The Cache-Control instructs caches how to handle the content. Varnish
-cares about the *max-age* parameter and uses it to calculate the TTL
-for an object. 
-
-"Cache-Control: nocache" is ignored but if you need this you can
-easily add support for it.
-
-So make sure you issue a Cache-Control header with a max-age
-header. You can have a look at what Varnish Software's drupal server
-issues::
-
-  $ GET -Used http://www.varnish-software.com/|grep ^Cache-Control
-  Cache-Control: public, max-age=600
-
-Age
-~~~
-
-Varnish adds an Age header to indicate how long the object has been
-kept inside Varnish. You can grep out Age from varnishlog like this::
-
-  varnishlog -i TxHeader -I ^Age
-
-Pragma
-~~~~~~
-
-An HTTP 1.0 server might send "Pragma: nocache". Varnish ignores this
-header. You could easily add support for this header in VCL.
-
-In vcl_fetch::
-
-  if (beresp.http.Pragma ~ "nocache") {
-     return(hit_for_pass);
-  }
-
-Authorization
-~~~~~~~~~~~~~
-
-If Varnish sees an Authorization header it will pass the request. If
-this is not what you want you can unset the header.
-
-Overriding the time-to-live (ttl)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Sometimes your backend will misbehave. It might, depending on your
-setup, be easier to override the ttl in Varnish than to fix your
-somewhat cumbersome backend. 
-
-You need VCL to identify the objects you want and then you set the
-beresp.ttl to whatever you want::
-
-  sub vcl_fetch {
-      if (req.url ~ "^/legacy_broken_cms/") {
-          set beresp.ttl = 5d;
-      }
-  }
-
-The example will set the TTL to 5 days for the old legacy stuff on
-your site.
-
-Forcing caching for certain requests and certain responses
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Since you still have this cumbersome backend that isn't very friendly
-to work with you might want to override more stuff in Varnish. We
-recommend that you rely as much as you can on the default caching
-rules. It is perfectly easy to force Varnish to lookup an object in
-the cache but it isn't really recommended.
-
-
-Normalizing your namespace
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Some sites are accessed via lots of
-hostnames. http://www.varnish-software.com/,
-http://varnish-software.com/ and http://varnishsoftware.com/ all point
-at the same site. Since Varnish doesn't know they are different,
-Varnish will cache different versions of every page for every
-hostname. You can mitigate this in your web server configuration by
-setting up redirects or by using the following VCL::
-
-  if (req.http.host ~ "(?i)^(www.)?varnish-?software.com") {
-    set req.http.host = "varnish-software.com";
-  }
-
-
-Ways of increasing your hitrate even more
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The following chapters should give your ways of further increasing
-your hitrate, especially the chapter on Cookies.
-
- * :ref:`users-guide-cookies`
- * :ref:`users-guide-vary`
- * :ref:`users-guide-purging`
- * :ref:`users-guide-esi`
-
diff --git a/doc/sphinx/users-guide/index.rst b/doc/sphinx/users-guide/index.rst
index dacb13d..f3f4a2c 100644
--- a/doc/sphinx/users-guide/index.rst
+++ b/doc/sphinx/users-guide/index.rst
@@ -1,8 +1,8 @@
 .. _users-guide-index:
 
-%%%%%%%%%%%%%
-Using Varnish
-%%%%%%%%%%%%%
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+Using Varnish - A Users Guide
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 
 This guide is intended for system administrators managing Varnish
 Cache. 
@@ -10,13 +10,24 @@ Cache.
 The guide is split into short chapters, each chapter explaining a
 separate topic.
 
-.. toctree:: :maxdepth: 1
+.. toctree::
+   :maxdepth: 3
 
+   configuration
+   vcl
+   operation
+   troubleshooting
+
+.. customizing (which is a non ideal title)
+
+.. No longer used:
+
+        configuration
         command_line
+        VCL
 	backend_servers
 	logging
         sizing_your_cache
-	vcl
         statistics
         increasing_your_hitrate
 	cookies
diff --git a/doc/sphinx/users-guide/logging.rst b/doc/sphinx/users-guide/logging.rst
deleted file mode 100644
index 01d8f36..0000000
--- a/doc/sphinx/users-guide/logging.rst
+++ /dev/null
@@ -1,68 +0,0 @@
-.. _users-guide-logging:
-
-Logging in Varnish
-------------------
-
-One of the really nice features in Varnish is how logging
-works. Instead of logging to normal log file Varnish logs to a shared
-memory segment. When the end of the segment is reached we start over,
-overwriting old data. This is much, much faster then logging to a file
-and it doesn't require disk space. Besides it gives you much, much
-more information when you need it.
-
-The flip side is that if you forget to have a program actually write the
-logs to disk they will disappear.
-
-varnishlog is one of the programs you can use to look at what Varnish
-is logging. Varnishlog gives you the raw logs, everything that is
-written to the logs. There are other clients as well, we'll show you
-these later.
-
-In the terminal window you started varnish now type *varnishlog* and
-press enter.
-
-You'll see lines like these scrolling slowly by.::
-
-    0 CLI          - Rd ping
-    0 CLI          - Wr 200 PONG 1273698726 1.0
-
-These is the Varnish master process checking up on the caching process
-to see that everything is OK. 
-
-Now go to the browser and reload the page displaying your web
-app. You'll see lines like these.::
-
-   11 SessionOpen  c 127.0.0.1 58912 0.0.0.0:8080
-   11 ReqStart     c 127.0.0.1 58912 595005213
-   11 RxRequest    c GET
-   11 RxURL        c /
-   11 RxProtocol   c HTTP/1.1
-   11 RxHeader     c Host: localhost:8080
-   11 RxHeader     c Connection: keep-alive
-
-The first column is an arbitrary number, it defines the request. Lines
-with the same number are part of the same HTTP transaction. The second
-column is the *tag* of the log message. All log entries are tagged
-with a tag indicating what sort of activity is being logged. Tags
-starting with Rx indicate Varnish is recieving data and Tx indicates
-sending data.
-
-The third column tell us whether this is is data coming or going to
-the client (c) or to/from the backend (b). The forth column is the
-data being logged.
-
-Now, you can filter quite a bit with varnishlog. The basic option you
-want to know are:
-
--b
- Only show log lines from traffic going between Varnish and the backend 
- servers. This will be useful when we want to optimize cache hit rates.
-
--c 
- Same as -b but for client side traffic.
-
--m tag:regex
- Only list transactions where the tag matches a regular expression. If
- it matches you will get the whole transaction.
-
-For more information on this topic please see ref:`ref-varnishlog`.
diff --git a/doc/sphinx/users-guide/operation-cli.rst b/doc/sphinx/users-guide/operation-cli.rst
new file mode 100644
index 0000000..5cb40a5
--- /dev/null
+++ b/doc/sphinx/users-guide/operation-cli.rst
@@ -0,0 +1,6 @@
+
+
+Varnishadm
+----------
+
+You connect to the running Varnish instance with varnishadm and can
+then inspect and change its state from the command line.
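+
+As a sketch, assuming varnishd was started with ``-T 127.0.0.1:6082``
+and the default secret file (both are assumptions - use the values
+your own varnishd was started with), you could ping the daemon and
+list the loaded VCLs like this::
+
+    # varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret ping
+    # varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret vcl.list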
diff --git a/doc/sphinx/users-guide/operation-logging.rst b/doc/sphinx/users-guide/operation-logging.rst
new file mode 100644
index 0000000..01d8f36
--- /dev/null
+++ b/doc/sphinx/users-guide/operation-logging.rst
@@ -0,0 +1,68 @@
+.. _users-guide-logging:
+
+Logging in Varnish
+------------------
+
+One of the really nice features in Varnish is how logging
+works. Instead of logging to a normal log file Varnish logs to a
+shared memory segment. When the end of the segment is reached we start
+over, overwriting old data. This is much, much faster than logging to
+a file and it doesn't require disk space. It also gives you much, much
+more information when you need it.
+
+The flip side is that if you forget to have a program actually write the
+logs to disk they will disappear.
+
+varnishlog is one of the programs you can use to look at what Varnish
+is logging. Varnishlog gives you the raw logs, everything that is
+written to the logs. There are other clients as well, we'll show you
+these later.
+
+In the terminal window where you started Varnish, type *varnishlog*
+and press enter.
+
+You'll see lines like these scrolling slowly by::
+
+    0 CLI          - Rd ping
+    0 CLI          - Wr 200 PONG 1273698726 1.0
+
+This is the Varnish master process checking up on the caching process
+to see that everything is OK.
+
+Now go to the browser and reload the page displaying your web
+app. You'll see lines like these::
+
+   11 SessionOpen  c 127.0.0.1 58912 0.0.0.0:8080
+   11 ReqStart     c 127.0.0.1 58912 595005213
+   11 RxRequest    c GET
+   11 RxURL        c /
+   11 RxProtocol   c HTTP/1.1
+   11 RxHeader     c Host: localhost:8080
+   11 RxHeader     c Connection: keep-alive
+
+The first column is an arbitrary number; it identifies the
+request. Lines with the same number are part of the same HTTP
+transaction. The second column is the *tag* of the log message. All
+log entries are tagged with a tag indicating what sort of activity is
+being logged. Tags starting with Rx indicate Varnish is receiving data
+and Tx indicates sending data.
+
+The third column tells us whether the data is coming from or going to
+the client (c), or to/from the backend (b). The fourth column is the
+data being logged.
+
+Now, you can filter quite a bit with varnishlog. The basic options you
+want to know are:
+
+-b
+ Only show log lines from traffic going between Varnish and the backend 
+ servers. This will be useful when we want to optimize cache hit rates.
+
+-c 
+ Same as -b but for client side traffic.
+
+-m tag:regex
+ Only list transactions where the tag matches a regular expression. If
+ it matches you will get the whole transaction.
+
+For more information on this topic please see :ref:`ref-varnishlog`.
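+
+As a sketch of how these options combine (the URL prefix here is just
+an example), showing only the client-side transactions whose URL
+starts with /admin might look like::
+
+    varnishlog -c -m RxURL:^/admin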
diff --git a/doc/sphinx/users-guide/operation-statistics.rst b/doc/sphinx/users-guide/operation-statistics.rst
new file mode 100644
index 0000000..ad57037
--- /dev/null
+++ b/doc/sphinx/users-guide/operation-statistics.rst
@@ -0,0 +1,57 @@
+.. _users-guide-statistics:
+
+
+Statistics
+----------
+
+Now that your varnish is up and running let's have a look at how it is
+doing. There are several tools that can help.
+
+varnishtop
+~~~~~~~~~~
+
+The varnishtop utility reads the shared memory logs and presents a
+continuously updated list of the most commonly occurring log entries.
+
+With suitable filtering using the -I, -i, -X and -x options, it can be
+used to display a ranking of requested documents, clients, user
+agents, or any other information which is recorded in the log.
+
+``varnishtop -i rxurl`` will show you what URLs are being asked for
+by the client. ``varnishtop -i txurl`` will show you what your backend
+is being asked for the most. ``varnishtop -i RxHeader -I
+Accept-Encoding`` will show the most popular Accept-Encoding headers
+the clients are sending you.
+
+varnishhist
+~~~~~~~~~~~
+
+The varnishhist utility reads varnishd(1) shared memory logs and
+presents a continuously updated histogram showing the distribution of
+the last N requests by their processing time.  The value of N and the
+vertical scale are displayed in the top left corner.  The horizontal
+scale is logarithmic.  Hits are marked with a pipe character ("|"),
+and misses are marked with a hash character ("#").
+
+
+varnishsizes
+~~~~~~~~~~~~
+
+Varnishsizes does the same as varnishhist, except it shows the size of
+the objects and not the time taken to complete the request. This gives
+you a good overview of how big the objects you are serving are.
+
+
+varnishstat
+~~~~~~~~~~~
+
+Varnish has lots of counters. We count misses, hits, information about
+the storage, threads created, deleted objects. Just about
+everything. varnishstat will dump these counters. This is useful when
+tuning varnish. 
+
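+As a quick sketch, a one-shot dump of all counters, and a filtered
+view of just the hit and miss counters (counter names can vary between
+Varnish versions), could look like::
+
+    varnishstat -1
+    varnishstat -1 -f cache_hit,cache_miss
+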
+There are programs that can poll varnishstat regularly and make nice
+graphs of these counters. One such program is Munin. Munin can be
+found at http://munin-monitoring.org/. There is a plugin for Munin in
+the Varnish source code.
+
diff --git a/doc/sphinx/users-guide/operation.rst b/doc/sphinx/users-guide/operation.rst
new file mode 100644
index 0000000..c7c74f9
--- /dev/null
+++ b/doc/sphinx/users-guide/operation.rst
@@ -0,0 +1,17 @@
+Operation
+=========
+
+.. toctree::
+   :maxdepth: 2
+
+   operation-logging
+   operation-statistics
+   operation-cli
+   purging
+   sizing-your-cache
+   increasing-your-hitrate
+   compression
+   esi
+   vary
+   cookies
+   virtualized
\ No newline at end of file
diff --git a/doc/sphinx/users-guide/params.rst b/doc/sphinx/users-guide/params.rst
new file mode 100644
index 0000000..a0123ec
--- /dev/null
+++ b/doc/sphinx/users-guide/params.rst
@@ -0,0 +1,4 @@
+
+
+Parameters
+----------
\ No newline at end of file
diff --git a/doc/sphinx/users-guide/purging.rst b/doc/sphinx/users-guide/purging.rst
index 15b85d2..53facd3 100644
--- a/doc/sphinx/users-guide/purging.rst
+++ b/doc/sphinx/users-guide/purging.rst
@@ -1,8 +1,8 @@
 .. _users-guide-purging:
 
-=====================
- Purging and banning
-=====================
+
+Purging and banning
+-------------------
 
 One of the most effective ways of increasing your hit ratio is to
 increase the time-to-live (ttl) of your objects. But, as you're aware
@@ -15,7 +15,7 @@ banning and forced cache misses. First, let me explain the HTTP purges.
 
 
 HTTP Purges
-===========
+-----------
 
 A *purge* is what happens when you pick out an object from the cache
 and discard it along with its variants. Usually a purge is invoked
@@ -75,7 +75,7 @@ And Varnish would then discard the front page. This will remove all
 variants as defined by Vary.
 
 Bans
-====
+----
 
 There is another way to invalidate content: Bans. You can think of
 bans as a sort of a filter on objects already in the cache. You *ban*
@@ -164,7 +164,7 @@ be marked as Gone if it is a duplicate ban, but is still kept in the list
 for optimization purposes.
 
 Forcing a cache miss
-====================
+--------------------
 
 The final way to invalidate an object is a method that allows you to
 refresh an object by forcing a hash miss for a single request. If you set
diff --git a/doc/sphinx/users-guide/sizing-your-cache.rst b/doc/sphinx/users-guide/sizing-your-cache.rst
new file mode 100644
index 0000000..7d48200
--- /dev/null
+++ b/doc/sphinx/users-guide/sizing-your-cache.rst
@@ -0,0 +1,25 @@
+
+Sizing your cache
+-----------------
+
+Picking how much memory you should give Varnish can be a tricky
+task. A few things to consider:
+
+ * How big is your *hot* data set? For a portal or news site that
+   would be the size of the front page with all the stuff on it, and
+   the size of all the pages and objects linked from the first page. 
+ * How expensive is it to generate an object? Sometimes it makes sense
+   to only cache images a little while or not to cache them at all if
+   they are cheap to serve from the backend and you have a limited
+   amount of memory.
+ * Watch the n_lru_nuked counter with :ref:`reference-varnishstat` or
+   some other tool. If you have a lot of LRU activity then your cache
+   is evicting objects due to space constraints and you should
+   consider increasing the size of the cache.
+
+Be aware that every object that is stored also carries overhead that
+is kept outside the actual storage area. So, even if you specify -s
+malloc,16G Varnish might actually use **double** that. Varnish has an
+overhead of about 1kB per object. So, if you have lots of small
+objects in your cache the overhead might be significant: a million
+cached objects means roughly a gigabyte of overhead alone.
+
diff --git a/doc/sphinx/users-guide/sizing_your_cache.rst b/doc/sphinx/users-guide/sizing_your_cache.rst
deleted file mode 100644
index 7d48200..0000000
--- a/doc/sphinx/users-guide/sizing_your_cache.rst
+++ /dev/null
@@ -1,25 +0,0 @@
-
-Sizing your cache
------------------
-
-Picking how much memory you should give Varnish can be a tricky
-task. A few things to consider:
-
- * How big is your *hot* data set. For a portal or news site that
-   would be the size of the front page with all the stuff on it, and
-   the size of all the pages and objects linked from the first page. 
- * How expensive is it to generate an object? Sometimes it makes sense
-   to only cache images a little while or not to cache them at all if
-   they are cheap to serve from the backend and you have a limited
-   amount of memory.
- * Watch the n_lru_nuked counter with :ref:`reference-varnishstat` or
-   some other tool. If you have a lot of LRU activity then your cache
-   is evicting objects due to space constraints and you should
-   consider increasing the size of the cache.
-
-Be aware that every object that is stored also carries overhead that
-is kept outside the actually storage area. So, even if you specify -s
-malloc,16G varnish might actually use **double** that. Varnish has a
-overhead of about 1k per object. So, if you have lots of small objects
-in your cache the overhead might be significant.
-
diff --git a/doc/sphinx/users-guide/statistics.rst b/doc/sphinx/users-guide/statistics.rst
deleted file mode 100644
index ad57037..0000000
--- a/doc/sphinx/users-guide/statistics.rst
+++ /dev/null
@@ -1,57 +0,0 @@
-.. _users-guide-statistics:
-
-
-Statistics
-----------
-
-Now that your varnish is up and running let's have a look at how it is
-doing. There are several tools that can help.
-
-varnishtop
-~~~~~~~~~~
-
-The varnishtop utility reads the shared memory logs and presents a
-continuously updated list of the most commonly occurring log entries.
-
-With suitable filtering using the -I, -i, -X and -x options, it can be
-used to display a ranking of requested documents, clients, user
-agents, or any other information which is recorded in the log.
-
-``varnishtop -i rxurl`` will show you what URLs are being asked for
-by the client. ``varnishtop -i txurl`` will show you what your backend
-is being asked the most. ``varnishtop -i RxHeader -I
-Accept-Encoding`` will show the most popular Accept-Encoding header
-the client are sending you.
-
-varnishhist
-~~~~~~~~~~~
-
-The varnishhist utility reads varnishd(1) shared memory logs and
-presents a continuously updated histogram showing the distribution of
-the last N requests by their processing.  The value of N and the
-vertical scale are displayed in the top left corner.  The horizontal
-scale is logarithmic.  Hits are marked with a pipe character ("|"),
-and misses are marked with a hash character ("#").
-
-
-varnishsizes
-~~~~~~~~~~~~
-
-Varnishsizes does the same as varnishhist, except it shows the size of
-the objects and not the time take to complete the request. This gives
-you a good overview of how big the objects you are serving are.
-
-
-varnishstat
-~~~~~~~~~~~
-
-Varnish has lots of counters. We count misses, hits, information about
-the storage, threads created, deleted objects. Just about
-everything. varnishstat will dump these counters. This is useful when
-tuning varnish. 
-
-There are programs that can poll varnishstat regularly and make nice
-graphs of these counters. One such program is Munin. Munin can be
-found at http://munin-monitoring.org/ . There is a plugin for munin in
-the varnish source code.
-
diff --git a/doc/sphinx/users-guide/storage-backends.rst b/doc/sphinx/users-guide/storage-backends.rst
new file mode 100644
index 0000000..5d576cf
--- /dev/null
+++ b/doc/sphinx/users-guide/storage-backends.rst
@@ -0,0 +1,21 @@
+
+
+Storage backends
+----------------
+
+Intro
+~~~~~
+
+Malloc
+~~~~~~
+
+File
+~~~~
+
+Persistent
+~~~~~~~~~~
+
+Transient
+~~~~~~~~~
+
+
diff --git a/doc/sphinx/users-guide/troubleshooting.rst b/doc/sphinx/users-guide/troubleshooting.rst
index be24e7a..51797d8 100644
--- a/doc/sphinx/users-guide/troubleshooting.rst
+++ b/doc/sphinx/users-guide/troubleshooting.rst
@@ -1,14 +1,15 @@
 Troubleshooting Varnish
------------------------
+=======================
 
 Sometimes Varnish misbehaves. In order for you to understand whats
 going on there are a couple of places you can check. varnishlog,
 /var/log/syslog, /var/log/messages are all places where varnish might
-leave clues of whats going on.
+leave clues of what's going on. This chapter will guide you through
+basic troubleshooting in Varnish.
 
 
 When Varnish won't start
-~~~~~~~~~~~~~~~~~~~~~~~~
+------------------------
 
 Sometimes Varnish wont start. There is a plethora of reasons why
 Varnish wont start on your machine. We've seen everything from wrong
@@ -18,7 +19,7 @@ Starting Varnish in debug mode to see what is going on.
 
 Try to start varnish by::
 
-    # varnishd -f /usr/local/etc/varnish/default.vcl -s malloc,1G -T 127.0.0.1:2000  -a 0.0.0.0:8080 -d
+    # varnishd -f /usr/local/etc/varnish/default.vcl -s malloc,1G -T 127.0.0.1:2000 -a 0.0.0.0:8080 -d
 
 Notice the -d option. It will give you some more information on what
 is going on. Let us see how Varnish will react to something else
@@ -51,7 +52,7 @@ on IRC.
 
 
 Varnish is crashing
-~~~~~~~~~~~~~~~~~~~
+-------------------
 
 When varnish goes bust the child processes crashes. Usually the mother
 process will manage this by restarting the child process again. Any
@@ -70,7 +71,7 @@ XXX: Describe crashing child process and crashing mother process here too.
 XXX: panic.show
 
 Varnish gives me Guru meditation
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+--------------------------------
 
 First find the relevant log entries in varnishlog. That will probably
 give you a clue. Since varnishlog logs so much data it might be hard
@@ -90,7 +91,7 @@ filtering capabilities and explanation of the various options.
 
 
 Varnish doesn't cache
-~~~~~~~~~~~~~~~~~~~~~
+---------------------
 
 See :ref:`users-guide-increasing_your_hitrate`.
 
diff --git a/doc/sphinx/users-guide/vcl-actions.rst b/doc/sphinx/users-guide/vcl-actions.rst
new file mode 100644
index 0000000..cfb5019
--- /dev/null
+++ b/doc/sphinx/users-guide/vcl-actions.rst
@@ -0,0 +1,34 @@
+actions
+~~~~~~~
+
+The most common actions to return are these:
+
+*pass*
+ When you return pass, the request and subsequent response will be
+ passed to and from the backend server. It won't be cached. pass can
+ be returned from vcl_recv.
+
+*hit_for_pass*
+  Similar to pass, but accessible from vcl_fetch. Unlike pass,
+  hit_for_pass will create a hitforpass object in the cache. This has
+  the side-effect of caching the decision not to cache, which allows
+  would-be uncacheable requests to be passed to the backend at the
+  same time. The same logic is not necessary in vcl_recv because this
+  happens before any potential queueing for an object takes place.
+
+*lookup*
+  When you return lookup from vcl_recv you tell Varnish to deliver
+  content from cache even if the request otherwise indicates that the
+  request should be passed. You can't return lookup from vcl_fetch.
+
+*pipe*
+  Pipe can be returned from vcl_recv as well. Pipe short circuits the
+  client and the backend connections and Varnish will just sit there
+  and shuffle bytes back and forth. Varnish will not look at the data
+  being sent back and forth - so your logs will be incomplete.
+  Beware that with HTTP 1.1 a client can send several requests on the
+  same connection, so you should instruct Varnish to add a
+  "Connection: close" header before actually returning pipe.
+
+*deliver*
+ Deliver the cached object to the client.  Usually returned from vcl_fetch. 
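+
+One way to follow the "Connection: close" advice for pipe above can be
+sketched in VCL like this::
+
+  sub vcl_pipe {
+      # Prevent the client from reusing the piped connection for
+      # further requests that would bypass Varnish.
+      set bereq.http.Connection = "close";
+      return (pipe);
+  }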
diff --git a/doc/sphinx/users-guide/vcl-backends.rst b/doc/sphinx/users-guide/vcl-backends.rst
new file mode 100644
index 0000000..4f721aa
--- /dev/null
+++ b/doc/sphinx/users-guide/vcl-backends.rst
@@ -0,0 +1,197 @@
+.. _users-guide-backend_servers:
+
+Backend servers
+---------------
+
+Varnish has a concept of "backend" or "origin" servers. A backend
+server is the server providing the content Varnish will accelerate.
+
+Our first task is to tell Varnish where it can find its content. Start
+your favorite text editor and open the varnish default configuration
+file. If you installed from source this is
+/usr/local/etc/varnish/default.vcl, if you installed from a package it
+is probably /etc/varnish/default.vcl.
+
+Somewhere near the top there will be a section that looks a bit like this::
+
+	  # backend default {
+	  #     .host = "127.0.0.1";
+	  #     .port = "8080";
+	  # }
+
+We uncomment this bit of text, making it look like this::
+
+          backend default {
+              .host = "127.0.0.1";
+              .port = "8080";
+          }
+
+Now, this piece of configuration defines a backend in Varnish called
+*default*. When Varnish needs to get content from this backend it will
+connect to port 8080 on localhost (127.0.0.1).
+
+Varnish can have several backends defined and you can even join
+several backends together into clusters of backends for load balancing
+purposes.
+
+Multiple backends
+-----------------
+
+At some point you might need Varnish to cache content from several
+servers. You might want Varnish to map all the URLs onto one single
+host, or not. There are lots of options.
+
+Let's say we need to introduce a Java application into our PHP web
+site, and that the Java application should handle URLs beginning with
+/java/.
+
+We manage to get the thing up and running on port 8000. Now, let's
+have a look at default.vcl::
+
+  backend default {
+      .host = "127.0.0.1";
+      .port = "8080";
+  }
+
+We add a new backend::
+
+  backend java {
+      .host = "127.0.0.1";
+      .port = "8000";
+  }
+
+Now we need to tell Varnish where to send the different URLs. Let's look at vcl_recv::
+
+  sub vcl_recv {
+      if (req.url ~ "^/java/") {
+          set req.backend = java;
+      } else {
+          set req.backend = default;
+      }
+  }
+
+It's quite simple, really. Let's stop and think about this for a
+moment. As you can see, you can choose backends based on pretty much
+arbitrary data. You want to send mobile devices to a different
+backend? No problem. ``if (req.http.User-Agent ~ "mobile")`` should
+do the trick.
+
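+As a sketch, assuming you have defined a separate backend called
+``mobile`` (a hypothetical name here), that could look like::
+
+  sub vcl_recv {
+      if (req.http.User-Agent ~ "(?i)mobile") {
+          set req.backend = mobile;
+      }
+  }
+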
+.. _users-guide-advanced_backend_servers-directors:
+
+Directors
+---------
+
+You can also group several backends into a group of backends. These
+groups are called directors. This will give you increased performance
+and resilience. You can define several backends and group them
+together in a director.::
+
+	 backend server1 {
+	     .host = "192.168.0.10";
+	 }
+	 backend server2 {
+	     .host = "192.168.0.11";
+	 }
+
+Now we create the director.::
+
+        director example_director round-robin {
+                {
+                        .backend = server1;
+                }
+                # server2
+                {
+                        .backend = server2;
+                }
+        }
+
+
+This director is a round-robin director. This means the director will
+distribute the incoming requests on a round-robin basis. There is
+also a *random* director which distributes requests in a, you guessed
+it, random fashion.
+
+But what if one of your servers goes down? Can Varnish direct all the
+requests to the healthy server? Sure it can. This is where the Health
+Checks come into play.
+
+.. _users-guide-advanced_backend_servers-health:
+
+Health checks
+-------------
+
+Let's set up a director with two backends and health checks. First
+let's define the backends::
+
+       backend server1 {
+           .host = "server1.example.com";
+           .probe = {
+               .url = "/";
+               .interval = 5s;
+               .timeout = 1s;
+               .window = 5;
+               .threshold = 3;
+           }
+       }
+       backend server2 {
+           .host = "server2.example.com";
+           .probe = {
+               .url = "/";
+               .interval = 5s;
+               .timeout = 1s;
+               .window = 5;
+               .threshold = 3;
+           }
+       }
+
+What's new here is the probe. Varnish will check the health of each
+backend with a probe. The options are:
+
+url
+ What URL Varnish should request.
+
+interval
+ How often we should poll.
+
+timeout
+ The timeout of the probe.
+
+window
+ Varnish will maintain a *sliding window* of the results. Here the
+ window has five checks.
+
+threshold 
+ How many of the .window last polls must be good for the backend to be declared healthy.
+
+initial
+ How many of the probes are considered good when Varnish starts -
+ defaults to the same value as the threshold.
+
+Now we define the director::
+
+  director example_director round-robin {
+        {
+                .backend = server1;
+        }
+        # server2
+        {
+                .backend = server2;
+        }
+  }
+
+You use this director just as you would use any other director or
+backend. Varnish will not send traffic to hosts that are marked as
+unhealthy. Varnish can also serve stale content if all the backends are
+down. See :ref:`users-guide-handling_misbehaving_servers` for more
+information on how to enable this.
+
+Please note that Varnish will keep probes active for all loaded
+VCLs. Varnish will coalesce probes that seem identical - so be careful
+not to change the probe config if you do a lot of VCL
+loading. Unloading the VCL will discard the probes.
+
+For more information on how to do this please see
+:ref:`reference-vcl-director`.
+
diff --git a/doc/sphinx/users-guide/vcl-examples.rst b/doc/sphinx/users-guide/vcl-examples.rst
new file mode 100644
index 0000000..ffa730f
--- /dev/null
+++ b/doc/sphinx/users-guide/vcl-examples.rst
@@ -0,0 +1,65 @@
+Example 1 - manipulating headers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Let's say we want to remove the cookie for all objects in the /images
+directory of our web server::
+
+  sub vcl_recv {
+    if (req.url ~ "^/images") {
+      unset req.http.cookie;
+    }
+  }
+
+Now, when the request is handed to the backend server there will be
+no cookie header. The interesting line is the one with the
+if-statement. It takes the URL from the request object and matches it
+against the regular expression. Note the match operator. If it
+matches, the Cookie: header of the request is unset (deleted).
+
+Example 2 - manipulating beresp
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Here we override the TTL of an object coming from the backend if it
+matches certain criteria::
+
+  sub vcl_fetch {
+     if (req.url ~ "\.(png|gif|jpg)$") {
+       unset beresp.http.set-cookie;
+       set beresp.ttl = 1h;
+    }
+  }
+
+Example 3 - ACLs
+~~~~~~~~~~~~~~~~
+
+You create a named access control list with the *acl* keyword. You can match
+the IP address of the client against an ACL with the match operator.::
+
+  # Who is allowed to purge....
+  acl local {
+      "localhost";
+      "192.168.1.0"/24; /* and everyone on the local network */
+      ! "192.168.1.23"; /* except for the dialin router */
+  }
+  
+  sub vcl_recv {
+    if (req.request == "PURGE") {
+      if (client.ip ~ local) {
+         return(lookup);
+      }
+    } 
+  }
+  
+  sub vcl_hit {
+     if (req.request == "PURGE") {
+       set obj.ttl = 0s;
+       error 200 "Purged.";
+      }
+  }
+
+  sub vcl_miss {
+    if (req.request == "PURGE") {
+      error 404 "Not in cache.";
+    }
+  }
+
diff --git a/doc/sphinx/users-guide/vcl-hashing.rst b/doc/sphinx/users-guide/vcl-hashing.rst
new file mode 100644
index 0000000..10b2920
--- /dev/null
+++ b/doc/sphinx/users-guide/vcl-hashing.rst
@@ -0,0 +1,51 @@
+
+Hashing
+-------
+
+Internally, when Varnish stores content in its store it uses a hash
+key to find the object again. In the default setup this key is
+calculated based on the content of the *Host* header or the IP address
+of the server, and the URL.
+
+Behold the default vcl::
+
+ sub vcl_hash {
+     hash_data(req.url);
+     if (req.http.host) {
+         hash_data(req.http.host);
+     } else {
+         hash_data(server.ip);
+     }
+     return (hash);
+ }
+
+As you can see it first chucks in req.url and then req.http.host if it
+exists. It is worth pointing out that Varnish doesn't lowercase the
+hostname or the URL before hashing it, so in theory having Varnish.org/
+and varnish.org/ would result in different cache entries. Browsers,
+however, tend to lowercase hostnames.
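+
+If you want to be on the safe side you can normalize the case
+yourself before hashing. A minimal sketch, assuming the *std* VMOD is
+available::
+
+  import std;
+
+  sub vcl_recv {
+    # Lowercase the Host header so Varnish.org/ and varnish.org/
+    # end up as the same cache entry.
+    if (req.http.host) {
+      set req.http.host = std.tolower(req.http.host);
+    }
+  }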
+
+You can change what goes into the hash. This way you can make Varnish
+serve up different content to different clients based on arbitrary
+criteria.
+
+Let's say you want to serve pages in different languages to your users
+based on where their IP address is located. You would need some VMOD
+to get a country code and then put it into the hash. It might look
+like this.
+
+In vcl_recv::
+
+  set req.http.X-Country-Code = geoip.lookup(client.ip);
+
+And then add a vcl_hash::
+
+ sub vcl_hash {
+   hash_data(req.http.X-Country-Code);
+ }
+
+As the default VCL will take care of adding the host and URL to the
+hash, we don't have to do anything else. Be careful about calling
+return(hash) here, as this aborts the execution of the default VCL and
+you can end up with Varnish returning data based on an incomplete hash
+key.
diff --git a/doc/sphinx/users-guide/vcl-inline-c.rst b/doc/sphinx/users-guide/vcl-inline-c.rst
new file mode 100644
index 0000000..7c88cf9
--- /dev/null
+++ b/doc/sphinx/users-guide/vcl-inline-c.rst
@@ -0,0 +1,24 @@
+
+
+
+Using In-line C to extend Varnish
+---------------------------------
+
+(Here there be dragons. Big and mean ones.)
+
+You can use *in-line C* to extend Varnish. Please note that you can
+seriously mess up Varnish this way. The C code runs within the Varnish
+Cache process so if your code generates a segfault the cache will crash.
+
+One of the first uses I saw of in-line C was logging to syslog::
+
+        # The include statements must be outside the subroutines.
+        C{
+                #include <syslog.h>
+        }C
+        
+        sub vcl_something {
+                C{
+                        syslog(LOG_INFO, "Something happened at VCL line XX.");
+                }C
+        }
diff --git a/doc/sphinx/users-guide/vcl-intro.rst b/doc/sphinx/users-guide/vcl-intro.rst
new file mode 100644
index 0000000..922310a
--- /dev/null
+++ b/doc/sphinx/users-guide/vcl-intro.rst
@@ -0,0 +1,30 @@
+Varnish Configuration Language - VCL
+-------------------------------------
+
+Varnish has a great configuration system. Most other systems use
+configuration directives, where you basically turn on and off lots of
+switches. Varnish uses a domain specific language called Varnish
+Configuration Language, or VCL for short. Varnish translates this
+configuration into binary code which is then executed when requests
+arrive.
+
+The VCL files are divided into subroutines. The different subroutines
+are executed at different times. One is executed when we get the
+request, another when files are fetched from the backend server.
+
+Varnish will execute these subroutines of code at different stages of
+its work. Because it is code that is executed line by line, precedence
+isn't a problem. At some point you call an action in the subroutine
+and then the execution of the subroutine stops.
+
+If you don't call an action in your subroutine and it reaches the end
+Varnish will execute some built in VCL code. You will see this VCL
+code commented out in default.vcl.
+
+99% of all the changes you'll need to make will be done in two of
+these subroutines: *vcl_recv* and *vcl_fetch*.
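+
+As a minimal sketch of how actions work (the URL prefix /admin is
+just an example), a subroutine can call an action to stop execution,
+or fall through to the built-in VCL::
+
+  sub vcl_recv {
+    if (req.url ~ "^/admin") {
+      # Calling an action ends the execution of the subroutine.
+      return (pass);
+    }
+    # No action called: execution falls through to the built-in VCL.
+  }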
+
+
+.. _users-guide-vcl_fetch_actions:
+
+
diff --git a/doc/sphinx/users-guide/vcl-saint-and-grace.rst b/doc/sphinx/users-guide/vcl-saint-and-grace.rst
new file mode 100644
index 0000000..bb5fd35
--- /dev/null
+++ b/doc/sphinx/users-guide/vcl-saint-and-grace.rst
@@ -0,0 +1,103 @@
+.. _users-guide-handling_misbehaving_servers:
+
+Misbehaving servers
+-------------------
+
+A key feature of Varnish is its ability to shield you from misbehaving
+web- and application servers.
+
+
+
+Grace mode
+~~~~~~~~~~
+
+When several clients are requesting the same page Varnish will send
+one request to the backend and place the others on hold while fetching
+one copy from the back end. In some products this is called request
+coalescing and Varnish does this automatically.
+
+If you are serving thousands of hits per second the queue of waiting
+requests can get huge. There are two potential problems. One is the
+thundering herd problem: suddenly releasing a thousand threads to
+serve content might send the load sky high. The other is that nobody
+likes to wait. To deal with this we can instruct Varnish to keep
+objects in cache beyond their TTL and to serve the waiting requests
+somewhat stale content.
+
+In order to serve stale content we must first have some content to
+serve. To make Varnish keep all objects for 30 minutes beyond their
+TTL, use the following VCL::
+
+  sub vcl_fetch {
+    set beresp.grace = 30m;
+  }
+
+Varnish still won't serve the stale objects. In order to enable
+Varnish to actually serve the stale object we must enable this on the
+request. Let us say that we accept serving a 15-second-old object::
+
+  sub vcl_recv {
+    set req.grace = 15s;
+  }
+
+You might wonder why we should keep the objects in the cache for 30
+minutes if we are only allowed to serve them for 15 seconds past their
+TTL. Well, if you have enabled
+:ref:`users-guide-advanced_backend_servers-health` you can check
+whether the backend is sick, and if it is, serve the stale content for
+a bit longer::
+
+   if (! req.backend.healthy) {
+      set req.grace = 5m;
+   } else {
+      set req.grace = 15s;
+   }
+
+So, to sum up, grace mode solves two problems:
+ * it serves stale content to avoid request pile-up.
+ * it serves stale content if the backend is not healthy.
+
+Saint mode
+~~~~~~~~~~
+
+Sometimes servers get flaky. They start throwing out random
+errors. You can instruct Varnish to try to handle this in a
+more-than-graceful way - enter *Saint mode*. Saint mode enables you to
+discard a certain page from one backend server and either try another
+server or serve stale content from cache. Let's have a look at how
+this can be enabled in VCL::
+
+  sub vcl_fetch {
+    if (beresp.status == 500) { 
+      set beresp.saintmode = 10s;
+      return(restart);
+    }
+    set beresp.grace = 5m;
+  } 
+
+When we set beresp.saintmode to 10 seconds Varnish will not ask *that*
+server for *that* URL for the next 10 seconds - a blacklist, more or
+less. A restart is also performed, so if you have other backends
+capable of serving that content Varnish will try those. When you are
+out of backends Varnish will serve the content from its stale cache.
+
+This can really be a life saver.
+
+Known limitations on grace- and saint mode
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If your request fails while it is being fetched you're thrown into
+vcl_error. vcl_error has access to a rather limited set of data so you
+can't enable saint mode or grace mode here. This will be addressed in a
+future release, but a workaround is available:
+
+* Declare a backend that is always sick.
+* Set a magic marker in vcl_error.
+* Restart the transaction.
+* In vcl_recv, note the magic marker and switch to the backend mentioned.
+* Varnish will now serve stale data if any is available.
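+
+The steps above can be sketched in VCL. The backend name *always_sick*
+and the marker header *X-Saint-Retry* are made-up names for
+illustration - adapt them to your setup::
+
+  # A backend whose health probe is designed to always fail.
+  backend always_sick {
+    .host = "127.0.0.1";
+    .port = "80";
+    .probe = {
+      .url = "/this-probe-always-fails";
+      .interval = 1s;
+      .window = 1;
+      .threshold = 1;
+    }
+  }
+
+  sub vcl_error {
+    if (req.restarts == 0) {
+      # Set the magic marker and restart the transaction.
+      set req.http.X-Saint-Retry = "yes";
+      return (restart);
+    }
+  }
+
+  sub vcl_recv {
+    if (req.http.X-Saint-Retry) {
+      # Note the marker and switch to the always-sick backend;
+      # grace mode can then serve stale data if any is available.
+      set req.backend = always_sick;
+      set req.grace = 5m;
+    }
+  }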
+
+
+God mode
+~~~~~~~~
+Not implemented yet. :-)
+
diff --git a/doc/sphinx/users-guide/vcl-subs.rst b/doc/sphinx/users-guide/vcl-subs.rst
new file mode 100644
index 0000000..d759b2a
--- /dev/null
+++ b/doc/sphinx/users-guide/vcl-subs.rst
@@ -0,0 +1,25 @@
+
+vcl_recv
+~~~~~~~~
+
+vcl_recv (yes, we're skimpy with characters, it's Unix) is called at
+the beginning of a request, after the complete request has been
+received and parsed.  Its purpose is to decide whether or not to serve
+the request, how to do it, and, if applicable, which backend to use.
+
+In vcl_recv you can also alter the request. Typically you can alter
+the cookies and add and remove request headers.
+
+Note that in vcl_recv only the request object, req, is available.
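+
+A small sketch of such alterations (the X- header name is made up)::
+
+  sub vcl_recv {
+    # Drop cookies and tag the request with a header of our own.
+    unset req.http.cookie;
+    set req.http.X-Varnish-Seen = "yes";
+  }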
+
+vcl_fetch
+~~~~~~~~~
+
+vcl_fetch is called *after* a document has been successfully retrieved
+from the backend. Normal tasks here are to alter the response headers,
+trigger ESI processing, or try alternate backend servers in case the
+request failed.
+
+In vcl_fetch you still have the request object, req, available. There
+is also a *backend response*, beresp. beresp will contain the HTTP
+headers from the backend.
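+
+A small sketch of altering the backend response (the header names are
+made up)::
+
+  sub vcl_fetch {
+    # Remove a header coming from the backend and add one of our own.
+    unset beresp.http.X-Powered-By;
+    set beresp.http.X-Hello = "from varnish";
+  }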
diff --git a/doc/sphinx/users-guide/vcl-syntax.rst b/doc/sphinx/users-guide/vcl-syntax.rst
new file mode 100644
index 0000000..c49ef91
--- /dev/null
+++ b/doc/sphinx/users-guide/vcl-syntax.rst
@@ -0,0 +1,45 @@
+VCL Syntax
+----------
+
+VCL has inherited a lot from C and it reads much like simple C or Perl.
+
+Blocks are delimited by curly braces, statements end with semicolons,
+and comments may be written as in C, C++ or Perl according to your own
+preferences.
+
+Note that VCL doesn't contain any loops or jump statements.
+
+
+Strings
+~~~~~~~
+
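+Basic strings in VCL are quoted with double quotes and cannot span
+multiple lines. VCL also has {" ... "} long strings, which may
+contain double quotes. A small sketch (the X- header names are made
+up)::
+
+  sub vcl_recv {
+    set req.http.X-Note = "a basic string";
+    set req.http.X-Long = {"a "long" string may contain quotes"};
+  }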
+
+
+Access control lists (ACLs)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
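+An ACL is created with the *acl* keyword and matched with the ~
+operator. A small sketch (the names are made up)::
+
+  acl purgers {
+    "localhost";
+    "192.168.1.0"/24;
+  }
+
+  sub vcl_recv {
+    if (client.ip ~ purgers) {
+      set req.http.X-Allowed = "yes";
+    }
+  }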
+
+Operators
+~~~~~~~~~
+
+The following operators are available in VCL:
+
+= 
+ Assignment operator.
+
+== 
+ Comparison.
+
+~
+ Match. Can either be used with regular expressions or ACLs.
+
+!
+ Negation.
+
+&&
+ Logical *and*
+
+||
+ Logical *or*
+
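+To illustrate, a condition combining a few of them might look like
+this (the hostname is just an example)::
+
+  sub vcl_recv {
+    if (req.http.host == "www.example.com" && req.url ~ "^/images") {
+      unset req.http.cookie;
+    }
+  }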
diff --git a/doc/sphinx/users-guide/vcl-variables.rst b/doc/sphinx/users-guide/vcl-variables.rst
new file mode 100644
index 0000000..10af0ab
--- /dev/null
+++ b/doc/sphinx/users-guide/vcl-variables.rst
@@ -0,0 +1,23 @@
+
+Requests, responses and objects
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In VCL, there are three important data structures: the request, coming
+from the client; the response, coming from the backend server; and the
+object, stored in cache.
+
+In VCL you should know the following structures.
+
+*req*
+ The request object. When Varnish has received the request the req object is 
+ created and populated. Most of the work you do in vcl_recv you 
+ do on or with the req object.
+
+*beresp*
+ The backend response object. It contains the headers of the object 
+ coming from the backend. Most of the work you do in vcl_fetch you 
+ do on the beresp object.
+
+*obj*
+ The cached object. Mostly a read only object that resides in memory. 
+ obj.ttl is writable, the rest is read only.
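+
+A small sketch touching each of the three (the X- header names are
+made up)::
+
+  sub vcl_recv {
+    set req.http.X-Seen = "yes";        # work on the request object
+  }
+
+  sub vcl_fetch {
+    set beresp.http.X-Fetched = "yes";  # work on the backend response
+  }
+
+  sub vcl_hit {
+    set obj.ttl = 1h;                   # obj.ttl is the writable field
+  }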
diff --git a/doc/sphinx/users-guide/vcl.rst b/doc/sphinx/users-guide/vcl.rst
index 57fb7e0..b36a43f 100644
--- a/doc/sphinx/users-guide/vcl.rst
+++ b/doc/sphinx/users-guide/vcl.rst
@@ -1,200 +1,23 @@
-Varnish Configuration Language - VCL
--------------------------------------
+VCL
+---
 
-Varnish has a great configuration system. Most other systems use
-configuration directives, where you basically turn on and off lots of
-switches. Varnish uses a domain specific language called Varnish
-Configuration Language, or VCL for short. Varnish translates this
-configuration into binary code which is then executed when requests
-arrive.
 
-The VCL files are divided into subroutines. The different subroutines
-are executed at different times. One is executed when we get the
-request, another when files are fetched from the backend server.
+The following chapters cover VCL in detail.
 
-Varnish will execute these subroutines of code at different stages of
-its work. Because it is code it is execute line by line precedence
-isn't a problem. At some point you call an action in this subroutine
-and then the execution of the subroutine stops.
 
-If you don't call an action in your subroutine and it reaches the end
-Varnish will execute some built in VCL code. You will see this VCL
-code commented out in default.vcl.
-
-99% of all the changes you'll need to do will be done in two of these
-subroutines. *vcl_recv* and *vcl_fetch*.
-
-vcl_recv
-~~~~~~~~
-
-vcl_recv (yes, we're skimpy with characters, it's Unix) is called at
-the beginning of a request, after the complete request has been
-received and parsed.  Its purpose is to decide whether or not to serve
-the request, how to do it, and, if applicable, which backend to use.
-
-In vcl_recv you can also alter the request. Typically you can alter
-the cookies and add and remove request headers.
-
-Note that in vcl_recv only the request object, req is available.
-
-vcl_fetch
-~~~~~~~~~
-
-vcl_fetch is called *after* a document has been successfully retrieved
-from the backend. Normal tasks her are to alter the response headers,
-trigger ESI processing, try alternate backend servers in case the
-request failed.
-
-In vcl_fetch you still have the request object, req, available. There
-is also a *backend response*, beresp. beresp will contain the HTTP
-headers from the backend.
-
-.. _users-guide-vcl_fetch_actions:
-
-actions
-~~~~~~~
-
-The most common actions to return are these:
-
-*pass*
- When you return pass the request and subsequent response will be passed to
- and from the backend server. It won't be cached. pass can be returned from
- vcl_recv
-
-*hit_for_pass*
-  Similar to pass, but accessible from vcl_fetch. Unlike pass, hit_for_pass
-  will create a hitforpass object in the cache. This has the side-effect of
-  caching the decision not to cache. This is to allow would-be uncachable
-  requests to be passed to the backend at the same time. The same logic is
-  not necessary in vcl_recv because this happens before any potential
-  queueing for an object takes place.
-
-*lookup*
-  When you return lookup from vcl_recv you tell Varnish to deliver content 
-  from cache even if the request othervise indicates that the request 
-  should be passed. You can't return lookup from vcl_fetch.
-
-*pipe*
-  Pipe can be returned from vcl_recv as well. Pipe short circuits the
-  client and the backend connections and Varnish will just sit there
-  and shuffle bytes back and forth. Varnish will not look at the data being 
-  send back and forth - so your logs will be incomplete. 
-  Beware that with HTTP 1.1 a client can send several requests on the same 
-  connection and so you should instruct Varnish to add a "Connection: close"
-  header before actually returning pipe. 
-
-*deliver*
- Deliver the cached object to the client.  Usually returned from vcl_fetch. 
-
-Requests, responses and objects
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-In VCL, there are three important data structures. The request, coming
-from the client, the response coming from the backend server and the
-object, stored in cache.
-
-In VCL you should know the following structures.
-
-*req*
- The request object. When Varnish has received the request the req object is 
- created and populated. Most of the work you do in vcl_recv you 
- do on or with the req object.
-
-*beresp*
- The backend respons object. It contains the headers of the object 
- comming from the backend. Most of the work you do in vcl_fetch you 
- do on the beresp object.
-
-*obj*
- The cached object. Mostly a read only object that resides in memory. 
- obj.ttl is writable, the rest is read only.
-
-Operators
-~~~~~~~~~
-
-The following operators are available in VCL. See the examples further
-down for, uhm, examples.
-
-= 
- Assignment operator.
-
-== 
- Comparison.
-
-~
- Match. Can either be used with regular expressions or ACLs.
-
-!
- Negation.
-
-&&
- Logical *and*
-
-||
- Logical *or*
-
-Example 1 - manipulating headers
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Lets say we want to remove the cookie for all objects in the /images
-directory of our web server::
-
-  sub vcl_recv {
-    if (req.url ~ "^/images") {
-      unset req.http.cookie;
-    }
-  }
-
-Now, when the request is handled to the backend server there will be
-no cookie header. The interesting line is the one with the
-if-statement. It matches the URL, taken from the request object, and
-matches it against the regular expression. Note the match operator. If
-it matches the Cookie: header of the request is unset (deleted). 
-
-Example 2 - manipulating beresp
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Here we override the TTL of a object comming from the backend if it
-matches certain criteria::
-
-  sub vcl_fetch {
-     if (req.url ~ "\.(png|gif|jpg)$") {
-       unset beresp.http.set-cookie;
-       set beresp.ttl = 1h;
-    }
-  }
-
-Example 3 - ACLs
-~~~~~~~~~~~~~~~~
-
-You create a named access control list with the *acl* keyword. You can match
-the IP address of the client against an ACL with the match operator.::
-
-  # Who is allowed to purge....
-  acl local {
-      "localhost";
-      "192.168.1.0"/24; /* and everyone on the local network */
-      ! "192.168.1.23"; /* except for the dialin router */
-  }
-  
-  sub vcl_recv {
-    if (req.request == "PURGE") {
-      if (client.ip ~ local) {
-         return(lookup);
-      }
-    } 
-  }
-  
-  sub vcl_hit {
-     if (req.request == "PURGE") {
-       set obj.ttl = 0s;
-       error 200 "Purged.";
-      }
-  }
-
-  sub vcl_miss {
-    if (req.request == "PURGE") {
-      error 404 "Not in cache.";
-    }
-  }
+.. toctree::
+   :maxdepth: 2
 
+   vcl-intro
+   vcl-syntax
+   vcl-variables
+   vcl-actions
+   vcl-subs
+   vcl-backends
+   vcl-hashing
+   vcl-saint-and-grace
+   vcl-inline-c
+   vcl-examples
+   websockets
+   devicedetection
+   
\ No newline at end of file


