[4.0] 45637bb Done. Important changes are consistently commented and all comments are marked with the string 'benc' for easy identification. There are some heavy refactorings done that fundamentally change content meaning.

Lasse Karstensen lkarsten at varnish-software.com
Tue Apr 1 15:09:51 CEST 2014


commit 45637bb9988ef7f20e1fab51ca8744c0a62a6651
Author: benc <benc at redpill-linpro.com>
Date:   Sun Mar 16 12:50:38 2014 +0100

    Done. Important changes are consistently commented and all comments are marked with the string 'benc' for easy identification. There are some heavy refactorings done that fundamentally change content meaning.

diff --git a/doc/sphinx/users-guide/command-line.rst b/doc/sphinx/users-guide/command-line.rst
index dc25087..9016d51 100644
--- a/doc/sphinx/users-guide/command-line.rst
+++ b/doc/sphinx/users-guide/command-line.rst
@@ -3,22 +3,22 @@
 Important command line arguments
 --------------------------------
 
-There a two command line arguments you will simply have choose
-values for, what TCP port serve HTTP from and where the backend
-server can be contacted.
+There are two command line arguments you have to set when starting Varnish:
+* what TCP port to serve HTTP from, and
+* where the backend server can be contacted.
 
-If you run Varnish from a package for your operating system,
+If you have installed Varnish from a package provided for your operating system,
 you will find the startup options here:
 
-* Debian, Ubuntu: /etc/default/varnish
-* Red Hat, Centos: /etc/sysconfig/varnish
-* FreeBSD: /etc/rc.conf (See also: /usr/local/etc/rc.d/varnishd)
+* Debian, Ubuntu: `/etc/default/varnish`
+* Red Hat, Centos: `/etc/sysconfig/varnish`
+* FreeBSD: `/etc/rc.conf` (See also: `/usr/local/etc/rc.d/varnishd`)
 
 
--a *listen_address*
+'-a' *listen_address*
 ^^^^^^^^^^^^^^^^^^^^^
 
-What address should Varnish listen to, and service HTTP requests from.
+The '-a' argument defines what address Varnish should listen to, and service HTTP requests from.
 
 You will most likely want to set this to ":80" which is the Well
 Known Port for HTTP.
@@ -26,7 +26,7 @@ Known Port for HTTP.
 You can specify multiple addresses separated by a comma, and you
 can use numeric or host/service names if you like, Varnish will try
 to open and service as many of them as possible, but if none of them
-can be opened, varnishd will not start.
+can be opened, `varnishd` will not start.
 
 Here are some examples::
 
@@ -36,16 +36,19 @@ Here are some examples::
 	-a '[fe80::1]:80'
 	-a '0.0.0.0:8080,[::]:8081'
 
-If your webserver runs on the same computer, you will have to move
+.. XXX:brief explanation of some of the more complex examples perhaps? benc
+
+If your webserver runs on the same machine, you will have to move
 it to another port number first.
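+
+As an illustration (the port numbers here are only examples), a minimal
+invocation serving HTTP on port 80, with the web server moved to
+port 8080 on the same machine, could look like::
+
+	varnishd -a :80 -b localhost:8080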
 
--f *VCL-file* or -b *backend*
+'-f' *VCL-file* or '-b' *backend*
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
+
 Varnish needs to know where to find the HTTP server it is caching for.
-You can either specify it with -b, or you can put it in your own VCL file.
+You can either specify it with the '-b' argument, or you can put it in your own VCL file, specified with the '-f' argument.
 
-Using -b is a quick way to get started::
+Using '-b' is a quick way to get started::
 
 	-b localhost:81
 	-b thatotherserver.example.com:80
@@ -54,31 +57,34 @@ Using -b is a quick way to get started::
 Notice that if you specify a name, it can at most resolve to one IPv4
 *and* one IPv6 address.
 
-If you go with -f, you can start with a VCL file containing just::
+If you go with '-f', you can start with a VCL file containing just::
 
 	backend default {
 		.host = "localhost:81";
 	}
 
-which is exactly what -b does.
+which is exactly what '-b' does.
+
+.. XXX:What happens if I start with -b and then have the backend defined in my VCL? benc
 
 In both cases the built-in VCL code is appended.
 
 Other options
 ^^^^^^^^^^^^^
 
-Varnish has more command line arguments you can and maybe want
-to tweak, but to get started, the above will be sufficient.
+Varnish comes with an abundance of useful command line arguments. We recommend that you study them, though not necessarily use them all; to get started, the above will be sufficient.
 
 By default Varnish will use 100 megabytes of malloc(3) storage
 for caching objects, if you want to cache more than that, you
 should look at the '-s' argument.
 
+.. XXX: 3? benc
+
 If you run a really big site, you may want to tune the number of
 worker threads and other parameters with the '-p' argument,
 but we generally advice not to do that unless you need to.
 
-Before you go into production, you may also want to re-visit the
+Before you go into production, you may also want to revisit the
 chapter
 :ref:`run_security` to see if you need to partition administrative
 privileges.
diff --git a/doc/sphinx/users-guide/compression.rst b/doc/sphinx/users-guide/compression.rst
index d611807..fc84f9f 100644
--- a/doc/sphinx/users-guide/compression.rst
+++ b/doc/sphinx/users-guide/compression.rst
@@ -3,31 +3,33 @@
 Compression
 -----------
 
-New in Varnish 3.0 was native support for compression, using gzip
+In Varnish 3.0 we introduced native support for compression, using gzip
 encoding. *Before* 3.0, Varnish would never compress objects. 
 
-In Varnish 3.0 compression defaults to "on", meaning that it tries to
+In Varnish 4.0 compression defaults to "on", meaning that it tries to
 be smart and do the sensible thing.
 
+.. XXX:Heavy refactoring to Varnish 4 above. benc
+
 If you don't want Varnish tampering with the encoding you can disable
-compression all together by setting the parameter http_gzip_support to
-*false*. Please see man :ref:`ref-varnishd` for details.
+compression altogether by setting the parameter 'http_gzip_support' to
+false. Please see man :ref:`ref-varnishd` for details.
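+
+As an illustration, gzip support could then be disabled at startup with
+something like::
+
+	varnishd ... -p http_gzip_support=off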
 
 
 Default behaviour
 ~~~~~~~~~~~~~~~~~
 
-The default for Varnish is to check if the client supports our
+The default behaviour for Varnish is to check if the client supports our
 compression scheme (gzip) and if it does it will override the
-Accept-Encoding header and set it to "gzip".
+'Accept-Encoding' header and set it to "gzip".
 
-When Varnish then issues a backend request the Accept-Encoding will
+When Varnish then issues a backend request the 'Accept-Encoding' will
 then only consist of "gzip". If the server responds with gzip'ed
 content it will be stored in memory in its compressed form. If the
-backend sends content in clear text it will be stored like that.
+backend sends content in clear text it will be stored in clear text.
 
 You can make Varnish compress content before storing it in cache in
-vcl_fetch by setting do_gzip to true, like this::
+`vcl_backend_response` by setting 'do_gzip' to true, like this::
 
    sub vcl_backend_response {
         if (beresp.http.content-type ~ "text") {
@@ -38,9 +40,7 @@ vcl_fetch by setting do_gzip to true, like this::
 Please make sure that you don't try to compress content that is
 uncompressable, like jpgs, gifs and mp3. You'll only waste CPU
 cycles. You can also uncompress objects before storing it in memory by
-setting do_gunzip to *true* but I have no idea why anybody would want
-to do that.
-
+setting 'do_gunzip' to true but that will usually not be the most sensible thing to do.
+
 Generally, Varnish doesn't use much CPU so it might make more sense to
 have Varnish spend CPU cycles compressing content than doing it in
 your web- or application servers, which are more likely to be
@@ -49,7 +49,7 @@ CPU-bound.
 GZIP and ESI
 ~~~~~~~~~~~~
 
-If you are using Edge Side Includes you'll be happy to note that ESI
+If you are using Edge Side Includes (ESI) you'll be happy to note that ESI
 and GZIP work together really well. Varnish will magically decompress
 the content to do the ESI-processing, then recompress it for efficient
 storage and delivery. 
@@ -58,10 +58,10 @@ storage and delivery.
 Clients that don't support gzip
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-If the client does not support gzip the Accept-Encoding header is left
-alone and we'll end up serving whatever we get from the backend
-server. Remember that the Backend might tell Varnish to *Vary* on the
-Accept-Encoding.
+If the client does not support gzip the 'Accept-Encoding' header is left
+alone and we'll end up serving whatever we get from the backend
+server. Remember that the backend might tell Varnish to *Vary* on the
+'Accept-Encoding'.
 
 If the client does not support gzip but we've already got a compressed
 version of the page in memory Varnish will automatically decompress
@@ -71,5 +71,5 @@ the page while delivering it.
 A random outburst
 ~~~~~~~~~~~~~~~~~
 
-Poul has written :ref:`phk_gzip` which talks abit more about how the
+Poul-Henning Kamp has written :ref:`phk_gzip` which talks a bit more about how the
 implementation works. 
diff --git a/doc/sphinx/users-guide/devicedetection.rst b/doc/sphinx/users-guide/devicedetection.rst
index 5750918..5067c4c 100644
--- a/doc/sphinx/users-guide/devicedetection.rst
+++ b/doc/sphinx/users-guide/devicedetection.rst
@@ -10,14 +10,13 @@ Use cases for this are for example to send size reduced files to mobile
 clients with small screens and on high latency networks, or to 
 provide a streaming video codec that the client understands.
 
-There are a couple of strategies on what to do with such clients:
-1) Redirect them to another URL.
-2) Use a different backend for the special clients.
-3) Change the backend requests so the usual backend sends tailored content.
+There are a couple of typical strategies to use for this type of scenario:
+1) Redirect to another URL.
+2) Use a different backend for the special client.
+3) Change the backend request so that the backend sends tailored content.
 
-To make the examples easier to understand, it is assumed in this text 
-that all the req.http.X-UA-Device header is present and unique per client class
-that content is to be served to. 
+To make the strategies easier to understand, we assume in this text
+that the `req.http.X-UA-Device` header is present and unique per client class.
 
 Setting this header can be as simple as::
 
@@ -28,34 +27,34 @@ Setting this header can be as simple as::
    }
 
 There are different commercial and free offerings in doing grouping and
-identifying clients in further detail than this. For a basic and community
+identifying clients in further detail. For a basic and community
 based regular expression set, see
-https://github.com/varnish/varnish-devicedetect/ .
+https://github.com/varnish/varnish-devicedetect/.
 
 
 Serve the different content on the same URL
 -------------------------------------------
 
 The tricks involved are: 
-1. Detect the client (pretty simple, just include devicedetect.vcl and call
-it)
-2. Figure out how to signal the backend what client class this is. This
+1. Detect the client (pretty simple, just include `devicedetect.vcl` and call
+it).
+2. Figure out how to signal the client class to the backend. This
 includes for example setting a header, changing a header or even changing the
 backend request URL.
-3. Modify any response from the backend to add missing Vary headers, so
+3. Modify any response from the backend to add missing 'Vary' headers, so
 Varnish' internal handling of this kicks in.
 4. Modify output sent to the client so any caches outside our control don't
 serve the wrong content.
 
-All this while still making sure that we only get 1 cached object per URL per
+All this needs to be done while still making sure that we only get one cached object per URL per
 device class.
 
 
 Example 1: Send HTTP header to backend
 ''''''''''''''''''''''''''''''''''''''
 
-The basic case is that Varnish adds the X-UA-Device HTTP header on the backend
-requests, and the backend mentions in the response Vary header that the content
+The basic case is that Varnish adds the 'X-UA-Device' HTTP header on the backend
+requests, and the backend mentions in the response 'Vary' header that the content
 is dependant on this header. 
 
 Everything works out of the box from Varnish' perspective.
@@ -103,13 +102,13 @@ Example 2: Normalize the User-Agent string
 ''''''''''''''''''''''''''''''''''''''''''
 
 Another way of signaling the device type is to override or normalize the
-User-Agent header sent to the backend.
+'User-Agent' header sent to the backend.
 
-For example
+For example::
 
     User-Agent: Mozilla/5.0 (Linux; U; Android 2.2; nb-no; HTC Desire Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1
 
-becomes:
+becomes::
 
     User-Agent: mobile-android
 
@@ -218,7 +217,7 @@ Different backend for mobile clients
 ------------------------------------
 
 If you have a different backend that serves pages for mobile clients, or any
-special needs in VCL, you can use the X-UA-Device header like this::
+special needs in VCL, you can use the 'X-UA-Device' header like this::
 
     backend mobile {
         .host = "10.0.0.1";
diff --git a/doc/sphinx/users-guide/esi.rst b/doc/sphinx/users-guide/esi.rst
index 9035224..45f8814 100644
--- a/doc/sphinx/users-guide/esi.rst
+++ b/doc/sphinx/users-guide/esi.rst
@@ -3,19 +3,23 @@
 Content composition with Edge Side Includes
 -------------------------------------------
 
-Varnish can cache create web pages by putting different pages
-together. These *fragments* can have individual cache policies. If you
-have a web site with a list showing the 5 most popular articles on
-your site, this list can probably be cached as a fragment and included
-in all the other pages. Used properly it can dramatically increase
+Varnish can create cached web pages by assembling different pages, called `fragments`,
+together into one page. These `fragments` can have individual cache policies. If you
+have a web site with a list showing the five most popular articles on
+your site, this list can probably be cached as a `fragment` and included
+in all the other pages.
+
+.. XXX:What other pages? benc
+
+Used properly this strategy can dramatically increase
 your hit rate and reduce the load on your servers. 
 
-In Varnish we've only implemented a small subset of ESI. As of 2.1 we
-have three ESI statements:
+In Varnish we have so far only implemented a small subset of ESI. As of version 2.1 we
+have three ESI statements::
 
- * esi:include 
- * esi:remove
- * <!--esi ...-->
+ esi:include 
+ esi:remove
+ <!--esi ...-->
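+
+As a hypothetical illustration, a page could pull in the popular-articles
+`fragment` mentioned above with an `esi:include` statement (the URL here
+is made up)::
+
+ <esi:include src="/popular_articles.html"/>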
 
 Content substitution based on variables and cookies is not implemented
 but is on the roadmap. At least if you look at the roadmap from a
@@ -58,13 +62,13 @@ For ESI to work you need to activate ESI processing in VCL, like this::
 
 Example: esi:remove and <!--esi ... -->
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The <esi:remove> and <!--esi ... --> constructs can be used to present
+The `<esi:remove>` and `<!--esi ... -->` constructs can be used to present
 appropriate content whether or not ESI is available, for example you can
 include content when ESI is available or link to it when it is not.
-ESI processors will remove the start ("<!--esi") and end ("-->") when
+ESI processors will remove the start ("<!--esi") and the end ("-->") when
 the page is processed, while still processing the contents. If the page
-is not processed, it will remain, becoming an HTML/XML comment tag.
-ESI processors will remove <esi:remove> tags and all content contained
+is not processed, it will remain intact, becoming an HTML/XML comment tag.
+ESI processors will remove `<esi:remove>` tags and all content contained
 in them, allowing you to only render the content when the page is not
 being ESI-processed.
 For example::
@@ -77,10 +81,10 @@ For example::
   <esi:include src="http://example.com/LICENSE" />
   -->
 
-Doing ESI on JSON and other non-XMLish content
+Doing ESI on JSON and other non-XML'ish content
 -----------------------------------------------
 
 Please note that Varnish will peek at the included content. If it
 doesn't start with a "<" Varnish assumes you didn't really mean to
 include it and disregard it. You can alter this behaviour by setting
-the esi_syntax parameter (see ref:`ref-varnishd`).
+the 'esi_syntax' parameter (see ref:`ref-varnishd`).
diff --git a/doc/sphinx/users-guide/increasing-your-hitrate.rst b/doc/sphinx/users-guide/increasing-your-hitrate.rst
index 3e1adc5..2f22e3c 100644
--- a/doc/sphinx/users-guide/increasing-your-hitrate.rst
+++ b/doc/sphinx/users-guide/increasing-your-hitrate.rst
@@ -7,37 +7,37 @@ Now that Varnish is up and running, and you can access your web
 application through Varnish. Unless your application is specifically
 written to work behind a web accelerator you'll probably need to do
 some changes to either the configuration or the application in order
-to get a high hit rate in Varnish.
+to get a high hitrate in Varnish.
 
 Varnish will not cache your data unless it's absolutely sure it is
 safe to do so. So, for you to understand how Varnish decides if and
-how to cache a page, I'll guide you through a couple of tools that you
-will find useful.
+how to cache a page, we'll guide you through a couple of tools that you
+should find useful to understand what is happening in your Varnish setup.
 
-Note that you need a tool to see what HTTP headers fly between you and
-the web server. On the Varnish server, the easiest is to use
-varnishlog and varnishtop but sometimes a client-side tool makes
-sense. Here are the ones I use.
+Note that you need a tool to see the HTTP headers that fly between Varnish and
+the backend. On the Varnish server, the easiest way to do this is to use
+`varnishlog` and `varnishtop` but sometimes a client-side tool makes
+sense. Here are the ones we commonly use.
 
 Tool: varnishtop
 ~~~~~~~~~~~~~~~~
 
 You can use varnishtop to identify what URLs are hitting the backend
 the most. ``varnishtop -i txurl`` is an essential command, showing you
-the top txurl requests Varnish is sending towards the backend. You can
-see some other examples of varnishtop usage in
+the top `txurl` requests Varnish is sending to the backend. You can
+see some other examples of `varnishtop` usage in
 :ref:`users-guide-statistics`.
 
 
 Tool: varnishlog
 ~~~~~~~~~~~~~~~~
 
-When you have identified the an URL which is frequently sent to the
-backend you can use varnishlog to have a look at the request.
+When you have identified a URL which is frequently sent to the
+backend you can use `varnishlog` to have a look at the request.
 ``varnishlog -c -m 'RxURL:^/foo/bar'`` will show you the requests
-coming from the client (-c) matching /foo/bar.
+coming from the client ('-c') matching `/foo/bar`.
 
-For more information on how varnishlog works please see
+For more information on how `varnishlog` works please see
 :ref:`users-guide-logging` or man :ref:`ref-varnishlog`.
 
 For extended diagnostics headers, see
@@ -47,9 +47,9 @@ http://www.varnish-cache.org/trac/wiki/VCLExampleHitMissHeader
 Tool: lwp-request
 ~~~~~~~~~~~~~~~~~
 
-lwp-request is part of The World-Wide Web library for Perl. It's a
-couple of really basic programs that can execute an HTTP request and
-give you the result. I mostly use two programs, GET and HEAD.
+`lwp-request` is a tool that is part of The World-Wide Web library for Perl. It's a
+couple of really basic programs that can execute an HTTP request and
+show you the result. We mostly use the two programs, ``GET`` and ``HEAD``.
 
 vg.no was the first site to use Varnish and the people running Varnish
 there are quite clueful. So it's interesting to look at their HTTP
@@ -72,26 +72,28 @@ Headers. Let's send a GET request for their home page::
   X-VG-WebCache: joanie
   X-VG-WebServer: leon
 
-OK. Let me explain what it does. GET usually sends off HTTP 0.9
-requests, which lack the Host header. So I add a Host header with the
--H option. -U print request headers, -s prints response status, -e
-prints response headers and -d discards the actual content. We don't
+OK. Let's look at what ``GET`` does. ``GET`` usually sends off HTTP 0.9
+requests, which lack the 'Host' header. So we add a 'Host' header with the
+'-H' option. '-U' prints request headers, '-s' prints response status, '-e'
+prints response headers and '-d' discards the actual content. We don't
 really care about the content, only the headers.
 
 As you can see, VG adds quite a bit of information in their
-headers. Some of the headers, like the X-Rick-Would-Never are specific
+headers. Some of the headers, like 'X-Rick-Would-Never', are specific
 to vg.no and their somewhat odd sense of humour. Others, like the
-X-VG-Webcache are for debugging purposes. 
+'X-VG-Webcache' are for debugging purposes. 
 
 So, to check whether a site sets cookies for a specific URL, just do::
 
   GET -Used http://example.com/ |grep ^Set-Cookie
 
+.. XXX:Missing explanation and sample for HEAD here. benc
+
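+As a small illustration (the URL is just an example), ``HEAD`` sends a
+HEAD request and prints the response status and headers::
+
+  HEAD http://example.com/
+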
 Tool: Live HTTP Headers
 ~~~~~~~~~~~~~~~~~~~~~~~
 
-There is also a plugin for Firefox. *Live HTTP Headers* can show you
-what headers are being sent and recieved. Live HTTP Headers can be
+There is also a plugin for Firefox called `Live HTTP Headers`. This plugin can show you
+what headers are being sent and received. `Live HTTP Headers` can be
 found at https://addons.mozilla.org/en-US/firefox/addon/3829/ or by
 googling "Live HTTP Headers".
 
@@ -102,14 +104,14 @@ The role of HTTP Headers
 Along with each HTTP request and response comes a bunch of headers
 carrying metadata. Varnish will look at these headers to determine if
 it is appropriate to cache the contents and how long Varnish can keep
-the content.
+the content cached.
 
-Please note that when considering these headers Varnish actually
+Please note that when considering these headers, Varnish actually
 considers itself *part of* the actual webserver. The rationale being
 that both are under your control. 
 
 The term *surrogate origin cache* is not really well defined by the
-IETF so RFC 2616 so the various ways Varnish works might differ from
+IETF or RFC 2616, so the various ways Varnish works might differ from
 your expectations.
 
 Let's take a look at the important headers you should be aware of:
@@ -119,8 +121,8 @@ Let's take a look at the important headers you should be aware of:
 Cookies
 -------
 
-Varnish will, in the default configuration, not cache a object coming
-from the backend with a Set-Cookie header present. Also, if the client
+Varnish will, in the default configuration, not cache an object coming
+from the backend with a 'Set-Cookie' header present. Also, if the client
 sends a Cookie header, Varnish will bypass the cache and go directly to
 the backend.
 
@@ -132,10 +134,10 @@ interest to the server.
 Cookies from the client
 ~~~~~~~~~~~~~~~~~~~~~~~
 
-For a lot of web application it makes sense to completely disregard the
+For a lot of web applications it makes sense to completely disregard the
 cookies unless you are accessing a special part of the web site. This
-VCL snippet in vcl_recv will disregard cookies unless you are
-accessing /admin/::
+VCL snippet in `vcl_recv` will disregard cookies unless you are
+accessing `/admin/`::
 
   if ( !( req.url ~ ^/admin/) ) {
     unset req.http.Cookie;
@@ -146,15 +148,15 @@ like removing one out of several cookies, things get
 difficult. Unfortunately Varnish doesn't have good tools for
 manipulating the Cookies. We have to use regular expressions to do the
 work. If you are familiar with regular expressions you'll understand
-whats going on. If you don't I suggest you either pick up a book on
-the subject, read through the *pcrepattern* man page or read through
+what's going on. If you aren't, we recommend that you either pick up a book on
+the subject, read through the *pcrepattern* man page, or read through
 one of many online guides.
 
-Let me show you what Varnish Software uses. We use some cookies for
+Let's use the Varnish Software (VS) website as an example here. Simplified, the setup VS uses can be described as a Drupal-based backend with a Varnish cache in front. VS uses some cookies for
 Google Analytics tracking and similar tools. The cookies are all set
 and used by Javascript. Varnish and Drupal doesn't need to see those
 cookies and since Varnish will cease caching of pages when the client
-sends cookies we will discard these unnecessary cookies in VCL. 
+sends cookies Varnish will discard these unnecessary cookies in VCL. 
 
 In the following VCL we discard all cookies that start with a
 underscore::
@@ -164,8 +166,8 @@ underscore::
   // Remove a ";" prefix, if present.
   set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
 
-Let me show you an example where we remove everything except the
-cookies named COOKIE1 and COOKIE2 and you can marvel at it::
+Let's look at an example where we remove everything except the
+cookies named "COOKIE1" and "COOKIE2" and you can marvel at the "beauty" of it::
 
   sub vcl_recv {
     if (req.http.Cookie) {
@@ -181,10 +183,12 @@ cookies named COOKIE1 and COOKIE2 and you can marvel at it::
     }
   }
 
-A somewhat simpler example that can accomplish almost the same can be
-found below. Instead of filtering out the other cookies it picks out
-the one cookie that is needed, copies it to another header and then
-copies it back, deleting the original cookie header.::
+A somewhat simpler example that can accomplish almost the same functionality can be
+found below. Instead of filtering out "other" cookies it picks out
+"the one" cookie that is needed, copies it to another header and then
+copies it back to the request, deleting the original cookie header.
+
+.. XXX:Verify correctness of request above! benc
+
+::
 
   sub vcl_recv {
          # save the original cookie header so we can mangle it
@@ -200,30 +204,31 @@ copies it back, deleting the original cookie header.::
 There are other scary examples of what can be done in VCL in the
 Varnish Cache Wiki.
 
+.. XXX:Missing link here.
+
 
 Cookies coming from the backend
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-If your backend server sets a cookie using the Set-Cookie header
-Varnish will not cache the page in the default configuration.  A
-hit-for-pass object (see :ref:`user-guide-vcl_actions`) is created.
+If your backend server sets a cookie using the 'Set-Cookie' header
+Varnish will not cache the page when using the default configuration. A
+`hit-for-pass` object (see :ref:`user-guide-vcl_actions`) is created.
 So, if the backend server acts silly and sets unwanted cookies just unset
-the Set-Cookie header and all should be fine. 
-
+the 'Set-Cookie' header and all should be fine. 
 
 
 Cache-Control
 ~~~~~~~~~~~~~
 
-The Cache-Control instructs caches how to handle the content. Varnish
+The 'Cache-Control' header instructs caches how to handle the content. Varnish
 cares about the *max-age* parameter and uses it to calculate the TTL
 for an object. 
 
-"Cache-Control: nocache" is ignored but if you need this you can
+``Cache-Control: nocache`` is ignored but if you need this you can
 easily add support for it.
 
-So make sure you issue a Cache-Control header with a max-age
-header. You can have a look at what Varnish Software's drupal server
+So make sure you issue a 'Cache-Control' header with a max-age
+parameter. You can have a look at what Varnish Software's Drupal server
 issues::
 
   $ GET -Used http://www.varnish-software.com/|grep ^Cache-Control
@@ -232,18 +237,18 @@ issues::
 Age
 ~~~
 
-Varnish adds an Age header to indicate how long the object has been
-kept inside Varnish. You can grep out Age from varnishlog like this::
+Varnish adds an 'Age' header to indicate how long the object has been
+kept inside Varnish. You can grep out 'Age' from `varnishlog` like this::
 
   varnishlog -i TxHeader -I ^Age
 
 Pragma
 ~~~~~~
 
-An HTTP 1.0 server might send "Pragma: nocache". Varnish ignores this
+An HTTP 1.0 server might send the header ``Pragma: nocache``. Varnish ignores this
 header. You could easily add support for this header in VCL.
 
-In vcl_backend_response::
+In `vcl_backend_response`::
 
   if (beresp.http.Pragma ~ "nocache") {
         set beresp.uncacheable = true;
@@ -253,7 +258,7 @@ In vcl_backend_response::
 Authorization
 ~~~~~~~~~~~~~
 
-If Varnish sees an Authorization header it will pass the request. If
+If Varnish sees an 'Authorization' header it will pass the request. If
 this is not what you want you can unset the header.
 
 Overriding the time-to-live (ttl)
@@ -264,7 +269,7 @@ setup, be easier to override the ttl in Varnish than to fix your
 somewhat cumbersome backend. 
 
 You need VCL to identify the objects you want and then you set the
-beresp.ttl to whatever you want::
+'beresp.ttl' to whatever you want::
 
   sub vcl_backend_response {
       if (req.url ~ "^/legacy_broken_cms/") {
@@ -272,13 +277,13 @@ beresp.ttl to whatever you want::
       }
   }
 
-The example will set the TTL to 5 days for the old legacy stuff on
+This example will set the TTL to 5 days for the old legacy stuff on
 your site.
 
 Forcing caching for certain requests and certain responses
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Since you still have this cumbersome backend that isn't very friendly
+Since you still might have this cumbersome backend that isn't very friendly
 to work with you might want to override more stuff in Varnish. We
 recommend that you rely as much as you can on the default caching
 rules. It is perfectly easy to force Varnish to lookup an object in
@@ -291,7 +296,8 @@ Normalizing your namespace
 Some sites are accessed via lots of
 hostnames. http://www.varnish-software.com/,
 http://varnish-software.com/ and http://varnishsoftware.com/ all point
-at the same site. Since Varnish doesn't know they are different,
+at the same site. Since Varnish doesn't know they are the same,
 Varnish will cache different versions of every page for every
 hostname. You can mitigate this in your web server configuration by
 setting up redirects or by using the following VCL::
@@ -310,33 +316,33 @@ HTTP Vary
 misunderstood HTTP header.*
 
 A lot of the response headers tell the client something about the HTTP
-object being delivered. Clients can request different variants a an
+object being delivered. Clients can request different variants of an
 HTTP object, based on their preference. Their preferences might cover
 stuff like encoding or language. When a client prefers UK English this
-is indicated through "Accept-Language: en-uk". Caches need to keep
+is indicated through ``Accept-Language: en-uk``. Caches need to keep
 these different variants apart and this is done through the HTTP
-response header "Vary".
+response header 'Vary'.
 
-When a backend server issues a "Vary: Accept-Language" it tells
+When a backend server issues a ``Vary: Accept-Language`` it tells
 Varnish that it needs to cache a separate version for every different
 Accept-Language that is coming from the clients.
 
 If two clients say they accept the languages "en-us, en-uk" and "da,
 de" respectively, Varnish will cache and serve two different versions
 of the page if the backend indicated that Varnish needs to vary on the
-Accept-Language header.
+'Accept-Language' header.
 
-Please note that the headers that Vary refer to need to match
+Please note that the headers that 'Vary' refer to need to match
 *exactly* for there to be a match. So Varnish will keep two copies of
 a page if one of them was created for "en-us, en-uk" and the other for
-"en-us,en-uk". Just the lack of space will force Varnish to cache
+"en-us,en-uk". Just the lack of a whitespace will force Varnish to cache
 another version.
 
-To achieve a high hitrate whilst using Vary is there therefor crucial
+To achieve a high hitrate whilst using Vary it is therefore crucial
 to normalize the headers the backends varies on. Remember, just a
-difference in case can force different cache entries.
+difference in casing can force different cache entries.
 
-The following VCL code will normalize the Accept-Language headers, to
+The following VCL code will normalize the 'Accept-Language' headers, to
 one of either "en","de" or "fr"::
 
     if (req.http.Accept-Language) {
@@ -353,28 +359,28 @@ one of either "en","de" or "fr"::
         }
     }
 
-The code sets the Accept-Encoding header from the client to either
+The code sets the 'Accept-Encoding' header from the client to either
 gzip or deflate, with a preference for gzip.
 
 Vary parse errors
 ~~~~~~~~~~~~~~~~~
 
-Varnish will return a 503 internal server error page when it fails to
-parse the Vary server header, or if any of the client headers listed
-in the Vary header exceeds the limit of 65k characters. An SLT_Error
+Varnish will return a "503 Service Unavailable" error page when it fails to
+parse the 'Vary' header, or if any of the client headers listed
+in the Vary header exceeds the limit of 65k characters. An 'SLT_Error'
 log entry is added in these cases.
 
 Pitfall - Vary: User-Agent
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Some applications or application servers send *Vary: User-Agent* along
+Some applications or application servers send ``Vary: User-Agent`` along
 with their content. This instructs Varnish to cache a separate copy
-for every variation of User-Agent there is. There are plenty. Even a
+for every variation of 'User-Agent' there is, and there are plenty. Even a
 single patchlevel of the same browser will generate at least 10
-different User-Agent headers based just on what operating system they
+different 'User-Agent' headers based just on what operating system they
 are running. 
 
-So if you *really* need to Vary based on User-Agent be sure to
+So if you *really* need to vary based on 'User-Agent' be sure to
 normalize the header or your hit rate will suffer badly. Use the above
 code as a template.
 
diff --git a/doc/sphinx/users-guide/intro.rst b/doc/sphinx/users-guide/intro.rst
index 8b31b32..f9c5587 100644
--- a/doc/sphinx/users-guide/intro.rst
+++ b/doc/sphinx/users-guide/intro.rst
@@ -3,7 +3,7 @@
 The Big Varnish Picture
 =======================
 
-In this section we will cover the questions:
+In this section we will answer the following questions:
 - What is in this package called "Varnish"?
 - what are all the different bits and pieces named? 
 - Will you need a hex-wrench for assembly?
diff --git a/doc/sphinx/users-guide/operation-logging.rst b/doc/sphinx/users-guide/operation-logging.rst
index 547a987..4bfdca0 100644
--- a/doc/sphinx/users-guide/operation-logging.rst
+++ b/doc/sphinx/users-guide/operation-logging.rst
@@ -3,8 +3,8 @@
 Logging in Varnish
 ------------------
 
-One of the really nice features in Varnish is how logging
-works. Instead of logging to normal log file Varnish logs to a shared
+One of the really nice features in Varnish is the way logging
+works. Instead of logging to a normal log file Varnish logs to a shared
 memory segment, called the VSL - the Varnish Shared Log. When the end
 of the segment is reached we start over, overwriting old data. 
 
@@ -13,14 +13,14 @@ require disk space. Besides it gives you much, much more information
 when you need it.
 
 The flip side is that if you forget to have a program actually write the
-logs to disk they will disappear.
+logs to disk, they will be overwritten.
 
-varnishlog is one of the programs you can use to look at what Varnish
-is logging. Varnishlog gives you the raw logs, everything that is
-written to the logs. There are other clients as well, we'll show you
+`varnishlog` is one of the programs you can use to look at what Varnish
+is logging. `varnishlog` gives you the raw logs, everything that is
+written to the logs. There are other clients that can access the logs as well; we'll show you
 these later.
 
-In the terminal window you started Varnish now type *varnishlog* and
+In the terminal window where you started Varnish, type ``varnishlog`` and
 press enter.
 
 You'll see lines like these scrolling slowly by.::
@@ -32,7 +32,10 @@ These is the Varnish master process checking up on the caching process
 to see that everything is OK.
 
 Now go to the browser and reload the page displaying your web
-app. You'll see lines like these.::
+app.
+
+.. XXX:Doesn't this require a setup of a running varnishd and a web application being cached? benc
+
+You'll see lines like these.::
 
    11 SessionOpen  c 127.0.0.1 58912 0.0.0.0:8080
    11 ReqStart     c 127.0.0.1 58912 595005213
@@ -42,29 +45,32 @@ app. You'll see lines like these.::
    11 RxHeader     c Host: localhost:8080
    11 RxHeader     c Connection: keep-alive
 
+
 The first column is an arbitrary number, it identifies the
 session. Lines with the same number are coming from the same session
 and are being handled by the same thread. The second column is the
 *tag* of the log message. All log entries are tagged with a tag
 indicating what sort of activity is being logged. Tags starting with
-Rx indicate Varnish is recieving data and Tx indicates sending data.
+'Rx' indicate Varnish is receiving data and 'Tx' indicates sending data.
 
 The third column tells us whether this is data coming or going to
-the client (c) or to/from the backend (b). The forth column is the
+the client ('c') or to/from the backend ('b'). The fourth column is the
 data being logged.
 
-Now, you can filter quite a bit with varnishlog. The basic option you
+Now, you can filter quite a bit with `varnishlog`. The basic options we think you
 want to know are:
 
--b
+'-b'
  Only show log lines from traffic going between Varnish and the backend
  servers. This will be useful when we want to optimize cache hit rates.
 
--c
- Same as -b but for client side traffic.
+'-c'
+ Same as '-b' but for client side traffic.
 
--m tag:regex
+'-m tag:regex'
  Only list transactions where the tag matches a regular expression. If
  it matches you will get the whole transaction.
 
+.. XXX:Maybe a couple of sample commands here? benc
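+For instance (a sketch, assuming a running `varnishd`; the URL pattern is
+just an illustration)::
+
+	varnishlog -b
+	varnishlog -c -m RxURL:'^/docs/'
+
+The first command shows only backend traffic; the second shows only client
+transactions whose request URL starts with "/docs/".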
+
 For more information on this topic please see :ref:`ref-varnishlog`.
diff --git a/doc/sphinx/users-guide/operation-statistics.rst b/doc/sphinx/users-guide/operation-statistics.rst
index c891342..788e600 100644
--- a/doc/sphinx/users-guide/operation-statistics.rst
+++ b/doc/sphinx/users-guide/operation-statistics.rst
@@ -4,13 +4,14 @@
 Statistics
 ----------
 
-Now that your Varnish is up and running let's have a look at how it is
-doing. There are several tools that can help.
+Varnish comes with a couple of nifty and very useful tools that generate statistics in real time. They do this by constantly aggregating and analyzing log data from the shared memory logs, and presenting the resulting dataset as it is updated.
+
+.. XXX:Heavy rewrite above. benc
 
 varnishtop
 ~~~~~~~~~~
 
-The varnishtop utility reads the shared memory logs and presents a
+The `varnishtop` utility reads the shared memory logs and presents a
 continuously updated list of the most commonly occurring log entries.
 
 With suitable filtering using the -I, -i, -X and -x options, it can be
@@ -28,9 +29,11 @@ For more information please see :ref:`ref-varnishtop`.
 varnishhist
 ~~~~~~~~~~~
 
-The varnishhist utility reads varnishd(1) shared memory logs and
+The `varnishhist` utility reads `varnishd(1)` shared memory logs and
 presents a continuously updated histogram showing the distribution of
-the last N requests by their processing.  The value of N and the
+the last N requests by their processing time.
+
+.. XXX:1? benc
+
+The value of N and the
 vertical scale are displayed in the top left corner.  The horizontal
 scale is logarithmic.  Hits are marked with a pipe character ("|"),
 and misses are marked with a hash character ("#").
@@ -43,10 +46,10 @@ varnishstat
 
 Varnish has lots of counters. We count misses, hits, information about
 the storage, threads created, deleted objects. Just about
-everything. varnishstat will dump these counters. This is useful when
+everything. `varnishstat` will dump these counters. This is useful when
 tuning Varnish.
 
-There are programs that can poll varnishstat regularly and make nice
+There are programs that can poll `varnishstat` regularly and make nice
 graphs of these counters. One such program is Munin. Munin can be
 found at http://munin-monitoring.org/ . There is a plugin for munin in
 the Varnish source code.
diff --git a/doc/sphinx/users-guide/params.rst b/doc/sphinx/users-guide/params.rst
index a283d51..df4bfdf 100644
--- a/doc/sphinx/users-guide/params.rst
+++ b/doc/sphinx/users-guide/params.rst
@@ -3,21 +3,21 @@
 Parameters
 ----------
 
-Varnish Cache has a set of parameters that affect its behaviour and
+Varnish Cache comes with a set of parameters that affect its behaviour and
 performance. Most of these parameters can be set on the Varnish
-command line (through varnishadm) using the param.set keyword.
+command line (through `varnishadm`) using the ``param.set`` keyword.
 
-Some parameters can, for security purposes be read only using the "-r"
-command line switch to varnishd.
+Some parameters can, for security purposes, be made read-only using the '-r'
+command line switch to `varnishd`.
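+
+For example, setting a parameter from `varnishadm` looks like this (a
+sketch; the parameter `default_ttl` and the value are just illustrations)::
+
+	varnish> param.set default_ttl 3600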
 
-I don't recommend tweaking the parameters unless you're sure of what
+We don't recommend that you tweak parameters unless you're sure of what
 you're doing. We've worked hard to make the defaults sane and Varnish
 should be able to handle most workloads with the default settings.
 
 For a complete listing of all the parameters and a short description
 type ``param.show`` in the CLI. To inspect a certain parameter and get
 a somewhat longer description on what it does and what the default is
-type param.show and the name of the parameter, like this::
+type ``param.show`` and the name of the parameter, like this::
 
   varnish> param.show shortlived
   200        
diff --git a/doc/sphinx/users-guide/performance.rst b/doc/sphinx/users-guide/performance.rst
index baf9c75..cd00ff5 100644
--- a/doc/sphinx/users-guide/performance.rst
+++ b/doc/sphinx/users-guide/performance.rst
@@ -3,19 +3,18 @@
 Varnish and Website Performance
 ===============================
 
-This section is about tuning the performance of your Varnish server,
-and about tuning the performance of your website using Varnish.
+This section focuses on how to tune the performance of your Varnish server,
+and how to tune the performance of your website using Varnish.
 
-The section is split in three sections. One deals with the various tools and
-functions of Varnish that you should be aware of and the other focuses
+The section is split in three subsections. The first subsection deals with the various tools and
+functions of Varnish that you should be aware of. The next subsection focuses
 on how to purge content out of your cache. Purging of content is
 essential in a performance context because it allows you to extend the
 *time-to-live* (TTL) of your cached objects. Having a long TTL allows
-Varnish to keep the content in cache longer, meaning Varnish will make
-send fewer requests to your relativly slow backend.
+Varnish to keep the content in cache longer, meaning Varnish will make fewer requests to your relatively slower backend.
 
-The final section deals with compression of web content. Varnish can
-gzip content when fetching it from the backend and then deliver
+The final subsection deals with compression of web content. Varnish can
+gzip content when fetching it from the backend and then deliver it
 compressed. This will reduce the time it takes to download the content
 thereby increasing the performance of your website.
 
diff --git a/doc/sphinx/users-guide/purging.rst b/doc/sphinx/users-guide/purging.rst
index 058bd7d..2be32e3 100644
--- a/doc/sphinx/users-guide/purging.rst
+++ b/doc/sphinx/users-guide/purging.rst
@@ -6,24 +6,24 @@ Purging and banning
 
 One of the most effective ways of increasing your hit ratio is to
 increase the time-to-live (ttl) of your objects. But, as you're aware
-of, in this twitterific day of age serving content that is outdated is
+of, in this twitterific day and age, serving content that is outdated is
 bad for business.
 
 The solution is to notify Varnish when there is fresh content
 available. This can be done through three mechanisms. HTTP purging,
-banning and forced cache misses. First, let me explain the HTTP purges.
+banning and forced cache misses. First, let's look at HTTP purging.
 
 
-HTTP Purges
-~~~~~~~~~~~
+HTTP Purging
+~~~~~~~~~~~~
 
 A *purge* is what happens when you pick out an object from the cache
 and discard it along with its variants. Usually a purge is invoked
-through HTTP with the method PURGE.
+through HTTP with the method `PURGE`.
 
-An HTTP purge is similar to an HTTP GET request, except that the
-*method* is PURGE. Actually you can call the method whatever you'd
-like, but most people refer to this as purging. Squid supports the
+An HTTP purge is similar to an HTTP GET request, except that the
+*method* is `PURGE`. Actually you can call the method whatever you'd
+like, but most people refer to this as purging. Squid, for example, supports the
 same mechanism. In order to support purging in Varnish you need the
 following VCL in place::
 
@@ -64,7 +64,7 @@ Bans
 ~~~~
 
 There is another way to invalidate content: Bans. You can think of
-bans as a sort of a filter on objects already in the cache. You *ban*
+bans as a sort of a filter on objects already in the cache. You *ban*
 certain content from being served from your cache. You can ban
 content based on any metadata we have.
 A ban will only work on objects already in the cache, it does not
@@ -81,16 +81,18 @@ Quite powerful, really.
 Bans are checked when we hit an object in the cache, but before we
 deliver it. *An object is only checked against newer bans*.
 
-Bans that only match against obj.* are also processed by a background
-worker threads called the *ban lurker*. The ban lurker will walk the
+Bans that only match against `obj.*` are also processed by a background
+worker thread called the `ban lurker`. The `ban lurker` will walk the
 heap and try to match objects and will evict the matching objects. How
-aggressive the ban lurker is can be controlled by the parameter
-ban_lurker_sleep. The ban lurker can be disabled by setting
-ban_lurker_sleep to 0.
+aggressive the `ban lurker` is can be controlled by the parameter
+'ban_lurker_sleep'. The `ban lurker` can be disabled by setting
+'ban_lurker_sleep' to 0.
+
+.. XXX: sample here? benc
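+
+For example (a sketch; the values are illustrations only)::
+
+	varnish> param.set ban_lurker_sleep 0.1
+	varnish> param.set ban_lurker_sleep 0
+
+The first command makes the `ban lurker` wake up every 0.1 seconds; the
+second disables it.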
 
 Bans that are older than the oldest objects in the cache are discarded
-without evaluation.  If you have a lot of objects with long TTL, that
-are seldom accessed you might accumulate a lot of bans. This might
+without evaluation. If you have a lot of objects with long TTL, that
+are seldom accessed, you might accumulate a lot of bans. This might
 impact CPU usage and thereby performance.
 
 You can also add bans to Varnish via HTTP. Doing so requires a bit of VCL::
@@ -110,14 +112,14 @@ You can also add bans to Varnish via HTTP. Doing so requires a bit of VCL::
 	  }
   }
 
-This VCL sniplet enables Varnish to handle an HTTP BAN method, adding a
+This VCL stanza enables Varnish to handle an HTTP `BAN` method, adding a
 ban on the URL, including the host part.
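+
+With such VCL in place, a ban can be submitted over HTTP, for instance with
+`curl` (a sketch; the host and path are illustrations, and your VCL must
+accept the BAN method as described above)::
+
+	curl -X BAN http://localhost/some/old/page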
 
-The ban lurker can help you keep the ban list at a manageable size, so
-we recommend that you avoid using req.* in your bans, as the request
-object is not available in the ban lurker thread.
+The `ban lurker` can help you keep the ban list at a manageable size, so
+we recommend that you avoid using `req.*` in your bans, as the request
+object is not available in the `ban lurker` thread.
 
-You can use the following template to write ban lurker friendly bans::
+You can use the following template to write `ban lurker` friendly bans::
 
   sub vcl_backend_response {
     set beresp.http.x-url = req.url;
@@ -136,7 +138,7 @@ You can use the following template to write ban lurker friendly bans::
     }
   }
 
-To inspect the current ban list, issue the ban.list command in CLI. This
+To inspect the current ban list, issue the ``ban.list`` command in the CLI. This
 will produce a status of all current bans::
 
   0xb75096d0 1318329475.377475    10      obj.http.x-url ~ test
@@ -146,15 +148,15 @@ The ban list contains the ID of the ban, the timestamp when the ban
 entered the ban list. A count of the objects that has reached this point
 in the ban list, optionally postfixed with a 'G' for "Gone", if the ban
 is no longer valid.  Finally, the ban expression is listed. The ban can
-be marked as Gone if it is a duplicate ban, but is still kept in the list
+be marked as "Gone" if it is a duplicate ban, but is still kept in the list
 for optimization purposes.
 
 Forcing a cache miss
 ~~~~~~~~~~~~~~~~~~~~
 
 The final way to invalidate an object is a method that allows you to
-refresh an object by forcing a hash miss for a single request. If you set
-req.hash_always_miss to true, Varnish will miss the current object in the
+refresh an object by forcing a `hash miss` for a single request. If you set
+'req.hash_always_miss' to true, Varnish will miss the current object in the
 cache, thus forcing a fetch from the backend. This can in turn add the
 freshly fetched object to the cache, thus overriding the current one. The
 old object will stay in the cache until ttl expires or it is evicted by
diff --git a/doc/sphinx/users-guide/report.rst b/doc/sphinx/users-guide/report.rst
index 5a444ab..13daf3e 100644
--- a/doc/sphinx/users-guide/report.rst
+++ b/doc/sphinx/users-guide/report.rst
@@ -3,7 +3,7 @@
 Reporting and statistics
 ========================
 
-This section is about how to find out what Varnish is doing, from
+This section covers how to find out what Varnish is doing, from
 the detailed per HTTP request blow-by-blow logrecords to the global
 summary statistics counters.
 
diff --git a/doc/sphinx/users-guide/run_cli.rst b/doc/sphinx/users-guide/run_cli.rst
index b534971..3e75ac5 100644
--- a/doc/sphinx/users-guide/run_cli.rst
+++ b/doc/sphinx/users-guide/run_cli.rst
@@ -3,46 +3,46 @@
 CLI - bossing Varnish around
 ============================
 
-Once varnishd is started, you can control it using the command line
+Once `varnishd` is started, you can control it using the command line
 interface.
 
-The easiest way to do this, is using the varnishadm program on the
-same machine as varnishd is running::
+The easiest way to do this is by using `varnishadm` on the
+same machine as `varnishd` is running::
 
 	varnishadm help
 
-If you want to run varnishadm from a remote system, you can do it
+If you want to run `varnishadm` from a remote system, you can do it
 two ways.
 
-You can SSH into the varnishd computer and run varnishadm::
+You can SSH into the `varnishd` computer and run `varnishadm`::
 
 	ssh $http_front_end varnishadm help
 
-But you can also configure varnishd to accept remote CLI connections
-(using the -T and -S arguments)::
+But you can also configure `varnishd` to accept remote CLI connections
+(using the '-T' and '-S' arguments)::
 
 	varnishd -T :6082 -S /etc/varnish_secret
 
-And then on the remote system run varnishadm::
+And then on the remote system run `varnishadm`::
 
 	varnishadm -T $http_front_end -S /etc/copy_of_varnish_secret help
 
 but as you can see, SSH is much more convenient.
 
-If you run varnishadm without arguments, it will read CLI commands from
-stdin, if you give it arguments, it will treat those as the single
+If you run `varnishadm` without arguments, it will read CLI commands from
+`stdin`. If you give it arguments, it will treat those as the single
 CLI command to execute.
 
 The CLI always returns a status code to tell how it went:  '200'
 means OK, anything else means there was some kind of trouble.
 
-varnishadm will exit with status 1 and print the status code on
+`varnishadm` will exit with status 1 and print the status code on
 standard error if it is not 200.
 
 What can you do with the CLI
 ----------------------------
 
-The CLI gives you almost total control over varnishd:
+The CLI gives you almost total control over `varnishd`. Some of the more important tasks you can perform are:
 
 * load/use/discard VCL programs
 * ban (invalidate) cache content
@@ -64,7 +64,7 @@ To load new VCL program::
 
 	varnish> vcl.load some_name some_filename
 
-Loading will read the VCL program from the file, and compile it.  If
+Loading will read the VCL program from the file, and compile it. If
 the compilation fails, you will get an error message::
 
 	.../mask is not numeric.
 It is a good idea to design an emergency-VCL before you need it,
 and always have it loaded, so you can switch to it with a single
 vcl.use command.
 
+.. XXX:Should above have a clearer admonition like a NOTE:? benc
+
 Ban cache content
 ^^^^^^^^^^^^^^^^^
 
@@ -103,7 +105,7 @@ But sometimes it is useful to be able to throw things out of cache
 without having an exact list of what to throw out.
 
 Imagine for instance that the company logo changed and now you need
-to get all versions of it out of the cache::
+Varnish to stop serving the old logo out of the cache::
 
 	varnish> ban req.url ~ "logo.*[.]png"
 
@@ -119,7 +121,7 @@ a HTTP request asks for it.
 Banning stuff is much cheaper than restarting Varnish to get rid
 of wrongly cached content.
 
-.. In addition to handling such special occations, banning can be used
+.. In addition to handling such special occasions, banning can be used
 .. in many creative ways to keep the cache up to date, more about
 .. that in: (TODO: xref)
 
@@ -130,7 +132,7 @@ Change parameters
 Parameters can be set on the command line with the '-p' argument,
 but they can also be examined and changed on the fly from the CLI::
 
-	varnish> param.show perfer_ipv6
+	varnish> param.show prefer_ipv6
 	200
 	prefer_ipv6         off [bool]
                             Default is off
@@ -144,8 +146,12 @@ In general it is not a good idea to modify parameters unless you
 have a good reason, such as performance tuning or security configuration.
 
 Most parameters will take effect instantly, or with a natural delay
-of some duration, but a few of them requires you to restart the
-child process before they take effect.  This is always noted in the
+of some duration,
+
+.. XXX: Natural delay of some duration sounds vague. benc
+
+but a few of them require you to restart the
+child process before they take effect. This is always noted in the
 description of the parameter.
 
 Starting and stopping the worker process
@@ -160,7 +166,7 @@ and::
 
 	varnish> start
 
-If you start varnishd with the '-d' (debugging) argument, you will
+If you start `varnishd` with the '-d' (debugging) argument, you will
 always need to start the child process explicitly.
 
 Should the child process die, the master process will automatically
diff --git a/doc/sphinx/users-guide/run_security.rst b/doc/sphinx/users-guide/run_security.rst
index dfcfc6d..d7340a0 100644
--- a/doc/sphinx/users-guide/run_security.rst
+++ b/doc/sphinx/users-guide/run_security.rst
@@ -5,8 +5,8 @@ Security first
 
 If you are the only person involved in running Varnish, or if all
 the people involved are trusted to the same degree, you can skip
-this chapter:  We have protected Varnish as well as we can from
-anything which can come in through HTTP socket.
+this chapter. We have protected Varnish as well as we can from
+anything which can come in through an HTTP socket.
 
 If parts of your web infrastructure are outsourced or otherwise
 partitioned along administrative lines, you need to think about
@@ -15,19 +15,18 @@ security.
 Varnish provides four levels of authority, roughly related to
 how and where the command comes into Varnish:
 
-  * The command line arguments
+  * the command line arguments,
 
-  * The CLI interface
+  * the CLI interface,
 
-  * VCL programs
+  * VCL programs, and
 
-  * HTTP requests
+  * HTTP requests.
 
 Command line arguments
 ----------------------
 
-The top level security decisions is taken on and from the command
-line, in order to make them invulnerable to subsequent manipulation.
+The top level security decisions are made when starting Varnish, in the form of command line arguments. We use this strategy in order to make them invulnerable to subsequent manipulation.
 
 The important decisions to make are:
 
@@ -44,8 +43,8 @@ CLI interface access
 
 The command line interface can be accessed three ways.
 
-Varnishd can be told til listen and offer CLI connections
-on a TCP socket.  You can bind the socket to pretty
+`Varnishd` can be told to listen for and offer CLI connections
+on a TCP socket. You can bind the socket to pretty
 much anything the kernel will accept::
 
 	-T 127.0.0.1:631
@@ -53,15 +52,17 @@ much anything the kernel will accept::
 	-T 192.168.1.1:34
 	-T '[fe80::1]:8082'
 
-The default is '-T localhost:0' which will pick a random
-port number, which varnishadm(8) can learn in the shared
+The default is ``-T localhost:0`` which will pick a random
+port number, which `varnishadm(8)` can learn in the shared
 memory.
 
+.. XXX:Me no understand sentence above, (8)? and learn in the shared memory? Stored and retrieved by varnishadm from the shared memory? benc
+
 By using a "localhost" address, you restrict CLI access
 to the local machine.
 
 You can also bind the CLI port to an IP number reachable across
-the net, and let other computers connect directly.
+the net, and let other machines connect directly.
 
 This gives you no secrecy, ie, the CLI commands will
 go across the network as ASCII text with no encryption, but
@@ -72,42 +73,52 @@ Alternatively you can bind the CLI port to a 'localhost' address,
 and give remote users access via a secure connection to the local
 machine, using ssh/VPN or similar.
 
-If you use ssh you can restrict which commands each user can execute to
-just varnishadm, or even to wrapper scripts around varnishadm, which
+If you use `ssh` you can restrict which commands each user can execute to
+just `varnishadm`, or even to wrapper scripts around `varnishadm`, which
 only allow specific CLI commands.
 
-It is also possible to configure varnishd for "reverse mode", using
-the '-M' argument.  In that case varnishd will attempt to open a
+It is also possible to configure `varnishd` for "reverse mode", using
+the '-M' argument.  In that case `varnishd` will attempt to open a
 TCP connection to the specified address, and initiate a CLI connection
 to your central Varnish management facility.
 
+.. XXX:Maybe a sample command here with a brief explanation? benc
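+
+For instance (a sketch; the address is an illustration)::
+
+	varnishd [...] -M 192.0.2.1:6083
+
+This makes `varnishd` initiate the CLI connection to a central management
+facility listening on 192.0.2.1 port 6083.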
+
 The connection is also in this case without secrecy, but
 the remote end must still satisfy -S/PSK authentication.
 
+.. XXX:Without encryption instead of secrecy? benc
+
 Finally, if you run varnishd with the '-d' option, you get a CLI
 command on stdin/stdout, but since you started the process, it
 would be hard to prevent you getting CLI access, wouldn't it ?
 
+
 CLI interface authentication
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-By default the CLI interface is protected with a simple,  yet
+By default the CLI interface is protected with a simple, yet
 strong "Pre Shared Key" authentication method, which does not provide
 secrecy (ie: The CLI commands and responses are not encrypted).
 
+.. XXX:Encryption instead of secrecy? benc
+
 The way -S/PSK works is really simple:  During startup a file is
 created with a random content and the file is only accessible to
-the user who started varnishd (or the superuser).
+the user who started `varnishd` (or the superuser).
 
 To authenticate and use a CLI connection, you need to know the
 contents of that file, in order to answer the cryptographic
-challenge varnishd issues. (XXX: xref to algo in refman)
+challenge `varnishd` issues. 
+
 
-The varnishadm program knows all about this, it will just work,
+(XXX: xref to algo in refman)
+
+.. XXX:Dunno what this is? benc
+
+`varnishadm` uses all of this to restrict access; it will only function
 provided it can read the secret file.
 
-If you want to allow other users on the local system or remote
-users, to be able to access CLI connections, you must create your
+If you want to allow other users, local or remote, to access CLI connections, you must create your
 own secret file and make it possible for (only!) these users to
 read it.
 
@@ -115,18 +126,18 @@ A good way to create the secret file is::
 
 	dd if=/dev/random of=/etc/varnish_secret count=1
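+
+To then limit read access to the trusted users, something like this could
+be used (a sketch; the group name is an illustration)::
+
+	chown root:varnishcli /etc/varnish_secret
+	chmod 640 /etc/varnish_secret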
 
-When you start varnishd, you specify the filename with -S, and
-it goes without saying that the varnishd master process needs
+When you start `varnishd`, you specify the filename with '-S', and
+it goes without saying that the `varnishd` master process needs
 to be able to read the file too.
 
-You can change the contents of the secret file while varnishd
+You can change the contents of the secret file while `varnishd`
 runs, it is read every time a CLI connection is authenticated.
 
-On the local system, varnishadm can find the filename from
-shared memory, but on remote systems, you need to give it
+On the local system, `varnishadm` can retrieve the filename from
+shared memory, but on remote systems, you need to give `varnishadm`
 a copy of the secret file, with the -S argument.
 
-If you want to disable -S/PSK authentication, specify -S with
+If you want to disable -S/PSK authentication, specify '-S' with
 an empty argument to varnishd::
 
 	varnishd [...] -S "" [...]
@@ -135,17 +146,17 @@ Parameters
 ^^^^^^^^^^
 
 Parameters can be set from the command line, and made "read-only"
-(using -r) so they cannot subsequently be modified from the CLI
+(using '-r') so they cannot subsequently be modified from the CLI
 interface.
 
 Pretty much any parameter can be used to totally mess up your
-HTTP service, but a few can do more damage than that:
+HTTP service, but a few can do more damage than others:
 
 :ref:`ref_param_user` and :ref:`ref_param_group`
 	Access to local system via VCL
 
 :ref:`ref_param_listen_address`
-	Trojan other TCP sockets, like ssh
+	Trojan other TCP sockets, like `ssh`
 
 :ref:`ref_param_cc_command`
 	Execute arbitrary programs
@@ -156,10 +167,12 @@ HTTP service, but a few can do more damage than that:
 Furthermore you may want to look at and lock down:
 
 :ref:`ref_param_syslog_cli_traffic`
-	Log all CLI commands to syslog(8), so you know what goes on.
+	Log all CLI commands to `syslog(8)`, so you know what goes on.
+
+.. XXX: syslog(8)? benc
+
 
 :ref:`ref_param_vcc_unsafe_path`
-	Retrict VCL/VMODS to :ref:`ref_param_vcl_dir` and :ref:`ref_param_vmod_dir`
+	Restrict VCL/VMODS to :ref:`ref_param_vcl_dir` and :ref:`ref_param_vmod_dir`
 
 :ref:`ref_param_vmod_dir` 
         The directory where Varnish will look
@@ -178,11 +191,16 @@ certain parameters, but that will only protect the local filesystem,
 and operating system, it will not protect your HTTP service.
 
 We do not currently have a way to restrict specific CLI commands
-to specific CLI connections.   One way to get such an effect is to
-"wrap" all CLI access in pre-approved scripts which use varnishadm(1)
+to specific CLI connections. One way to get such an effect is to
+"wrap" all CLI access in pre-approved scripts which use `varnishadm(1)`
+
+.. XXX:what does the 1 stand for? benc
+
 to submit the sanitized CLI commands, and restrict a remote user
 to only those scripts, for instance using sshd(8)'s configuration.
 
+.. XXX:what does the 8 stand for? benc
+
 VCL programs
 ------------
 
@@ -190,35 +208,35 @@ There are two "dangerous" mechanisms available in VCL code:  VMODs
 and inline-C.
 
 Both of these mechanisms allow execution of arbitrary code and will
-therefore allow a person to get access on the computer, with the
+thus allow a person to get access to the machine, with the
 privileges of the child process.
 
-If varnishd is started as root/superuser, we sandbox the child
+If `varnishd` is started as root/superuser, we sandbox the child
 process, using whatever facilities are available on the operating
-system, but if varnishd is not started as root/superuser, this is
-not possible.  No, don't ask me why you have to be superuser to
+system, but if `varnishd` is not started as root/superuser, this is
+not possible. No, don't ask me why you have to be superuser to
 lower the privilege of a child process...
 
-Inline-C is disabled by default starting with Varnish 4, so unless
+Inline-C is disabled by default starting with Varnish version 4, so unless
 you enable it, you don't have to worry about it.
 
-The parameters mentioned above can restrict VMOD, so they can only
-be imported from a designated directory, restricting VCL wranglers
+The parameters mentioned above can restrict VMODs to being loaded only
+from a designated directory, restricting VCL wranglers
 to a pre-approved subset of VMODs.
 
-If you do that, we believe that your local system cannot be compromised
+If you do that, we are confident that your local system cannot be compromised
 from VCL code.
 
 HTTP requests
 -------------
 
 We have gone to great lengths to make Varnish resistant to anything
-coming in throught he socket where HTTP requests are received, and
+coming in through the socket where HTTP requests are received, and
 you should, generally speaking, not need to protect it any further.
 
 The caveat is that since VCL is a programming language which lets you
-decide exactly what to do about HTTP requests, you can also decide
-to do exactly stupid things to them, including opening youself up
+decide exactly what to do with HTTP requests, you can also decide
+to do stupid and potentially dangerous things with them, including opening yourself up
 to various kinds of attacks and subversive activities.
 
 If you have "administrative" HTTP requests, for instance PURGE
diff --git a/doc/sphinx/users-guide/running.rst b/doc/sphinx/users-guide/running.rst
index 2b49147..5cfd539 100644
--- a/doc/sphinx/users-guide/running.rst
+++ b/doc/sphinx/users-guide/running.rst
@@ -3,8 +3,8 @@
 Starting and running Varnish
 ============================
 
-This section is about starting, running, and stopping Varnish, about
-command line flags and options, communicating with the running
+This section covers starting, running, and stopping Varnish,
+command line flags and options, and communicating with the running
 Varnish processes, configuring storage and sockets, and
 securing and protecting Varnish against attacks.
 
diff --git a/doc/sphinx/users-guide/sizing-your-cache.rst b/doc/sphinx/users-guide/sizing-your-cache.rst
index 8f2dfba..497ce0e 100644
--- a/doc/sphinx/users-guide/sizing-your-cache.rst
+++ b/doc/sphinx/users-guide/sizing-your-cache.rst
@@ -2,7 +2,7 @@
 Sizing your cache
 -----------------
 
-Picking how much memory you should give Varnish can be a tricky
+Deciding on cache size can be a tricky
 task. A few things to consider:
 
  * How big is your *hot* data set. For a portal or news site that
@@ -12,14 +12,16 @@ task. A few things to consider:
    to only cache images a little while or not to cache them at all if
    they are cheap to serve from the backend and you have a limited
    amount of memory.
- * Watch the n_lru_nuked counter with :ref:`reference-varnishstat` or
+ * Watch the `n_lru_nuked` counter with :ref:`reference-varnishstat` or
    some other tool. If you have a lot of LRU activity then your cache
    is evicting objects due to space constraints and you should
    consider increasing the size of the cache.
 
 Be aware that every object that is stored also carries overhead that
-is kept outside the actually storage area. So, even if you specify -s
-malloc,16G Varnish might actually use **double** that. Varnish has a
+is kept outside the actual storage area. So, even if you specify '-s
+malloc,16G' Varnish might actually use **double** that. Varnish has an
 overhead of about 1k per object. So, if you have lots of small objects
 in your cache the overhead might be significant.
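As a rough back-of-the-envelope sketch (the 1k average object size is an assumption for illustration, not a recommendation), you can see how the per-object overhead can double memory use when objects are small:

```shell
# Hypothetical estimate: '-s malloc,16G' filled with 1k objects,
# plus the roughly 1k of per-object overhead mentioned above.
storage_bytes=$(( 16 * 1024 * 1024 * 1024 ))   # -s malloc,16G
avg_object_bytes=1024                          # assumed average object size
n_objects=$(( storage_bytes / avg_object_bytes ))
overhead_bytes=$(( n_objects * 1024 ))         # roughly 1k per object
total_gb=$(( (storage_bytes + overhead_bytes) / 1024 / 1024 / 1024 ))
echo "$total_gb"                               # prints 32
```

With larger average objects the relative overhead shrinks proportionally.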
 
+.. XXX:This seems to contradict the last paragraph in "storage-backends". benc
+
diff --git a/doc/sphinx/users-guide/storage-backends.rst b/doc/sphinx/users-guide/storage-backends.rst
index b12dba0..6664d8d 100644
--- a/doc/sphinx/users-guide/storage-backends.rst
+++ b/doc/sphinx/users-guide/storage-backends.rst
@@ -8,9 +8,9 @@ Intro
 ~~~~~
 
 Varnish has pluggable storage backends. It can store data in various
-backends which have different performance characteristics. The default
+backends which can have different performance characteristics. The default
 configuration is to use the malloc backend with a limited size. For a
-serious Varnish deployment you probably need to adjust the storage
+serious Varnish deployment you will probably want to adjust the storage
 settings.
 
 malloc
@@ -21,11 +21,13 @@ syntax: malloc[,size]
 Malloc is a memory based backend. Each object will be allocated from
 memory. If your system runs low on memory, swap will be used.
 
-Be aware that the size limitation only limits the actual storage and that
-approximately 1k of memory per object will be used for various internal
-structures.
+Be aware that the size limitation only limits the actual storage and that
+the approximately 1k of memory per object, used for various internal
+structures, comes in addition to it.
 
-The size parameter specifies the maximum amount of memory varnishd
+.. XXX:This seems to contradict the last paragraph in "sizing-your-cache". benc
+
+The size parameter specifies the maximum amount of memory `varnishd`
 will allocate.  The size is assumed to be in bytes, unless followed by
 one of the following suffixes:
 
@@ -39,9 +41,9 @@ one of the following suffixes:
 
 The default size is unlimited.
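The suffix arithmetic can be sketched in plain shell (this helper is for illustration only and is not part of `varnishd`; it assumes the K/M/G/T suffixes):

```shell
# Illustration only (not a varnishd feature): convert a size argument
# with the K/M/G/T suffixes described above into bytes.
size_to_bytes() {
    case "$1" in
        *[Kk]) echo $(( ${1%?} * 1024 )) ;;
        *[Mm]) echo $(( ${1%?} * 1024 * 1024 )) ;;
        *[Gg]) echo $(( ${1%?} * 1024 * 1024 * 1024 )) ;;
        *[Tt]) echo $(( ${1%?} * 1024 * 1024 * 1024 * 1024 )) ;;
        *)     echo "$1" ;;   # no suffix: plain bytes
    esac
}
size_to_bytes 256M   # prints 268435456
```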
 
-malloc's performance is bound by memory speed so it is very fast. If
-the dataset is bigger than what can fit in memory performance will
-depend on the operating system and how well it does paging.
+malloc's performance is bound by memory speed so it is very fast. If
+the dataset is bigger than available memory, performance will
+depend on the operating system's ability to page effectively.
 
 file
 ~~~~
@@ -49,13 +51,13 @@ file
 syntax: file[,path[,size[,granularity]]]
 
 The file backend stores objects in memory backed by an unlinked file on disk
-with mmap.
+with `mmap`.
 
-The path parameter specifies either the path to the backing file or
-the path to a directory in which varnishd will create the backing
-file.  The default is /tmp.
+The 'path' parameter specifies either the path to the backing file or
+the path to a directory in which `varnishd` will create the backing
+file. The default is `/tmp`.
 
-The size parameter specifies the size of the backing file.  The size
+The size parameter specifies the size of the backing file. The size
 is assumed to be in bytes, unless followed by one of the following
 suffixes:
 
@@ -75,20 +77,22 @@ The default size is to use 50% of the space available on the device.
 If the backing file already exists, it will be truncated or expanded
 to the specified size.
 
-Note that if varnishd has to create or expand the file, it will not
+Note that if `varnishd` has to create or expand the file, it will not
 pre-allocate the added space, leading to fragmentation, which may
 adversely impact performance on rotating hard drives.  Pre-creating
-the storage file using dd(1) will reduce fragmentation to a minimum.
+the storage file using `dd(1)` will reduce fragmentation to a minimum.
+
+.. XXX:1? benc
 
-The granularity parameter specifies the granularity of
-allocation.  All allocations are rounded up to this size.  The
-is assumed to be in bytes, unless followed by one of the
+The 'granularity' parameter specifies the granularity of
+allocation. All allocations are rounded up to this size. The granularity
+is assumed to be expressed in bytes, unless followed by one of the
 suffixes described for size except for %.
 
-The default granularity is the VM page size.  The size should be reduced if you
+The default granularity is the VM page size. The size should be reduced if you
 have many small objects.
 
-File performance is typically limited by the write speed of the
+File performance is typically limited by the write speed of the
 device, and depending on use, the seek time.
 
 persistent (experimental)
@@ -100,11 +104,11 @@ Persistent storage. Varnish will store objects in a file in a manner
 that will secure the survival of *most* of the objects in the event of
 a planned or unplanned shutdown of Varnish.
 
-The path parameter specifies the path to the backing file. If
+The 'path' parameter specifies the path to the backing file. If
 the file doesn't exist Varnish will create it.
 
-The size parameter specifies the size of the backing file.  The
-size is assumed to be in bytes, unless followed by one of the
+The 'size' parameter specifies the size of the backing file. The
+size is expressed in bytes, unless followed by one of the
 following suffixes:
 
       K, k    The size is expressed in kibibytes.
@@ -122,9 +126,9 @@ starts after a shutdown it will discard the content of any silo that
 isn't sealed.
 
 Note that taking persistent silos offline and at the same time using
-bans can cause problems. This because bans added while the silo was
-offline will not be applied to the silo when it reenters the cache,
-and can make previously banned objects reappear.
+bans can cause problems. This is because bans added while the silo was
+offline will not be applied to the silo when it reenters the cache,
+allowing previously banned objects to reappear.
 
 Transient Storage
 -----------------
@@ -133,6 +137,10 @@ If you name any of your storage backend "Transient" it will be
 used for transient (short lived) objects. By default Varnish
 will use an unlimited malloc backend for this.
 
+.. XXX: Is this another parameter? In that case handled in the same manner as above? benc
+
 Varnish will consider an object short lived if the TTL is below the
-parameter "shortlived".
+parameter 'shortlived'.
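If you want to cap transient storage instead, you can define a storage backend named "Transient" explicitly on the command line. A hypothetical invocation (the listen address, VCL path, and sizes are examples only):

```shell
# Hypothetical example: cap transient storage at 256M instead of the
# default unlimited malloc backend, alongside a 1G main cache.
varnishd -a :6081 -f /etc/varnish/default.vcl \
    -s malloc,1G -s Transient=malloc,256M
```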
+
 
+.. XXX: I am generally missing samples of setting all of these parameters, maybe one sample per section or a couple of examples here with a brief explanation to also work as a summary? benc
diff --git a/doc/sphinx/users-guide/troubleshooting.rst b/doc/sphinx/users-guide/troubleshooting.rst
index fa01890..9d57744 100644
--- a/doc/sphinx/users-guide/troubleshooting.rst
+++ b/doc/sphinx/users-guide/troubleshooting.rst
@@ -3,9 +3,9 @@
 Troubleshooting Varnish
 =======================
 
-Sometimes Varnish misbehaves. In order for you to understand whats
-going on there are a couple of places you can check. varnishlog,
-/var/log/syslog, /var/log/messages are all places where Varnish might
+Sometimes Varnish misbehaves, or rather behaves the way you told it to behave, but not necessarily the way you want it to. In order for you to understand what's
+going on there are a couple of places you can check. `varnishlog`,
+`/var/log/syslog`, `/var/log/messages` are all good places where Varnish might
 leave clues of what's going on. This section will guide you through
 basic troubleshooting in Varnish.
 
@@ -13,9 +13,9 @@ basic troubleshooting in Varnish.
 When Varnish won't start
 ------------------------
 
-Sometimes Varnish wont start. There is a plethora of reasons why
+Sometimes Varnish won't start. There is a plethora of possible reasons why
 Varnish won't start on your machine. We've seen everything from wrong
-permissions on /dev/null to other processes blocking the ports.
+permissions on `/dev/null` to other processes blocking the ports.
 
 Start Varnish in debug mode to see what is going on.
 
@@ -23,8 +23,8 @@ Try to start Varnish by::
 
     # varnishd -f /usr/local/etc/varnish/default.vcl -s malloc,1G -T 127.0.0.1:2000 -a 0.0.0.0:8080 -d
 
-Notice the -d option. It will give you some more information on what
-is going on. Let us see how Varnish will react to something else
+Notice the '-d' parameter. It will give you some more information on what
+is going on. Let us see how Varnish will react when something else is
 listening on its port.::
 
     # varnishd -n foo -f /usr/local/etc/varnish/default.vcl -s malloc,1G -T 127.0.0.1:2000  -a 0.0.0.0:8080 -d
@@ -39,7 +39,7 @@ listening on its port.::
     Type 'quit' to close CLI session.
     Type 'start' to launch worker process.
 
-Now Varnish is running. Only the master process is running, in debug
+Now Varnish is running, but only the master process is running; in debug
 mode the cache does not start. Now you're on the console. You can
 instruct the master process to start the cache by issuing "start".::
 
@@ -49,7 +49,7 @@ instruct the master process to start the cache by issuing "start".::
 	 Could not open sockets
 
 And here we have our problem. Something else is bound to the HTTP port
-of Varnish. If this doesn't help try strace or truss or come find us
+of Varnish. If this doesn't help try ``strace`` or ``truss`` or come find us
 on IRC.
 
 
@@ -57,34 +57,33 @@ Varnish is crashing - panics
 ----------------------------
 
 When Varnish goes bust the child process crashes. Most of the
-crashes are caught by one of the many consistency checks spread around
-the Varnish source code. When Varnish hits one of these the caching
-process it will crash itself in a controlled manner, leaving a nice
+crashes are caught by one of the many consistency checks we have included in the Varnish source code. When Varnish hits one of these, the caching
+process will crash itself in a controlled manner, leaving a nice
 stack trace with the mother process.
 
-You can inspect any panic messages by typing panic.show in the CLI.
-
-| panic.show
-| Last panic at: Tue, 15 Mar 2011 13:09:05 GMT
-| Assert error in ESI_Deliver(), cache_esi_deliver.c line 354:
-|   Condition(i == Z_OK || i == Z_STREAM_END) not true.
-| thread = (cache-worker)
-| ident = Linux,2.6.32-28-generic,x86_64,-sfile,-smalloc,-hcritbit,epoll
-| Backtrace:
-|   0x42cbe8: pan_ic+b8
-|   0x41f778: ESI_Deliver+438
-|   0x42f838: RES_WriteObj+248
-|   0x416a70: cnt_deliver+230
-|   0x4178fd: CNT_Session+31d
-|   (..)
+You can inspect any panic messages by typing ``panic.show`` in the CLI.::
+
+ panic.show
+ Last panic at: Tue, 15 Mar 2011 13:09:05 GMT
+ Assert error in ESI_Deliver(), cache_esi_deliver.c line 354:
+   Condition(i == Z_OK || i == Z_STREAM_END) not true.
+ thread = (cache-worker)
+ ident = Linux,2.6.32-28-generic,x86_64,-sfile,-smalloc,-hcritbit,epoll
+ Backtrace:
+   0x42cbe8: pan_ic+b8
+   0x41f778: ESI_Deliver+438
+   0x42f838: RES_WriteObj+248
+   0x416a70: cnt_deliver+230
+   0x4178fd: CNT_Session+31d
+   (..)
 
 The crash might be due to misconfiguration or a bug. If you suspect it
-is a bug you can use the output in a bug report.
+is a bug you can use the output in a bug report; see the "Trouble Tickets" section in the Introduction chapter above.
 
 Varnish is crashing - segfaults
 -------------------------------
 
-Sometimes the bug escapes the consistency checks and Varnish get hit
+Sometimes a bug escapes the consistency checks and Varnish gets hit
 with a segmentation error. When this happens with the child process it
 is logged, the core is dumped and the child process starts up again.
 
@@ -93,29 +92,31 @@ debug a segfault the developers need you to provide a fair bit of
 data.
 
 * Make sure you have Varnish installed with debugging symbols
+.. XXX:Symbols? benc
  * Make sure core dumps are enabled (ulimit)
+.. XXX:ulimit? benc
 
-Once you have the core you open it with gdb and issue the command "bt"
+Once you have the core, you open it with `gdb` and issue the command ``bt``
 to get a stack trace of the thread that caused the segfault.
 
 
 Varnish gives me Guru meditation
 --------------------------------
 
-First find the relevant log entries in varnishlog. That will probably
-give you a clue. Since varnishlog logs so much data it might be hard
-to track the entries down. You can set varnishlog to log all your 503
+First find the relevant log entries in `varnishlog`. That will probably
+give you a clue. Since `varnishlog` logs a lot of data it might be hard
+to track the entries down. You can set `varnishlog` to log all your 503
 errors by issuing the following command::
 
    $ varnishlog -c -m TxStatus:503
 
 If the error happened just a short time ago the transaction might still
-be in the shared memory log segment. To get varnishlog to process the
-whole shared memory log just add the -d option::
+be in the shared memory log segment. To get `varnishlog` to process the
+whole shared memory log just add the '-d' parameter::
 
    $ varnishlog -d -c -m TxStatus:503
 
-Please see the varnishlog man page for elaborations on further
+Please see the `varnishlog` man page for elaborations on further
 filtering capabilities and explanation of the various options.
 
 
diff --git a/doc/sphinx/users-guide/vcl-actions.rst b/doc/sphinx/users-guide/vcl-actions.rst
index 691baa4..0775afe 100644
--- a/doc/sphinx/users-guide/vcl-actions.rst
+++ b/doc/sphinx/users-guide/vcl-actions.rst
@@ -5,18 +5,22 @@ actions
 
 The most common actions to return are these:
 
+.. XXX:Maybe a bit more explanation here what is an action and how it is returned? benc
+
 *pass*
 When you return pass, the request and subsequent response will be passed to
- and from the backend server. It won't be cached. pass can be returned from
- vcl_recv
+ and from the backend server. It won't be cached. `pass` can be returned from
+ `vcl_recv`.
 
 *lookup*
-  When you return lookup from vcl_recv you tell Varnish to deliver content 
+  When you return lookup from `vcl_recv` you tell Varnish to deliver content 
   from cache even if the request otherwise indicates that the request 
   should be passed. 
 
 *pipe*
-  Pipe can be returned from vcl_recv as well. Pipe short circuits the
+.. XXX:What is pipe? benc
+
+  Pipe can be returned from `vcl_recv` as well. Pipe short circuits the
   client and the backend connections and Varnish will just sit there
   and shuffle bytes back and forth. Varnish will not look at the data being 
   sent back and forth - so your logs will be incomplete. 
@@ -25,13 +29,13 @@ The most common actions to return are these:
   header before actually returning pipe. 
 
 *deliver*
- Deliver the object to the client.  Usually returned from vcl_backend_response. 
+ Deliver the object to the client. Usually returned from `vcl_backend_response`. 
 
 *restart*
  Restart processing of the request. You can restart the processing of
- the whole transaction. Changes to the req object are retained.
+ the whole transaction. Changes to the `req` object are retained.
 
 *retry*
  Retry the request against the backend. This can be called from
- vcl_backend_response or vcl_backend_error if you don't like the response 
+ `vcl_backend_response` or `vcl_backend_error` if you don't like the response 
  that the backend delivered.
diff --git a/doc/sphinx/users-guide/vcl-backends.rst b/doc/sphinx/users-guide/vcl-backends.rst
index 099e2a6..c54d951 100644
--- a/doc/sphinx/users-guide/vcl-backends.rst
+++ b/doc/sphinx/users-guide/vcl-backends.rst
@@ -6,7 +6,7 @@ Backend servers
 Varnish has a concept of "backend" or "origin" servers. A backend
 server is the server providing the content Varnish will accelerate.
 
-Our first task is to tell Varnish where it can find its content. Start
+Our first task is to tell Varnish where it can find its backends. Start
 your favorite text editor and open the relevant VCL file.
 
 Somewhere near the top there will be a section that looks a bit like this.::
@@ -16,7 +16,7 @@ Somewhere in the top there will be a section that looks a bit like this.::
     #     .port = "8080";
     # }
 
-We comment in this bit of text making the text look like.::
+We remove the comment markings in this text stanza, making it look like this.::
 
     backend default {
         .host = "127.0.0.1";
@@ -27,7 +27,7 @@ Now, this piece of configuration defines a backend in Varnish called
 *default*. When Varnish needs to get content from this backend it will
 connect to port 8080 on localhost (127.0.0.1).
 
-Varnish can have several backends defined and can you can even join
+Varnish can have several backends defined, and you can even join
 several backends together into clusters of backends for load balancing
 purposes.
 
@@ -41,10 +41,10 @@ host or not. There are lot of options.
 
 Let's say we need to introduce a Java application into our PHP web
 site. Let's say our Java application should handle URLs beginning with
-/java/.
+`/java/`.
 
 We manage to get the thing up and running on port 8000. Now, let's have
-a look at the default.vcl.::
+a look at the `default.vcl`.::
 
     backend default {
         .host = "127.0.0.1";
@@ -58,7 +58,7 @@ We add a new backend.::
         .port = "8000";
     }
 
-Now we need tell where to send the difference URL. Lets look at vcl_recv.::
+Now we need to tell Varnish where to send the different URLs. Let's look at `vcl_recv`.::
 
     sub vcl_recv {
         if (req.url ~ "^/java/") {
@@ -71,18 +71,18 @@ Now we need tell where to send the difference URL. Lets look at vcl_recv.::
 It's quite simple, really. Let's stop and think about this for a
 moment. As you can see you can define how you choose backends based on
 really arbitrary data. You want to send mobile devices to a different
-backend? No problem. if (req.User-agent ~ /mobile/) .. should do the
+backend? No problem. ``if (req.http.User-Agent ~ "mobile")`` should do the
 trick.
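Spelled out as a full VCL sketch (the ``mobile`` backend, its port, and the User-Agent pattern are all made-up placeholders for illustration):

```
backend mobile {
    .host = "127.0.0.1";
    .port = "8001";
}

sub vcl_recv {
    # Route clients that look like mobile devices to the mobile backend.
    if (req.http.User-Agent ~ "(?i)mobile") {
        set req.backend_hint = mobile;
    }
}
```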
 
 
 Backends and virtual hosts in Varnish
 -------------------------------------
 
-Varnish fully supports virtual hosts. They might work in a somewhat
+Varnish fully supports virtual hosts. They might however work in a somewhat
 counter intuitive fashion since they are never declared
 explicitly. You set up the routing of incoming HTTP requests in
-vcl_recv. If you want this routing to be done on the basis of virtual
-hosts you just need to inspect req.http.host.
+`vcl_recv`. If you want this routing to be done on the basis of virtual
+hosts you just need to inspect `req.http.host`.
 
 You can have something like this:::
 
@@ -94,10 +94,10 @@ You can have something like this:::
         }
     }
 
-Note that the first regular expressions will match foo.com,
-www.foo.com, zoop.foo.com and any other host ending in foo.com. In
+Note that the first regular expression will match "foo.com",
+"www.foo.com", "zoop.foo.com" and any other host ending in "foo.com". In
 this example this is intentional but you might want it to be a bit
-more tight, maybe relying on the == operator in stead, like this:::
+tighter, maybe relying on the ``==`` operator instead, like this:::
 
     sub vcl_recv {
     if (req.http.host == "foo.com" || req.http.host == "www.foo.com") {
@@ -118,7 +118,7 @@ and resilience.
 
 You can define several backends and group them together in a
 director. This requires you to load a VMOD, a Varnish module, and then to
-call certain actions in vcl_init.::
+call certain actions in `vcl_init`.::
 
 
     import directors;    # load the directors
@@ -180,11 +180,11 @@ define the backends.::
         }
     }
 
-Whats new here is the probe. Varnish will check the health of each
+What's new here is the ``probe``. Varnish will check the health of each
 backend with a probe. The options are:
 
 url
-    What URL should Varnish request.
+    The URL Varnish will use to send a probe request.
 
 interval
     How often should we poll
@@ -197,13 +197,17 @@ window
     window has five checks.
 
 threshold
-    How many of the .window last polls must be good for the backend to be declared healthy.
+    How many of the last '.window' polls must be good for the backend to be declared healthy.
+
+.. XXX: .window probably means something but not to me :) benc
 
 initial
-    How many of the of the probes a good when Varnish starts - defaults
+    How many of the probes need to be successful when Varnish starts - defaults
     to the same amount as the threshold.
 
-Now we define the director.::
+Now we define the 'director'.::
+
+.. XXX: Where and why? benc
 
     import directors;
 
diff --git a/doc/sphinx/users-guide/vcl-built-in-subs.rst b/doc/sphinx/users-guide/vcl-built-in-subs.rst
index b30985c..708c05e 100644
--- a/doc/sphinx/users-guide/vcl-built-in-subs.rst
+++ b/doc/sphinx/users-guide/vcl-built-in-subs.rst
@@ -1,7 +1,7 @@
 
 .. _vcl-built-in-subs:
 
-.. XXX This document needs substational review.
+.. XXX:This document needs substantial review.
 
 
 Built in subroutines
@@ -12,14 +12,14 @@ vcl_recv
 ~~~~~~~~
 
 Called at the beginning of a request, after the complete request has
-been received and parsed.  Its purpose is to decide whether or not to
+been received and parsed. Its purpose is to decide whether or not to
 serve the request, how to do it, and, if applicable, which backend to
 use.
 
 It is also used to modify the request, something you'll probably find
 yourself doing frequently. 
 
-The vcl_recv subroutine may terminate with calling ``return()`` on one
+The `vcl_recv` subroutine may terminate with calling ``return()`` on one
 of the following keywords:
 
   synth 
@@ -27,10 +27,10 @@ of the following keywords:
     client and abandon the request.
 
   pass
-    Switch to pass mode.  Control will eventually pass to vcl_pass.
+    Switch to pass mode. Control will eventually pass to `vcl_pass`.
 
   pipe
-    Switch to pipe mode.  Control will eventually pass to vcl_pipe.
+    Switch to pipe mode. Control will eventually pass to `vcl_pipe`.
 
   hash
     Continue processing the object as a potential candidate for
@@ -43,13 +43,13 @@ of the following keywords:
 vcl_pipe
 ~~~~~~~~
 
-Called upon entering pipe mode.  In this mode, the request is passed
+Called upon entering pipe mode. In this mode, the request is passed
 on to the backend, and any further data from either client or backend
 is passed on unaltered until either end closes the
 connection. Basically, Varnish will degrade into a simple TCP proxy,
 shuffling bytes back and forth.
 
-The vcl_pipe subroutine may terminate with calling return() with one
+The `vcl_pipe` subroutine may terminate with calling ``return()`` with one
 of the following keywords:
 
   synth(error code, reason)
@@ -61,12 +61,12 @@ of the following keywords:
 vcl_pass
 ~~~~~~~~
 
-Called upon entering pass mode.  In this mode, the request is passed
+Called upon entering pass mode. In this mode, the request is passed
 on to the backend, and the backend's response is passed on to the
-client, but is not entered into the cache.  Subsequent requests
+client, but is not entered into the cache. Subsequent requests
 submitted over the same client connection are handled normally.
 
-The vcl_pass subroutine may terminate with calling return() with one
+The `vcl_pass` subroutine may terminate with calling ``return()`` with one
 of the following keywords:
 
   synth(error code, reason)
@@ -84,7 +84,10 @@ of the following keywords:
 vcl_hit
 ~~~~~~~
 
-Called is a cache lookup is successful. 
+Called when a cache lookup is successful. 
+
+.. XXX: missing the "The `vcl_hit` subroutine may terminate with calling ``return()`` with one of the following keywords:" thing. benc
+
 
   restart
     Restart the transaction. Increases the restart counter. If the number
@@ -92,7 +95,7 @@ Called is a cache lookup is successful.
     error.
 
   deliver
-    Deliver the object. Control passes to vcl_deliver.
+    Deliver the object. Control passes to `vcl_deliver`.
 
   synth(error code, reason)
     Return the specified error code to the client and abandon the request.
@@ -102,26 +105,26 @@ vcl_miss
 ~~~~~~~~
 
 Called after a cache lookup if the requested document was not found in
-the cache.  Its purpose is to decide whether or not to attempt to
+the cache. Its purpose is to decide whether or not to attempt to
 retrieve the document from the backend, and which backend to use.
 
-The vcl_miss subroutine may terminate with calling return() with one
+The `vcl_miss` subroutine may terminate with calling ``return()`` with one
 of the following keywords:
 
   synth(error code, reason)
     Return the specified error code to the client and abandon the request.
 
   pass
-    Switch to pass mode.  Control will eventually pass to vcl_pass.
+    Switch to pass mode. Control will eventually pass to `vcl_pass`.
 
   fetch
-    Retrieve the requested object from the backend.  Control will
-    eventually pass to vcl_fetch.
+    Retrieve the requested object from the backend. Control will
+    eventually pass to `vcl_backend_response`.
 
 vcl_hash
 ~~~~~~~~
 
-Called after vcl_recv to create a hash value for the request. This is
+Called after `vcl_recv` to create a hash value for the request. This is
 used as a key to look up the object in Varnish.
 
   lookup
@@ -134,20 +137,21 @@ used as a key to look up the object in Varnish.
 vcl_purge
 ~~~~~~~~~
 
-Called after the purge has been executed and all it's variant have been evited. 
+Called after the purge has been executed and all its variants have been evicted.
 
   synth
     Produce a response.
 
 
-
 vcl_deliver
 ~~~~~~~~~~~
 
 Called before a cached object is delivered to the client.
 
-The vcl_deliver subroutine may terminate with one of the following
-keywords:
+The ``vcl_deliver`` subroutine may terminate with calling ``return()`` with one
+of the following keywords:
+
+.. XXX: Should perhaps be return as above? benc
 
   deliver
     Deliver the object to the client.
@@ -164,35 +168,46 @@ vcl_backend_fetch
 Called before sending the backend request. In this subroutine you
 typically alter the request before it gets to the backend.
 
+.. XXX: Missing terminate..keywords sentence? benc
+
   fetch
     Fetch the object from the backend.
 
   abandon
    Abandon the backend request and generate an error.
-  
+
 
 vcl_backend_response
 ~~~~~~~~~~~~~~~~~~~~
 
-Called after an response has been successfully retrieved from the
-backend. The response is availble as beresp. Note that Varnish might
+Called after a response has been successfully retrieved from the
+backend. The response is available as `beresp`. 
+
+.. XXX: beresp comes out of the blue here. maybe a short description? benc
+
+Note that Varnish might
 not be talking to an actual client, so operations that require a
-client to be present are not allowed. Specifically there is no req
-object and restarts are not allowed.
+client to be present are not allowed. Specifically there is no `req`
+object and restarts are not allowed.
+
+.. XXX: I do not follow sentence above. benc
 
-The vcl_backend_response subroutine may terminate with calling return() with one
+The `vcl_backend_response` subroutine may terminate with calling ``return()`` with one
 of the following keywords:
 
   deliver
     Possibly insert the object into the cache, then deliver it to the
-    Control will eventually pass to vcl_deliver. Caching is dependant
-    on beresp.cacheable.
+    client. Control will eventually pass to `vcl_deliver`. Caching is dependent
+    on 'beresp.cacheable'.
+
+.. XXX:A parameter? that is set how? benc
+    
 
   error(error code, reason)
     Return the specified error code to the client and abandon the request.
 
   retry
-    Retry the backend transaction. Increases the retries counter. If the number
+    Retry the backend transaction. Increases the `retries` counter. If the number
     of retries is higher than *max_retries* Varnish emits a guru meditation
     error.
 
@@ -201,11 +216,13 @@ vcl_backend_error
 
 This subroutine is called if we fail the backend fetch. 
 
+.. XXX:Missing the terminate return structure? benc
+
   deliver
     Deliver the error.
 
   retry
-    Retry the backend transaction. Increases the retries counter. If the number
+    Retry the backend transaction. Increases the `retries` counter. If the number
     of retries is higher than *max_retries* Varnish emits a guru meditation
     error.
 
@@ -213,10 +230,12 @@ This subroutine is called if we fail the backend fetch.
 vcl_backend_error
 ~~~~~~~~~~~~~~~~~
 
+.. XXX: Same name as section above? benc
+
 Called when we hit an error, either explicitly or implicitly due to
 backend or internal errors.
 
-The vcl_backend_error subroutine may terminate by calling return with one of
+The `vcl_backend_error` subroutine may terminate by calling ``return()`` with one of
 the following keywords:
 
   deliver
@@ -234,7 +253,9 @@ vcl_init
 Called when VCL is loaded, before any requests pass through it.
 Typically used to initialize VMODs.
 
-  return() values:
+.. XXX: Missing the terminate return structure? benc
+
+  ``return()`` values:
 
   ok
     Normal return, VCL continues loading.
@@ -246,7 +267,13 @@ vcl_fini
 Called when VCL is discarded only after all requests have exited the VCL.
 Typically used to clean up VMODs.
 
-  return() values:
+
+.. XXX: Missing the terminate return structure? benc
+
+  ``return()`` values:
 
   ok
     Normal return, VCL will be discarded.
+
+
+.. XXX: Maybe end here with the detailed flowchart from the book together with a reference to the book? benc
diff --git a/doc/sphinx/users-guide/vcl-example-acls.rst b/doc/sphinx/users-guide/vcl-example-acls.rst
index b460bbe..45afaa2 100644
--- a/doc/sphinx/users-guide/vcl-example-acls.rst
+++ b/doc/sphinx/users-guide/vcl-example-acls.rst
@@ -2,7 +2,7 @@
 ACLs
 ~~~~
 
-You create a named access control list with the *acl* keyword. You can match
+You create a named access control list with the ``acl`` keyword. You can match
-the IP address of the client against an ACL with the match operator.::
+the IP address of the client against an ACL with the match operator::
 
   # Who is allowed to purge....
@@ -21,4 +21,4 @@ the IP address of the client against an ACL with the match operator.::
       }
     } 
   }
-  
+ 
diff --git a/doc/sphinx/users-guide/vcl-example-manipulating-headers.rst b/doc/sphinx/users-guide/vcl-example-manipulating-headers.rst
index 35dcb1e..7e65663 100644
--- a/doc/sphinx/users-guide/vcl-example-manipulating-headers.rst
+++ b/doc/sphinx/users-guide/vcl-example-manipulating-headers.rst
@@ -4,7 +4,7 @@
 Manipulating request headers in VCL
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Lets say we want to remove the cookie for all objects in the /images
+Let's say we want to remove the cookie for all objects in the `/images`
 directory of our web server::
 
   sub vcl_recv {
diff --git a/doc/sphinx/users-guide/vcl-example-manipulating-responses.rst b/doc/sphinx/users-guide/vcl-example-manipulating-responses.rst
index b5724b4..6719524 100644
--- a/doc/sphinx/users-guide/vcl-example-manipulating-responses.rst
+++ b/doc/sphinx/users-guide/vcl-example-manipulating-responses.rst
@@ -15,5 +15,5 @@ matches certain criteria::
 
 
 
-We also remove any Set-Cookie headers in order to avoid a hit-for-pass
+We also remove any Set-Cookie headers in order to avoid a `hit-for-pass`
-object to be created. See :ref:`user-guide-vcl_actions`.
+object from being created. See :ref:`user-guide-vcl_actions`.
diff --git a/doc/sphinx/users-guide/vcl-example-websockets.rst b/doc/sphinx/users-guide/vcl-example-websockets.rst
index 7217b88..609fe61 100644
--- a/doc/sphinx/users-guide/vcl-example-websockets.rst
+++ b/doc/sphinx/users-guide/vcl-example-websockets.rst
@@ -18,3 +18,4 @@ VCL config to do so::
          }
     }
 
+.. XXX: Pipe it? maybe a bit more explanation here? benc
diff --git a/doc/sphinx/users-guide/vcl-grace.rst b/doc/sphinx/users-guide/vcl-grace.rst
index f318aa3..afc92bd 100644
--- a/doc/sphinx/users-guide/vcl-grace.rst
+++ b/doc/sphinx/users-guide/vcl-grace.rst
@@ -7,13 +7,12 @@ A key feature of Varnish is its ability to shield you from misbehaving
 web- and application servers.
 
 
-
 Grace mode
 ~~~~~~~~~~
 
 When several clients are requesting the same page Varnish will send
 one request to the backend and place the others on hold while fetching
-one copy from the back end. In some products this is called request
+one copy from the backend. In some products this is called request
 coalescing and Varnish does this automatically.
 
 If you are serving thousands of hits per second the queue of waiting
diff --git a/doc/sphinx/users-guide/vcl-hashing.rst b/doc/sphinx/users-guide/vcl-hashing.rst
index fdbe37f..83758ad 100644
--- a/doc/sphinx/users-guide/vcl-hashing.rst
+++ b/doc/sphinx/users-guide/vcl-hashing.rst
@@ -1,12 +1,12 @@
 Hashing
 -------
 
-Internally, when Varnish stores content in it's store it uses a hash
+Internally, when Varnish stores content in the cache, it stores the object together with a hash
 key to find the object again. In the default setup this key is
 calculated based on the content of the *Host* header or the IP address
 of the server and the URL.
 
-Behold the default vcl::
+Behold the `default vcl`::
 
  sub vcl_hash {
      hash_data(req.url);
@@ -18,10 +18,10 @@ Behold the default vcl::
      return (hash);
  }
 
-As you can see it first chucks in req.url then req.http.host if it
+As you can see it first hashes `req.url` and then `req.http.host` if it
 exists. It is worth pointing out that Varnish doesn't lowercase the
-hostname or the URL before hashing it so in theory having Varnish.org/
-and varnish.org/ would result in different cache entries. Browers
+hostname or the URL before hashing it so in theory having "Varnish.org/"
+and "varnish.org/" would result in different cache entries. Browsers
 however, tend to lowercase hostnames.
 
 You can change what goes into the hash. This way you can make Varnish
@@ -33,11 +33,11 @@ based on where their IP address is located. You would need some Vmod
 to get a country code and then put it into the hash. It might look
 like this.
 
-In vcl_recv::
+In `vcl_recv`::
 
   set req.http.X-Country-Code = geoip.lookup(client.ip);
 
-And then add a vcl_hash::
+And then add a `vcl_hash`::
 
  sub vcl_hash {
    hash_data(req.http.X-Country-Code);
@@ -45,6 +45,6 @@ And then add a vcl_hash::
 
 As the default VCL will take care of adding the host and URL to the
 hash we don't have to do anything else. Be careful calling
-return(hash) as this will abort the execution of the default VCL and
-thereby you can end up with a Varnish that will return data based on
+``return(hash)`` as this will abort the execution of the default VCL and
+Varnish can end up returning data based on
 more or less random inputs.
diff --git a/doc/sphinx/users-guide/vcl-inline-c.rst b/doc/sphinx/users-guide/vcl-inline-c.rst
index 7c88cf9..5cc0ead 100644
--- a/doc/sphinx/users-guide/vcl-inline-c.rst
+++ b/doc/sphinx/users-guide/vcl-inline-c.rst
@@ -10,7 +10,7 @@ You can use *in-line C* to extend Varnish. Please note that you can
 seriously mess up Varnish this way. The C code runs within the Varnish
 Cache process so if your code generates a segfault the cache will crash.
 
-One of the first uses I saw of In-line C was logging to syslog.::
+One of the first uses of in-line C was logging to `syslog`::
 
         # The include statements must be outside the subroutines.
         C{
diff --git a/doc/sphinx/users-guide/vcl-syntax.rst b/doc/sphinx/users-guide/vcl-syntax.rst
index 889ea97..4615291 100644
--- a/doc/sphinx/users-guide/vcl-syntax.rst
+++ b/doc/sphinx/users-guide/vcl-syntax.rst
@@ -9,7 +9,7 @@ preferences.
 
 Note that VCL doesn't contain any loops or jump statements.
 
-This document gives an outline of the most important parts of the
+This section provides an outline of the most important parts of the
 syntax. For a full documentation of VCL syntax please see
 :ref:`reference-vcl` in the reference.
 
@@ -18,7 +18,7 @@ Strings
 
 Basic strings are enclosed in " ... ", and may not contain newlines.
 
-Backslash is not special, so for instance in regsub() you do not need
+Backslash is not special, so for instance in `regsub()` you do not need
 to do the "count-the-backslashes" polka:::
 
   regsub("barf", "(b)(a)(r)(f)", "\4\3\2p") -> "frap"
@@ -43,9 +43,9 @@ which can later be used to match client addresses::
        }
 
 If an ACL entry specifies a host name which Varnish is unable to
-resolve, it will match any address it is compared to.  Consequently,
+resolve, it will match any address it is compared to. Consequently,
 if it is preceded by a negation mark, it will reject any address it is
-compared to, which may not be what you intended.  If the entry is
+compared to, which may not be what you intended. If the entry is
 enclosed in parentheses, however, it will simply be ignored.
 
 To match an IP address against an ACL, simply use the match operator::
@@ -93,10 +93,13 @@ A subroutine is used to group code for legibility or reusability:
 
 Subroutines in VCL do not take arguments, nor do they return values.
 
-To call a subroutine, use the call keyword followed by the subroutine's name:
+To call a subroutine, use the ``call`` keyword followed by the subroutine's name::
 
-call pipe_if_local;
+    call pipe_if_local;
 
 Varnish has quite a few built in subroutines that are called for each
-transaction as it flows through Varnish. These builtin subroutines are all named vcl_*. Your own subroutines cannot start their name with vcl_.
+transaction as it flows through Varnish. These builtin subroutines are all
+named ``vcl_*``. Your own subroutines cannot have names starting with ``vcl_``.
+
+.. XXX:looks as bit funky as red text? benc
+
 See :ref:`vcl-built-in-subs`.
diff --git a/doc/sphinx/users-guide/vcl-variables.rst b/doc/sphinx/users-guide/vcl-variables.rst
index 20fcd4e..88ab8f4 100644
--- a/doc/sphinx/users-guide/vcl-variables.rst
+++ b/doc/sphinx/users-guide/vcl-variables.rst
@@ -1,27 +1,33 @@
 
-Requests, responses and objects
+Requests and responses as objects
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-In VCL, there several important objects.
+.. XXX: refactored headline. benc
+
+In VCL there are several important objects that you need to be aware of. These objects can be accessed and manipulated in your VCL code.
 
 
 *req*
- The request object. When Varnish has received the request the req object is 
- created and populated. Most of the work you do in vcl_recv you 
- do on or with the req object.
+ The request object. When Varnish has received the request the `req` object is 
+ created and populated. Most of the work you do in `vcl_recv` you 
+ do on or with the `req` object.
 
 *bereq*
- The backend request object. Varnish contructs this before sending it to the
+ The backend request object. Varnish constructs this before sending it to the
- backend. It is based on the req object.
+ backend. It is based on the `req` object.
+
+.. XXX:in what way? benc
 
 *beresp*
  The backend response object. It contains the headers of the object 
- coming from the backend. If you want to modify the reponse coming from the
+ coming from the backend. If you want to modify the response coming from the
- server you modify this object in vcl_backend_reponse. 
+ server you modify this object in `vcl_backend_response`.
 
 *resp*
  The HTTP response right before it is delivered to the client. It is
- typically modified in vcl_deliver.
+ typically modified in `vcl_deliver`.
 
 *obj* 
  The object as it is stored in cache. Mostly read only.
+
+.. XXX: What object? the current request? benc
+
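A small sketch of which object is touched in which subroutine (illustrative only; the `X-Example` header name is made up):

```vcl
sub vcl_recv {
    # Client request: work on req.
    set req.http.X-Example = "1";
}

sub vcl_backend_response {
    # Backend response: work on beresp.
    set beresp.http.X-Example = "2";
}

sub vcl_deliver {
    # Response to the client: work on resp.
    unset resp.http.X-Example;
}
```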
diff --git a/doc/sphinx/users-guide/vcl.rst b/doc/sphinx/users-guide/vcl.rst
index ad65403..8585241 100644
--- a/doc/sphinx/users-guide/vcl.rst
+++ b/doc/sphinx/users-guide/vcl.rst
@@ -3,12 +3,12 @@
 VCL - Varnish Configuration Language
 ------------------------------------
 
-This section is about getting Varnish to do what you want to
+This section covers how to tell Varnish what to do with
 your HTTP traffic, using the Varnish Configuration Language (VCL).
 
 Varnish has a great configuration system. Most other systems use
 configuration directives, where you basically turn on and off lots of
-switches. Varnish uses a domain specific language called VCL for this.
+switches. We have instead chosen to use a domain-specific language called VCL for this.
 
 Every inbound request flows through Varnish and you can influence how
 the request is being handled by altering the VCL code. You can direct
@@ -26,7 +26,7 @@ request, another when files are fetched from the backend server.
 
 If you don't call an action in your subroutine and it reaches the end
 Varnish will execute some built-in VCL code. You will see this VCL
-code commented out in builtin.vcl that ships with Varnish Cache.
+code commented out in the file `builtin.vcl` that ships with Varnish Cache.
 
 .. _users-guide-vcl_fetch_actions:
 
@@ -43,5 +43,7 @@ code commented out in builtin.vcl that ships with Varnish Cache.
    vcl-inline-c
    vcl-examples
    websockets
+.. XXX: websockets seems to be missing? does it refer to the last sample in the vcl index if so already included. benc
+
    devicedetection
 
diff --git a/doc/sphinx/whats-new/changes.rst b/doc/sphinx/whats-new/changes.rst
index cf027c8..274c9f5 100644
--- a/doc/sphinx/whats-new/changes.rst
+++ b/doc/sphinx/whats-new/changes.rst
@@ -3,14 +3,15 @@
 Changes in Varnish 4
 ====================
 
-Varnish 4 is quite an extensive update over Varnish 3, with some very big improvements to central parts of varnish.
+Varnish 4 is quite an extensive update to Varnish 3, with some very big improvements to central parts of Varnish.
 
 Client/backend split
 --------------------
 In the past, Varnish has fetched the content from the backend in the same
-thread as the client request. The client and backend code has now been split,
-allowing for some much requested improvements.
-This split allows varnish to refresh content in the background while serving
+thread as the client request. In Varnish 4 we have split the client and backend code into separate threads, allowing for some much-requested improvements.
+This split allows Varnish to refresh content in the background while serving
 stale content quickly to the client.
 
-This split has also necessitated a change of the VCL-functions, in particular functionality has moved from the old vcl_fetch method to the two new methods vcl_backend_fetch and vcl_backend_response.
+This split has also necessitated a change to the VCL subroutines; in particular, functionality has moved from the old `vcl_fetch` subroutine to the two new subroutines `vcl_backend_fetch` and `vcl_backend_response`.
+
+.. XXX:Here would an updated flow-diagram over functions be great. benc
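The `vcl_fetch` split mentioned above means a 3.x snippet migrates roughly like this (a sketch; the 5-minute TTL is an arbitrary example):

```vcl
# Varnish 3.x:
#   sub vcl_fetch { set beresp.ttl = 5m; }

# Varnish 4.x equivalent:
sub vcl_backend_response {
    set beresp.ttl = 5m;
}
```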
diff --git a/doc/sphinx/whats-new/index.rst b/doc/sphinx/whats-new/index.rst
index a1d1d73..c9d9130 100644
--- a/doc/sphinx/whats-new/index.rst
+++ b/doc/sphinx/whats-new/index.rst
@@ -1,14 +1,16 @@
 .. _whats-new-index:
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%
-What's new for Varnish 4.0
+What's new in Varnish 4.0
 %%%%%%%%%%%%%%%%%%%%%%%%%%
 
-This document describes the changes that have been made for Varnish 4. The
-first section will describe the overarching changes that have gone into
-Varnish, while the second section describes what changes you need to make to
-your configuration as well as any changes in behaviour that you need to take
-into consideration while upgrading.
+This section describes the changes that have been made for Varnish 4. The
+first subsection describes overarching changes that have gone into
+Varnish 4.0, while the second subsection describes changes you need to make to
+your current configuration (assuming you are on Varnish 3.x), as well as any changes in behaviour you need to take
+into consideration when upgrading.
+
+.. XXX:Heavy change of meaning above! benc
 
 .. toctree::
    :maxdepth: 2
diff --git a/doc/sphinx/whats-new/upgrading.rst b/doc/sphinx/whats-new/upgrading.rst
index 5f52318..2921835 100644
--- a/doc/sphinx/whats-new/upgrading.rst
+++ b/doc/sphinx/whats-new/upgrading.rst
@@ -37,7 +37,7 @@ Since the client director was already a special case of the hash director, it ha
 
 error() is now a return value
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-You must now explicitly return an error::
+You must explicitly return an error::
 
 	return(error(999, "Response"));
 
@@ -76,11 +76,11 @@ vcl_recv should return(hash) instead of lookup now
 
 req.* not available in vcl_backend_response
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-req.* used to be available in vcl_fetch, but after the split of functionality, you only have bereq.* in vcl_backend_response.
+`req.*` used to be available in `vcl_fetch`, but after the split of functionality, you only have `bereq.*` in `vcl_backend_response`.
 
 vcl_* reserved
 ~~~~~~~~~~~~~~
-Your own subs cannot be named vcl_* anymore. That is reserved for builtin subs.
+Custom subroutines can no longer be named `vcl_*`; this namespace is reserved for builtin subroutines.
 
 req.backend.healthy replaced by std.healthy(req.backend)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
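A sketch of the replacement named in the heading above (assumes the `std` VMOD is imported; falling back to pass is an arbitrary choice for illustration):

```vcl
import std;

sub vcl_recv {
    # Formerly: if (!req.backend.healthy) { ... }
    if (!std.healthy(req.backend)) {
        return (pass);
    }
}
```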


