[master] f9784fe copy-edits https://github.com/varnish/Varnish-Cache/pull/13

Per Buer perbu at varnish-cache.org
Wed Jun 12 15:11:58 CEST 2013


commit f9784fea2851a32e189955018610ce07e238d6e1
Author: Per Buer <perbu at varnish-software.com>
Date:   Wed Jun 12 15:00:21 2013 +0200

    copy-edits
    https://github.com/varnish/Varnish-Cache/pull/13
    
    by xiongchiamiov

diff --git a/doc/sphinx/phk/spdy.rst b/doc/sphinx/phk/spdy.rst
index 68bf000..7cc1f4f 100644
--- a/doc/sphinx/phk/spdy.rst
+++ b/doc/sphinx/phk/spdy.rst
@@ -8,7 +8,7 @@ It's dawning on me that I'm sort of the hipster of hipsters, in the sense
 that I tend to do things far before other people do, but totally fail to
 communicate what's going on out there in the future, and thus by the
 time the "real hipsters" catch up, I'm already somewhere different and
-more insteresting.
+more interesting.
 
 My one lucky break was the `bikeshed email <http://bikeshed.org/>`_ where
 I actually did sit down and compose some of my thoughts, thus firmly
@@ -27,18 +27,18 @@ The evolution of Varnish
 When we started out, seven years ago, our only and entire goal was to build
 a server-side cache better than squid.  That we did.
 
-Since then we have added stuff to Varnish, ESI:includes, gzip support,
-VMODS and I'm staring at streaming and conditional backend fetches right
+Since then we have added stuff to Varnish (ESI:includes, gzip support,
+VMODS) and I'm staring at streaming and conditional backend fetches right
 now.
 
 Varnish is a bit more than a web-cache now, but it is still, basically,
-a layer of polish you put in front of your webserver to get it too
+a layer of polish you put in front of your webserver to get it to
 look and work better.
 
-Googles experiments with SPDY have forced a HTTP/2.0 effort into motion,
+Google's experiments with SPDY have forced a HTTP/2.0 effort into motion,
 but if past performance is any indication, that is not something we have
-to really worry about for a number of years, the IETF WG has still to
-manage to "clarify" RFC2616 which defines HTTP/1.1 and to say there
+to really worry about for a number of years. The IETF WG has still to
+manage to "clarify" RFC2616 which defines HTTP/1.1, and to say there
 is anything even remotely resembling consensus behind SPDY would be a
 downright lie.
 
@@ -46,20 +46,20 @@ RFC2616 is from June 1999, which, to me, means that we should look at
 2035 when we design HTTP/2.0, and predicting things is well known to
 be hard, in particular with respect to the future.
 
-So what's a Varnish architect to do ?
+So what's a Varnish architect to do?
 
 What I did this summer vacation, was to think a lot about how Varnish
 can be architected to cope with the kind of changes SPDY and maybe HTTP/2.0
-drag in:  Pipelining, multiplexing etc, without committing us to one
+drag in:  Pipelining, multiplexing, etc., without committing us to one
 particular path of science fiction about life in 2035.
 
-Profound insights often sound incredibly simplistic bordering
+Profound insights often sound incredibly simplistic, bordering
 trivial, until you consider the full ramifications.  The implementation
-of "Do Not Kill" is in current law is surprisingly voluminous.  (If
-you don't think so, you probably forgot to #include the Wienna
+of "Do Not Kill" in current law is surprisingly voluminous.  (If
+you don't think so, you probably forgot to #include the Vienna
 Treaty and the convention about chemical and biological weapons.)
 
-So my insight about Varnish, that it has to become a socket-wrench like
+So my insight about Varnish, that it has to become a socket-wrench-like
 toolchest for doing things with HTTP traffic, will probably elicit a lot
 of "duh!" reactions, until people, including me, understand the 
 ramifications more fully.
@@ -77,7 +77,7 @@ of finite sized data elements.
 
 That is not how the future looks.
 
-For instance one of the things SPDY have tried out is "server push",
+For instance one of the things SPDY has tried out is "server push",
 where you fetch index.html and the webserver says "you'll also want
 main.css and cat.gif then" and pushes those objects on the client,
 to save the round-trip times wasted waiting for the client to ask
@@ -87,7 +87,7 @@ Today, something like that is impossible in Varnish, since objects
 are independent and you can only look up one at a time.
 
 I already can hear some of you amazing VCL wizards say "Well,
-if you inline-C grab a refcount, then restart and ..." but lets
+if you inline-C grab a refcount, then restart and ..." but let's
 be honest, that's not how it should look.
 
 You should be able to do something like::
@@ -107,21 +107,21 @@ And doing that is not really *that* hard, I think.  We just need
 to keep track of all the objects we instantiate and make sure they
 disappear and die when nobody is using them any more.
 
-But a lot of the assumptions we made back in 2006 are no longer
+A lot of the assumptions we made back in 2006 are no longer
 valid under such an architecture, but those same assumptions are
 what gives Varnish such astonishing performance, so just replacing
 them with standard CS-textbook solutions like "garbage collection"
-would make Varnish loose a lot of its lustre.
+would make Varnish lose a lot of its lustre.
 
 As some of you know, there is a lot of modularity hidden inside
-Varnish but not quite released for public use in VCL, much of what
-is going to happen, will be polishing up and documenting that
+Varnish but not quite released for public use in VCL. Much of what
+is going to happen will be polishing up and documenting that
 modularity and releasing it for you guys to have fun with, so it
 is not like we are starting from scratch or anything.
 
 But some of that modularity stands on foundations which are no longer
-firm, for instance that the initiating request exists for the
-full duration of a backend fetch.
+firm; for instance, the initiating request exists for the full duration of
+a backend fetch.
 
 Those will take some work to fix.
 
diff --git a/doc/sphinx/phk/varnish_does_not_hash.rst b/doc/sphinx/phk/varnish_does_not_hash.rst
index 8393c87..e03f078 100644
--- a/doc/sphinx/phk/varnish_does_not_hash.rst
+++ b/doc/sphinx/phk/varnish_does_not_hash.rst
@@ -14,7 +14,7 @@ Varnish does not hash, at least not by default, and
 even if it does, it's still as immune to the attacks as can be.
 
 To understand what is going on, I have to introduce a concept from
-Shannons information theory: "entropy."
+Shannon's information theory: "entropy."
 
 Entropy is hard to explain, and according to legend, that is exactly
 why Shannon recycled that term from thermodynamics.
@@ -35,10 +35,10 @@ storing the objects in an array indexed by that key.
 
 Typically, but not always, the key is a string and the index is a
 (smallish) integer, and the job of the hash-function is to squeeze
-the key into the integer, without loosing any of the entropy.
+the key into the integer, without losing any of the entropy.
 
 Needless to say, the more entropy you have to begin with, the more
-of it you can afford to loose, and loose some you almost invariably
+of it you can afford to lose, and lose some you almost invariably
 will.
 
 There are two families of hash-functions, the fast ones, and the good
@@ -64,12 +64,12 @@ What Varnish Does
 -----------------
 
 The way to avoid having hash-collisions is to not use a hash:  Use a
-tree instead, there every object has its own place and there are no
+tree instead. There every object has its own place and there are no
 collisions.
 
 Varnish does that, but with a twist.
 
-The "keys" in varnish can be very long, by default they consist of::
+The "keys" in Varnish can be very long; by default they consist of::
 
 	sub vcl_hash {
 	    hash_data(req.url);
@@ -98,7 +98,7 @@ each object in the far too common case seen above.
 But furthermore, we want the tree to be very fast to do lookups in,
 preferably it should be lockless for lookups, and that means that
 we cannot (realistically) use any of the "smart" trees which
-automatically balance themselves etc.
+automatically balance themselves, etc.
 
 You (generally) don't need a "smart" tree if your keys look
 like random data in the order they arrive, but we can pretty
@@ -109,8 +109,8 @@ But we can make the keys look random, and make them small and fixed
 size at the same time, and the perfect functions designed for just
 that task are the "good" hash-functions, the cryptographic ones.
 
-So what Varnish does is "key-compression":  All the strings hash_data()
-are fed, are pushed through a cryptographic hash algorithm called
+So what Varnish does is "key-compression":  All the strings fed to
+hash_data() are pushed through a cryptographic hash algorithm called
 SHA256, which, as the name says, always spits out 256 bits (= 32
 bytes), no matter how many bits you feed it.
 
@@ -134,8 +134,8 @@ That should be random enough.
 
 But the key-compression does introduce a risk of collisions, since
 not even SHA256 can guarantee different outputs for all possible
-inputs:  Try pushing all the possible 33 bytes long files through
-SHA256 and sooner or later you will get collisions.
+inputs:  Try pushing all the possible 33-byte files through SHA256
+and sooner or later you will get collisions.
 
 The risk of collision is very small however, and I can all but
 promise you, that you will be fully offset in fame and money for
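
A quick aside on the "key compression" described in varnish_does_not_hash.rst
above: the point is simply that every string fed to hash_data() is pushed
through SHA256, so each object ends up with a fixed 32-byte, random-looking
key that a plain (lockless, non-self-balancing) tree can index directly.
Here is a minimal sketch in Python, assuming an illustrative '#' separator
and just the req.url and Host inputs from the default vcl_hash; this is not
Varnish's actual internal framing, only the idea::

    import hashlib

    def object_key(url: str, host: str) -> bytes:
        """Compress variable-length lookup strings into a fixed 32-byte key.

        Loosely mirrors the default vcl_hash inputs, hash_data(req.url)
        plus the Host header.  The '#' separator is a made-up delimiter so
        that ("ab", "c") and ("a", "bc") cannot yield the same key.
        """
        h = hashlib.sha256()
        for part in (url, host):
            h.update(part.encode("iso-8859-1"))
            h.update(b"#")
        return h.digest()      # always 32 bytes, whatever the input length

    # Two requests share a cache object only if their compressed keys match.
    k1 = object_key("/index.html", "www.example.com")
    k2 = object_key("/index.html", "img.example.com")
    assert k1 != k2

Because the compressed keys already look like random data, the lookup tree
needs no "smart" self-balancing, which is exactly the property the text
above is after.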


