On Jan 18, 2010, at 2:16 PM, pub crawler wrote:

>> Most kernels cache recently-accessed files in RAM, and so common web
>> servers such as Apache can already serve up static objects very quickly
>> if they are located in the buffer cache. (Varnish's apparent speed is
>> largely based on the same phenomenon.) If the data is already cached in
>> the origin server's buffer caches, then interposing an additional
>> caching layer may actually be somewhat harmful because it will add some
>> additional latency.
>
> So far Varnish is performing very well for us as a web server for these
> cached objects. The connection time for an item out of Varnish is
> noticeably faster than with the web servers we have used, even where the
> items were already cached. We are mostly using third-party tools like
> webpagetest.org to look at the item times.
>
> Varnish is useful as a layer in several places in a cluster, and in a few
> more when running distributed geographic clusters. Aside from Nginx or
> something similarly optimized, I am fairly certain Varnish serves cached
> objects faster out of the box than the web servers we use. I'll
> eventually find some time to test it in our environment.

I have a hard time believing that any difference in the total response time
of a cached static object between Varnish and a general-purpose web server
will be statistically significant, especially considering typical Internet
network latency. If there is any difference, it should be well under a
millisecond.

--Michael
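
One way to check would be to time repeated requests for the same object
against both servers on the local network, so that Internet latency does not
swamp the comparison. Below is a minimal sketch under assumed settings
(Varnish on localhost:6081, the origin web server on localhost:8080, both
serving the same small static object at a hypothetical path); adjust ports
and paths to your own environment:

    #!/usr/bin/env python3
    """Rough comparison of response times for the same cached static object
    served by two local endpoints (hypothetical setup; adjust to taste)."""

    import http.client
    import statistics
    import time

    # Assumed endpoints: Varnish and the origin web server on localhost.
    ENDPOINTS = {
        "varnish": ("127.0.0.1", 6081),
        "origin":  ("127.0.0.1", 8080),
    }
    PATH = "/static/logo.png"   # hypothetical small object present on both
    SAMPLES = 200

    def time_request(host, port, path):
        """Seconds from opening the connection to reading the full body."""
        start = time.perf_counter()
        conn = http.client.HTTPConnection(host, port, timeout=5)
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()
        conn.close()
        return time.perf_counter() - start

    for name, (host, port) in ENDPOINTS.items():
        # Warm up so both the Varnish cache and the kernel buffer cache are hot.
        for _ in range(10):
            time_request(host, port, PATH)
        times = sorted(time_request(host, port, PATH) for _ in range(SAMPLES))
        print(f"{name:8s} median {statistics.median(times) * 1000:.3f} ms, "
              f"p95 {times[int(SAMPLES * 0.95)] * 1000:.3f} ms")

With both caches warm, the medians should show whether there is any
measurable per-request gap once network latency is taken out of the picture.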