Varnish for large Videofiles with Range requests

Hugo Cisneiros (Eitch) hugo.cisneiros at gmail.com
Mon Aug 12 04:07:50 CEST 2013


On Sun, Aug 11, 2013 at 5:21 AM, Andre Lohmann <lohmann.andre at gmail.com> wrote:
> My idea is that the Varnish instances are hit when a file is requested. If the file was already cached by the hit instance, it is delivered from the cache; otherwise it is delivered from the origin and cached for subsequent requests.
>
> The problem I see here is with the range requests. If I understand correctly, when the first (range) request hits Varnish before the file is cached, that request will see high latency, as the file first needs to be fully cached. Is it possible to pipe all range requests to the origin until the file is fully cached?

Did you try the streaming fork (usually called "s" fork, like 3.0.2s)?
It works very well and solves your problem.

Information: https://www.varnish-software.com/blog/http-streaming-varnish
(part "The new and improved streaming implementation")
To test it (I only found the 3.0.2 build):
http://repo.varnish-cache.org/test/3.0.2+streaming/

The original varnish trunk can't do this.
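If you stay on stock Varnish and want the behavior you describe, a minimal VCL sketch (assuming Varnish 3.x syntax and your default backend) would be to pipe any request carrying a Range header straight to the origin, so the client is never blocked waiting for the full object to be fetched into cache:

```vcl
# Sketch only, for Varnish 3.x: send Range requests directly to the
# origin via pipe instead of waiting for the full object to cache.
sub vcl_recv {
    if (req.http.Range) {
        return (pipe);
    }
}

sub vcl_pipe {
    # In pipe mode the request headers are copied to bereq by default;
    # setting Range explicitly just makes the intent clear.
    set bereq.http.Range = req.http.Range;
    return (pipe);
}
```

Note the trade-off: piped requests are never cached, so the object only enters the cache once a non-Range request (or a prefetch) fetches it in full.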

> For highly frequented files, one option is to prefetch the file so it is cached before the first requests hit it. But there are also archived files, which are requested far less often and would fill up the cache if I had to cache them all too.
>
> My idea was that highly frequented files stay longer in the cache, as they are requested more often, while older archived files are dropped from the cache, as they are requested much less.

I don't know if this can be done easily, since you can't store
variables across many requests (unless there's a vmod I don't know
about). If anyone knows, I would like to know too :)

But you can process the log files, check which files are requested
most frequently, and prefetch them the same way as "warming the cache"
(look for it in the documentation). For example: collect the most
accessed files into a text file and warm them using a request header
that sets req.hash_always_miss in vcl_recv before the TTL expires.
This may work, and those files should always stay in the cache.
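A minimal sketch of that warming hook, assuming Varnish 3.x and a hypothetical X-Refresh header sent only by your warming job:

```vcl
# Sketch only: when the warming script sends the (made-up) X-Refresh
# header, force a cache miss so the object is re-fetched from the
# origin and its TTL is renewed before it expires.
sub vcl_recv {
    if (req.http.X-Refresh) {
        set req.hash_always_miss = true;
    }
}
```

The warming job would then loop over your "most accessed" list and request each URL with something like `curl -s -o /dev/null -H "X-Refresh: 1" http://cache.example.com/video.mp4` (URL is illustrative) on a schedule shorter than the TTL.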

-- 
[]'s
Hugo
www.devin.com.br


