[Fwd: Re: My random thoughts]

Anders Berg andersb at vgnett.no
Thu Feb 16 01:45:54 CET 2006

Thanks for the reply, Poul.

One thought that keeps coming back to me is the need for a really
well-documented, well-discussed, and well-tested HTTP header strategy. It
is crucial, and I believe we will spend much of our time next week, and
much more later, on this. I do not think it is possible to cover all
aspects in the spec alone. This is maybe stating the obvious, but I would
rather state it anyway so we all have time to ponder it.

>>As Poul later comments, Squid is slow and dirty. Let's try to avoid it.
>>I am fine with fancy block storage, and I am tempted to suggest:
>>Berkeley DB.
>>I have always pictured Varnish with a Berkeley DB backend. Why? I _think_
>>it is fast (only website info to go on here).
> We may want to use DB to hash urls into object identity, but I doubt we
> will be putting the objects themselves into DB.

Yes. Storing the objects themselves _could_ work fine for a website with
ASCII-text HTML pages and small JPEGs and GIFs, but anybody delivering
"large" files and binaries would curse it. So I see the DB's usefulness
for the objects themselves as rather limited.
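To make the split concrete, here is a minimal sketch of what Poul
describes: only the URL-to-object-identity mapping lives in the database,
while the object bodies stay in ordinary block storage. This is purely
illustrative; the stdlib dbm module stands in for Berkeley DB, a plain
directory stands in for the block store, and all names are invented, not
anything from Varnish:

```python
# Hedged sketch: URL -> object identity in a small key/value store
# (stdlib dbm, standing in for Berkeley DB); object bodies in block
# storage (a plain directory in this toy version).
import dbm
import hashlib
import os
import tempfile

store_dir = tempfile.mkdtemp()  # stand-in for the block store

def object_id(url: str) -> str:
    # Derive a stable identity for the cached object from its URL.
    return hashlib.sha256(url.encode()).hexdigest()

def put(index, url: str, body: bytes) -> None:
    oid = object_id(url)
    index[url] = oid  # URL -> identity goes into the DB ...
    with open(os.path.join(store_dir, oid), "wb") as f:
        f.write(body)  # ... the body goes to block storage

def get(index, url: str):
    oid = index.get(url)
    if oid is None:
        return None
    with open(os.path.join(store_dir, oid.decode()), "rb") as f:
        return f.read()

with dbm.open(os.path.join(store_dir, "index"), "c") as index:
    put(index, "http://example.com/", b"<html>hello</html>")
    assert get(index, "http://example.com/") == b"<html>hello</html>"
```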

>>it's block storage, and wildcard purge could potentially be as easy as:
>>delete from table where URL like '%bye-bye%';
>>Another thing I am just gonna base on my wildest fantasies: could we use
>>the Berkeley DB replication to make a cache up-to-date after downtime?
>>Would be fun, wouldn't it? :)
> I fear it would be expensive.

Considering that the objects themselves would be kept outside the DB,
this could work if the database held some more metadata, like how "hot"
each object is; we could then run something like "select id from table
order by hotness limit 200" and fetch those objects. But I see that it
may be a lot more effective to do it the w3mir way Dag suggested. Hotness
would be inserted from aggregated shm data? I note that w3mir could maybe
give us a license problem.
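The warm-up idea above can be sketched in a few lines. This is a toy
illustration only: sqlite3 stands in for whatever database would actually
hold the metadata, the hotness values are made up, and the refetch step
is a stub where a real backend fetch would go:

```python
# Hedged sketch of re-warming a cache after downtime: the DB keeps a
# per-object "hotness" value (e.g. aggregated from shm stats), and we
# refill the cache starting with the hottest objects.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE objects (url TEXT PRIMARY KEY, hotness INTEGER)")
db.executemany(
    "INSERT INTO objects VALUES (?, ?)",
    [("/front", 9000), ("/logo.gif", 4200), ("/old-news", 3)],
)

def urls_to_rewarm(limit=200):
    # The "select ... order by hotness limit 200" step from the mail.
    rows = db.execute(
        "SELECT url FROM objects ORDER BY hotness DESC LIMIT ?", (limit,)
    )
    return [url for (url,) in rows]

def rewarm(urls):
    for url in urls:
        # In reality: fetch the URL from the backend and insert it
        # into the cache; here we just report what we would fetch.
        print("refetching", url)

rewarm(urls_to_rewarm(2))  # the two hottest: /front, /logo.gif
```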

Anyway, spec week is coming up and I am excited. :)

Anders Berg

More information about the varnish-dev mailing list