r4281 - branches/2.0/varnish-cache/bin/varnishd

tfheen at projects.linpro.no
Thu Oct 8 11:25:07 CEST 2009


Author: tfheen
Date: 2009-10-08 11:25:07 +0200 (Thu, 08 Oct 2009)
New Revision: 4281

Modified:
   branches/2.0/varnish-cache/bin/varnishd/cache_acceptor.c
   branches/2.0/varnish-cache/bin/varnishd/cache_session.c
Log:
Merge r4071: Shift the responsibility for washing used sessions.

Instead of the acceptor thread doing it when reusing the session, have
the worker threads clean it out before putting it on the free list.

It could be, and probably was, argued that this is a performance
pessimization, but having thought hard about it, I can no longer see
that argument, and moving load off the single acceptor thread onto the
massively parallel worker threads is certainly a good idea.



Modified: branches/2.0/varnish-cache/bin/varnishd/cache_acceptor.c
===================================================================
--- branches/2.0/varnish-cache/bin/varnishd/cache_acceptor.c	2009-10-08 09:17:03 UTC (rev 4280)
+++ branches/2.0/varnish-cache/bin/varnishd/cache_acceptor.c	2009-10-08 09:25:07 UTC (rev 4281)
@@ -108,13 +108,17 @@
 	need_test = 0;
 }
 
+/*--------------------------------------------------------------------
+ * Called once the worker thread gets hold of the session, to do the
+ * setup overhead we don't want to bother the acceptor thread with.
+ */
+
 void
 VCA_Prep(struct sess *sp)
 {
 	char addr[TCP_ADDRBUFSIZE];
 	char port[TCP_PORTBUFSIZE];
 
-
 	TCP_name(sp->sockaddr, sp->sockaddrlen,
 	    addr, sizeof addr, port, sizeof port);
 	sp->addr = WS_Dup(sp->ws, addr);

Modified: branches/2.0/varnish-cache/bin/varnishd/cache_session.c
===================================================================
--- branches/2.0/varnish-cache/bin/varnishd/cache_session.c	2009-10-08 09:17:03 UTC (rev 4280)
+++ branches/2.0/varnish-cache/bin/varnishd/cache_session.c	2009-10-08 09:25:07 UTC (rev 4281)
@@ -267,6 +267,8 @@
 		sm = malloc(sizeof *sm + u);
 		if (sm == NULL)
 			return (NULL);
+		/* Only zero the struct; don't waste time zeroing the workspace */
+		memset(sm, 0, sizeof *sm);
 		sm->magic = SESSMEM_MAGIC;
 		sm->workspace = u;
 		VSL_stats->n_sess_mem++;
@@ -274,7 +276,6 @@
 	CHECK_OBJ_NOTNULL(sm, SESSMEM_MAGIC);
 	VSL_stats->n_sess++;
 	sp = &sm->sess;
-	memset(sp, 0, sizeof *sp);
 	sp->magic = SESS_MAGIC;
 	sp->mem = sm;
 	sp->sockaddr = (void*)(&sm->sockaddr[0]);
@@ -346,6 +347,7 @@
 {
 	struct acct *b = &sp->acct;
 	struct sessmem *sm;
+	unsigned workspace;
 
 	CHECK_OBJ_NOTNULL(sp, SESS_MAGIC);
 	sm = sp->mem;
@@ -365,6 +367,12 @@
 		VSL_stats->n_sess_mem--;
 		free(sm);
 	} else {
+		/* Clean and prepare for reuse */
+		workspace = sm->workspace;
+		memset(sm, 0, sizeof *sm);
+		sm->magic = SESSMEM_MAGIC;
+		sm->workspace = workspace;
+
 		Lck_Lock(&ses_mem_mtx);
 		VTAILQ_INSERT_HEAD(&ses_free_mem[1 - ses_qp], sm, list);
 		Lck_Unlock(&ses_mem_mtx);


