What are the steps to problem solve when all I can get are 503s?

Jacques whshub at gmail.com
Tue Jul 13 18:09:04 CEST 2010


Can you try installing with this configure invocation:

VCC_CC="/usr/bin/gcc -pthreads -fpic -shared -m64 -o %o %s" \
CC=/usr/bin/gcc \
CFLAGS="-pthreads -m64" \
LDFLAGS="-pthreads" \
./configure --prefix=/opt/extra --enable-debugging-symbols \
    --enable-diagnostics --enable-dependency-tracking

Or change the VCC_CC and CC paths if your gcc is installed at a different
location.
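
For reference, VCC_CC is the command varnishd later uses to compile
generated VCL code into a shared object (%s is the C source, %o the
output), which is why it needs -pthreads as well.  After configure, a
build-and-sanity-check sequence could look like this (a sketch; the
prefix matches the configure line above):

make
make install
# confirm the debugging symbols actually made it into the binary
gdb --batch -ex 'info line main' /opt/extra/sbin/varnishd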

I don't get any panics.  The child process starts and everything runs
just fine (except the 503s).

I've attached gdb to the child process without any trouble.  My question
now is: where should I set a breakpoint to see why requests come back as
"backend not connected"?
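
For concreteness, the kind of session I have in mind (a sketch; breaking
on libc's connect(2) avoids guessing at Varnish-internal symbol names,
and the PID is the child's, per the "child ... Started" line in the log
below):

# attach to the running child process
gdb /opt/extra/sbin/varnishd 12769
(gdb) break connect
(gdb) continue
# ...send a request through varnish; when the breakpoint hits:
(gdb) backtrace
(gdb) finish
# a -1 return value here means the backend connect itself failed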

Thanks,
Jacques



----------------------------------------
 # /opt/extra/sbin/varnishd -a :80 -b 87.238.47.204:80 -F \
     -p "connect_timeout=0s" -s file,/export/varnish_cache.bin,512M -p waiter=poll
storage_file: filename: /export/varnish_cache.bin size 512 MB.
Using old SHMFILE
child (12769) Started
Child (12769) said
Child (12769) said Child starts
Child (12769) said managed to mmap 536870912 bytes of 536870912

<<<< wait for an arbitrary amount of time >>>>

^CManager got SIGINT
Stopping Child

--------------------------------------------
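
For the record, the failing requests can also be watched from outside
gdb with varnishlog (a sketch; in 2.x, -o groups entries by transaction,
and an optional tag/regex pair filters them; the path assumes the prefix
above):

 # /opt/extra/bin/varnishlog -o FetchError "no backend"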


On Tue, Jul 13, 2010 at 3:32 AM, Kacper Wysocki <kacperw at gmail.com> wrote:

> On Tue, Jul 13, 2010 at 3:11 AM, Jacques <whshub at gmail.com> wrote:
> > Yes, wget works fine.  As does lynx, etc.  :D
> > I checked ulimit per Jorge's response as well.  No impact.
> > I ran make test (which I should have done before).  123 of 174 tests
> > failed, all with the same 503 issue.
>
> > I'm going to recompile with debugging enabled and see what I can see.
> > FYI, the log of tests is attached if you have any insights.
>
> I'm getting the same thing on my rig.
>
> Varnish _can't_ work with these tests failing.
> The first and most basic test just checks whether anything gets through
> at all by setting up a local backend, and even this fails:
> #    top  TEST ././tests/b00000.vtc starting
> #    top  TEST Does anything get through at all ?
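> That single test can be re-run on its own with verbose output to see
> where it dies, a sketch assuming you're in bin/varnishtest in the
> source tree:
>
>  ./varnishtest -v tests/b00000.vtc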
>
> Since Jorge is running his rig fine, is there a big difference between
> Solaris 10 and OpenSolaris?  Maybe in which libc is employed, e.g. a
> libc without thread-safe error handling?  A little bit of debugging
> shows me that child creation fails a basic thread-safety check, the
> relevant line being:
>
> Child (21308) Panic message: Assert error in wrk_herdtimer_thread(),
> cache_pool.c line 419:
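>
> A quick way to check whether a given binary actually got the
> thread-safe errno is to look at which symbol it imports (a sketch;
> ___errno is the per-thread errno accessor in Solaris libc):
>
>  nm /usr/local/sbin/varnishd | grep errno
>  # "U ___errno" => built with -pthreads/-D_TS_ERRNO, errno is per-thread
>  # only "U errno" => the plain global, shared across threads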
>
>
> root at os# varnishd -a :80 -b 87.238.47.204:80 -d
> Message from C-compiler:
> cc: unrecognized option `-Kpic'
> NB: Storage size limited to 2GB on 32 bit architecture,
> NB: otherwise we could run out of address space.
> storage_file: filename: ./varnish.DcaGMP size 2047 MB.
> Using old SHMFILE
> Varnish on -sfile,-hcritbit
> 200 193
> -----------------------------
> Varnish HTTP accelerator CLI.
> -----------------------------
> Type 'help' for command list.
> Type 'quit' to close CLI session.
> Type 'start' to launch worker process.
>
> start
> child (21308) Started
> Pushing vcls failed: CLI communication error
> Stopping Child
> 200 0
>
> Child (21308) died signal=6
> Child (21308) Panic message: Assert error in wrk_herdtimer_thread(),
> cache_pool.c line 419:
>  Condition(errno_is_multi_threaded != 0) not true.
> thread = (wrk_herdtimer)
> ident = -sfile,-hcritbit,no_waiter
> Backtrace:
>  80730cd: /usr/local/sbin/varnishd'pan_ic+0xa9 [0x80730cd]
>  8074c8e: /usr/local/sbin/varnishd'wrk_herdtimer_thread+0x86 [0x8074c8e]
>  fedacd56: /lib/libc.so.1'_thrp_setup+0x7e [0xfedacd56]
>  fedacfe0: /lib/libc.so.1'_lwp_start+0x0 [0xfedacfe0]
>
>
> Child (-1) said
> Child (-1) said Child starts
> Child (-1) said managed to mmap 2147479552 bytes of 2147479552
> Child cleanup complete
>
> (yup, for some reason it's a 32-bit install :-( )
>
> OK
>