From scan-admin at coverity.com Sun Jul 5 11:53:05 2020 From: scan-admin at coverity.com (scan-admin at coverity.com) Date: Sun, 05 Jul 2020 11:53:05 +0000 (UTC) Subject: Coverity Scan: Analysis completed for varnish Message-ID: <5f01bf212a463_2244822b1156526f60385f6@prd-scan-dashboard-0.mail> Your request for analysis of varnish has been completed successfully. The results are available at https://u2389337.ct.sendgrid.net/ls/click?upn=nJaKvJSIH-2FPAfmty-2BK5tYpPklAc1eEA-2F1zfUjH6teEzb7a35k9AJT3vQQzyq0UjO90ieNOMB6HZSUHPtUyV1qw-3D-3DQ630_WyTzqwss9kUEGhvWd0SG502mTu1yasCtuh9h-2FD3Je4-2BOp-2FRNI-2BUhWKgWjwouBmXs7GdR6ZP2OrALFA7-2BbudRF4nploI3mWkkqTU4mthDT1FI3QMMaveSZIsAefRvgATwdxBhcoYj07HtcQpljDC7MF9Qv1iL0DTFkR19KzuICAPIM2-2BaTRi5fkhPKSEuudER2A9MS6Shmq-2BlHpeRSI-2FhouQbvbQAZIYY7wiYqMZH-2BYw-3D Build ID: 324778 Analysis Summary: New defects found: 0 Defects eliminated: 0 From guillaume at varnish-software.com Mon Jul 6 22:36:29 2020 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Mon, 6 Jul 2020 15:36:29 -0700 Subject: Making libunwind the default Message-ID: Hi all, https://github.com/varnishcache/varnish-cache/pull/3052 was merged in September and I was wondering if we had enough feedback (or lack thereof) to make a decision and on making the libunwind dependency official. All the weeklies are being built with libunwind since the end of October, and the 6.4 packages on packagecloud. And the official alpine packages use it too, which isn't surprising as the change was made to accommodate the lack of proper libexecinfo for that platform. Notably, if Nils has some feedback about the more "exotic" platforms he uses, I'm interested! Cheers, -- Guillaume Quintard -------------- next part -------------- An HTML attachment was scrubbed... URL: From scan-admin at coverity.com Sun Jul 12 11:54:58 2020 From: scan-admin at coverity.com (scan-admin at coverity.com) Date: Sun, 12 Jul 2020 11:54:58 +0000 (UTC) Subject: Coverity Scan: Analysis completed for varnish Message-ID: <5f0afa122d409_370c822b08ab56cf5057f4@prd-scan-dashboard-0.mail> Your request for analysis of varnish has been completed successfully. The results are available at https://u15810271.ct.sendgrid.net/ls/click?upn=HRESupC-2F2Czv4BOaCWWCy7my0P0qcxCbhZ31OYv50yrJbcjUxJo9eCHXi2QbgV6mmItSKtPrD4wtuBl7WlE3MQ-3D-3DU4lR_WyTzqwss9kUEGhvWd0SG502mTu1yasCtuh9h-2FD3Je48NRc4qBDtTM9-2FJqvyM01u0Bm-2F4-2FHQbMAZ4eQrvz4vW8ezHPuO5vKQk6K1omwEd1pIX0XqAE9Ym9DiIZiv00GahXP6AHhNX4HzQI5Wahwi2vfiXef1vBzZ-2F0R4C38FaaqgX-2Fto47Z4kIqj-2Bi0eiV5T6ZWNrCVKPDWMePQqB0OaMRSRU5qchuRPGyrsDIMleC0E-3D Build ID: 326147 Analysis Summary: New defects found: 0 Defects eliminated: 0 From dridi at varni.sh Mon Jul 13 15:07:11 2020 From: dridi at varni.sh (Dridi Boukelmoune) Date: Mon, 13 Jul 2020 15:07:11 +0000 Subject: Making libunwind the default In-Reply-To: References: Message-ID: On Mon, Jul 6, 2020 at 10:36 PM Guillaume Quintard wrote: > > Hi all, > > https://github.com/varnishcache/varnish-cache/pull/3052 was merged in September and I was wondering if we had enough feedback (or lack thereof) to make a decision and on making the libunwind dependency official. > > All the weeklies are being built with libunwind since the end of October, and the 6.4 packages on packagecloud. And the official alpine packages use it too, which isn't surprising as the change was made to accommodate the lack of proper libexecinfo for that platform. > > Notably, if Nils has some feedback about the more "exotic" platforms he uses, I'm interested! 
Also some backtrace implementations like libexecinfo use GCC builtins [1,2] that are documented [3] as unsafe: > Calling this function with a nonzero argument can have unpredictable effects, including crashing the calling program. Adding a new dependency by default might be worth it all things considered. Dridi [1] __builtin_frame_address [2] __builtin_return_address [3] https://gcc.gnu.org/onlinedocs/gcc/Return-Address.html From martin.grigorov at gmail.com Tue Jul 14 12:20:32 2020 From: martin.grigorov at gmail.com (Martin Grigorov) Date: Tue, 14 Jul 2020 15:20:32 +0300 Subject: Support for AARCH64 In-Reply-To: References: <8156.1583910935@critter.freebsd.dk> Message-ID: Hi Emilio, On Wed, Jun 17, 2020 at 5:36 PM Guillaume Quintard < guillaume at varnish-software.com> wrote: > Thank you Emilio, I'll contact packagecloud.io to see what's what. > > -- > Guillaume Quintard > > > On Wed, Jun 17, 2020 at 1:01 AM Emilio Fernandes < > emilio.fernandes70 at gmail.com> wrote: > >> Hola Guillaume, >> >> Thank you for uploading the new packages! >> >> I've just tested Ubuntu 20.04 and Centos 8 >> >> 1) Ubuntu >> 1.1) curl -s >> https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.deb.sh >> | sudo bash >> 1.2) apt install varnish - installs 20200615.weekly >> All is OK! >> >> 2) Centos >> 2.1) curl -s >> https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.rpm.sh >> | sudo bash >> This adds varnishcache_varnish-weekly and >> varnishcache_varnish-weekly-source YUM repositories >> 2.2) yum install varnish - installs 6.0.2 >> 2.3) yum --disablerepo="*" --enablerepo="varnishcache_varnish-weekly" >> list available >> Last metadata expiration check: 0:01:53 ago on Wed 17 Jun 2020 07:33:55 >> AM UTC. >> >> there are no packages in the new yum repository! >> > I am not sure whether you have noticed this answer by Dridi: https://github.com/varnishcache/pkg-varnish-cache/issues/142#issuecomment-654380393 I've just tested your steps and indeed after `dnf module disable varnish` I was able to install the weekly package on CentOS 8. Regards, Martin > >> 2.4) I was able to localinstall it though >> 2.4.1) yum install jemalloc >> 2.4.2) wget --content-disposition >> https://packagecloud.io/varnishcache/varnish-weekly/packages/el/8/varnish-20200615.weekly-0.0.el8.aarch64.rpm/download.rpm >> 2.4.3) yum localinstall >> varnish-20200615.weekly-0.0.el8.aarch64.rpm/download.rpm >> >> Do I miss some step with the PackageCloud repository or there is some >> issue ? >> >> Gracias, >> Emilio >> >> El mar., 16 jun. 2020 a las 18:39, Guillaume Quintard (< >> guillaume at varnish-software.com>) escribi?: >> >>> Ola, >>> >>> P?l just pushed Monday's batch, so you get amd64 and aarch64 packages >>> for all the platforms. Go forth and test, the paint is still very wet. >>> >>> Bonne journ?e! >>> >>> -- >>> Guillaume Quintard >>> >>> >>> On Tue, Jun 16, 2020 at 5:28 AM Emilio Fernandes < >>> emilio.fernandes70 at gmail.com> wrote: >>> >>>> Hi, >>>> >>>> When we could expect the new aarch64 binaries at >>>> https://packagecloud.io/varnishcache/varnish-weekly ? >>>> >>>> Gracias! >>>> Emilio >>>> >>>> El mi?., 15 abr. 2020 a las 14:33, Emilio Fernandes (< >>>> emilio.fernandes70 at gmail.com>) escribi?: >>>> >>>>> >>>>> >>>>> El jue., 26 mar. 
2020 a las 10:15, Martin Grigorov (< >>>>> martin.grigorov at gmail.com>) escribi?: >>>>> >>>>>> Hello, >>>>>> >>>>>> Here is the PR: >>>>>> https://github.com/varnishcache/varnish-cache/pull/3263 >>>>>> I will add some more documentation about the new setup. >>>>>> Any feedback is welcome! >>>>>> >>>>> >>>>> Nice work, Martin! >>>>> >>>>> Gracias! >>>>> Emilio >>>>> >>>>> >>>>>> >>>>>> Regards, >>>>>> Martin >>>>>> >>>>>> On Wed, Mar 25, 2020 at 9:55 PM Martin Grigorov < >>>>>> martin.grigorov at gmail.com> wrote: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> On Wed, Mar 25, 2020, 20:15 Guillaume Quintard < >>>>>>> guillaume at varnish-software.com> wrote: >>>>>>> >>>>>>>> is that script running as root? >>>>>>>> >>>>>>> >>>>>>> Yes. >>>>>>> I also added 'USER root' to its Dockerfile and '-u 0' to 'docker >>>>>>> run' arguments but it still doesn't work. >>>>>>> The x86 build is OK. >>>>>>> It must be something in the base docker image. >>>>>>> I've disabled the Alpine aarch64 job for now. >>>>>>> I'll send a PR tomorrow! >>>>>>> >>>>>>> Regards, >>>>>>> Martin >>>>>>> >>>>>>> >>>>>>>> -- >>>>>>>> Guillaume Quintard >>>>>>>> >>>>>>>> >>>>>>>> On Wed, Mar 25, 2020 at 2:30 AM Martin Grigorov < >>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> I've moved 'dist' job to be executed in parallel with >>>>>>>>> 'tar_pkg_tools' and the results from both are shared in the workspace for >>>>>>>>> the actual packing jobs. >>>>>>>>> Now the new error for aarch64-apk job is: >>>>>>>>> >>>>>>>>> abuild: varnish >>> varnish: Updating the sha512sums in APKBUILD... >>>>>>>>> ]0; DEBUG: 4 >>>>>>>>> ]0;abuild: varnish >>> varnish: Building /varnish 6.4.0-r1 (using >>>>>>>>> abuild 3.5.0-r0) started Wed, 25 Mar 2020 09:22:02 +0000 >>>>>>>>> >>> varnish: Checking sanity of /package/APKBUILD... >>>>>>>>> >>> WARNING: varnish: No maintainer >>>>>>>>> >>> varnish: Analyzing dependencies... >>>>>>>>> 0% % >>>>>>>>> ############################################>>> varnish: Installing for >>>>>>>>> build: build-base gcc libc-dev libgcc pcre-dev ncurses-dev libedit-dev >>>>>>>>> py-docutils linux-headers libunwind-dev python py3-sphinx >>>>>>>>> Waiting for repository lock >>>>>>>>> ERROR: Unable to lock database: Bad file descriptor >>>>>>>>> ERROR: Failed to open apk database: Bad file descriptor >>>>>>>>> >>> ERROR: varnish: builddeps failed >>>>>>>>> ]0; >>> varnish: Uninstalling dependencies... >>>>>>>>> Waiting for repository lock >>>>>>>>> ERROR: Unable to lock database: Bad file descriptor >>>>>>>>> ERROR: Failed to open apk database: Bad file descriptor >>>>>>>>> >>>>>>>>> Google suggested to do this: >>>>>>>>> rm -rf /var/cache/apk >>>>>>>>> mkdir /var/cache/apk >>>>>>>>> >>>>>>>>> It fails at 'abuild -r' - >>>>>>>>> https://github.com/martin-g/varnish-cache/blob/b62c357b389c0e1e31e9c001cbffb55090c2e49f/.circleci/make-apk-packages.sh#L61 >>>>>>>>> >>>>>>>>> Any hints ? >>>>>>>>> >>>>>>>>> Martin >>>>>>>>> >>>>>>>>> On Wed, Mar 25, 2020 at 2:39 AM Guillaume Quintard < >>>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>>> >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> So, you are pointing at the `dist` job, whose sole role is to >>>>>>>>>> provide us with a dist tarball, so we don't need that command line to work >>>>>>>>>> for everyone, just for that specific platform. 
>>>>>>>>>> >>>>>>>>>> On the other hand, >>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L168 is >>>>>>>>>> closer to what you want, `distcheck` will be call on all platform, and you >>>>>>>>>> can see that it has the `--with-unwind` argument. >>>>>>>>>> -- >>>>>>>>>> Guillaume Quintard >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Tue, Mar 24, 2020 at 3:05 PM Martin Grigorov < >>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Tue, Mar 24, 2020, 17:19 Guillaume Quintard < >>>>>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> Compare your configure line with what's currently in use (or >>>>>>>>>>>> the apkbuild file), there are a few options (with-unwind, without-jemalloc, >>>>>>>>>>>> etc.) That need to be set >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> The configure line comes from "./autogen.des": >>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/autogen.des#L35-L42 >>>>>>>>>>> It is called at: >>>>>>>>>>> >>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L40 >>>>>>>>>>> In my branch at: >>>>>>>>>>> >>>>>>>>>>> https://github.com/martin-g/varnish-cache/blob/4b4626ee9cc366b032a45f27b54d77176125ef03/.circleci/make-apk-packages.sh#L26 >>>>>>>>>>> >>>>>>>>>>> It fails only on aarch64 for Alpine Linux. The x86_64 build for >>>>>>>>>>> Alpine is fine. >>>>>>>>>>> AARCH64 for CentOS 7 and Ubuntu 18.04 are also fine. >>>>>>>>>>> >>>>>>>>>>> Martin >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> On Tue, Mar 24, 2020, 08:05 Martin Grigorov < >>>>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Hi, >>>>>>>>>>>>> >>>>>>>>>>>>> On Tue, Mar 24, 2020 at 11:00 AM Martin Grigorov < >>>>>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Hi Guillaume, >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Mon, Mar 23, 2020 at 8:01 PM Guillaume Quintard < >>>>>>>>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Hi Martin, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Thank you for that. >>>>>>>>>>>>>>> A few remarks and questions: >>>>>>>>>>>>>>> - how much time does the "docker build" step takes? We can >>>>>>>>>>>>>>> possibly speed things up by push images to the dockerhub, as they don't >>>>>>>>>>>>>>> need to change very often. >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Definitely such optimization would be a good thing to do! >>>>>>>>>>>>>> At the moment, with 'machine' executor it fetches the base >>>>>>>>>>>>>> image and then builds all the Docker layers again and again. >>>>>>>>>>>>>> Here are the timings: >>>>>>>>>>>>>> 1) Spinning up a VM - around 10secs >>>>>>>>>>>>>> 2) prepare env variables - 0secs >>>>>>>>>>>>>> 3) checkout code (varnish-cache) - 5secs >>>>>>>>>>>>>> 4) activate QEMU - 2secs >>>>>>>>>>>>>> 5) build packages >>>>>>>>>>>>>> 5.1) x86 deb - 3m 30secs >>>>>>>>>>>>>> 5.2) x86 rpm - 2m 50secs >>>>>>>>>>>>>> 5.3) aarch64 rpm - 35mins >>>>>>>>>>>>>> 5.4) aarch64 deb - 45mins >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> - any reason why you clone pkg-varnish-cache in each job? >>>>>>>>>>>>>>> The idea was to have it cloned once in tar-pkg-tools for consistency and >>>>>>>>>>>>>>> reproducibility, which we lose here. >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> I will extract the common steps once I see it working. 
This >>>>>>>>>>>>>> is my first CircleCI project and I still find my ways in it! >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> - do we want to change things for the amd64 platforms for >>>>>>>>>>>>>>> the sake of consistency? >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> So far there is nothing specific for amd4 or aarch64, except >>>>>>>>>>>>>> the base Docker images. >>>>>>>>>>>>>> For example make-deb-packages.sh is reused for both amd64 and >>>>>>>>>>>>>> aarch64 builds. Same for -rpm- and now for -apk- (alpine). >>>>>>>>>>>>>> >>>>>>>>>>>>>> Once I feel the change is almost finished I will open a Pull >>>>>>>>>>>>>> Request for more comments! >>>>>>>>>>>>>> >>>>>>>>>>>>>> Martin >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>> Guillaume Quintard >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Mon, Mar 23, 2020 at 6:25 AM Martin Grigorov < >>>>>>>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On Wed, Mar 18, 2020 at 5:31 PM Martin Grigorov < >>>>>>>>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> On Thu, Mar 12, 2020 at 4:35 PM Martin Grigorov < >>>>>>>>>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Hi Guillaume, >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> On Thu, Mar 12, 2020 at 3:23 PM Guillaume Quintard < >>>>>>>>>>>>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Offering arm64 packages requires a few things: >>>>>>>>>>>>>>>>>>> - arm64-compatible code (all good in >>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache) >>>>>>>>>>>>>>>>>>> - arm64-compatible package framework (all good in >>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/pkg-varnish-cache) >>>>>>>>>>>>>>>>>>> - infrastructure to build the packages (uhoh, see below) >>>>>>>>>>>>>>>>>>> - infrastructure to store and deliver ( >>>>>>>>>>>>>>>>>>> https://packagecloud.io/varnishcache) >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> So, everything is in place, expect for the third point. >>>>>>>>>>>>>>>>>>> At the moment, there are two concurrent CI implementations: >>>>>>>>>>>>>>>>>>> - travis: >>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.travis.yml It's >>>>>>>>>>>>>>>>>>> the historical one, and currently only runs compilation+test for OSX >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Actually it tests Linux AMD64 and ARM64 too. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> - circleci: >>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.circleci/config.yml the >>>>>>>>>>>>>>>>>>> new kid on the block, that builds all the packages and distchecks for all >>>>>>>>>>>>>>>>>>> the packaged platforms >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> The issue is that cirecleci doesn't support arm64 >>>>>>>>>>>>>>>>>>> containers (for now?), so we would need to re-implement the packaging logic >>>>>>>>>>>>>>>>>>> in Travis. It's not a big problem, but it's currently not a priority on my >>>>>>>>>>>>>>>>>>> side. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> However, I am totally ready to provide help if someone >>>>>>>>>>>>>>>>>>> wants to take that up. 
The added benefit it that Travis would be able to >>>>>>>>>>>>>>>>>>> handle everything and we can retire the circleci experiment >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> I will take a look in the coming days and ask you if I >>>>>>>>>>>>>>>>>> need help! >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> I've took a look at the current setup and here is what >>>>>>>>>>>>>>>>> I've found as problems and possible solutions: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> 1) Circle CI >>>>>>>>>>>>>>>>> 1.1) problem - the 'machine' and 'Docker' executors run on >>>>>>>>>>>>>>>>> x86_64, so there is no way to build the packages in a "native" environment >>>>>>>>>>>>>>>>> 1.2) possible solutions >>>>>>>>>>>>>>>>> 1.2.1) use multiarch cross build >>>>>>>>>>>>>>>>> 1.2.2) use 'machine' executor that registers QEMU via >>>>>>>>>>>>>>>>> https://hub.docker.com/r/multiarch/qemu-user-static/ and >>>>>>>>>>>>>>>>> then builds and runs a custom Docker image that executes a shell script >>>>>>>>>>>>>>>>> with the build steps >>>>>>>>>>>>>>>>> It will look something like >>>>>>>>>>>>>>>>> https://github.com/yukimochi-containers/alpine-vpnserver/blob/69bb0a612c9df3e4ba78064d114751b760f0df9d/.circleci/config.yml#L19-L38 but >>>>>>>>>>>>>>>>> instead of uploading the Docker image as a last step it will run it. >>>>>>>>>>>>>>>>> The RPM and DEB build related code from current config.yml >>>>>>>>>>>>>>>>> will be extracted into shell scripts which will be copied in the custom >>>>>>>>>>>>>>>>> Docker images >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> From these two possible ways I have better picture in my >>>>>>>>>>>>>>>>> head how to do 1.2.2, but I don't mind going deep in 1.2.1 if this is what >>>>>>>>>>>>>>>>> you'd prefer. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I've decided to stay with Circle CI and use 'machine' >>>>>>>>>>>>>>>> executor with QEMU. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> The changed config.yml could be seen at >>>>>>>>>>>>>>>> https://github.com/martin-g/varnish-cache/tree/feature/aarch64-packages/.circleci and >>>>>>>>>>>>>>>> the build at >>>>>>>>>>>>>>>> https://app.circleci.com/pipelines/github/martin-g/varnish-cache/71/workflows/3a275d79-62a9-48b4-9aef-1585de1c87c8 >>>>>>>>>>>>>>>> The builds on x86 arch take 3-4 mins, but for aarch64 >>>>>>>>>>>>>>>> (emulation!) ~40mins >>>>>>>>>>>>>>>> For now the jobs just build the .deb & .rpm packages for >>>>>>>>>>>>>>>> CentOS 7 and Ubuntu 18.04, both amd64 and aarch64. >>>>>>>>>>>>>>>> TODOs: >>>>>>>>>>>>>>>> - migrate Alpine >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>> Build on Alpine aarch64 fails with: >>>>>>>>>>>>> ... >>>>>>>>>>>>> automake: this behaviour will change in future Automake >>>>>>>>>>>>> versions: they will >>>>>>>>>>>>> automake: unconditionally cause object files to be placed in >>>>>>>>>>>>> the same subdirectory >>>>>>>>>>>>> automake: of the corresponding sources. >>>>>>>>>>>>> automake: project, to avoid future incompatibilities. >>>>>>>>>>>>> parallel-tests: installing 'build-aux/test-driver' >>>>>>>>>>>>> lib/libvmod_debug/Makefile.am:12: warning: >>>>>>>>>>>>> libvmod_debug_la_LDFLAGS multiply defined in condition TRUE ... >>>>>>>>>>>>> lib/libvmod_debug/automake_boilerplate.am:19: ... 
>>>>>>>>>>>>> 'libvmod_debug_la_LDFLAGS' previously defined here >>>>>>>>>>>>> lib/libvmod_debug/Makefile.am:9: 'lib/libvmod_debug/ >>>>>>>>>>>>> automake_boilerplate.am' included from here >>>>>>>>>>>>> + autoconf >>>>>>>>>>>>> + CONFIG_SHELL=/bin/sh >>>>>>>>>>>>> + export CONFIG_SHELL >>>>>>>>>>>>> + ./configure '--prefix=/opt/varnish' >>>>>>>>>>>>> '--mandir=/opt/varnish/man' --enable-maintainer-mode >>>>>>>>>>>>> --enable-developer-warnings --enable-debugging-symbols >>>>>>>>>>>>> --enable-dependency-tracking --with-persistent-storage --quiet >>>>>>>>>>>>> configure: WARNING: dot not found - build will fail if svg >>>>>>>>>>>>> files are out of date. >>>>>>>>>>>>> configure: WARNING: No system jemalloc found, using system >>>>>>>>>>>>> malloc >>>>>>>>>>>>> configure: error: Could not find backtrace() support >>>>>>>>>>>>> >>>>>>>>>>>>> Does anyone know a workaround ? >>>>>>>>>>>>> I use multiarch/alpine:aarch64-edge as a base Docker image >>>>>>>>>>>>> >>>>>>>>>>>>> Martin >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>> - store the packages as CircleCI artifacts >>>>>>>>>>>>>>>> - anything else that is still missing >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Adding more architectures would be as easy as adding a new >>>>>>>>>>>>>>>> Dockerfile with a base image from the respective type. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Martin >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> 2) Travis CI >>>>>>>>>>>>>>>>> 2.1) problems >>>>>>>>>>>>>>>>> 2.1.1) generally Travis is slower than Circle! >>>>>>>>>>>>>>>>> Althought if we use CircleCI 'machine' executor it will be >>>>>>>>>>>>>>>>> slower than the current 'Docker' executor! >>>>>>>>>>>>>>>>> 2.1.2) Travis supports only Ubuntu >>>>>>>>>>>>>>>>> Current setup at CircleCI uses CentOS 7. >>>>>>>>>>>>>>>>> I guess the build steps won't have problems on Ubuntu. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> 3) GitHub Actions >>>>>>>>>>>>>>>>> GH Actions does not support ARM64 but it supports self >>>>>>>>>>>>>>>>> hosted ARM64 runners >>>>>>>>>>>>>>>>> 3.1) The problem is that there is no way to make a self >>>>>>>>>>>>>>>>> hosted runner really private. I.e. if someone forks Varnish Cache any >>>>>>>>>>>>>>>>> commit in the fork will trigger builds on the arm64 node. There is no way >>>>>>>>>>>>>>>>> to reserve the runner only for commits against >>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Do you see other problems or maybe different ways ? >>>>>>>>>>>>>>>>> Do you have preferences which way to go ? >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Regards, >>>>>>>>>>>>>>>>> Martin >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Regards, >>>>>>>>>>>>>>>>>> Martin >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>> Guillaume Quintard >>>>>>>>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>>>>>>>> varnish-dev mailing list >>>>>>>>>>>>>>>>>>> varnish-dev at varnish-cache.org >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilio.fernandes70 at gmail.com Wed Jul 15 08:11:40 2020 From: emilio.fernandes70 at gmail.com (Emilio Fernandes) Date: Wed, 15 Jul 2020 11:11:40 +0300 Subject: Support for AARCH64 In-Reply-To: References: <8156.1583910935@critter.freebsd.dk> Message-ID: Hi Martin, El mar., 14 jul. 
2020 a las 15:21, Martin Grigorov (< martin.grigorov at gmail.com>) escribi?: > Hi Emilio, > > On Wed, Jun 17, 2020 at 5:36 PM Guillaume Quintard < > guillaume at varnish-software.com> wrote: > >> Thank you Emilio, I'll contact packagecloud.io to see what's what. >> >> -- >> Guillaume Quintard >> >> >> On Wed, Jun 17, 2020 at 1:01 AM Emilio Fernandes < >> emilio.fernandes70 at gmail.com> wrote: >> >>> Hola Guillaume, >>> >>> Thank you for uploading the new packages! >>> >>> I've just tested Ubuntu 20.04 and Centos 8 >>> >>> 1) Ubuntu >>> 1.1) curl -s >>> https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.deb.sh >>> | sudo bash >>> 1.2) apt install varnish - installs 20200615.weekly >>> All is OK! >>> >>> 2) Centos >>> 2.1) curl -s >>> https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.rpm.sh >>> | sudo bash >>> This adds varnishcache_varnish-weekly and >>> varnishcache_varnish-weekly-source YUM repositories >>> 2.2) yum install varnish - installs 6.0.2 >>> 2.3) yum --disablerepo="*" --enablerepo="varnishcache_varnish-weekly" >>> list available >>> Last metadata expiration check: 0:01:53 ago on Wed 17 Jun 2020 07:33:55 >>> AM UTC. >>> >>> there are no packages in the new yum repository! >>> >> > I am not sure whether you have noticed this answer by Dridi: > https://github.com/varnishcache/pkg-varnish-cache/issues/142#issuecomment-654380393 > I've just tested your steps and indeed after `dnf module disable varnish` > I was able to install the weekly package on CentOS 8. > No, I wasn't aware of this discussion. The weekly package installed successfully now! Thank you! Emilio > > Regards, > Martin > > >> >>> 2.4) I was able to localinstall it though >>> 2.4.1) yum install jemalloc >>> 2.4.2) wget --content-disposition >>> https://packagecloud.io/varnishcache/varnish-weekly/packages/el/8/varnish-20200615.weekly-0.0.el8.aarch64.rpm/download.rpm >>> 2.4.3) yum localinstall >>> varnish-20200615.weekly-0.0.el8.aarch64.rpm/download.rpm >>> >>> Do I miss some step with the PackageCloud repository or there is some >>> issue ? >>> >>> Gracias, >>> Emilio >>> >>> El mar., 16 jun. 2020 a las 18:39, Guillaume Quintard (< >>> guillaume at varnish-software.com>) escribi?: >>> >>>> Ola, >>>> >>>> P?l just pushed Monday's batch, so you get amd64 and aarch64 packages >>>> for all the platforms. Go forth and test, the paint is still very wet. >>>> >>>> Bonne journ?e! >>>> >>>> -- >>>> Guillaume Quintard >>>> >>>> >>>> On Tue, Jun 16, 2020 at 5:28 AM Emilio Fernandes < >>>> emilio.fernandes70 at gmail.com> wrote: >>>> >>>>> Hi, >>>>> >>>>> When we could expect the new aarch64 binaries at >>>>> https://packagecloud.io/varnishcache/varnish-weekly ? >>>>> >>>>> Gracias! >>>>> Emilio >>>>> >>>>> El mi?., 15 abr. 2020 a las 14:33, Emilio Fernandes (< >>>>> emilio.fernandes70 at gmail.com>) escribi?: >>>>> >>>>>> >>>>>> >>>>>> El jue., 26 mar. 2020 a las 10:15, Martin Grigorov (< >>>>>> martin.grigorov at gmail.com>) escribi?: >>>>>> >>>>>>> Hello, >>>>>>> >>>>>>> Here is the PR: >>>>>>> https://github.com/varnishcache/varnish-cache/pull/3263 >>>>>>> I will add some more documentation about the new setup. >>>>>>> Any feedback is welcome! >>>>>>> >>>>>> >>>>>> Nice work, Martin! >>>>>> >>>>>> Gracias! 
>>>>>> Emilio >>>>>> >>>>>> >>>>>>> >>>>>>> Regards, >>>>>>> Martin >>>>>>> >>>>>>> On Wed, Mar 25, 2020 at 9:55 PM Martin Grigorov < >>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> On Wed, Mar 25, 2020, 20:15 Guillaume Quintard < >>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>> >>>>>>>>> is that script running as root? >>>>>>>>> >>>>>>>> >>>>>>>> Yes. >>>>>>>> I also added 'USER root' to its Dockerfile and '-u 0' to 'docker >>>>>>>> run' arguments but it still doesn't work. >>>>>>>> The x86 build is OK. >>>>>>>> It must be something in the base docker image. >>>>>>>> I've disabled the Alpine aarch64 job for now. >>>>>>>> I'll send a PR tomorrow! >>>>>>>> >>>>>>>> Regards, >>>>>>>> Martin >>>>>>>> >>>>>>>> >>>>>>>>> -- >>>>>>>>> Guillaume Quintard >>>>>>>>> >>>>>>>>> >>>>>>>>> On Wed, Mar 25, 2020 at 2:30 AM Martin Grigorov < >>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> I've moved 'dist' job to be executed in parallel with >>>>>>>>>> 'tar_pkg_tools' and the results from both are shared in the workspace for >>>>>>>>>> the actual packing jobs. >>>>>>>>>> Now the new error for aarch64-apk job is: >>>>>>>>>> >>>>>>>>>> abuild: varnish >>> varnish: Updating the sha512sums in >>>>>>>>>> APKBUILD... >>>>>>>>>> ]0; DEBUG: 4 >>>>>>>>>> ]0;abuild: varnish >>> varnish: Building /varnish 6.4.0-r1 (using >>>>>>>>>> abuild 3.5.0-r0) started Wed, 25 Mar 2020 09:22:02 +0000 >>>>>>>>>> >>> varnish: Checking sanity of /package/APKBUILD... >>>>>>>>>> >>> WARNING: varnish: No maintainer >>>>>>>>>> >>> varnish: Analyzing dependencies... >>>>>>>>>> 0% % >>>>>>>>>> ############################################>>> varnish: Installing for >>>>>>>>>> build: build-base gcc libc-dev libgcc pcre-dev ncurses-dev libedit-dev >>>>>>>>>> py-docutils linux-headers libunwind-dev python py3-sphinx >>>>>>>>>> Waiting for repository lock >>>>>>>>>> ERROR: Unable to lock database: Bad file descriptor >>>>>>>>>> ERROR: Failed to open apk database: Bad file descriptor >>>>>>>>>> >>> ERROR: varnish: builddeps failed >>>>>>>>>> ]0; >>> varnish: Uninstalling dependencies... >>>>>>>>>> Waiting for repository lock >>>>>>>>>> ERROR: Unable to lock database: Bad file descriptor >>>>>>>>>> ERROR: Failed to open apk database: Bad file descriptor >>>>>>>>>> >>>>>>>>>> Google suggested to do this: >>>>>>>>>> rm -rf /var/cache/apk >>>>>>>>>> mkdir /var/cache/apk >>>>>>>>>> >>>>>>>>>> It fails at 'abuild -r' - >>>>>>>>>> https://github.com/martin-g/varnish-cache/blob/b62c357b389c0e1e31e9c001cbffb55090c2e49f/.circleci/make-apk-packages.sh#L61 >>>>>>>>>> >>>>>>>>>> Any hints ? >>>>>>>>>> >>>>>>>>>> Martin >>>>>>>>>> >>>>>>>>>> On Wed, Mar 25, 2020 at 2:39 AM Guillaume Quintard < >>>>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>>>> >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> So, you are pointing at the `dist` job, whose sole role is to >>>>>>>>>>> provide us with a dist tarball, so we don't need that command line to work >>>>>>>>>>> for everyone, just for that specific platform. >>>>>>>>>>> >>>>>>>>>>> On the other hand, >>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L168 is >>>>>>>>>>> closer to what you want, `distcheck` will be call on all platform, and you >>>>>>>>>>> can see that it has the `--with-unwind` argument. 
>>>>>>>>>>> -- >>>>>>>>>>> Guillaume Quintard >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Tue, Mar 24, 2020 at 3:05 PM Martin Grigorov < >>>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Tue, Mar 24, 2020, 17:19 Guillaume Quintard < >>>>>>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Compare your configure line with what's currently in use (or >>>>>>>>>>>>> the apkbuild file), there are a few options (with-unwind, without-jemalloc, >>>>>>>>>>>>> etc.) That need to be set >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> The configure line comes from "./autogen.des": >>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/autogen.des#L35-L42 >>>>>>>>>>>> It is called at: >>>>>>>>>>>> >>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L40 >>>>>>>>>>>> In my branch at: >>>>>>>>>>>> >>>>>>>>>>>> https://github.com/martin-g/varnish-cache/blob/4b4626ee9cc366b032a45f27b54d77176125ef03/.circleci/make-apk-packages.sh#L26 >>>>>>>>>>>> >>>>>>>>>>>> It fails only on aarch64 for Alpine Linux. The x86_64 build for >>>>>>>>>>>> Alpine is fine. >>>>>>>>>>>> AARCH64 for CentOS 7 and Ubuntu 18.04 are also fine. >>>>>>>>>>>> >>>>>>>>>>>> Martin >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> On Tue, Mar 24, 2020, 08:05 Martin Grigorov < >>>>>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Tue, Mar 24, 2020 at 11:00 AM Martin Grigorov < >>>>>>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Hi Guillaume, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Mon, Mar 23, 2020 at 8:01 PM Guillaume Quintard < >>>>>>>>>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Hi Martin, >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Thank you for that. >>>>>>>>>>>>>>>> A few remarks and questions: >>>>>>>>>>>>>>>> - how much time does the "docker build" step takes? We can >>>>>>>>>>>>>>>> possibly speed things up by push images to the dockerhub, as they don't >>>>>>>>>>>>>>>> need to change very often. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Definitely such optimization would be a good thing to do! >>>>>>>>>>>>>>> At the moment, with 'machine' executor it fetches the base >>>>>>>>>>>>>>> image and then builds all the Docker layers again and again. >>>>>>>>>>>>>>> Here are the timings: >>>>>>>>>>>>>>> 1) Spinning up a VM - around 10secs >>>>>>>>>>>>>>> 2) prepare env variables - 0secs >>>>>>>>>>>>>>> 3) checkout code (varnish-cache) - 5secs >>>>>>>>>>>>>>> 4) activate QEMU - 2secs >>>>>>>>>>>>>>> 5) build packages >>>>>>>>>>>>>>> 5.1) x86 deb - 3m 30secs >>>>>>>>>>>>>>> 5.2) x86 rpm - 2m 50secs >>>>>>>>>>>>>>> 5.3) aarch64 rpm - 35mins >>>>>>>>>>>>>>> 5.4) aarch64 deb - 45mins >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> - any reason why you clone pkg-varnish-cache in each job? >>>>>>>>>>>>>>>> The idea was to have it cloned once in tar-pkg-tools for consistency and >>>>>>>>>>>>>>>> reproducibility, which we lose here. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I will extract the common steps once I see it working. This >>>>>>>>>>>>>>> is my first CircleCI project and I still find my ways in it! >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> - do we want to change things for the amd64 platforms for >>>>>>>>>>>>>>>> the sake of consistency? 
>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> So far there is nothing specific for amd4 or aarch64, except >>>>>>>>>>>>>>> the base Docker images. >>>>>>>>>>>>>>> For example make-deb-packages.sh is reused for both amd64 >>>>>>>>>>>>>>> and aarch64 builds. Same for -rpm- and now for -apk- (alpine). >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Once I feel the change is almost finished I will open a Pull >>>>>>>>>>>>>>> Request for more comments! >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Martin >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>> Guillaume Quintard >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On Mon, Mar 23, 2020 at 6:25 AM Martin Grigorov < >>>>>>>>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> On Wed, Mar 18, 2020 at 5:31 PM Martin Grigorov < >>>>>>>>>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> On Thu, Mar 12, 2020 at 4:35 PM Martin Grigorov < >>>>>>>>>>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Hi Guillaume, >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> On Thu, Mar 12, 2020 at 3:23 PM Guillaume Quintard < >>>>>>>>>>>>>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> Offering arm64 packages requires a few things: >>>>>>>>>>>>>>>>>>>> - arm64-compatible code (all good in >>>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache) >>>>>>>>>>>>>>>>>>>> - arm64-compatible package framework (all good in >>>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/pkg-varnish-cache) >>>>>>>>>>>>>>>>>>>> - infrastructure to build the packages (uhoh, see below) >>>>>>>>>>>>>>>>>>>> - infrastructure to store and deliver ( >>>>>>>>>>>>>>>>>>>> https://packagecloud.io/varnishcache) >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> So, everything is in place, expect for the third point. >>>>>>>>>>>>>>>>>>>> At the moment, there are two concurrent CI implementations: >>>>>>>>>>>>>>>>>>>> - travis: >>>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.travis.yml It's >>>>>>>>>>>>>>>>>>>> the historical one, and currently only runs compilation+test for OSX >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Actually it tests Linux AMD64 and ARM64 too. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> - circleci: >>>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.circleci/config.yml the >>>>>>>>>>>>>>>>>>>> new kid on the block, that builds all the packages and distchecks for all >>>>>>>>>>>>>>>>>>>> the packaged platforms >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> The issue is that cirecleci doesn't support arm64 >>>>>>>>>>>>>>>>>>>> containers (for now?), so we would need to re-implement the packaging logic >>>>>>>>>>>>>>>>>>>> in Travis. It's not a big problem, but it's currently not a priority on my >>>>>>>>>>>>>>>>>>>> side. >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> However, I am totally ready to provide help if someone >>>>>>>>>>>>>>>>>>>> wants to take that up. The added benefit it that Travis would be able to >>>>>>>>>>>>>>>>>>>> handle everything and we can retire the circleci experiment >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> I will take a look in the coming days and ask you if I >>>>>>>>>>>>>>>>>>> need help! 
>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> I've took a look at the current setup and here is what >>>>>>>>>>>>>>>>>> I've found as problems and possible solutions: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> 1) Circle CI >>>>>>>>>>>>>>>>>> 1.1) problem - the 'machine' and 'Docker' executors run >>>>>>>>>>>>>>>>>> on x86_64, so there is no way to build the packages in a "native" >>>>>>>>>>>>>>>>>> environment >>>>>>>>>>>>>>>>>> 1.2) possible solutions >>>>>>>>>>>>>>>>>> 1.2.1) use multiarch cross build >>>>>>>>>>>>>>>>>> 1.2.2) use 'machine' executor that registers QEMU via >>>>>>>>>>>>>>>>>> https://hub.docker.com/r/multiarch/qemu-user-static/ and >>>>>>>>>>>>>>>>>> then builds and runs a custom Docker image that executes a shell script >>>>>>>>>>>>>>>>>> with the build steps >>>>>>>>>>>>>>>>>> It will look something like >>>>>>>>>>>>>>>>>> https://github.com/yukimochi-containers/alpine-vpnserver/blob/69bb0a612c9df3e4ba78064d114751b760f0df9d/.circleci/config.yml#L19-L38 but >>>>>>>>>>>>>>>>>> instead of uploading the Docker image as a last step it will run it. >>>>>>>>>>>>>>>>>> The RPM and DEB build related code from current >>>>>>>>>>>>>>>>>> config.yml will be extracted into shell scripts which will be copied in the >>>>>>>>>>>>>>>>>> custom Docker images >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> From these two possible ways I have better picture in my >>>>>>>>>>>>>>>>>> head how to do 1.2.2, but I don't mind going deep in 1.2.1 if this is what >>>>>>>>>>>>>>>>>> you'd prefer. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> I've decided to stay with Circle CI and use 'machine' >>>>>>>>>>>>>>>>> executor with QEMU. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> The changed config.yml could be seen at >>>>>>>>>>>>>>>>> https://github.com/martin-g/varnish-cache/tree/feature/aarch64-packages/.circleci and >>>>>>>>>>>>>>>>> the build at >>>>>>>>>>>>>>>>> https://app.circleci.com/pipelines/github/martin-g/varnish-cache/71/workflows/3a275d79-62a9-48b4-9aef-1585de1c87c8 >>>>>>>>>>>>>>>>> The builds on x86 arch take 3-4 mins, but for aarch64 >>>>>>>>>>>>>>>>> (emulation!) ~40mins >>>>>>>>>>>>>>>>> For now the jobs just build the .deb & .rpm packages for >>>>>>>>>>>>>>>>> CentOS 7 and Ubuntu 18.04, both amd64 and aarch64. >>>>>>>>>>>>>>>>> TODOs: >>>>>>>>>>>>>>>>> - migrate Alpine >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>> Build on Alpine aarch64 fails with: >>>>>>>>>>>>>> ... >>>>>>>>>>>>>> automake: this behaviour will change in future Automake >>>>>>>>>>>>>> versions: they will >>>>>>>>>>>>>> automake: unconditionally cause object files to be placed in >>>>>>>>>>>>>> the same subdirectory >>>>>>>>>>>>>> automake: of the corresponding sources. >>>>>>>>>>>>>> automake: project, to avoid future incompatibilities. >>>>>>>>>>>>>> parallel-tests: installing 'build-aux/test-driver' >>>>>>>>>>>>>> lib/libvmod_debug/Makefile.am:12: warning: >>>>>>>>>>>>>> libvmod_debug_la_LDFLAGS multiply defined in condition TRUE ... >>>>>>>>>>>>>> lib/libvmod_debug/automake_boilerplate.am:19: ... 
>>>>>>>>>>>>>> 'libvmod_debug_la_LDFLAGS' previously defined here >>>>>>>>>>>>>> lib/libvmod_debug/Makefile.am:9: 'lib/libvmod_debug/ >>>>>>>>>>>>>> automake_boilerplate.am' included from here >>>>>>>>>>>>>> + autoconf >>>>>>>>>>>>>> + CONFIG_SHELL=/bin/sh >>>>>>>>>>>>>> + export CONFIG_SHELL >>>>>>>>>>>>>> + ./configure '--prefix=/opt/varnish' >>>>>>>>>>>>>> '--mandir=/opt/varnish/man' --enable-maintainer-mode >>>>>>>>>>>>>> --enable-developer-warnings --enable-debugging-symbols >>>>>>>>>>>>>> --enable-dependency-tracking --with-persistent-storage --quiet >>>>>>>>>>>>>> configure: WARNING: dot not found - build will fail if svg >>>>>>>>>>>>>> files are out of date. >>>>>>>>>>>>>> configure: WARNING: No system jemalloc found, using system >>>>>>>>>>>>>> malloc >>>>>>>>>>>>>> configure: error: Could not find backtrace() support >>>>>>>>>>>>>> >>>>>>>>>>>>>> Does anyone know a workaround ? >>>>>>>>>>>>>> I use multiarch/alpine:aarch64-edge as a base Docker image >>>>>>>>>>>>>> >>>>>>>>>>>>>> Martin >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> - store the packages as CircleCI artifacts >>>>>>>>>>>>>>>>> - anything else that is still missing >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Adding more architectures would be as easy as adding a new >>>>>>>>>>>>>>>>> Dockerfile with a base image from the respective type. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Martin >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> 2) Travis CI >>>>>>>>>>>>>>>>>> 2.1) problems >>>>>>>>>>>>>>>>>> 2.1.1) generally Travis is slower than Circle! >>>>>>>>>>>>>>>>>> Althought if we use CircleCI 'machine' executor it will >>>>>>>>>>>>>>>>>> be slower than the current 'Docker' executor! >>>>>>>>>>>>>>>>>> 2.1.2) Travis supports only Ubuntu >>>>>>>>>>>>>>>>>> Current setup at CircleCI uses CentOS 7. >>>>>>>>>>>>>>>>>> I guess the build steps won't have problems on Ubuntu. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> 3) GitHub Actions >>>>>>>>>>>>>>>>>> GH Actions does not support ARM64 but it supports self >>>>>>>>>>>>>>>>>> hosted ARM64 runners >>>>>>>>>>>>>>>>>> 3.1) The problem is that there is no way to make a self >>>>>>>>>>>>>>>>>> hosted runner really private. I.e. if someone forks Varnish Cache any >>>>>>>>>>>>>>>>>> commit in the fork will trigger builds on the arm64 node. There is no way >>>>>>>>>>>>>>>>>> to reserve the runner only for commits against >>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Do you see other problems or maybe different ways ? >>>>>>>>>>>>>>>>>> Do you have preferences which way to go ? >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Regards, >>>>>>>>>>>>>>>>>> Martin >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Regards, >>>>>>>>>>>>>>>>>>> Martin >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>> Guillaume Quintard >>>>>>>>>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>>>>>>>>> varnish-dev mailing list >>>>>>>>>>>>>>>>>>>> varnish-dev at varnish-cache.org >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dridi at varni.sh Wed Jul 15 09:31:25 2020 From: dridi at varni.sh (Dridi Boukelmoune) Date: Wed, 15 Jul 2020 09:31:25 +0000 Subject: Support for AARCH64 In-Reply-To: References: <8156.1583910935@critter.freebsd.dk> Message-ID: > No, I wasn't aware of this discussion. > The weekly package installed successfully now! > Thank you! And I lost track of this thread, but all's well that ends well ;-) We are looking forward to feedback regarding the weeklies, please make sure to upgrade frequently and let us know as soon as something goes wrong. Cheers, Dridi From scan-admin at coverity.com Sun Jul 19 11:55:19 2020 From: scan-admin at coverity.com (scan-admin at coverity.com) Date: Sun, 19 Jul 2020 11:55:19 +0000 (UTC) Subject: Coverity Scan: Analysis completed for varnish Message-ID: <5f1434a74392a_1fd602b271b78cf5422425@prd-scan-dashboard-0.mail> Your request for analysis of varnish has been completed successfully. The results are available at https://u15810271.ct.sendgrid.net/ls/click?upn=HRESupC-2F2Czv4BOaCWWCy7my0P0qcxCbhZ31OYv50yrJbcjUxJo9eCHXi2QbgV6mmItSKtPrD4wtuBl7WlE3MQ-3D-3D_JbM_WyTzqwss9kUEGhvWd0SG502mTu1yasCtuh9h-2FD3Je49qRwPGl9lYnUboH4bHf3mQw0M7jU6YUCfk6G7PHidmNMPWQ548ihRtd4v-2FhmdUZOG4uKSdkq26mB0qjskSpDkExVCwcneazwTMdWfGugJ-2FbI5QjGJF68iHJPPxdMq-2FwsEPkKa1fEM60geIJoHaZXsOmquaKuvD5j-2BxiOIgGUi9zlY88V9KhA5KnaT0WPtfWpQ-3D Build ID: 327418 Analysis Summary: New defects found: 1 Defects eliminated: 0 If you have difficulty understanding any defects, email us at scan-admin at coverity.com, or post your question to StackOverflow at https://u15810271.ct.sendgrid.net/ls/click?upn=CTPegkVN6peWFCMEieYYmPWIi1E4yUS9EoqKFcNAiqhRq8qmgeBE-2Bdt3uvFRAFXd-2FlwX83-2FVVdybfzIMOby0qA-3D-3D89th_WyTzqwss9kUEGhvWd0SG502mTu1yasCtuh9h-2FD3Je49qRwPGl9lYnUboH4bHf3mQw0M7jU6YUCfk6G7PHidmNJpYsZT-2BlZiq9DKQue-2FInXGqUPXQtd4QVXkO3QMR7IezkwaxFO93tWEnb5OBm-2BYNpkN0z5C9nMbu-2Bt6ixsDnLn4RbmRXd8RjUOkoH0BzSk9BNruABEAGOeB5ov9ZREhqKkN5tPe31NNPFSukWUvxcTI-3D From scan-admin at coverity.com Sun Jul 26 11:54:45 2020 From: scan-admin at coverity.com (scan-admin at coverity.com) Date: Sun, 26 Jul 2020 11:54:45 +0000 (UTC) Subject: Coverity Scan: Analysis completed for varnish Message-ID: <5f1d6f058bda8_66c7d2abc14a9cf40184a1@prd-scan-dashboard-0.mail> Your request for analysis of varnish has been completed successfully. The results are available at https://u15810271.ct.sendgrid.net/ls/click?upn=HRESupC-2F2Czv4BOaCWWCy7my0P0qcxCbhZ31OYv50yrJbcjUxJo9eCHXi2QbgV6mmItSKtPrD4wtuBl7WlE3MQ-3D-3D-qun_WyTzqwss9kUEGhvWd0SG502mTu1yasCtuh9h-2FD3Je48QMUe5-2BWItxbt6QOl2GyyxQYrZK4MubR1P2vSCsn4MMJF9Gc3uiTBU85fZRYCx9uB0b3QDpwqTCHN52RhymGx4vDrcszCI0IgBRLE7BnNeXit1J-2FdHwVd50ByAl9y1MY0xIDLHCI24sC4K1LGNlm2T5lMVLkd6-2BkDAlMUK16DBioBn1kIHv2MpeWHiSUZn7oc-3D Build ID: 328723 Analysis Summary: New defects found: 0 Defects eliminated: 1 From martin.grigorov at gmail.com Tue Jul 28 11:52:40 2020 From: martin.grigorov at gmail.com (Martin Grigorov) Date: Tue, 28 Jul 2020 14:52:40 +0300 Subject: Compared performance of Varnish Cache on x86_64 and aarch64 Message-ID: Hello Varnish community, I've just posted an article [1] about comparing the performance of Varnish Cache on two similar machines - the main difference is the CPU architecture - x86_64 vs aarch64. It uses a specific use case - the backend service just returns a static content. The idea is to compare Varnish on the different architectures but also to compare Varnish against the backend HTTP server. 
What is interesting is that Varnish gives the same throughput as the backend server on x86_64 but on aarch64 it is around 30% slower than the backend. Any feedback and ideas how to tweak it (VCL or even patches) are very welcome! Regards, Martin 1. https://medium.com/@martin.grigorov/compare-varnish-cache-performance-on-x86-64-and-aarch64-cpu-architectures-cef5ad5fee5f?sk=1be4c19efc17504fa1afb53dc1d8ef92 -------------- next part -------------- An HTML attachment was scrubbed... URL: From phk at phk.freebsd.dk Tue Jul 28 14:01:03 2020 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Tue, 28 Jul 2020 14:01:03 +0000 Subject: Compared performance of Varnish Cache on x86_64 and aarch64 In-Reply-To: References: Message-ID: <70851.1595944863@critter.freebsd.dk> -------- Martin Grigorov writes: > Any feedback and ideas how to tweak it (VCL or even patches) are very > welcome! First you need to tweak your benchmark setup. aarch64 Thread Stats Avg Stdev Max +/- Stdev Latency 655.40us 798.70us 28.43ms 90.52% Strictly speaking, you cannot rule out that the ARM machine sends responses before it receives the request, because your standard deviation is larger than your average. In other words: Those numbers tell us nothing. If you want to do this comparison, and I would love for you to do so, you really need to take the time it takes, and get your "noise" down. Here is how you should do it: for machine in ARM, INTEL Reboot machine For i in (at least) 1-5: Run test for 5 minutes If the results from the first run on each machine is very different from the other four runs, you can disrecard it, as a startup/bootup artifact. Report the numbers for all the runs for both machines. Make a plot of all those numbers, where you plot the reported average +/- stddev as a line, and the max value as a dot/cross/box. If you want to get fancy, you can do a Student's T test to tell you if there is any real difference. There's a program called "ministat" which will do this for you. Also: I can highly recommend this book: http://www.larrygonick.com/titles/science/the-cartoon-guide-to-statistics/ -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence. From martin.grigorov at gmail.com Wed Jul 29 12:03:07 2020 From: martin.grigorov at gmail.com (Martin Grigorov) Date: Wed, 29 Jul 2020 15:03:07 +0300 Subject: Compared performance of Varnish Cache on x86_64 and aarch64 In-Reply-To: <70851.1595944863@critter.freebsd.dk> References: <70851.1595944863@critter.freebsd.dk> Message-ID: Hi Poul-Henning, Thank you for your answer! On Tue, Jul 28, 2020 at 5:01 PM Poul-Henning Kamp wrote: > -------- > Martin Grigorov writes: > > > Any feedback and ideas how to tweak it (VCL or even patches) are very > > welcome! > > First you need to tweak your benchmark setup. > > aarch64 > > Thread Stats Avg Stdev Max +/- Stdev > Latency 655.40us 798.70us 28.43ms 90.52% > > Strictly speaking, you cannot rule out that the ARM machine > sends responses before it receives the request, because your > standard deviation is larger than your average. > Could you explain in what case(s) the server would send responses before receiving a request ? Do you think that there might be negative values for the latency of some requests ? > > In other words: Those numbers tell us nothing. 
> > If you want to do this comparison, and I would love for you to do so, > you really need to take the time it takes, and get your "noise" down. > > Here is how you should do it: > > for machine in ARM, INTEL > Reboot machine > For i in (at least) 1-5: > Run test for 5 minutes > > If the results from the first run on each machine is very different > from the other four runs, you can disrecard it, as a startup/bootup > artifact. > > Report the numbers for all the runs for both machines. > > Make a plot of all those numbers, where you plot the reported > average +/- stddev as a line, and the max value as a dot/cross/box. > > If you want to get fancy, you can do a Student's T test to tell > you if there is any real difference. There's a program called > "ministat" which will do this for you. > ministat looks cool! Thanks! I think I can save the raw latencies for all requests into a file and feed ministat with it! Gil Tene also didn't like how wrk measures the latency and forked it to https://github.com/giltene/wrk2. wrk2 measures the latency by using constant rate/throughput, while wrk focuses on as high throughput as possible and just reports the latency percentiles. wrk2 also prints detailed latency distribution as at https://github.com/giltene/wrk2#basic-usage (not as a plot chart but still useful). The only problem is that wrk2 is not well maintained and it doesn't work on modern aarch64 due to the old version of Lua. I'll try to upgrade it. Regards, Martin > Also: I can highly recommend this book: > > > http://www.larrygonick.com/titles/science/the-cartoon-guide-to-statistics/ > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence. > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From phk at phk.freebsd.dk Wed Jul 29 12:11:44 2020 From: phk at phk.freebsd.dk (Poul-Henning Kamp) Date: Wed, 29 Jul 2020 12:11:44 +0000 Subject: Compared performance of Varnish Cache on x86_64 and aarch64 In-Reply-To: References: <70851.1595944863@critter.freebsd.dk> Message-ID: <81825.1596024704@critter.freebsd.dk> -------- Martin Grigorov writes: > > > Any feedback and ideas how to tweak it (VCL or even patches) are very > > > welcome! > > > > First you need to tweak your benchmark setup. > > > > aarch64 > > > > Thread Stats Avg Stdev Max +/- Stdev > > Latency 655.40us 798.70us 28.43ms 90.52% > > > > Strictly speaking, you cannot rule out that the ARM machine > > sends responses before it receives the request, because your > > standard deviation is larger than your average. > > > > Could you explain in what case(s) the server would send responses before > > receiving a request ? It never would, that's the point! Your measurement says that there is 2/3 chance that the latency is between: 655.40µs - 798.70µs = -143.30µs and 655.40µs + 798.70µs = 1454.10µs You cannot conclude _anything_ from those numbers. -- Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 phk at FreeBSD.ORG | TCP/IP since RFC 956 FreeBSD committer | BSD since 4.3-tahoe Never attribute to malice what can adequately be explained by incompetence.
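A minimal sketch of the procedure described above, assuming wrk is the load generator; the URL, the wrk options, the run count and the file names are placeholders, not taken from the thread:

    # Sketch only: repeat the same 5-minute test several times per machine,
    # keep one throughput figure per run, then let ministat compare the samples.
    URL=http://192.0.2.10:6081/              # Varnish on the machine under test
    OUT=results-$(uname -m).txt              # results-x86_64.txt / results-aarch64.txt
    : > "$OUT"
    for i in 1 2 3 4 5; do
        # the first run can be discarded later if it looks like a warm-up artifact
        wrk -t8 -c64 -d300s --latency "$URL" |
            awk '/^Requests\/sec:/ {print $2}' >> "$OUT"
    done
    # With both files collected, ministat prints min/max/avg/stddev per sample
    # and a Student's t verdict on whether the difference is significant:
    ministat results-x86_64.txt results-aarch64.txt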
From martin.grigorov at gmail.com Wed Jul 29 12:35:38 2020 From: martin.grigorov at gmail.com (Martin Grigorov) Date: Wed, 29 Jul 2020 15:35:38 +0300 Subject: Compared performance of Varnish Cache on x86_64 and aarch64 In-Reply-To: <81825.1596024704@critter.freebsd.dk> References: <70851.1595944863@critter.freebsd.dk> <81825.1596024704@critter.freebsd.dk> Message-ID: On Wed, Jul 29, 2020 at 3:11 PM Poul-Henning Kamp wrote: > -------- > Martin Grigorov writes: > > > > > Any feedback and ideas how to tweak it (VCL or even patches) are very > > > > welcome! > > > > > > First you need to tweak your benchmark setup. > > > > > > aarch64 > > > > > > Thread Stats Avg Stdev Max +/- Stdev > > > Latency 655.40us 798.70us 28.43ms 90.52% > > > > > > Strictly speaking, you cannot rule out that the ARM machine > > > sends responses before it receives the request, because your > > > standard deviation is larger than your average. > > > > > > > Could you explain in what case(s) the server would send responses before > > receiving a request ? > > It never would, that's the point! > > Your measurement says that there is 2/3 chance that the latency > is between: > > 655.40µs - 798.70µs = -143.30µs > > and > 655.40µs + 798.70µs = 1454.10µs > > You cannot conclude _anything_ from those numbers. > This now sounds like: if the latency stats are not correct then most probably the throughput is also not correct! I may switch to a different load client tool! > > > -- > Poul-Henning Kamp | UNIX since Zilog Zeus 3.20 > phk at FreeBSD.ORG | TCP/IP since RFC 956 > FreeBSD committer | BSD since 4.3-tahoe > Never attribute to malice what can adequately be explained by incompetence. > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From geoff at uplex.de Fri Jul 31 06:44:33 2020 From: geoff at uplex.de (Geoff Simmons) Date: Fri, 31 Jul 2020 08:44:33 +0200 Subject: Compared performance of Varnish Cache on x86_64 and aarch64 In-Reply-To: References: Message-ID: <6d910fb6-ed09-6e17-5b20-8ef0a0f9b228@uplex.de> On 7/28/20 13:52, Martin Grigorov wrote: > > I've just posted an article [1] about comparing the performance of Varnish > > Cache on two similar > > machines - the main difference is the CPU architecture - x86_64 vs aarch64. > > It uses a specific use case - the backend service just returns a static > > content. The idea is > > to compare Varnish on the different architectures but also to compare > > Varnish against the backend HTTP server. > > What is interesting is that Varnish gives the same throughput as the > > backend server on x86_64 but on aarch64 it is around 30% slower than the > > backend. Does your test have an account of whether there were any errors in backend fetches? Don't know if that explains anything, but with a connect timeout of 10s and first byte timeout of 5m, any error would have a considerable effect on the results of a 30 second test. The test tool output doesn't say anything I can see about error rates -- whether all responses had status 200, and if not, how many had which other status. Ideally it should be all 200, otherwise the results may not be valid. I agree with phk that a statistical analysis is needed for a robust statement about differences between the two platforms. For that, you'd need more than the summary stats shown in your blog post -- you need to collect all of the response times. What I usually do is query Varnish client request logs for Timestamp:Resp and save the number in the last column. t.test() in R runs Student's t-test (me R fanboi).
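A minimal sketch of that recipe; the grouping options, the awk filter and the file names are illustrative assumptions, not commands from the thread:

    # On each machine, while the test runs, collect client-side Timestamp:Resp
    # records and keep the number in the last column:
    varnishlog -c -g request -i Timestamp |
        awk '/Resp:/ {print $NF}' > resp-aarch64.txt
    # After producing resp-x86_64.txt the same way, compare the two samples
    # with Student's t-test in R:
    Rscript -e 'x <- scan("resp-x86_64.txt"); y <- scan("resp-aarch64.txt"); print(t.test(x, y))'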
HTH, Geoff -- ** * * UPLEX - Nils Goroll Systemoptimierung Scheffelstraße 32 22301 Hamburg Tel +49 40 2880 5731 Mob +49 176 636 90917 Fax +49 40 42949753 http://uplex.de -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL:
From hermunn at varnish-software.com Fri Jul 31 13:43:11 2020 From: hermunn at varnish-software.com (=?UTF-8?Q?P=C3=A5l_Hermunn_Johansen?=) Date: Fri, 31 Jul 2020 15:43:11 +0200 Subject: Compared performance of Varnish Cache on x86_64 and aarch64 In-Reply-To: <81825.1596024704@critter.freebsd.dk> References: <70851.1595944863@critter.freebsd.dk> <81825.1596024704@critter.freebsd.dk> Message-ID: I am sorry for being so late to the game, but here it goes: ons. 29. jul. 2020 kl. 14:12 skrev Poul-Henning Kamp : > Your measurement says that there is 2/3 chance that the latency > is between: > > 655.40µs - 798.70µs = -143.30µs > > and > 655.40µs + 798.70µs = 1454.10µs No, it does not. There is no claim anywhere that the numbers are following a normal distribution or an approximation of it. Of course, the calculations you do demonstrate that the data is far from normally distributed (as expected). > You cannot conclude _anything_ from those numbers. There are two numbers, the average and the standard deviation, and they are calculated from the data, but the truth is hidden deeper in the data. By looking at the particular numbers, I agree completely that it is wrong to conclude that one is better than the other. I am not saying that the statements in the article are false, just that you do not have data to draw the conclusions. Furthermore I have to say that Geoff got things right (see below). As a mathematician, I have to say that statistics is hard, and trusting the output of wrk to draw conclusions is outright the wrong thing to do. In this case we have a luxury which you typically do not have: Data is essentially free. You can run many tests and you can run short or long tests with different parameters. A 30 second test is simply not enough for anything. As Geoff indicated, for each transaction you can extract many relevant values from varnishlog, with the status, hit/miss, time to first byte and time to last byte being the most obvious ones. They can be extracted and saved to a csv file by using varnishncsa with a custom format string, and you can use R (used it myself as a tool in my previous job - not a fan) to do statistical analysis on the data. The Student T suggestion from Geoff is a good idea, but just looking at one set of numbers without considering other factors is mathematically problematic. Anyway, some obvious questions then arise. For example:
- How do the numbers between wrk and varnishlog/varnishncsa compare? Did wrk report a different total number of transactions than varnish? If there is a discrepancy, then the errors might be because of some resource constraint (number of sockets or dropped SYN packets?).
- How do the average and maximum compare between varnish and wrk?
- What is the CPU usage of the kernel, the benchmarking tool and the varnish processes in the tests?
- What is the difference between the time to first byte and the time to last byte in Varnish for different object sizes? When Varnish writes to a socket, it hands bytes over to the kernel, and when the write call returns, we do not know how far the bytes have come, and how long it will take before they get to the final destination.
The bytes may be in a kernel buffer, they might be on the network card, and they might be already received at the client's kernel, and they might have made it all into wrk (which may or may not have timestamped the response). Typically, depending on many things, Varnish will report faster times than what wrk sees, but since returning from the write call means that the calling thread must be rescheduled, it is even possible that wrk will see that some requests are faster than what Varnish reports. Running wrk2 with different speeds in a series of tests seems natural to me, so that you can observe when (and how) the system starts running into bottlenecks. Note that the bottleneck can just as well be in wrk2 itself or on the combined CPU usage of kernel + Varnish + wrk2. To complicate things even further: On your ARM vs. x64 tests, my guess is that both kernel parameters and parameters for the network are different, and the distributions probably have good reason to choose different values. It is very likely that these differences affect the performance of the systems in many ways, and that different tests will have different "optimal" tunings of kernel and network parameters. Sorry for rambling, but getting the statistics wrong is so easy. The question is very interesting, but if you want to draw conclusions, you should do the analysis, and (ideally) give access to the raw data in case anyone wants to have a look. Best, Pål fre. 31. jul. 2020 kl. 08:45 skrev Geoff Simmons : > > On 7/28/20 13:52, Martin Grigorov wrote: > > > > I've just posted an article [1] about comparing the performance of Varnish > > Cache on two similar > > machines - the main difference is the CPU architecture - x86_64 vs aarch64. > > It uses a specific use case - the backend service just returns a static > > content. The idea is > > to compare Varnish on the different architectures but also to compare > > Varnish against the backend HTTP server. > > What is interesting is that Varnish gives the same throughput as the > > backend server on x86_64 but on aarch64 it is around 30% slower than the > > backend. > > Does your test have an account of whether there were any errors in > backend fetches? Don't know if that explains anything, but with a > connect timeout of 10s and first byte timeout of 5m, any error would > have a considerable effect on the results of a 30 second test. > > The test tool output doesn't say anything I can see about error rates -- > whether all responses had status 200, and if not, how many had which > other status. Ideally it should be all 200, otherwise the results may > not be valid. > > I agree with phk that a statistical analysis is needed for a robust > statement about differences between the two platforms. For that, you'd > need more than the summary stats shown in your blog post -- you need to > collect all of the response times. What I usually do is query Varnish > client request logs for Timestamp:Resp and save the number in the last > column. > > t.test() in R runs Student's t-test (me R fanboi). > >
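A minimal sketch of the varnishncsa idea mentioned above; the format string, column choice and file names are illustrative assumptions, not taken from the thread:

    # One CSV line per client request: status, cache handling (hit/miss/pass),
    # time to first byte, and total time taken in microseconds.
    varnishncsa -c -F '%s,%{Varnish:handling}x,%{Varnish:time_firstbyte}x,%D' > requests-aarch64.csv
    # The file can then be loaded into R for further analysis, e.g.:
    Rscript -e 'd <- read.csv("requests-aarch64.csv", header = FALSE, col.names = c("status", "handling", "ttfb", "usec")); print(summary(d)); print(table(d$status))'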