From scan-admin at coverity.com Sun Jun 7 11:50:59 2020
From: scan-admin at coverity.com (scan-admin at coverity.com)
Date: Sun, 07 Jun 2020 11:50:59 +0000 (UTC)
Subject: Coverity Scan: Analysis completed for varnish
Message-ID: <5edcd4a375143_302cac2ae67f602f58280ab@appnode-2.mail>

Your request for analysis of varnish has been completed successfully.
The results are available at
https://u2389337.ct.sendgrid.net/ls/click?upn=nJaKvJSIH-2FPAfmty-2BK5tYpPklAc1eEA-2F1zfUjH6teEzb7a35k9AJT3vQQzyq0UjO90ieNOMB6HZSUHPtUyV1qw-3D-3DATNt_WyTzqwss9kUEGhvWd0SG502mTu1yasCtuh9h-2FD3Je4916FrLWYqIb4xNwHogozfXjHwo2FQN9TGvNHVKTspBxpDQwxRIOpM-2FJ5PHZ2NBlkerW2hbIxs5pwNOdgxTlwBtzL9OnPgoFsGup5RAlqQ5R0c5zBo5GDdHHKLr6cuOcHB32GZs8QVJI9tO-2BfbPQvzTpz8ALavKo2hrgINfps13EA-3D-3D

Build ID: 319365

Analysis Summary:
   New defects found: 0
   Defects eliminated: 0

From scan-admin at coverity.com Sun Jun 14 11:47:39 2020
From: scan-admin at coverity.com (scan-admin at coverity.com)
Date: Sun, 14 Jun 2020 11:47:39 +0000 (UTC)
Subject: Coverity Scan: Analysis failed for varnish
Message-ID: <5ee60e5ab48d6_23f972b1b03752f588925e@prd-scan-dashboard-0.mail>

Your request for analysis of varnish has failed.

Analysis status: Failure

Please fix the error and upload the build again.

Error details:
   Failed to retrieve tar file

For a more detailed explanation of the error, please check:
https://u2389337.ct.sendgrid.net/ls/click?upn=QsMnDxMCOVVs7CDlyD2jouKTgNlKFinTRd3y-2BJC7sZryfVdWHH2BBU620aHLHGfhMXPTHYY5wQ5zOiTMnTlWDg-3D-3DcgjT_WyTzqwss9kUEGhvWd0SG502mTu1yasCtuh9h-2FD3Je4-2B-2FrqgD-2BaTFygO1cLe7zAR01-2F-2B5fYnvR0vd20xQpyIbIA1TvGOMQ6zsSxjUPmDDpO8l5vHNXV-2BGJZ2kyu2mF9HOuafusQQpTg-2BC-2BVQGvLXCD3buTUnCMkM1yGECpuD5NI7XTmodtWFlU3GjOIPHOP0EHuy9756O7e-2BKz0M6dJQtpg-3D-3D

If your build process isn't going smoothly, email us at
scan-admin at coverity.com with your cov-int/build-log.txt file attached
for assistance, or post your issue on Stack Overflow at
https://u2389337.ct.sendgrid.net/ls/click?upn=QsMnDxMCOVVs7CDlyD2jouKTgNlKFinTRd3y-2BJC7sZryfVdWHH2BBU620aHLHGfhMXPTHYY5wQ5zOiTMnTlWDg-3D-3Dohn7_WyTzqwss9kUEGhvWd0SG502mTu1yasCtuh9h-2FD3Je4-2B-2FrqgD-2BaTFygO1cLe7zAR01-2F-2B5fYnvR0vd20xQpyIbIPv1o3QqpcdeilqbhVAKBN4lWwUkb-2FgEk9XD-2B3aP9Buwll8hsawf0QiF3hzrlYvhxz3LBsq5JbTIs-2F6Nm3xrU2a-2BO3WIJiYzo1vyzkmUDrPht2IP2Xw0MLe7ug9sCqJLVQ-3D-3D
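(Background on the failure above: a Coverity Scan submission wraps the build
with cov-build and uploads the resulting tarball, and "Failed to retrieve tar
file" means that upload did not come through intact. A rough sketch of the
usual flow; the token, email, and version values below are placeholders, not
details taken from these messages:)

    # capture the build with Coverity's wrapper
    ./autogen.sh && ./configure
    cov-build --dir cov-int make

    # pack the intermediate directory and upload it for analysis
    tar czf varnish.tgz cov-int
    curl --form token=$COVERITY_TOKEN \
         --form email=uploader@example.com \
         --form file=@varnish.tgz \
         --form version=master \
         https://scan.coverity.com/builds?project=varnish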
From emilio.fernandes70 at gmail.com Tue Jun 16 12:28:20 2020
From: emilio.fernandes70 at gmail.com (Emilio Fernandes)
Date: Tue, 16 Jun 2020 15:28:20 +0300
Subject: Support for AARCH64
In-Reply-To:
References: <8156.1583910935@critter.freebsd.dk>
Message-ID:

Hi,

When can we expect the new aarch64 binaries at
https://packagecloud.io/varnishcache/varnish-weekly ?

Gracias!
Emilio

El mié., 15 abr. 2020 a las 14:33, Emilio Fernandes
(<emilio.fernandes70 at gmail.com>) escribió:
>
> El jue., 26 mar. 2020 a las 10:15, Martin Grigorov
> (<martin.grigorov at gmail.com>) escribió:
>
>> Hello,
>>
>> Here is the PR: https://github.com/varnishcache/varnish-cache/pull/3263
>> I will add some more documentation about the new setup.
>> Any feedback is welcome!
>
> Nice work, Martin!
>
> Gracias!
> Emilio
>
>> Regards,
>> Martin
>>
>> On Wed, Mar 25, 2020 at 9:55 PM Martin Grigorov
>> <martin.grigorov at gmail.com> wrote:
>>
>>> Hi,
>>>
>>> On Wed, Mar 25, 2020, 20:15 Guillaume Quintard
>>> <guillaume at varnish-software.com> wrote:
>>>
>>>> is that script running as root?
>>>
>>> Yes.
>>> I also added 'USER root' to its Dockerfile and '-u 0' to the
>>> 'docker run' arguments, but it still doesn't work.
>>> The x86 build is OK.
>>> It must be something in the base docker image.
>>> I've disabled the Alpine aarch64 job for now.
>>> I'll send a PR tomorrow!
>>>
>>> Regards,
>>> Martin
>>>
>>>> --
>>>> Guillaume Quintard
>>>>
>>>> On Wed, Mar 25, 2020 at 2:30 AM Martin Grigorov
>>>> <martin.grigorov at gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I've moved the 'dist' job to be executed in parallel with
>>>>> 'tar_pkg_tools', and the results from both are shared in the
>>>>> workspace for the actual packing jobs.
>>>>> Now the new error for the aarch64-apk job is:
>>>>>
>>>>> abuild: varnish >>> varnish: Updating the sha512sums in APKBUILD...
>>>>> DEBUG: 4
>>>>> abuild: varnish >>> varnish: Building /varnish 6.4.0-r1 (using
>>>>> abuild 3.5.0-r0) started Wed, 25 Mar 2020 09:22:02 +0000
>>>>> >>> varnish: Checking sanity of /package/APKBUILD...
>>>>> >>> WARNING: varnish: No maintainer
>>>>> >>> varnish: Analyzing dependencies...
>>>>> >>> varnish: Installing for build: build-base gcc libc-dev libgcc
>>>>> pcre-dev ncurses-dev libedit-dev py-docutils linux-headers
>>>>> libunwind-dev python py3-sphinx
>>>>> Waiting for repository lock
>>>>> ERROR: Unable to lock database: Bad file descriptor
>>>>> ERROR: Failed to open apk database: Bad file descriptor
>>>>> >>> ERROR: varnish: builddeps failed
>>>>> >>> varnish: Uninstalling dependencies...
>>>>> Waiting for repository lock
>>>>> ERROR: Unable to lock database: Bad file descriptor
>>>>> ERROR: Failed to open apk database: Bad file descriptor
>>>>>
>>>>> Google suggested doing this:
>>>>> rm -rf /var/cache/apk
>>>>> mkdir /var/cache/apk
>>>>>
>>>>> It fails at 'abuild -r' -
>>>>> https://github.com/martin-g/varnish-cache/blob/b62c357b389c0e1e31e9c001cbffb55090c2e49f/.circleci/make-apk-packages.sh#L61
>>>>>
>>>>> Any hints?
>>>>>
>>>>> Martin
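(The "Unable to lock database: Bad file descriptor" errors above come from apk
being unable to take its database lock inside the emulated container. A sketch
of the workaround Martin quotes, placed where it would run, in the builder
container before 'abuild -r'; whether it is actually sufficient under QEMU
emulation is exactly the open question in this message:)

    # recreate apk's cache directory so apk can take its lock again
    rm -rf /var/cache/apk
    mkdir -p /var/cache/apk
    # then retry the package build
    abuild -r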
>>>>> On Wed, Mar 25, 2020 at 2:39 AM Guillaume Quintard
>>>>> <guillaume at varnish-software.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> So, you are pointing at the `dist` job, whose sole role is to
>>>>>> provide us with a dist tarball, so we don't need that command line
>>>>>> to work for everyone, just for that specific platform.
>>>>>>
>>>>>> On the other hand,
>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L168
>>>>>> is closer to what you want: `distcheck` will be called on all
>>>>>> platforms, and you can see that it has the `--with-unwind` argument.
>>>>>> --
>>>>>> Guillaume Quintard
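(To make the distinction above concrete: the `dist` job only needs one
configure line that works on its own platform in order to cut the tarball,
while every packaged platform runs `distcheck` with the packaging flags. A
rough sketch; the authoritative flag set lives in the config.yml linked above:)

    # dist job: one platform, just to produce the tarball
    ./autogen.sh && ./configure && make dist

    # distcheck jobs: every packaged platform, with the packaging flags
    ./configure --with-unwind && make distcheck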
>>>>>> On Tue, Mar 24, 2020 at 3:05 PM Martin Grigorov
>>>>>> <martin.grigorov at gmail.com> wrote:
>>>>>>
>>>>>>> On Tue, Mar 24, 2020, 17:19 Guillaume Quintard
>>>>>>> <guillaume at varnish-software.com> wrote:
>>>>>>>
>>>>>>>> Compare your configure line with what's currently in use (or the
>>>>>>>> apkbuild file); there are a few options (with-unwind,
>>>>>>>> without-jemalloc, etc.) that need to be set.
>>>>>>>
>>>>>>> The configure line comes from "./autogen.des":
>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/autogen.des#L35-L42
>>>>>>> It is called at:
>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L40
>>>>>>> In my branch at:
>>>>>>> https://github.com/martin-g/varnish-cache/blob/4b4626ee9cc366b032a45f27b54d77176125ef03/.circleci/make-apk-packages.sh#L26
>>>>>>>
>>>>>>> It fails only on aarch64 for Alpine Linux. The x86_64 build for
>>>>>>> Alpine is fine.
>>>>>>> AARCH64 for CentOS 7 and Ubuntu 18.04 are also fine.
>>>>>>>
>>>>>>> Martin
>>>>>>>
>>>>>>>> On Tue, Mar 24, 2020, 08:05 Martin Grigorov
>>>>>>>> <martin.grigorov at gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> On Tue, Mar 24, 2020 at 11:00 AM Martin Grigorov
>>>>>>>>> <martin.grigorov at gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Guillaume,
>>>>>>>>>>
>>>>>>>>>> On Mon, Mar 23, 2020 at 8:01 PM Guillaume Quintard
>>>>>>>>>> <guillaume at varnish-software.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Martin,
>>>>>>>>>>>
>>>>>>>>>>> Thank you for that.
>>>>>>>>>>> A few remarks and questions:
>>>>>>>>>>> - how much time does the "docker build" step take? We can
>>>>>>>>>>> possibly speed things up by pushing images to Docker Hub, as
>>>>>>>>>>> they don't need to change very often.
>>>>>>>>>>
>>>>>>>>>> Such an optimization would definitely be a good thing to do!
>>>>>>>>>> At the moment, with the 'machine' executor, it fetches the base
>>>>>>>>>> image and then builds all the Docker layers again and again.
>>>>>>>>>> Here are the timings:
>>>>>>>>>> 1) Spinning up a VM - around 10secs
>>>>>>>>>> 2) prepare env variables - 0secs
>>>>>>>>>> 3) checkout code (varnish-cache) - 5secs
>>>>>>>>>> 4) activate QEMU - 2secs
>>>>>>>>>> 5) build packages
>>>>>>>>>> 5.1) x86 deb - 3m 30secs
>>>>>>>>>> 5.2) x86 rpm - 2m 50secs
>>>>>>>>>> 5.3) aarch64 rpm - 35mins
>>>>>>>>>> 5.4) aarch64 deb - 45mins
>>>>>>>>>>
>>>>>>>>>>> - any reason why you clone pkg-varnish-cache in each job? The
>>>>>>>>>>> idea was to have it cloned once in tar-pkg-tools for
>>>>>>>>>>> consistency and reproducibility, which we lose here.
>>>>>>>>>>
>>>>>>>>>> I will extract the common steps once I see it working. This is
>>>>>>>>>> my first CircleCI project and I'm still finding my way around it!
>>>>>>>>>>
>>>>>>>>>>> - do we want to change things for the amd64 platforms for the
>>>>>>>>>>> sake of consistency?
>>>>>>>>>>
>>>>>>>>>> So far there is nothing specific to amd64 or aarch64, except
>>>>>>>>>> the base Docker images.
>>>>>>>>>> For example, make-deb-packages.sh is reused for both amd64 and
>>>>>>>>>> aarch64 builds. Same for -rpm- and now for -apk- (Alpine).
>>>>>>>>>>
>>>>>>>>>> Once I feel the change is almost finished I will open a Pull
>>>>>>>>>> Request for more comments!
>>>>>>>>>>
>>>>>>>>>> Martin
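(A sketch of the speed-up discussed above: publish the builder images to a
registry once, so CI runs pull cached layers instead of rebuilding them every
time. The image and file names here are hypothetical, not from the repository:)

    # out of band, whenever a builder Dockerfile changes: build and publish
    docker build -t varnishcache/pkg-builder:aarch64-rpm -f Dockerfile.rpm .
    docker push varnishcache/pkg-builder:aarch64-rpm

    # in each CI run: pull the prebuilt image and run the packaging script
    docker pull varnishcache/pkg-builder:aarch64-rpm
    docker run --rm -v "$PWD:/varnish-cache" varnishcache/pkg-builder:aarch64-rpm \
        ./make-rpm-packages.sh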
>>>>>>>>>>> --
>>>>>>>>>>> Guillaume Quintard
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Mar 23, 2020 at 6:25 AM Martin Grigorov
>>>>>>>>>>> <martin.grigorov at gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi,
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Mar 18, 2020 at 5:31 PM Martin Grigorov
>>>>>>>>>>>> <martin.grigorov at gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Mar 12, 2020 at 4:35 PM Martin Grigorov
>>>>>>>>>>>>> <martin.grigorov at gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Guillaume,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Thu, Mar 12, 2020 at 3:23 PM Guillaume Quintard
>>>>>>>>>>>>>> <guillaume at varnish-software.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Offering arm64 packages requires a few things:
>>>>>>>>>>>>>>> - arm64-compatible code (all good in
>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache)
>>>>>>>>>>>>>>> - arm64-compatible package framework (all good in
>>>>>>>>>>>>>>> https://github.com/varnishcache/pkg-varnish-cache)
>>>>>>>>>>>>>>> - infrastructure to build the packages (uhoh, see below)
>>>>>>>>>>>>>>> - infrastructure to store and deliver
>>>>>>>>>>>>>>> (https://packagecloud.io/varnishcache)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> So, everything is in place, except for the third point. At
>>>>>>>>>>>>>>> the moment, there are two concurrent CI implementations:
>>>>>>>>>>>>>>> - travis:
>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.travis.yml
>>>>>>>>>>>>>>> It's the historical one, and currently only runs
>>>>>>>>>>>>>>> compilation+test for OSX
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Actually it tests Linux AMD64 and ARM64 too.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> - circleci:
>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.circleci/config.yml
>>>>>>>>>>>>>>> the new kid on the block, that builds all the packages and
>>>>>>>>>>>>>>> distchecks for all the packaged platforms
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The issue is that CircleCI doesn't support arm64 containers
>>>>>>>>>>>>>>> (for now?), so we would need to re-implement the packaging
>>>>>>>>>>>>>>> logic in Travis. It's not a big problem, but it's currently
>>>>>>>>>>>>>>> not a priority on my side.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> However, I am totally ready to provide help if someone
>>>>>>>>>>>>>>> wants to take that up. The added benefit is that Travis
>>>>>>>>>>>>>>> would be able to handle everything and we could retire the
>>>>>>>>>>>>>>> circleci experiment.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I will take a look in the coming days and ask you if I need
>>>>>>>>>>>>>> help!
>>>>>>>>>>>>>
>>>>>>>>>>>>> I've taken a look at the current setup, and here is what I've
>>>>>>>>>>>>> found as problems and possible solutions:
>>>>>>>>>>>>>
>>>>>>>>>>>>> 1) Circle CI
>>>>>>>>>>>>> 1.1) problem - the 'machine' and 'Docker' executors run on
>>>>>>>>>>>>> x86_64, so there is no way to build the packages in a
>>>>>>>>>>>>> "native" environment
>>>>>>>>>>>>> 1.2) possible solutions
>>>>>>>>>>>>> 1.2.1) use multiarch cross build
>>>>>>>>>>>>> 1.2.2) use a 'machine' executor that registers QEMU via
>>>>>>>>>>>>> https://hub.docker.com/r/multiarch/qemu-user-static/ and then
>>>>>>>>>>>>> builds and runs a custom Docker image that executes a shell
>>>>>>>>>>>>> script with the build steps
>>>>>>>>>>>>> It will look something like
>>>>>>>>>>>>> https://github.com/yukimochi-containers/alpine-vpnserver/blob/69bb0a612c9df3e4ba78064d114751b760f0df9d/.circleci/config.yml#L19-L38
>>>>>>>>>>>>> but instead of uploading the Docker image as a last step it
>>>>>>>>>>>>> will run it.
>>>>>>>>>>>>> The RPM- and DEB-build-related code from the current
>>>>>>>>>>>>> config.yml will be extracted into shell scripts, which will
>>>>>>>>>>>>> be copied into the custom Docker images.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Of these two possible ways I have a better picture in my head
>>>>>>>>>>>>> of how to do 1.2.2, but I don't mind going deep into 1.2.1 if
>>>>>>>>>>>>> that is what you'd prefer.
>>>>>>>>>>>>
>>>>>>>>>>>> I've decided to stay with Circle CI and use the 'machine'
>>>>>>>>>>>> executor with QEMU.
>>>>>>>>>>>>
>>>>>>>>>>>> The changed config.yml can be seen at
>>>>>>>>>>>> https://github.com/martin-g/varnish-cache/tree/feature/aarch64-packages/.circleci
>>>>>>>>>>>> and the build at
>>>>>>>>>>>> https://app.circleci.com/pipelines/github/martin-g/varnish-cache/71/workflows/3a275d79-62a9-48b4-9aef-1585de1c87c8
>>>>>>>>>>>> The builds on the x86 arch take 3-4 mins, but for aarch64
>>>>>>>>>>>> (emulation!) ~40mins.
>>>>>>>>>>>> For now the jobs just build the .deb & .rpm packages for
>>>>>>>>>>>> CentOS 7 and Ubuntu 18.04, both amd64 and aarch64.
>>>>>>>>>>>> TODOs:
>>>>>>>>>>>> - migrate Alpine
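(A sketch of the 1.2.2 approach settled on above, as the shell steps a
'machine' job would run on its x86_64 VM; the image and script names follow
the linked branch loosely and are assumptions, not the actual file names:)

    # register qemu-user-static's binfmt handlers so aarch64 binaries
    # run transparently under emulation on the x86_64 host
    docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

    # build an aarch64 builder image and run the packaging script inside it
    docker build -t varnish-deb-builder:aarch64 -f Dockerfile.deb .
    docker run --rm -u 0 -v "$PWD:/varnish-cache" varnish-deb-builder:aarch64 \
        ./make-deb-packages.sh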
>>>>>>>>> Build on Alpine aarch64 fails with:
>>>>>>>>> ...
>>>>>>>>> automake: this behaviour will change in future Automake versions:
>>>>>>>>> they will
>>>>>>>>> automake: unconditionally cause object files to be placed in the
>>>>>>>>> same subdirectory
>>>>>>>>> automake: of the corresponding sources.
>>>>>>>>> automake: You are advised to start using 'subdir-objects' option
>>>>>>>>> throughout your
>>>>>>>>> automake: project, to avoid future incompatibilities.
>>>>>>>>> parallel-tests: installing 'build-aux/test-driver'
>>>>>>>>> lib/libvmod_debug/Makefile.am:12: warning:
>>>>>>>>> libvmod_debug_la_LDFLAGS multiply defined in condition TRUE ...
>>>>>>>>> lib/libvmod_debug/automake_boilerplate.am:19: ...
>>>>>>>>> 'libvmod_debug_la_LDFLAGS' previously defined here
>>>>>>>>> lib/libvmod_debug/Makefile.am:9:
>>>>>>>>> 'lib/libvmod_debug/automake_boilerplate.am' included from here
>>>>>>>>> + autoconf
>>>>>>>>> + CONFIG_SHELL=/bin/sh
>>>>>>>>> + export CONFIG_SHELL
>>>>>>>>> + ./configure '--prefix=/opt/varnish' '--mandir=/opt/varnish/man'
>>>>>>>>> --enable-maintainer-mode --enable-developer-warnings
>>>>>>>>> --enable-debugging-symbols --enable-dependency-tracking
>>>>>>>>> --with-persistent-storage --quiet
>>>>>>>>> configure: WARNING: dot not found - build will fail if svg files
>>>>>>>>> are out of date.
>>>>>>>>> configure: WARNING: No system jemalloc found, using system malloc
>>>>>>>>> configure: error: Could not find backtrace() support
>>>>>>>>>
>>>>>>>>> Does anyone know a workaround?
>>>>>>>>> I use multiarch/alpine:aarch64-edge as a base Docker image
>>>>>>>>>
>>>>>>>>> Martin
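(One known workaround for the backtrace() error above, assuming configure's
check is the usual execinfo.h/backtrace() probe: musl libc, which Alpine uses,
does not ship backtrace(), but Alpine's libexecinfo package provides it:)

    # in the Alpine builder image, before running ./configure
    apk add --no-cache libexecinfo-dev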
>>>>>>>>>>>> - store the packages as CircleCI artifacts
>>>>>>>>>>>> - anything else that is still missing
>>>>>>>>>>>>
>>>>>>>>>>>> Adding more architectures would be as easy as adding a new
>>>>>>>>>>>> Dockerfile with a base image of the respective type.
>>>>>>>>>>>>
>>>>>>>>>>>> Martin
>>>>>>>>>>>>
>>>>>>>>>>>>> 2) Travis CI
>>>>>>>>>>>>> 2.1) problems
>>>>>>>>>>>>> 2.1.1) generally Travis is slower than Circle!
>>>>>>>>>>>>> Although if we use the CircleCI 'machine' executor, it will
>>>>>>>>>>>>> be slower than the current 'Docker' executor!
>>>>>>>>>>>>> 2.1.2) Travis supports only Ubuntu
>>>>>>>>>>>>> The current setup at CircleCI uses CentOS 7.
>>>>>>>>>>>>> I guess the build steps won't have problems on Ubuntu.
>>>>>>>>>>>>>
>>>>>>>>>>>>> 3) GitHub Actions
>>>>>>>>>>>>> GH Actions does not support ARM64, but it supports
>>>>>>>>>>>>> self-hosted ARM64 runners.
>>>>>>>>>>>>> 3.1) The problem is that there is no way to make a
>>>>>>>>>>>>> self-hosted runner really private. I.e. if someone forks
>>>>>>>>>>>>> Varnish Cache, any commit in the fork will trigger builds on
>>>>>>>>>>>>> the arm64 node. There is no way to reserve the runner only
>>>>>>>>>>>>> for commits against
>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache
>>>>>>>>>>>>>
>>>>>>>>>>>>> Do you see other problems or maybe different ways?
>>>>>>>>>>>>> Do you have preferences which way to go?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>> Guillaume Quintard
>>>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>>>> varnish-dev mailing list
>>>>>>>>>>>>>>> varnish-dev at varnish-cache.org
>>>>>>>>>>>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev

From guillaume at varnish-software.com Tue Jun 16 15:38:48 2020
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Tue, 16 Jun 2020 08:38:48 -0700
Subject: Support for AARCH64
In-Reply-To:
References: <8156.1583910935@critter.freebsd.dk>
Message-ID:

Ola,

Pål just pushed Monday's batch, so you get amd64 and aarch64 packages for
all the platforms. Go forth and test, the paint is still very wet.

Bonne journée!
--
Guillaume Quintard

On Tue, Jun 16, 2020 at 5:28 AM Emilio Fernandes
<emilio.fernandes70 at gmail.com> wrote:
> Hi,
>
> When can we expect the new aarch64 binaries at
> https://packagecloud.io/varnishcache/varnish-weekly ?
>
> Gracias!
> Emilio

From emilio.fernandes70 at gmail.com Wed Jun 17 08:00:55 2020
From: emilio.fernandes70 at gmail.com (Emilio Fernandes)
Date: Wed, 17 Jun 2020 11:00:55 +0300
Subject: Support for AARCH64
In-Reply-To:
References: <8156.1583910935@critter.freebsd.dk>
Message-ID:

Hola Guillaume,

Thank you for uploading the new packages!

I've just tested Ubuntu 20.04 and CentOS 8.

1) Ubuntu
1.1) curl -s https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.deb.sh | sudo bash
1.2) apt install varnish - installs 20200615.weekly
All is OK!

2) CentOS
2.1) curl -s https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.rpm.sh | sudo bash
This adds the varnishcache_varnish-weekly and
varnishcache_varnish-weekly-source YUM repositories
2.2) yum install varnish - installs 6.0.2
2.3) yum --disablerepo="*" --enablerepo="varnishcache_varnish-weekly" list available
Last metadata expiration check: 0:01:53 ago on Wed 17 Jun 2020 07:33:55 AM UTC.

There are no packages in the new yum repository!

2.4) I was able to localinstall it, though:
2.4.1) yum install jemalloc
2.4.2) wget --content-disposition https://packagecloud.io/varnishcache/varnish-weekly/packages/el/8/varnish-20200615.weekly-0.0.el8.aarch64.rpm/download.rpm
2.4.3) yum localinstall varnish-20200615.weekly-0.0.el8.aarch64.rpm/download.rpm

Am I missing some step with the PackageCloud repository, or is there some
issue?

Gracias,
Emilio

El mar., 16 jun. 2020 a las 18:39, Guillaume Quintard
(<guillaume at varnish-software.com>) escribió:
> Ola,
>
> Pål just pushed Monday's batch, so you get amd64 and aarch64 packages
> for all the platforms. Go forth and test, the paint is still very wet.
>
> Bonne journée!
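(A few diagnostics for the empty-repository symptom above; the .repo file name
is the one packagecloud's install script normally writes, shown here as an
assumption:)

    # check what the install script configured; on this machine,
    # $basearch in the baseurl should expand to aarch64
    cat /etc/yum.repos.d/varnishcache_varnish-weekly.repo
    rpm --eval '%{_arch}'

    # force fresh metadata for just that repository
    yum clean metadata --disablerepo="*" --enablerepo="varnishcache_varnish-weekly"
    yum --disablerepo="*" --enablerepo="varnishcache_varnish-weekly" makecache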
>>>>>>> Waiting for repository lock >>>>>>> ERROR: Unable to lock database: Bad file descriptor >>>>>>> ERROR: Failed to open apk database: Bad file descriptor >>>>>>> >>>>>>> Google suggested to do this: >>>>>>> rm -rf /var/cache/apk >>>>>>> mkdir /var/cache/apk >>>>>>> >>>>>>> It fails at 'abuild -r' - >>>>>>> https://github.com/martin-g/varnish-cache/blob/b62c357b389c0e1e31e9c001cbffb55090c2e49f/.circleci/make-apk-packages.sh#L61 >>>>>>> >>>>>>> Any hints ? >>>>>>> >>>>>>> Martin >>>>>>> >>>>>>> On Wed, Mar 25, 2020 at 2:39 AM Guillaume Quintard < >>>>>>> guillaume at varnish-software.com> wrote: >>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> So, you are pointing at the `dist` job, whose sole role is to >>>>>>>> provide us with a dist tarball, so we don't need that command line to work >>>>>>>> for everyone, just for that specific platform. >>>>>>>> >>>>>>>> On the other hand, >>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L168 is >>>>>>>> closer to what you want, `distcheck` will be call on all platform, and you >>>>>>>> can see that it has the `--with-unwind` argument. >>>>>>>> -- >>>>>>>> Guillaume Quintard >>>>>>>> >>>>>>>> >>>>>>>> On Tue, Mar 24, 2020 at 3:05 PM Martin Grigorov < >>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, Mar 24, 2020, 17:19 Guillaume Quintard < >>>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>>> >>>>>>>>>> Compare your configure line with what's currently in use (or the >>>>>>>>>> apkbuild file), there are a few options (with-unwind, without-jemalloc, >>>>>>>>>> etc.) That need to be set >>>>>>>>>> >>>>>>>>> >>>>>>>>> The configure line comes from "./autogen.des": >>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/autogen.des#L35-L42 >>>>>>>>> It is called at: >>>>>>>>> >>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L40 >>>>>>>>> In my branch at: >>>>>>>>> >>>>>>>>> https://github.com/martin-g/varnish-cache/blob/4b4626ee9cc366b032a45f27b54d77176125ef03/.circleci/make-apk-packages.sh#L26 >>>>>>>>> >>>>>>>>> It fails only on aarch64 for Alpine Linux. The x86_64 build for >>>>>>>>> Alpine is fine. >>>>>>>>> AARCH64 for CentOS 7 and Ubuntu 18.04 are also fine. >>>>>>>>> >>>>>>>>> Martin >>>>>>>>> >>>>>>>>> >>>>>>>>>> On Tue, Mar 24, 2020, 08:05 Martin Grigorov < >>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> On Tue, Mar 24, 2020 at 11:00 AM Martin Grigorov < >>>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> Hi Guillaume, >>>>>>>>>>>> >>>>>>>>>>>> On Mon, Mar 23, 2020 at 8:01 PM Guillaume Quintard < >>>>>>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Hi Martin, >>>>>>>>>>>>> >>>>>>>>>>>>> Thank you for that. >>>>>>>>>>>>> A few remarks and questions: >>>>>>>>>>>>> - how much time does the "docker build" step takes? We can >>>>>>>>>>>>> possibly speed things up by push images to the dockerhub, as they don't >>>>>>>>>>>>> need to change very often. >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Definitely such optimization would be a good thing to do! >>>>>>>>>>>> At the moment, with 'machine' executor it fetches the base >>>>>>>>>>>> image and then builds all the Docker layers again and again. 
>>>>>>>>>>>> Here are the timings: >>>>>>>>>>>> 1) Spinning up a VM - around 10secs >>>>>>>>>>>> 2) prepare env variables - 0secs >>>>>>>>>>>> 3) checkout code (varnish-cache) - 5secs >>>>>>>>>>>> 4) activate QEMU - 2secs >>>>>>>>>>>> 5) build packages >>>>>>>>>>>> 5.1) x86 deb - 3m 30secs >>>>>>>>>>>> 5.2) x86 rpm - 2m 50secs >>>>>>>>>>>> 5.3) aarch64 rpm - 35mins >>>>>>>>>>>> 5.4) aarch64 deb - 45mins >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> - any reason why you clone pkg-varnish-cache in each job? The >>>>>>>>>>>>> idea was to have it cloned once in tar-pkg-tools for consistency and >>>>>>>>>>>>> reproducibility, which we lose here. >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> I will extract the common steps once I see it working. This is >>>>>>>>>>>> my first CircleCI project and I still find my ways in it! >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> - do we want to change things for the amd64 platforms for the >>>>>>>>>>>>> sake of consistency? >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> So far there is nothing specific for amd4 or aarch64, except >>>>>>>>>>>> the base Docker images. >>>>>>>>>>>> For example make-deb-packages.sh is reused for both amd64 and >>>>>>>>>>>> aarch64 builds. Same for -rpm- and now for -apk- (alpine). >>>>>>>>>>>> >>>>>>>>>>>> Once I feel the change is almost finished I will open a Pull >>>>>>>>>>>> Request for more comments! >>>>>>>>>>>> >>>>>>>>>>>> Martin >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> -- >>>>>>>>>>>>> Guillaume Quintard >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On Mon, Mar 23, 2020 at 6:25 AM Martin Grigorov < >>>>>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Wed, Mar 18, 2020 at 5:31 PM Martin Grigorov < >>>>>>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Thu, Mar 12, 2020 at 4:35 PM Martin Grigorov < >>>>>>>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Hi Guillaume, >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On Thu, Mar 12, 2020 at 3:23 PM Guillaume Quintard < >>>>>>>>>>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Offering arm64 packages requires a few things: >>>>>>>>>>>>>>>>> - arm64-compatible code (all good in >>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache) >>>>>>>>>>>>>>>>> - arm64-compatible package framework (all good in >>>>>>>>>>>>>>>>> https://github.com/varnishcache/pkg-varnish-cache) >>>>>>>>>>>>>>>>> - infrastructure to build the packages (uhoh, see below) >>>>>>>>>>>>>>>>> - infrastructure to store and deliver ( >>>>>>>>>>>>>>>>> https://packagecloud.io/varnishcache) >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> So, everything is in place, expect for the third point. At >>>>>>>>>>>>>>>>> the moment, there are two concurrent CI implementations: >>>>>>>>>>>>>>>>> - travis: >>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.travis.yml It's >>>>>>>>>>>>>>>>> the historical one, and currently only runs compilation+test for OSX >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Actually it tests Linux AMD64 and ARM64 too. 
>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> - circleci: >>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.circleci/config.yml the >>>>>>>>>>>>>>>>> new kid on the block, that builds all the packages and distchecks for all >>>>>>>>>>>>>>>>> the packaged platforms >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> The issue is that cirecleci doesn't support arm64 >>>>>>>>>>>>>>>>> containers (for now?), so we would need to re-implement the packaging logic >>>>>>>>>>>>>>>>> in Travis. It's not a big problem, but it's currently not a priority on my >>>>>>>>>>>>>>>>> side. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> However, I am totally ready to provide help if someone >>>>>>>>>>>>>>>>> wants to take that up. The added benefit it that Travis would be able to >>>>>>>>>>>>>>>>> handle everything and we can retire the circleci experiment >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I will take a look in the coming days and ask you if I need >>>>>>>>>>>>>>>> help! >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I've took a look at the current setup and here is what I've >>>>>>>>>>>>>>> found as problems and possible solutions: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> 1) Circle CI >>>>>>>>>>>>>>> 1.1) problem - the 'machine' and 'Docker' executors run on >>>>>>>>>>>>>>> x86_64, so there is no way to build the packages in a "native" environment >>>>>>>>>>>>>>> 1.2) possible solutions >>>>>>>>>>>>>>> 1.2.1) use multiarch cross build >>>>>>>>>>>>>>> 1.2.2) use 'machine' executor that registers QEMU via >>>>>>>>>>>>>>> https://hub.docker.com/r/multiarch/qemu-user-static/ and >>>>>>>>>>>>>>> then builds and runs a custom Docker image that executes a shell script >>>>>>>>>>>>>>> with the build steps >>>>>>>>>>>>>>> It will look something like >>>>>>>>>>>>>>> https://github.com/yukimochi-containers/alpine-vpnserver/blob/69bb0a612c9df3e4ba78064d114751b760f0df9d/.circleci/config.yml#L19-L38 but >>>>>>>>>>>>>>> instead of uploading the Docker image as a last step it will run it. >>>>>>>>>>>>>>> The RPM and DEB build related code from current config.yml >>>>>>>>>>>>>>> will be extracted into shell scripts which will be copied in the custom >>>>>>>>>>>>>>> Docker images >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> From these two possible ways I have better picture in my >>>>>>>>>>>>>>> head how to do 1.2.2, but I don't mind going deep in 1.2.1 if this is what >>>>>>>>>>>>>>> you'd prefer. >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> I've decided to stay with Circle CI and use 'machine' >>>>>>>>>>>>>> executor with QEMU. >>>>>>>>>>>>>> >>>>>>>>>>>>>> The changed config.yml could be seen at >>>>>>>>>>>>>> https://github.com/martin-g/varnish-cache/tree/feature/aarch64-packages/.circleci and >>>>>>>>>>>>>> the build at >>>>>>>>>>>>>> https://app.circleci.com/pipelines/github/martin-g/varnish-cache/71/workflows/3a275d79-62a9-48b4-9aef-1585de1c87c8 >>>>>>>>>>>>>> The builds on x86 arch take 3-4 mins, but for aarch64 >>>>>>>>>>>>>> (emulation!) ~40mins >>>>>>>>>>>>>> For now the jobs just build the .deb & .rpm packages for >>>>>>>>>>>>>> CentOS 7 and Ubuntu 18.04, both amd64 and aarch64. >>>>>>>>>>>>>> TODOs: >>>>>>>>>>>>>> - migrate Alpine >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>> Build on Alpine aarch64 fails with: >>>>>>>>>>> ... >>>>>>>>>>> automake: this behaviour will change in future Automake >>>>>>>>>>> versions: they will >>>>>>>>>>> automake: unconditionally cause object files to be placed in the >>>>>>>>>>> same subdirectory >>>>>>>>>>> automake: of the corresponding sources. 
>>>>>>>>>>> automake: project, to avoid future incompatibilities. >>>>>>>>>>> parallel-tests: installing 'build-aux/test-driver' >>>>>>>>>>> lib/libvmod_debug/Makefile.am:12: warning: >>>>>>>>>>> libvmod_debug_la_LDFLAGS multiply defined in condition TRUE ... >>>>>>>>>>> lib/libvmod_debug/automake_boilerplate.am:19: ... >>>>>>>>>>> 'libvmod_debug_la_LDFLAGS' previously defined here >>>>>>>>>>> lib/libvmod_debug/Makefile.am:9: 'lib/libvmod_debug/ >>>>>>>>>>> automake_boilerplate.am' included from here >>>>>>>>>>> + autoconf >>>>>>>>>>> + CONFIG_SHELL=/bin/sh >>>>>>>>>>> + export CONFIG_SHELL >>>>>>>>>>> + ./configure '--prefix=/opt/varnish' >>>>>>>>>>> '--mandir=/opt/varnish/man' --enable-maintainer-mode >>>>>>>>>>> --enable-developer-warnings --enable-debugging-symbols >>>>>>>>>>> --enable-dependency-tracking --with-persistent-storage --quiet >>>>>>>>>>> configure: WARNING: dot not found - build will fail if svg files >>>>>>>>>>> are out of date. >>>>>>>>>>> configure: WARNING: No system jemalloc found, using system malloc >>>>>>>>>>> configure: error: Could not find backtrace() support >>>>>>>>>>> >>>>>>>>>>> Does anyone know a workaround ? >>>>>>>>>>> I use multiarch/alpine:aarch64-edge as a base Docker image >>>>>>>>>>> >>>>>>>>>>> Martin >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> - store the packages as CircleCI artifacts >>>>>>>>>>>>>> - anything else that is still missing >>>>>>>>>>>>>> >>>>>>>>>>>>>> Adding more architectures would be as easy as adding a new >>>>>>>>>>>>>> Dockerfile with a base image from the respective type. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Martin >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> 2) Travis CI >>>>>>>>>>>>>>> 2.1) problems >>>>>>>>>>>>>>> 2.1.1) generally Travis is slower than Circle! >>>>>>>>>>>>>>> Althought if we use CircleCI 'machine' executor it will be >>>>>>>>>>>>>>> slower than the current 'Docker' executor! >>>>>>>>>>>>>>> 2.1.2) Travis supports only Ubuntu >>>>>>>>>>>>>>> Current setup at CircleCI uses CentOS 7. >>>>>>>>>>>>>>> I guess the build steps won't have problems on Ubuntu. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> 3) GitHub Actions >>>>>>>>>>>>>>> GH Actions does not support ARM64 but it supports self >>>>>>>>>>>>>>> hosted ARM64 runners >>>>>>>>>>>>>>> 3.1) The problem is that there is no way to make a self >>>>>>>>>>>>>>> hosted runner really private. I.e. if someone forks Varnish Cache any >>>>>>>>>>>>>>> commit in the fork will trigger builds on the arm64 node. There is no way >>>>>>>>>>>>>>> to reserve the runner only for commits against >>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Do you see other problems or maybe different ways ? >>>>>>>>>>>>>>> Do you have preferences which way to go ? >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Regards, >>>>>>>>>>>>>>> Martin >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Regards, >>>>>>>>>>>>>>>> Martin >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>> Guillaume Quintard >>>>>>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>>>>>> varnish-dev mailing list >>>>>>>>>>>>>>>>> varnish-dev at varnish-cache.org >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From guillaume at varnish-software.com Wed Jun 17 14:35:51 2020 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Wed, 17 Jun 2020 07:35:51 -0700 Subject: Support for AARCH64 In-Reply-To: References: <8156.1583910935@critter.freebsd.dk> Message-ID: Thank you Emilio, I'll contact packagecloud.io to see what's what. -- Guillaume Quintard On Wed, Jun 17, 2020 at 1:01 AM Emilio Fernandes < emilio.fernandes70 at gmail.com> wrote: > Hola Guillaume, > > Thank you for uploading the new packages! > > I've just tested Ubuntu 20.04 and Centos 8 > > 1) Ubuntu > 1.1) curl -s > https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.deb.sh > | sudo bash > 1.2) apt install varnish - installs 20200615.weekly > All is OK! > > 2) Centos > 2.1) curl -s > https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.rpm.sh > | sudo bash > This adds varnishcache_varnish-weekly and > varnishcache_varnish-weekly-source YUM repositories > 2.2) yum install varnish - installs 6.0.2 > 2.3) yum --disablerepo="*" --enablerepo="varnishcache_varnish-weekly" list > available > Last metadata expiration check: 0:01:53 ago on Wed 17 Jun 2020 07:33:55 AM > UTC. > > there are no packages in the new yum repository! > > 2.4) I was able to localinstall it though > 2.4.1) yum install jemalloc > 2.4.2) wget --content-disposition > https://packagecloud.io/varnishcache/varnish-weekly/packages/el/8/varnish-20200615.weekly-0.0.el8.aarch64.rpm/download.rpm > 2.4.3) yum localinstall > varnish-20200615.weekly-0.0.el8.aarch64.rpm/download.rpm > > Do I miss some step with the PackageCloud repository or there is some > issue ? > > Gracias, > Emilio > > El mar., 16 jun. 2020 a las 18:39, Guillaume Quintard (< > guillaume at varnish-software.com>) escribi?: > >> Ola, >> >> P?l just pushed Monday's batch, so you get amd64 and aarch64 packages for >> all the platforms. Go forth and test, the paint is still very wet. >> >> Bonne journ?e! >> >> -- >> Guillaume Quintard >> >> >> On Tue, Jun 16, 2020 at 5:28 AM Emilio Fernandes < >> emilio.fernandes70 at gmail.com> wrote: >> >>> Hi, >>> >>> When we could expect the new aarch64 binaries at >>> https://packagecloud.io/varnishcache/varnish-weekly ? >>> >>> Gracias! >>> Emilio >>> >>> El mi?., 15 abr. 2020 a las 14:33, Emilio Fernandes (< >>> emilio.fernandes70 at gmail.com>) escribi?: >>> >>>> >>>> >>>> El jue., 26 mar. 2020 a las 10:15, Martin Grigorov (< >>>> martin.grigorov at gmail.com>) escribi?: >>>> >>>>> Hello, >>>>> >>>>> Here is the PR: >>>>> https://github.com/varnishcache/varnish-cache/pull/3263 >>>>> I will add some more documentation about the new setup. >>>>> Any feedback is welcome! >>>>> >>>> >>>> Nice work, Martin! >>>> >>>> Gracias! >>>> Emilio >>>> >>>> >>>>> >>>>> Regards, >>>>> Martin >>>>> >>>>> On Wed, Mar 25, 2020 at 9:55 PM Martin Grigorov < >>>>> martin.grigorov at gmail.com> wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> On Wed, Mar 25, 2020, 20:15 Guillaume Quintard < >>>>>> guillaume at varnish-software.com> wrote: >>>>>> >>>>>>> is that script running as root? >>>>>>> >>>>>> >>>>>> Yes. >>>>>> I also added 'USER root' to its Dockerfile and '-u 0' to 'docker run' >>>>>> arguments but it still doesn't work. >>>>>> The x86 build is OK. >>>>>> It must be something in the base docker image. >>>>>> I've disabled the Alpine aarch64 job for now. >>>>>> I'll send a PR tomorrow! 
>>>>>> >>>>>> Regards, >>>>>> Martin >>>>>> >>>>>> >>>>>>> -- >>>>>>> Guillaume Quintard >>>>>>> >>>>>>> >>>>>>> On Wed, Mar 25, 2020 at 2:30 AM Martin Grigorov < >>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> I've moved 'dist' job to be executed in parallel with >>>>>>>> 'tar_pkg_tools' and the results from both are shared in the workspace for >>>>>>>> the actual packing jobs. >>>>>>>> Now the new error for aarch64-apk job is: >>>>>>>> >>>>>>>> abuild: varnish >>> varnish: Updating the sha512sums in APKBUILD... >>>>>>>> ]0; DEBUG: 4 >>>>>>>> ]0;abuild: varnish >>> varnish: Building /varnish 6.4.0-r1 (using >>>>>>>> abuild 3.5.0-r0) started Wed, 25 Mar 2020 09:22:02 +0000 >>>>>>>> >>> varnish: Checking sanity of /package/APKBUILD... >>>>>>>> >>> WARNING: varnish: No maintainer >>>>>>>> >>> varnish: Analyzing dependencies... >>>>>>>> 0% % >>>>>>>> ############################################>>> varnish: Installing for >>>>>>>> build: build-base gcc libc-dev libgcc pcre-dev ncurses-dev libedit-dev >>>>>>>> py-docutils linux-headers libunwind-dev python py3-sphinx >>>>>>>> Waiting for repository lock >>>>>>>> ERROR: Unable to lock database: Bad file descriptor >>>>>>>> ERROR: Failed to open apk database: Bad file descriptor >>>>>>>> >>> ERROR: varnish: builddeps failed >>>>>>>> ]0; >>> varnish: Uninstalling dependencies... >>>>>>>> Waiting for repository lock >>>>>>>> ERROR: Unable to lock database: Bad file descriptor >>>>>>>> ERROR: Failed to open apk database: Bad file descriptor >>>>>>>> >>>>>>>> Google suggested to do this: >>>>>>>> rm -rf /var/cache/apk >>>>>>>> mkdir /var/cache/apk >>>>>>>> >>>>>>>> It fails at 'abuild -r' - >>>>>>>> https://github.com/martin-g/varnish-cache/blob/b62c357b389c0e1e31e9c001cbffb55090c2e49f/.circleci/make-apk-packages.sh#L61 >>>>>>>> >>>>>>>> Any hints ? >>>>>>>> >>>>>>>> Martin >>>>>>>> >>>>>>>> On Wed, Mar 25, 2020 at 2:39 AM Guillaume Quintard < >>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>> >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> So, you are pointing at the `dist` job, whose sole role is to >>>>>>>>> provide us with a dist tarball, so we don't need that command line to work >>>>>>>>> for everyone, just for that specific platform. >>>>>>>>> >>>>>>>>> On the other hand, >>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L168 is >>>>>>>>> closer to what you want, `distcheck` will be call on all platform, and you >>>>>>>>> can see that it has the `--with-unwind` argument. >>>>>>>>> -- >>>>>>>>> Guillaume Quintard >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, Mar 24, 2020 at 3:05 PM Martin Grigorov < >>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Tue, Mar 24, 2020, 17:19 Guillaume Quintard < >>>>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>>>> >>>>>>>>>>> Compare your configure line with what's currently in use (or the >>>>>>>>>>> apkbuild file), there are a few options (with-unwind, without-jemalloc, >>>>>>>>>>> etc.) 
that need to be set.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> The configure line comes from "./autogen.des":
>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/autogen.des#L35-L42
>>>>>>>>>> It is called at:
>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L40
>>>>>>>>>> In my branch at:
>>>>>>>>>> https://github.com/martin-g/varnish-cache/blob/4b4626ee9cc366b032a45f27b54d77176125ef03/.circleci/make-apk-packages.sh#L26
>>>>>>>>>>
>>>>>>>>>> It fails only on aarch64 for Alpine Linux. The x86_64 build for
>>>>>>>>>> Alpine is fine.
>>>>>>>>>> AARCH64 for CentOS 7 and Ubuntu 18.04 are also fine.
>>>>>>>>>>
>>>>>>>>>> Martin
>>>>>>>>>>
>>>>>>>>>>> On Tue, Mar 24, 2020, 08:05 Martin Grigorov <
>>>>>>>>>>> martin.grigorov at gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi,
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Mar 24, 2020 at 11:00 AM Martin Grigorov <
>>>>>>>>>>>> martin.grigorov at gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Guillaume,
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Mar 23, 2020 at 8:01 PM Guillaume Quintard <
>>>>>>>>>>>>> guillaume at varnish-software.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Martin,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thank you for that.
>>>>>>>>>>>>>> A few remarks and questions:
>>>>>>>>>>>>>> - how much time does the "docker build" step take? We can
>>>>>>>>>>>>>> possibly speed things up by pushing images to the dockerhub, as they don't
>>>>>>>>>>>>>> need to change very often.
>>>>>>>>>>>>>>
>>>>>>>>>>>>> Definitely such an optimization would be a good thing to do!
>>>>>>>>>>>>> At the moment, with the 'machine' executor it fetches the base
>>>>>>>>>>>>> image and then builds all the Docker layers again and again.
>>>>>>>>>>>>> Here are the timings:
>>>>>>>>>>>>> 1) Spinning up a VM - around 10secs
>>>>>>>>>>>>> 2) prepare env variables - 0secs
>>>>>>>>>>>>> 3) checkout code (varnish-cache) - 5secs
>>>>>>>>>>>>> 4) activate QEMU - 2secs
>>>>>>>>>>>>> 5) build packages
>>>>>>>>>>>>> 5.1) x86 deb - 3m 30secs
>>>>>>>>>>>>> 5.2) x86 rpm - 2m 50secs
>>>>>>>>>>>>> 5.3) aarch64 rpm - 35mins
>>>>>>>>>>>>> 5.4) aarch64 deb - 45mins
>>>>>>>>>>>>>
>>>>>>>>>>>>>> - any reason why you clone pkg-varnish-cache in each job? The
>>>>>>>>>>>>>> idea was to have it cloned once in tar-pkg-tools for consistency and
>>>>>>>>>>>>>> reproducibility, which we lose here.
>>>>>>>>>>>>>>
>>>>>>>>>>>>> I will extract the common steps once I see it working. This is
>>>>>>>>>>>>> my first CircleCI project and I'm still finding my way around it!
>>>>>>>>>>>>>
>>>>>>>>>>>>>> - do we want to change things for the amd64 platforms for the
>>>>>>>>>>>>>> sake of consistency?
>>>>>>>>>>>>>>
>>>>>>>>>>>>> So far there is nothing specific to amd64 or aarch64, except
>>>>>>>>>>>>> the base Docker images.
>>>>>>>>>>>>> For example make-deb-packages.sh is reused for both amd64 and
>>>>>>>>>>>>> aarch64 builds. Same for -rpm- and now for -apk- (Alpine).
>>>>>>>>>>>>>
>>>>>>>>>>>>> Once I feel the change is almost finished I will open a Pull
>>>>>>>>>>>>> Request for more comments!
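On pushing images to the dockerhub, mentioned above: a rough sketch of
what that could look like, where the image name, tag, and file paths are
invented for illustration:

    # one-off (or whenever the Dockerfile changes): build and publish
    docker build -t example/varnish-pkg-builder:aarch64 -f Dockerfile.aarch64 .
    docker push example/varnish-pkg-builder:aarch64

    # in the CI job: pull the prebuilt image and just run the packaging script
    docker pull example/varnish-pkg-builder:aarch64
    docker run --rm -v "$PWD:/work" example/varnish-pkg-builder:aarch64 \
        /work/.circleci/make-deb-packages.sh

That trades the per-run layer builds for the one-time cost of keeping the
published images in sync with their Dockerfiles.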
>>>>>>>>>>>>>
>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>> Guillaume Quintard
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Mar 23, 2020 at 6:25 AM Martin Grigorov <
>>>>>>>>>>>>>> martin.grigorov at gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Wed, Mar 18, 2020 at 5:31 PM Martin Grigorov <
>>>>>>>>>>>>>>> martin.grigorov at gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Thu, Mar 12, 2020 at 4:35 PM Martin Grigorov <
>>>>>>>>>>>>>>>> martin.grigorov at gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Hi Guillaume,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Thu, Mar 12, 2020 at 3:23 PM Guillaume Quintard <
>>>>>>>>>>>>>>>>> guillaume at varnish-software.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Offering arm64 packages requires a few things:
>>>>>>>>>>>>>>>>>> - arm64-compatible code (all good in
>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache)
>>>>>>>>>>>>>>>>>> - arm64-compatible package framework (all good in
>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/pkg-varnish-cache)
>>>>>>>>>>>>>>>>>> - infrastructure to build the packages (uhoh, see below)
>>>>>>>>>>>>>>>>>> - infrastructure to store and deliver (
>>>>>>>>>>>>>>>>>> https://packagecloud.io/varnishcache)
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> So, everything is in place, except for the third point.
>>>>>>>>>>>>>>>>>> At the moment, there are two concurrent CI implementations:
>>>>>>>>>>>>>>>>>> - travis:
>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.travis.yml It's
>>>>>>>>>>>>>>>>>> the historical one, and currently only runs compilation+test for OSX
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Actually it tests Linux AMD64 and ARM64 too.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> - circleci:
>>>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.circleci/config.yml the
>>>>>>>>>>>>>>>>>> new kid on the block, that builds all the packages and distchecks for all
>>>>>>>>>>>>>>>>>> the packaged platforms
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> The issue is that circleci doesn't support arm64
>>>>>>>>>>>>>>>>>> containers (for now?), so we would need to re-implement the packaging logic
>>>>>>>>>>>>>>>>>> in Travis. It's not a big problem, but it's currently not a priority on my
>>>>>>>>>>>>>>>>>> side.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> However, I am totally ready to provide help if someone
>>>>>>>>>>>>>>>>>> wants to take that up. The added benefit is that Travis would be able to
>>>>>>>>>>>>>>>>>> handle everything and we could retire the circleci experiment.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I will take a look in the coming days and ask you if I
>>>>>>>>>>>>>>>>> need help!
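For reference, the QEMU-based route that the next messages settle on
needs surprisingly little shell on an x86_64 'machine' executor. A sketch
with placeholder image and script names:

    # register qemu-user-static binfmt handlers on the host VM
    docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

    # build an aarch64 image and run the packaging steps inside it, emulated
    docker build -t varnish-pkg-aarch64 -f Dockerfile.aarch64 .
    docker run --rm -v "$PWD:/varnish-cache" varnish-pkg-aarch64 \
        /varnish-cache/.circleci/make-rpm-packages.sh

The first command is the "activate QEMU" step timed at ~2 seconds above;
the emulation penalty is all in the build itself.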
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I've taken a look at the current setup and here are the
>>>>>>>>>>>>>>>> problems I've found and some possible solutions:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> 1) Circle CI
>>>>>>>>>>>>>>>> 1.1) problem - the 'machine' and 'Docker' executors run on
>>>>>>>>>>>>>>>> x86_64, so there is no way to build the packages in a "native" environment
>>>>>>>>>>>>>>>> 1.2) possible solutions
>>>>>>>>>>>>>>>> 1.2.1) use a multiarch cross build
>>>>>>>>>>>>>>>> 1.2.2) use a 'machine' executor that registers QEMU via
>>>>>>>>>>>>>>>> https://hub.docker.com/r/multiarch/qemu-user-static/ and
>>>>>>>>>>>>>>>> then builds and runs a custom Docker image that executes a shell script
>>>>>>>>>>>>>>>> with the build steps
>>>>>>>>>>>>>>>> It will look something like
>>>>>>>>>>>>>>>> https://github.com/yukimochi-containers/alpine-vpnserver/blob/69bb0a612c9df3e4ba78064d114751b760f0df9d/.circleci/config.yml#L19-L38 but
>>>>>>>>>>>>>>>> instead of uploading the Docker image as a last step it will run it.
>>>>>>>>>>>>>>>> The RPM and DEB build related code from the current config.yml
>>>>>>>>>>>>>>>> will be extracted into shell scripts which will be copied into the custom
>>>>>>>>>>>>>>>> Docker images
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Of these two possible ways I have a better picture in my
>>>>>>>>>>>>>>>> head of how to do 1.2.2, but I don't mind digging deeper into 1.2.1 if that
>>>>>>>>>>>>>>>> is what you'd prefer.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I've decided to stay with Circle CI and use the 'machine'
>>>>>>>>>>>>>>> executor with QEMU.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The changed config.yml can be seen at
>>>>>>>>>>>>>>> https://github.com/martin-g/varnish-cache/tree/feature/aarch64-packages/.circleci and
>>>>>>>>>>>>>>> the build at
>>>>>>>>>>>>>>> https://app.circleci.com/pipelines/github/martin-g/varnish-cache/71/workflows/3a275d79-62a9-48b4-9aef-1585de1c87c8
>>>>>>>>>>>>>>> The builds on x86 take 3-4 mins, but for aarch64
>>>>>>>>>>>>>>> (emulation!) ~40mins
>>>>>>>>>>>>>>> For now the jobs just build the .deb & .rpm packages for
>>>>>>>>>>>>>>> CentOS 7 and Ubuntu 18.04, both amd64 and aarch64.
>>>>>>>>>>>>>>> TODOs:
>>>>>>>>>>>>>>> - migrate Alpine
>>>>>>>>>>>>>>>
>>>>>>>>>>>> Build on Alpine aarch64 fails with:
>>>>>>>>>>>> ...
>>>>>>>>>>>> automake: this behaviour will change in future Automake
>>>>>>>>>>>> versions: they will
>>>>>>>>>>>> automake: unconditionally cause object files to be placed in
>>>>>>>>>>>> the same subdirectory
>>>>>>>>>>>> automake: of the corresponding sources.
>>>>>>>>>>>> automake: project, to avoid future incompatibilities.
>>>>>>>>>>>> parallel-tests: installing 'build-aux/test-driver'
>>>>>>>>>>>> lib/libvmod_debug/Makefile.am:12: warning:
>>>>>>>>>>>> libvmod_debug_la_LDFLAGS multiply defined in condition TRUE ...
>>>>>>>>>>>> lib/libvmod_debug/automake_boilerplate.am:19: ...
>>>>>>>>>>>> 'libvmod_debug_la_LDFLAGS' previously defined here
>>>>>>>>>>>> lib/libvmod_debug/Makefile.am:9: 'lib/libvmod_debug/
>>>>>>>>>>>> automake_boilerplate.am' included from here
>>>>>>>>>>>> + autoconf
>>>>>>>>>>>> + CONFIG_SHELL=/bin/sh
>>>>>>>>>>>> + export CONFIG_SHELL
>>>>>>>>>>>> + ./configure '--prefix=/opt/varnish'
>>>>>>>>>>>> '--mandir=/opt/varnish/man' --enable-maintainer-mode
>>>>>>>>>>>> --enable-developer-warnings --enable-debugging-symbols
>>>>>>>>>>>> --enable-dependency-tracking --with-persistent-storage --quiet
>>>>>>>>>>>> configure: WARNING: dot not found - build will fail if svg
>>>>>>>>>>>> files are out of date.
>>>>>>>>>>>> configure: WARNING: No system jemalloc found, using system
>>>>>>>>>>>> malloc
>>>>>>>>>>>> configure: error: Could not find backtrace() support
>>>>>>>>>>>>
>>>>>>>>>>>> Does anyone know a workaround?
>>>>>>>>>>>> I use multiarch/alpine:aarch64-edge as a base Docker image
>>>>>>>>>>>>
>>>>>>>>>>>> Martin
>>>>>>>>>>>>
>>>>>>>>>>>>> - store the packages as CircleCI artifacts
>>>>>>>>>>>>>>> - anything else that is still missing
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Adding more architectures would be as easy as adding a new
>>>>>>>>>>>>>>> Dockerfile with a base image of the respective type.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> 2) Travis CI
>>>>>>>>>>>>>>>> 2.1) problems
>>>>>>>>>>>>>>>> 2.1.1) generally Travis is slower than Circle!
>>>>>>>>>>>>>>>> Although if we use the CircleCI 'machine' executor it will be
>>>>>>>>>>>>>>>> slower than the current 'Docker' executor!
>>>>>>>>>>>>>>>> 2.1.2) Travis supports only Ubuntu
>>>>>>>>>>>>>>>> The current setup at CircleCI uses CentOS 7.
>>>>>>>>>>>>>>>> I guess the build steps won't have problems on Ubuntu.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> 3) GitHub Actions
>>>>>>>>>>>>>>>> GH Actions does not support ARM64, but it supports self-hosted
>>>>>>>>>>>>>>>> ARM64 runners
>>>>>>>>>>>>>>>> 3.1) The problem is that there is no way to make a self-hosted
>>>>>>>>>>>>>>>> runner really private. I.e. if someone forks Varnish Cache, any
>>>>>>>>>>>>>>>> commit in the fork will trigger builds on the arm64 node. There is no way
>>>>>>>>>>>>>>>> to reserve the runner only for commits against
>>>>>>>>>>>>>>>> https://github.com/varnishcache/varnish-cache
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Do you see other problems or maybe different ways?
>>>>>>>>>>>>>>>> Do you have a preference for which way to go?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>>>> Martin
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>> Guillaume Quintard
>>>>>>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>>>>>>> varnish-dev mailing list
>>>>>>>>>>>>>>>>>> varnish-dev at varnish-cache.org
>>>>>>>>>>>>>>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev
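On the "Could not find backtrace() support" error quoted above: musl libc
does not ship backtrace(), so on Alpine the configure check needs an
external library. Two plausible fixes, sketched here but untested on the
emulated aarch64 image:

    # option 1: build against libunwind, as the packaged builds do
    apk add --no-cache libunwind-dev
    ./configure --with-unwind ...

    # option 2: provide backtrace() via the libexecinfo shim
    apk add --no-cache libexecinfo-dev
    ./configure ...

Option 1 matches the --with-unwind flag that the distcheck jobs mentioned
earlier in the thread already pass.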
From slink at schokola.de  Wed Jun 17 14:56:29 2020
From: slink at schokola.de (Nils Goroll)
Date: Wed, 17 Jun 2020 16:56:29 +0200
Subject: Support for AARCH64
In-Reply-To: 
References: 
Message-ID: <028360d2-d765-1aec-a592-da1a4045c2b1@schokola.de>

On 17/06/2020 10:00, Emilio Fernandes wrote:
> 1.1) curl -s
> https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.deb.sh
> | sudo bash

The fact that, with my listmaster head on, I have not censored this posting,
does not, *by any stretch*, imply any form of endorsement of this practice.

My personal 2 cents: DO NOT DO THIS. EVER. AND DO NOT POST THIS AS ADVICE TO OTHERS.

Thank you

From geoff at uplex.de  Wed Jun 17 15:05:24 2020
From: geoff at uplex.de (Geoff Simmons)
Date: Wed, 17 Jun 2020 17:05:24 +0200
Subject: Support for AARCH64
In-Reply-To: <028360d2-d765-1aec-a592-da1a4045c2b1@schokola.de>
References: <028360d2-d765-1aec-a592-da1a4045c2b1@schokola.de>
Message-ID: 

On 6/17/20 16:56, Nils Goroll wrote:
> On 17/06/2020 10:00, Emilio Fernandes wrote:
>> 1.1) curl -s
>> https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.deb.sh
>> | sudo bash
>
> The fact that, with my listmaster head on, I have not censored this posting,
> does not, *by any stretch*, imply any form of endorsement of this practice.
>
> My personal 2 cents: DO NOT DO THIS. EVER. AND DO NOT POST THIS AS ADVICE TO OTHERS.
>
> Thank you

+1

To point fingers at the right people, this is what the packagecloud docs
tell you to do.

But ... the *packagecloud docs* tell you to do that!

If I could have them arrested for it, I'd think about it.

Piping the response from a web site into a root shell is stark, raving
madness.

Stay safe,
Geoff
-- 
** * * UPLEX - Nils Goroll Systemoptimierung

Scheffelstraße 32
22301 Hamburg

Tel +49 40 2880 5731
Mob +49 176 636 90917
Fax +49 40 42949753

http://uplex.de

From dridi at varni.sh  Wed Jun 17 18:17:12 2020
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Wed, 17 Jun 2020 18:17:12 +0000
Subject: Support for AARCH64
In-Reply-To: 
References: <028360d2-d765-1aec-a592-da1a4045c2b1@schokola.de>
Message-ID: 

On Wed, Jun 17, 2020 at 3:05 PM Geoff Simmons <geoff at uplex.de> wrote:
>
> On 6/17/20 16:56, Nils Goroll wrote:
> > On 17/06/2020 10:00, Emilio Fernandes wrote:
> >> 1.1) curl -s
> >> https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.deb.sh
> >> | sudo bash
> >
> > The fact that, with my listmaster head on, I have not censored this posting,
> > does not, *by any stretch*, imply any form of endorsement of this practice.
> >
> > My personal 2 cents: DO NOT DO THIS. EVER. AND DO NOT POST THIS AS ADVICE TO OTHERS.
> >
> > Thank you
>
> +1
> To point fingers at the right people, this is what the packagecloud docs
> tell you to do.
>
> But ... the *packagecloud docs* tell you to do that!
>
> If I could have them arrested for it, I'd think about it.
>
> Piping the response from a web site into a root shell is stark, raving
> madness.

Dudes, chill out and live with your time. It's not like attackers
taking control of packagecloud could send a different payload
depending on whether you curl to disk to audit the script or yolo
curl to pipe.

https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side/

We've known for years that it isn't possible.
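For anyone who wants a middle ground, the same install can at least leave
an artifact on disk to review before anything runs as root. A sketch
using the URL from earlier in the thread:

    # fetch the repo setup script without executing it
    curl -fsSL -o script.deb.sh \
        https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.deb.sh

    # read it and record what was reviewed
    less script.deb.sh
    sha256sum script.deb.sh

    # only then run it with elevated privileges
    sudo bash script.deb.sh

As the link above shows, a server can detect pipe-to-shell and vary its
payload, so downloading first raises the bar without eliminating the risk.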
Dridi From emilio.fernandes70 at gmail.com Wed Jun 17 19:21:13 2020 From: emilio.fernandes70 at gmail.com (Emilio Fernandes) Date: Wed, 17 Jun 2020 22:21:13 +0300 Subject: Support for AARCH64 In-Reply-To: <028360d2-d765-1aec-a592-da1a4045c2b1@schokola.de> References: <028360d2-d765-1aec-a592-da1a4045c2b1@schokola.de> Message-ID: Hi, El mi?., 17 jun. 2020 a las 17:56, Nils Goroll () escribi?: > On 17/06/2020 10:00, Emilio Fernandes wrote: > > 1.1) curl -s > > > https://packagecloud.io/install/repositories/varnishcache/varnish-weekly/script.deb.sh > > | sudo bash > > The fact that, with my listmaster head on, I have not censored this > posting, > does not, *by any stretch*, imply any form of endorsement of this practice. > > My personal 2 cents: DO NOT DO THIS. EVER. AND DO NOT POST THIS AS ADVISE > TO OTHERS. > Actually I thought about this and executed those inside fresh/throw-away Docker containers. I fully agree that one should not execute such unknown scripts blindly! Emilio > > Thank you > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scan-admin at coverity.com Sun Jun 21 11:52:34 2020 From: scan-admin at coverity.com (scan-admin at coverity.com) Date: Sun, 21 Jun 2020 11:52:34 +0000 (UTC) Subject: Coverity Scan: Analysis completed for varnish Message-ID: <5eef4a022fd4_16a0d52b13db506f585438a@prd-scan-dashboard-0.mail> Your request for analysis of varnish has been completed successfully. The results are available at https://u2389337.ct.sendgrid.net/ls/click?upn=nJaKvJSIH-2FPAfmty-2BK5tYpPklAc1eEA-2F1zfUjH6teEzb7a35k9AJT3vQQzyq0UjO90ieNOMB6HZSUHPtUyV1qw-3D-3Div6m_WyTzqwss9kUEGhvWd0SG502mTu1yasCtuh9h-2FD3Je49c-2Bl47SylRfUZn0hOEMh-2B3LOttvRbckEIL6hhIqz1ZzXACFsCqOjyRitUubKXo2-2FHIdk1vyRoBNHYdIFsJMXBibeiOlfBLsHggBrZR191FOw-2BxXq7FimgoVREiVrtR1uv5Aw1-2FUUeRCVgOgmyttPZ0Lx7egAmGzZt-2Bge6AmhCkJ1U038skRbr341n7H-2FV5T6E-3D Build ID: 322134 Analysis Summary: New defects found: 3 Defects eliminated: 0 If you have difficulty understanding any defects, email us at scan-admin at coverity.com, or post your question to StackOverflow at https://u2389337.ct.sendgrid.net/ls/click?upn=QsMnDxMCOVVs7CDlyD2jouKTgNlKFinTRd3y-2BJC7sZryfVdWHH2BBU620aHLHGfhMXPTHYY5wQ5zOiTMnTlWDg-3D-3DcH2r_WyTzqwss9kUEGhvWd0SG502mTu1yasCtuh9h-2FD3Je49c-2Bl47SylRfUZn0hOEMh-2B3LOttvRbckEIL6hhIqz1ZzWPAhwI4tb8cHi82lNkgGg-2F-2FEbl5es3tS3GgqAPoLthjPAK-2B-2FyAoD7ENi2-2BGi8eMfnHXtehjTRtBO2e2CispJenHH5Ezz8KtwfKn-2FqZaLv-2Bt-2B0-2FoulaUYCT2ZmA5peeQMT7JUyZukHeNVcM76BLrbSM-3D From scan-admin at coverity.com Sun Jun 28 11:52:49 2020 From: scan-admin at coverity.com (scan-admin at coverity.com) Date: Sun, 28 Jun 2020 11:52:49 +0000 (UTC) Subject: Coverity Scan: Analysis completed for varnish Message-ID: <5ef88490ddeb7_dff572ac895474f40980a3@prd-scan-dashboard-0.mail> Your request for analysis of varnish has been completed successfully. The results are available at https://u2389337.ct.sendgrid.net/ls/click?upn=nJaKvJSIH-2FPAfmty-2BK5tYpPklAc1eEA-2F1zfUjH6teEzb7a35k9AJT3vQQzyq0UjO90ieNOMB6HZSUHPtUyV1qw-3D-3D_Omp_WyTzqwss9kUEGhvWd0SG502mTu1yasCtuh9h-2FD3Je49Zjm6IRmZSwQsPalcpyTOdq-2FrIvIhHZVPWzjSAwrjwWGW-2BuYnrIP9yoSUK0tzXfMgiTejEmbC84C1w481ESXD4xhNHCIsCTwdrUdOAUD-2FgDGSKABS4Xv5nZMK-2F87QO-2B9bG-2BDwTFBkSdW-2FqSlSPRTgTPwWoScuQM6rSRfl1tA8UAIsHBzaokmFvYf8Ip6MGKYk-3D Build ID: 323527 Analysis Summary: New defects found: 0 Defects eliminated: 0