From scan-admin at coverity.com  Sun Mar  1 11:43:31 2020
From: scan-admin at coverity.com (scan-admin at coverity.com)
Date: Sun, 01 Mar 2020 11:43:31 +0000 (UTC)
Subject: Coverity Scan: Analysis completed for varnish
Message-ID: <5e5b9fe336edd_53e52aedfbd76f54736a0@appnode-2.mail>

Your request for analysis of varnish has been completed successfully.
The results are available at
https://u2389337.ct.sendgrid.net/ls/click?upn=nJaKvJSIH-2FPAfmty-2BK5tYpPklAc1eEA-2F1zfUjH6teEzb7a35k9AJT3vQQzyq0UjO90ieNOMB6HZSUHPtUyV1qw-3D-3DAaY7_WyTzqwss9kUEGhvWd0SG502mTu1yasCtuh9h-2FD3Je4-2BthhT-2FUpeDw00ElZcL38sf4-2BOIJ6q4TOAQVXGGEhKq41UhwNX2CRgDqxSVcbRb-2FBNbAeIvIBQ50adbxifMVREpTaMg9YoPvWqy6fAJaW5TQrsjcxL5ieXT4YZ-2FpjwjDUra1TZtsvFnzmyTYb1o6WJpzrRfEsQshmX-2FRka6me41-2BzJVdNlzSvMlSnb3lJBmn60yl3yNrYdMnUPaN4R86Ihs

Build ID: 297757

Analysis Summary:
  New defects found: 0
  Defects eliminated: 16

From dridi at varni.sh  Mon Mar  2 20:21:16 2020
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Mon, 2 Mar 2020 20:21:16 +0000
Subject: Time for release notes

On Wed, Feb 19, 2020 at 11:12 AM Dridi Boukelmoune wrote:
>
> Dear Varnish developers,
>
> It's that time of the semester already, the time where we all enjoy
> revisiting what happened in the last 6 months to produce release and
> upgrade notes.
>
> As such I'll spend the rest of today and tomorrow trying to make
> progress on 6.3 documentation.

I did, and finished a sweep of all the commits between 6.2.0 and 6.3.0,
after which Nils kindly wrote upgrade notes for the VCL temperature
change.

The upgrade notes are mostly C-developer-centric. It is either my bias,
or a reflection of how little 6.3 breaks anything compared to 6.2,
besides the usual VRT suspects.

This is the last call for double-checking; otherwise I will time this
operation out at some undefined point later this week, remove the
"incomplete" release notes markers, and back-port everything into the
6.3 branch.

Dridi

From dridi at varni.sh  Thu Mar  5 15:39:58 2020
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Thu, 5 Mar 2020 15:39:58 +0000
Subject: Travis macos job

Hi,

Can someone with knowledge of both Travis and macOS have a look?

Since this build, it fails to find rst2man:

https://travis-ci.org/varnishcache/varnish-cache/builds/653055383

Thanks,
Dridi

From fgsch at lodoss.net  Sat Mar  7 20:02:45 2020
From: fgsch at lodoss.net (Federico Schwindt)
Date: Sat, 7 Mar 2020 20:02:45 +0000
Subject: Travis macos job

Should be fixed now.

On Thu, Mar 5, 2020 at 3:40 PM Dridi Boukelmoune wrote:
> Can someone with knowledge of both Travis and macOS have a look?
>
> Since this build, it fails to find rst2man:
>
> https://travis-ci.org/varnishcache/varnish-cache/builds/653055383

From scan-admin at coverity.com  Sun Mar  8 11:43:33 2020
From: scan-admin at coverity.com (scan-admin at coverity.com)
Date: Sun, 08 Mar 2020 11:43:33 +0000 (UTC)
Subject: Coverity Scan: Analysis completed for varnish
Message-ID: <5e64da64ea3e1_7f592b0856486f5069829@appnode-2.mail>

Your request for analysis of varnish has been completed successfully.
The results are available at
https://u2389337.ct.sendgrid.net/ls/click?upn=nJaKvJSIH-2FPAfmty-2BK5tYpPklAc1eEA-2F1zfUjH6teEzb7a35k9AJT3vQQzyq0UjO90ieNOMB6HZSUHPtUyV1qw-3D-3DB2XR_WyTzqwss9kUEGhvWd0SG502mTu1yasCtuh9h-2FD3Je4-2BkH4VbOMHVcISHnG84qg8O1V4yXgi3F4ILf8zNTTJ0qOxqKu7Tn8DFdxLdlJugzN7cSPcBtuv1mhTf2q1AIhy-2BBiEaCkmP5R0dpLkYTFsX9cMm0Y-2F2j222Gu-2F0VlonR9nI-2F2gKf0EIlvZ1RNUW92tnNDIuW4-2BaImEIqjFuY-2Fk9neV7seKG0xYClqGWIx75VRq8As83XYMW-2Bjfo17hzA2fn

Build ID: 298964

Analysis Summary:
  New defects found: 0
  Defects eliminated: 16

From dridi at varni.sh  Mon Mar  9 10:58:43 2020
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Mon, 9 Mar 2020 10:58:43 +0000
Subject: Travis macos job

On Sat, Mar 7, 2020 at 8:02 PM Federico Schwindt wrote:
>
> Should be fixed now.

Thanks!

From emilio.fernandes70 at gmail.com  Tue Mar 10 13:23:08 2020
From: emilio.fernandes70 at gmail.com (Emilio Fernandes)
Date: Tue, 10 Mar 2020 15:23:08 +0200
Subject: Support for AARCH64

Hello Varnish team,

I'd like to ask whether the AARCH64 architecture is officially
supported. I wasn't able to find anything on the website, but I've
found that there is a CI setup [1] and some tickets [2]. A few comments
in one of those tickets [3] say that ARM64 is known to work fine on
FreeBSD and Linux (Ubuntu & Fedora). Finally, I see only x86_64 and
amd64 packages at [4].

My request is: would it be possible to add aarch64 package(s) to
PackageCloud too? OSes update their packages with a delay; at the
moment the only way to update right after a security fix is released
is to build from source. That is OK, but it would be nicer if
"apt update && apt upgrade" did it for me as soon as there is
something new in PackageCloud.

And maybe mention somewhere on the website which architectures are
supported.

Thank you for Varnish Cache!

Gracias!
Emilio

1. https://github.com/varnishcache/varnish-cache/blob/b365be2d281944d5c79be92ff73b3dc02c5db6be/.travis.yml#L35-L43
2. https://github.com/varnishcache/varnish-cache/issues?q=is%3Aissue+aarch64+ & https://github.com/varnishcache/varnish-cache/issues?q=is%3Aissue+arm64
3. https://github.com/varnishcache/varnish-cache/issues/3227#issuecomment-590334301
4. https://packagecloud.io/varnishcache/varnish63?page=1
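For context, PackageCloud repositories are normally wired into apt with
the provider's install script, after which the "apt update && apt
upgrade" flow Emilio describes would pick up new builds automatically.
A minimal sketch, assuming PackageCloud's usual script URL pattern and
the varnish63 repository name from link [4] above:

    # one-time repository setup (script URL pattern assumed from
    # PackageCloud's conventions, not confirmed in this thread)
    curl -s https://packagecloud.io/install/repositories/varnishcache/varnish63/script.deb.sh | sudo bash

    # from then on, a routine upgrade would fetch any newly published
    # (e.g. aarch64) package
    sudo apt update && sudo apt upgrade varnish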
From martin.grigorov at gmail.com  Wed Mar 11 04:21:21 2020
From: martin.grigorov at gmail.com (Martin Grigorov)
Date: Wed, 11 Mar 2020 06:21:21 +0200
Subject: Support for AARCH64

Hi,

On Tue, Mar 10, 2020, 15:23 Emilio Fernandes wrote:
> My request is: would it be possible to add aarch64 package(s) to
> PackageCloud too?
> [...]

+1 for ARM64 packages!

Martin

From phk at phk.freebsd.dk  Wed Mar 11 07:15:35 2020
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Wed, 11 Mar 2020 07:15:35 +0000
Subject: Support for AARCH64
Message-ID: <8156.1583910935@critter.freebsd.dk>

In message <CADRXdtP_CoMObMqw3PJ0bS7kUkbtSTguwcaaNiPzZRyg82AQLw at mail.gmail.com>, Emilio Fernandes writes:

>I'd like to ask whether the AARCH64 architecture is officially supported.

Hi Emilio,

In the sense that we meticulously make sure that Varnish works on
all architectures we can lay our hands on, including arm64: yes, it
is supported.

You can see here which arch/os/compiler combos our daily testing
involves:

http://varnish-cache.org/vtest/

I'm pretty sure FreeBSD has arm64 varnish packages, but I'll let
our package-meisters answer with respect to package building on
Linux.

>Thank you for Varnish Cache!

You're welcome!

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From emilio.fernandes70 at gmail.com  Thu Mar 12 11:35:18 2020
From: emilio.fernandes70 at gmail.com (Emilio Fernandes)
Date: Thu, 12 Mar 2020 13:35:18 +0200
Subject: Support for AARCH64

Hi Poul-Henning,

On Wed, 11 Mar 2020 at 9:15, Poul-Henning Kamp wrote:
> In the sense that we meticulously make sure that Varnish works on
> all architectures we can lay our hands on, including arm64: yes, it
> is supported.

Thank you for confirming!

> You can see here which arch/os/compiler combos our daily testing
> involves:
>
> http://varnish-cache.org/vtest/
>
> I'm pretty sure FreeBSD has arm64 varnish packages, but I'll let
> our package-meisters answer with respect to package building on
> Linux.

I hope the person(s) who manage(s) the Varnish packages at PackageCloud
will notice this message! :-)

Gracias!
Emilio

From guillaume at varnish-software.com  Thu Mar 12 13:22:52 2020
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Thu, 12 Mar 2020 06:22:52 -0700
Subject: Support for AARCH64

Hi,

Offering arm64 packages requires a few things:
- arm64-compatible code (all good in https://github.com/varnishcache/varnish-cache)
- arm64-compatible package framework (all good in https://github.com/varnishcache/pkg-varnish-cache)
- infrastructure to build the packages (uh-oh, see below)
- infrastructure to store and deliver (https://packagecloud.io/varnishcache)

So, everything is in place, except for the third point. At the moment,
there are two concurrent CI implementations:
- Travis: https://github.com/varnishcache/varnish-cache/blob/master/.travis.yml
  It's the historical one, and currently only runs compilation+test
  for OSX.
- CircleCI: https://github.com/varnishcache/varnish-cache/blob/master/.circleci/config.yml
  The new kid on the block, which builds all the packages and
  distchecks for all the packaged platforms.

The issue is that CircleCI doesn't support arm64 containers (for now?),
so we would need to re-implement the packaging logic in Travis. It's
not a big problem, but it's currently not a priority on my side.

However, I am totally ready to provide help if someone wants to take
that up. The added benefit is that Travis would then be able to handle
everything and we could retire the CircleCI experiment.

-- 
Guillaume Quintard
From martin.grigorov at gmail.com  Thu Mar 12 14:35:22 2020
From: martin.grigorov at gmail.com (Martin Grigorov)
Date: Thu, 12 Mar 2020 16:35:22 +0200
Subject: Support for AARCH64

Hi Guillaume,

On Thu, Mar 12, 2020 at 3:23 PM Guillaume Quintard wrote:
> - Travis: https://github.com/varnishcache/varnish-cache/blob/master/.travis.yml
>   It's the historical one, and currently only runs compilation+test
>   for OSX.

Actually it tests Linux AMD64 and ARM64 too.

> However, I am totally ready to provide help if someone wants to take
> that up. The added benefit is that Travis would then be able to
> handle everything and we could retire the CircleCI experiment.

I will take a look in the coming days and ask you if I need help!

Regards,
Martin

From phk at phk.freebsd.dk  Thu Mar 12 15:46:21 2020
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Thu, 12 Mar 2020 15:46:21 +0000
Subject: Support for AARCH64
Message-ID: <16909.1584027981@critter.freebsd.dk>

In message <CAJ6ZYQyw+LUdtHyOND1ifiRdn9-E0B_XdJhprJNmHKBA0zxL4w at mail.gmail.com>, Guillaume Quintard writes:

>Offering arm64 packages requires a few things:

Don't we have packages for a bunch of non-x86 architectures on Redhat?
I seem to recall Ingvar popping up with issues on s390 and other archs
every so often?

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

From guillaume at varnish-software.com  Thu Mar 12 19:32:20 2020
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Thu, 12 Mar 2020 12:32:20 -0700
Subject: Support for AARCH64

On Thu, Mar 12, 2020, 08:46 Poul-Henning Kamp wrote:
> Don't we have packages for a bunch of non-x86 architectures on
> Redhat?

We do, in the sense that distributions do the work, but I believe the
question was about the PackageCloud repos.

Fedora is possibly a good student here, providing timely packages
(Dridi appears in 3..2..1..) but we definitely cannot expect the same
thing from the Debians.

From dridi at varni.sh  Fri Mar 13 08:18:49 2020
From: dridi at varni.sh (Dridi Boukelmoune)
Date: Fri, 13 Mar 2020 08:18:49 +0000
Subject: Support for AARCH64

> We do, in the sense that distributions do the work, but I believe the
> question was about the PackageCloud repos.
>
> Fedora is possibly a good student here, providing timely packages
> (Dridi appears in 3..2..1..) but we definitely cannot expect the same
> thing from the Debians.

*appears*

Fedora builds packages for a bunch of architectures, and builds
packages for Red Hat Enterprise Linux and derivatives via its EPEL
project. So Fedora has "official" Varnish packages outside the x86_64
realm (PowerPC, System/390 mainframes, ARM boards) but we don't.

*disappears*

From scan-admin at coverity.com  Sun Mar 15 11:43:58 2020
From: scan-admin at coverity.com (scan-admin at coverity.com)
Date: Sun, 15 Mar 2020 11:43:58 +0000 (UTC)
Subject: Coverity Scan: Analysis completed for varnish
Message-ID: <5e6e14fe20aae_229c2b0856486f5069847@appnode-2.mail>

Your request for analysis of varnish has been completed successfully.
The results are available at
https://u2389337.ct.sendgrid.net/ls/click?upn=nJaKvJSIH-2FPAfmty-2BK5tYpPklAc1eEA-2F1zfUjH6teEzb7a35k9AJT3vQQzyq0UjO90ieNOMB6HZSUHPtUyV1qw-3D-3D4cvk_WyTzqwss9kUEGhvWd0SG502mTu1yasCtuh9h-2FD3Je4-2FPF5lwOxX4KBgSK9rY3aF-2BtoasyFUd0PGcDHSxoz4CHp7EJAuDZ7JzPwL-2FErWGyt9HQkpXyxHCtNzlzzMtlSwOkiBGC2516MdCFIGLfI-2BIr3-2BTuL4P3Jx3p8vPSKTHDxqEHtdspRHfux7wYC2GfNCD2jD1YitVGRDdYJoRVS8ssMq729g2ooW174NSeIMP9WZJYbhPrNRKGkNquo5d-2Fgtx

Build ID: 300218

Analysis Summary:
  New defects found: 0
  Defects eliminated: 17

From martin.grigorov at gmail.com  Wed Mar 18 15:31:56 2020
From: martin.grigorov at gmail.com (Martin Grigorov)
Date: Wed, 18 Mar 2020 17:31:56 +0200
Subject: Support for AARCH64

Hi,

On Thu, Mar 12, 2020 at 4:35 PM Martin Grigorov wrote:
> I will take a look in the coming days and ask you if I need help!

I've taken a look at the current setup and here is what I've found as
problems and possible solutions:

1) CircleCI
1.1) problem - the 'machine' and 'Docker' executors run on x86_64, so
there is no way to build the packages in a "native" environment
1.2) possible solutions
1.2.1) use a multiarch cross build
1.2.2) use a 'machine' executor that registers QEMU via
https://hub.docker.com/r/multiarch/qemu-user-static/ and then builds
and runs a custom Docker image that executes a shell script with the
build steps (see the sketch after this message).
It will look something like
https://github.com/yukimochi-containers/alpine-vpnserver/blob/69bb0a612c9df3e4ba78064d114751b760f0df9d/.circleci/config.yml#L19-L38
but instead of uploading the Docker image as a last step it will run
it. The RPM and DEB build related code from the current config.yml
will be extracted into shell scripts which will be copied into the
custom Docker images.

Of these two possible ways I have a better picture in my head of how
to do 1.2.2, but I don't mind going deep into 1.2.1 if this is what
you'd prefer.

2) Travis CI
2.1) problems
2.1.1) generally Travis is slower than CircleCI!
Although if we use the CircleCI 'machine' executor it will be slower
than the current 'Docker' executor!
2.1.2) Travis supports only Ubuntu
The current setup at CircleCI uses CentOS 7. I guess the build steps
won't have problems on Ubuntu.

3) GitHub Actions
GH Actions does not support ARM64, but it supports self-hosted ARM64
runners.
3.1) The problem is that there is no way to make a self-hosted runner
really private. I.e. if someone forks Varnish Cache, any commit in the
fork will trigger builds on the arm64 node. There is no way to reserve
the runner only for commits against
https://github.com/varnishcache/varnish-cache

Do you see other problems or maybe different ways?
Do you have preferences which way to go?

Regards,
Martin
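A minimal sketch of option 1.2.2 above, i.e. the commands a CircleCI
'machine' job would run. The qemu-user-static invocation follows the
multiarch image's documented usage; the Dockerfile name is
illustrative, and the packaging script name is borrowed from the
branch discussed later in the thread:

    # register QEMU binfmt handlers so the x86_64 host can execute
    # aarch64 binaries transparently
    docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

    # build an aarch64 image that carries the packaging script
    # (Dockerfile.aarch64 is a hypothetical name)
    docker build -f .circleci/Dockerfile.aarch64 -t pkg-builder-aarch64 .

    # run the packaging script inside the emulated container and
    # collect the resulting packages on the host
    docker run --rm -v "$PWD/packages:/output" pkg-builder-aarch64 ./make-rpm-packages.sh

The registration step only touches the host kernel's binfmt_misc
table, so it runs once per VM; the emulation cost is paid inside the
container, which matches the timings reported later in the thread.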
From martin.grigorov at gmail.com  Mon Mar 23 13:25:07 2020
From: martin.grigorov at gmail.com (Martin Grigorov)
Date: Mon, 23 Mar 2020 15:25:07 +0200
Subject: Support for AARCH64

Hi,

On Wed, Mar 18, 2020 at 5:31 PM Martin Grigorov wrote:
> Of these two possible ways I have a better picture in my head of how
> to do 1.2.2, but I don't mind going deep into 1.2.1 if this is what
> you'd prefer.

I've decided to stay with CircleCI and use the 'machine' executor with
QEMU.

The changed config.yml can be seen at
https://github.com/martin-g/varnish-cache/tree/feature/aarch64-packages/.circleci
and the build at
https://app.circleci.com/pipelines/github/martin-g/varnish-cache/71/workflows/3a275d79-62a9-48b4-9aef-1585de1c87c8

The builds on the x86 arch take 3-4 mins, but for aarch64 (emulation!)
~40 mins. For now the jobs just build the .deb & .rpm packages for
CentOS 7 and Ubuntu 18.04, both amd64 and aarch64.

TODOs:
- migrate Alpine
- store the packages as CircleCI artifacts
- anything else that is still missing

Adding more architectures would be as easy as adding a new Dockerfile
with a base image of the respective type.

Martin

From guillaume at varnish-software.com  Mon Mar 23 18:01:45 2020
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Mon, 23 Mar 2020 11:01:45 -0700
Subject: Support for AARCH64

Hi Martin,

Thank you for that.

A few remarks and questions:
- how much time does the "docker build" step take? We can possibly
speed things up by pushing images to the Docker Hub, as they don't
need to change very often (see the sketch after this message).
- any reason why you clone pkg-varnish-cache in each job? The idea was
to have it cloned once in tar-pkg-tools for consistency and
reproducibility, which we lose here.
- do we want to change things for the amd64 platforms for the sake of
consistency?

-- 
Guillaume Quintard
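A sketch of the speed-up from Guillaume's first point: build the
builder images once, push them, and let per-commit jobs pull instead
of rebuilding every layer under emulation. The repository and tag
names here are made up for illustration:

    # done once (or whenever the Dockerfile changes), outside the
    # per-commit jobs
    docker build -f .circleci/Dockerfile.aarch64 -t varnishcache/pkg-builder:aarch64 .
    docker push varnishcache/pkg-builder:aarch64

    # per-commit jobs then start from the prebuilt image and only run
    # the packaging step
    docker pull varnishcache/pkg-builder:aarch64
    docker run --rm -v "$PWD/packages:/output" varnishcache/pkg-builder:aarch64 ./make-deb-packages.sh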
From martin.grigorov at gmail.com  Tue Mar 24 09:00:33 2020
From: martin.grigorov at gmail.com (Martin Grigorov)
Date: Tue, 24 Mar 2020 11:00:33 +0200
Subject: Support for AARCH64

Hi Guillaume,

On Mon, Mar 23, 2020 at 8:01 PM Guillaume Quintard wrote:
> - how much time does the "docker build" step take? We can possibly
> speed things up by pushing images to the Docker Hub, as they don't
> need to change very often.

Definitely such an optimization would be a good thing to do!
At the moment, with the 'machine' executor, it fetches the base image
and then builds all the Docker layers again and again.

Here are the timings:
1) spinning up a VM - around 10 secs
2) preparing env variables - 0 secs
3) checking out the code (varnish-cache) - 5 secs
4) activating QEMU - 2 secs
5) building the packages:
5.1) x86 deb - 3 m 30 secs
5.2) x86 rpm - 2 m 50 secs
5.3) aarch64 rpm - 35 mins
5.4) aarch64 deb - 45 mins

> - any reason why you clone pkg-varnish-cache in each job? The idea
> was to have it cloned once in tar-pkg-tools for consistency and
> reproducibility, which we lose here.

I will extract the common steps once I see it working. This is my
first CircleCI project and I am still finding my way in it!

> - do we want to change things for the amd64 platforms for the sake of
> consistency?

So far there is nothing specific to amd64 or aarch64, except the base
Docker images. For example make-deb-packages.sh is reused for both
amd64 and aarch64 builds. Same for -rpm- and now for -apk- (Alpine).

Once I feel the change is almost finished I will open a pull request
for more comments!

Martin
From martin.grigorov at gmail.com  Tue Mar 24 15:05:00 2020
From: martin.grigorov at gmail.com (Martin Grigorov)
Date: Tue, 24 Mar 2020 17:05:00 +0200
Subject: Support for AARCH64

Hi,

The build on Alpine aarch64 fails with:

  ...
  automake: this behaviour will change in future Automake versions: they will
  automake: unconditionally cause object files to be placed in the same subdirectory
  automake: of the corresponding sources.
  automake: project, to avoid future incompatibilities.
  parallel-tests: installing 'build-aux/test-driver'
  lib/libvmod_debug/Makefile.am:12: warning: libvmod_debug_la_LDFLAGS multiply defined in condition TRUE ...
  lib/libvmod_debug/automake_boilerplate.am:19: ... 'libvmod_debug_la_LDFLAGS' previously defined here
  lib/libvmod_debug/Makefile.am:9: 'lib/libvmod_debug/automake_boilerplate.am' included from here
  + autoconf
  + CONFIG_SHELL=/bin/sh
  + export CONFIG_SHELL
  + ./configure '--prefix=/opt/varnish' '--mandir=/opt/varnish/man' --enable-maintainer-mode --enable-developer-warnings --enable-debugging-symbols --enable-dependency-tracking --with-persistent-storage --quiet
  configure: WARNING: dot not found - build will fail if svg files are out of date.
  configure: WARNING: No system jemalloc found, using system malloc
  configure: error: Could not find backtrace() support

Does anyone know a workaround?

I use multiarch/alpine:aarch64-edge as the base Docker image.

Martin

From guillaume at varnish-software.com  Tue Mar 24 15:18:58 2020
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Tue, 24 Mar 2020 08:18:58 -0700
Subject: Support for AARCH64

Compare your configure line with what's currently in use (or the
APKBUILD file); there are a few options (--with-unwind,
--without-jemalloc, etc.) that need to be set.

On Tue, Mar 24, 2020, 08:05 Martin Grigorov wrote:
> configure: error: Could not find backtrace() support
>
> Does anyone know a workaround?
> [...]
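For reference, the direction Guillaume points at would look roughly
like this in the Alpine job. This is a sketch, not necessarily the fix
the thread settled on; it assumes that musl libc provides no
backtrace() of its own, that Alpine ships one in its libexecinfo
package, and that configure's --with-unwind flag selects the libunwind
alternative instead:

    # musl lacks backtrace(); Alpine provides it via libexecinfo, and
    # libunwind is the alternative that --with-unwind enables
    apk add --no-cache libexecinfo-dev libunwind-dev

    # mirror the APKBUILD's flags rather than the autogen.des defaults
    ./configure --prefix=/opt/varnish --with-unwind --without-jemalloc \
        --enable-dependency-tracking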
>> Here are the timings: >> 1) Spinning up a VM - around 10secs >> 2) prepare env variables - 0secs >> 3) checkout code (varnish-cache) - 5secs >> 4) activate QEMU - 2secs >> 5) build packages >> 5.1) x86 deb - 3m 30secs >> 5.2) x86 rpm - 2m 50secs >> 5.3) aarch64 rpm - 35mins >> 5.4) aarch64 deb - 45mins >> >> >>> - any reason why you clone pkg-varnish-cache in each job? The idea was >>> to have it cloned once in tar-pkg-tools for consistency and >>> reproducibility, which we lose here. >>> >> >> I will extract the common steps once I see it working. This is my first >> CircleCI project and I still find my ways in it! >> >> >>> - do we want to change things for the amd64 platforms for the sake of >>> consistency? >>> >> >> So far there is nothing specific for amd4 or aarch64, except the base >> Docker images. >> For example make-deb-packages.sh is reused for both amd64 and aarch64 >> builds. Same for -rpm- and now for -apk- (alpine). >> >> Once I feel the change is almost finished I will open a Pull Request for >> more comments! >> >> Martin >> >> >>> >>> -- >>> Guillaume Quintard >>> >>> >>> On Mon, Mar 23, 2020 at 6:25 AM Martin Grigorov < >>> martin.grigorov at gmail.com> wrote: >>> >>>> Hi, >>>> >>>> >>>> On Wed, Mar 18, 2020 at 5:31 PM Martin Grigorov < >>>> martin.grigorov at gmail.com> wrote: >>>> >>>>> Hi, >>>>> >>>>> On Thu, Mar 12, 2020 at 4:35 PM Martin Grigorov < >>>>> martin.grigorov at gmail.com> wrote: >>>>> >>>>>> Hi Guillaume, >>>>>> >>>>>> On Thu, Mar 12, 2020 at 3:23 PM Guillaume Quintard < >>>>>> guillaume at varnish-software.com> wrote: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> Offering arm64 packages requires a few things: >>>>>>> - arm64-compatible code (all good in >>>>>>> https://github.com/varnishcache/varnish-cache) >>>>>>> - arm64-compatible package framework (all good in >>>>>>> https://github.com/varnishcache/pkg-varnish-cache) >>>>>>> - infrastructure to build the packages (uhoh, see below) >>>>>>> - infrastructure to store and deliver ( >>>>>>> https://packagecloud.io/varnishcache) >>>>>>> >>>>>>> So, everything is in place, expect for the third point. At the >>>>>>> moment, there are two concurrent CI implementations: >>>>>>> - travis: >>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.travis.yml It's >>>>>>> the historical one, and currently only runs compilation+test for OSX >>>>>>> >>>>>> >>>>>> Actually it tests Linux AMD64 and ARM64 too. >>>>>> >>>>>> >>>>>>> - circleci: >>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.circleci/config.yml the >>>>>>> new kid on the block, that builds all the packages and distchecks for all >>>>>>> the packaged platforms >>>>>>> >>>>>>> The issue is that cirecleci doesn't support arm64 containers (for >>>>>>> now?), so we would need to re-implement the packaging logic in Travis. It's >>>>>>> not a big problem, but it's currently not a priority on my side. >>>>>>> >>>>>>> However, I am totally ready to provide help if someone wants to take >>>>>>> that up. The added benefit it that Travis would be able to handle >>>>>>> everything and we can retire the circleci experiment >>>>>>> >>>>>> >>>>>> I will take a look in the coming days and ask you if I need help! 
>>>>>> >>>>> >>>>> I've took a look at the current setup and here is what I've found as >>>>> problems and possible solutions: >>>>> >>>>> 1) Circle CI >>>>> 1.1) problem - the 'machine' and 'Docker' executors run on x86_64, so >>>>> there is no way to build the packages in a "native" environment >>>>> 1.2) possible solutions >>>>> 1.2.1) use multiarch cross build >>>>> 1.2.2) use 'machine' executor that registers QEMU via >>>>> https://hub.docker.com/r/multiarch/qemu-user-static/ and then builds >>>>> and runs a custom Docker image that executes a shell script with the build >>>>> steps >>>>> It will look something like >>>>> https://github.com/yukimochi-containers/alpine-vpnserver/blob/69bb0a612c9df3e4ba78064d114751b760f0df9d/.circleci/config.yml#L19-L38 but >>>>> instead of uploading the Docker image as a last step it will run it. >>>>> The RPM and DEB build related code from current config.yml will be >>>>> extracted into shell scripts which will be copied in the custom Docker >>>>> images >>>>> >>>>> From these two possible ways I have better picture in my head how to >>>>> do 1.2.2, but I don't mind going deep in 1.2.1 if this is what you'd prefer. >>>>> >>>> >>>> I've decided to stay with Circle CI and use 'machine' executor with >>>> QEMU. >>>> >>>> The changed config.yml could be seen at >>>> https://github.com/martin-g/varnish-cache/tree/feature/aarch64-packages/.circleci and >>>> the build at >>>> https://app.circleci.com/pipelines/github/martin-g/varnish-cache/71/workflows/3a275d79-62a9-48b4-9aef-1585de1c87c8 >>>> The builds on x86 arch take 3-4 mins, but for aarch64 (emulation!) >>>> ~40mins >>>> For now the jobs just build the .deb & .rpm packages for CentOS 7 and >>>> Ubuntu 18.04, both amd64 and aarch64. >>>> TODOs: >>>> - migrate Alpine >>>> >>> > Build on Alpine aarch64 fails with: > ... > automake: this behaviour will change in future Automake versions: they will > automake: unconditionally cause object files to be placed in the same > subdirectory > automake: of the corresponding sources. > automake: project, to avoid future incompatibilities. > parallel-tests: installing 'build-aux/test-driver' > lib/libvmod_debug/Makefile.am:12: warning: libvmod_debug_la_LDFLAGS > multiply defined in condition TRUE ... > lib/libvmod_debug/automake_boilerplate.am:19: ... > 'libvmod_debug_la_LDFLAGS' previously defined here > lib/libvmod_debug/Makefile.am:9: 'lib/libvmod_debug/ > automake_boilerplate.am' included from here > + autoconf > + CONFIG_SHELL=/bin/sh > + export CONFIG_SHELL > + ./configure '--prefix=/opt/varnish' '--mandir=/opt/varnish/man' > --enable-maintainer-mode --enable-developer-warnings > --enable-debugging-symbols --enable-dependency-tracking > --with-persistent-storage --quiet > configure: WARNING: dot not found - build will fail if svg files are out > of date. > configure: WARNING: No system jemalloc found, using system malloc > configure: error: Could not find backtrace() support > > Does anyone know a workaround ? > I use multiarch/alpine:aarch64-edge as a base Docker image > > Martin > > > >> - store the packages as CircleCI artifacts >>>> - anything else that is still missing >>>> >>>> Adding more architectures would be as easy as adding a new Dockerfile >>>> with a base image from the respective type. >>>> >>>> Martin >>>> >>>> >>>>> 2) Travis CI >>>>> 2.1) problems >>>>> 2.1.1) generally Travis is slower than Circle! >>>>> Althought if we use CircleCI 'machine' executor it will be slower than >>>>> the current 'Docker' executor! 
>>>>> 2.1.2) Travis supports only Ubuntu
>>>>> Current setup at CircleCI uses CentOS 7.
>>>>> I guess the build steps won't have problems on Ubuntu.
>>>>>
>>>>> 3) GitHub Actions
>>>>> GH Actions does not support ARM64 but it supports self-hosted ARM64
>>>>> runners
>>>>> 3.1) The problem is that there is no way to make a self-hosted runner
>>>>> really private. I.e. if someone forks Varnish Cache, any commit in the fork
>>>>> will trigger builds on the arm64 node. There is no way to reserve the
>>>>> runner only for commits against
>>>>> https://github.com/varnishcache/varnish-cache
>>>>>
>>>>> Do you see other problems or maybe different ways?
>>>>> Do you have preferences which way to go?
>>>>>
>>>>> Regards,
>>>>> Martin
>>>>>
>>>>>>
>>>>>> Regards,
>>>>>> Martin
>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Guillaume Quintard
>>>>>>> _______________________________________________
>>>>>>> varnish-dev mailing list
>>>>>>> varnish-dev at varnish-cache.org
>>>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev
>>>>>>>
>>>>>> -------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From martin.grigorov at gmail.com  Tue Mar 24 22:04:52 2020
From: martin.grigorov at gmail.com (Martin Grigorov)
Date: Wed, 25 Mar 2020 00:04:52 +0200
Subject: Support for AARCH64
In-Reply-To: 
References: <8156.1583910935@critter.freebsd.dk>
Message-ID: 

On Tue, Mar 24, 2020, 17:19 Guillaume Quintard <
guillaume at varnish-software.com> wrote:

> Compare your configure line with what's currently in use (or the apkbuild
> file), there are a few options (with-unwind, without-jemalloc, etc.) that
> need to be set
>

The configure line comes from "./autogen.des":
https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/autogen.des#L35-L42
It is called at:

https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L40
In my branch at:

https://github.com/martin-g/varnish-cache/blob/4b4626ee9cc366b032a45f27b54d77176125ef03/.circleci/make-apk-packages.sh#L26

It fails only on aarch64 for Alpine Linux. The x86_64 build for Alpine is
fine.
AARCH64 for CentOS 7 and Ubuntu 18.04 are also fine.

Martin

> On Tue, Mar 24, 2020, 08:05 Martin Grigorov
> wrote:
>
>> Hi,
>>
>> On Tue, Mar 24, 2020 at 11:00 AM Martin Grigorov <
>> martin.grigorov at gmail.com> wrote:
>>
>>> Hi Guillaume,
>>>
>>> On Mon, Mar 23, 2020 at 8:01 PM Guillaume Quintard <
>>> guillaume at varnish-software.com> wrote:
>>>
>>>> Hi Martin,
>>>>
>>>> Thank you for that.
>>>> A few remarks and questions:
>>>> - how much time does the "docker build" step take? We can possibly
>>>> speed things up by pushing images to Docker Hub, as they don't need to
>>>> change very often.
>>>>
>>>
>>> Definitely such optimization would be a good thing to do!
>>> At the moment, with the 'machine' executor it fetches the base image and
>>> then builds all the Docker layers again and again.
>>> Here are the timings:
>>> 1) Spinning up a VM - around 10secs
>>> 2) prepare env variables - 0secs
>>> 3) checkout code (varnish-cache) - 5secs
>>> 4) activate QEMU - 2secs
>>> 5) build packages
>>> 5.1) x86 deb - 3m 30secs
>>> 5.2) x86 rpm - 2m 50secs
>>> 5.3) aarch64 rpm - 35mins
>>> 5.4) aarch64 deb - 45mins
>>>
>>>
>>>> - any reason why you clone pkg-varnish-cache in each job? The idea was
>>>> to have it cloned once in tar-pkg-tools for consistency and
>>>> reproducibility, which we lose here.
>>>>
>>>
>>> I will extract the common steps once I see it working. This is my first
>>> CircleCI project and I am still finding my way around it!
>>>
>>>
>>>> - do we want to change things for the amd64 platforms for the sake of
>>>> consistency?
>>>>
>>>
>>> So far there is nothing specific to amd64 or aarch64, except the base
>>> Docker images.
>>> For example make-deb-packages.sh is reused for both amd64 and aarch64
>>> builds. Same for -rpm- and now for -apk- (Alpine).
>>>
>>> Once I feel the change is almost finished I will open a Pull Request for
>>> more comments!
>>>
>>> Martin
>>>
>>>
>>>>
>>>> --
>>>> Guillaume Quintard
>>>>
>>>>
>>>> On Mon, Mar 23, 2020 at 6:25 AM Martin Grigorov <
>>>> martin.grigorov at gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>>
>>>>> On Wed, Mar 18, 2020 at 5:31 PM Martin Grigorov <
>>>>> martin.grigorov at gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> On Thu, Mar 12, 2020 at 4:35 PM Martin Grigorov <
>>>>>> martin.grigorov at gmail.com> wrote:
>>>>>>
>>>>>>> Hi Guillaume,
>>>>>>>
>>>>>>> On Thu, Mar 12, 2020 at 3:23 PM Guillaume Quintard <
>>>>>>> guillaume at varnish-software.com> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> Offering arm64 packages requires a few things:
>>>>>>>> - arm64-compatible code (all good in
>>>>>>>> https://github.com/varnishcache/varnish-cache)
>>>>>>>> - arm64-compatible package framework (all good in
>>>>>>>> https://github.com/varnishcache/pkg-varnish-cache)
>>>>>>>> - infrastructure to build the packages (uhoh, see below)
>>>>>>>> - infrastructure to store and deliver (
>>>>>>>> https://packagecloud.io/varnishcache)
>>>>>>>>
>>>>>>>> So, everything is in place, except for the third point. At the
>>>>>>>> moment, there are two concurrent CI implementations:
>>>>>>>> - travis:
>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.travis.yml It's
>>>>>>>> the historical one, and currently only runs compilation+test for OSX
>>>>>>>>
>>>>>>>
>>>>>>> Actually it tests Linux AMD64 and ARM64 too.
>>>>>>>
>>>>>>>
>>>>>>>> - circleci:
>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.circleci/config.yml the
>>>>>>>> new kid on the block, that builds all the packages and distchecks for all
>>>>>>>> the packaged platforms
>>>>>>>>
>>>>>>>> The issue is that CircleCI doesn't support arm64 containers (for
>>>>>>>> now?), so we would need to re-implement the packaging logic in Travis. It's
>>>>>>>> not a big problem, but it's currently not a priority on my side.
>>>>>>>>
>>>>>>>> However, I am totally ready to provide help if someone wants to
>>>>>>>> take that up. The added benefit is that Travis would be able to handle
>>>>>>>> everything and we can retire the circleci experiment
>>>>>>>>
>>>>>>>
>>>>>>> I will take a look in the coming days and ask you if I need help!
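
A rough sketch of what such a 'machine' job could look like in
.circleci/config.yml is below. The job name and Dockerfile path are
invented for illustration; make-deb-packages.sh is the script mentioned
elsewhere in this thread:

    # Hypothetical CircleCI job; names are examples, not the real config.
    jobs:
      package-deb-aarch64:
        machine:
          image: ubuntu-1604:201903-01
        steps:
          - checkout
          - run:
              name: Register QEMU binfmt handlers
              command: docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
          - run:
              name: Build the packages inside an arm64 container
              command: |
                docker build -t pkg-builder -f .circleci/Dockerfile.deb-aarch64 .
                docker run --rm -v "$PWD:/varnish-cache" pkg-builder \
                    /varnish-cache/.circleci/make-deb-packages.sh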
>>>>>>> >>>>>> >>>>>> I've took a look at the current setup and here is what I've found as >>>>>> problems and possible solutions: >>>>>> >>>>>> 1) Circle CI >>>>>> 1.1) problem - the 'machine' and 'Docker' executors run on x86_64, so >>>>>> there is no way to build the packages in a "native" environment >>>>>> 1.2) possible solutions >>>>>> 1.2.1) use multiarch cross build >>>>>> 1.2.2) use 'machine' executor that registers QEMU via >>>>>> https://hub.docker.com/r/multiarch/qemu-user-static/ and then builds >>>>>> and runs a custom Docker image that executes a shell script with the build >>>>>> steps >>>>>> It will look something like >>>>>> https://github.com/yukimochi-containers/alpine-vpnserver/blob/69bb0a612c9df3e4ba78064d114751b760f0df9d/.circleci/config.yml#L19-L38 but >>>>>> instead of uploading the Docker image as a last step it will run it. >>>>>> The RPM and DEB build related code from current config.yml will be >>>>>> extracted into shell scripts which will be copied in the custom Docker >>>>>> images >>>>>> >>>>>> From these two possible ways I have better picture in my head how to >>>>>> do 1.2.2, but I don't mind going deep in 1.2.1 if this is what you'd prefer. >>>>>> >>>>> >>>>> I've decided to stay with Circle CI and use 'machine' executor with >>>>> QEMU. >>>>> >>>>> The changed config.yml could be seen at >>>>> https://github.com/martin-g/varnish-cache/tree/feature/aarch64-packages/.circleci and >>>>> the build at >>>>> https://app.circleci.com/pipelines/github/martin-g/varnish-cache/71/workflows/3a275d79-62a9-48b4-9aef-1585de1c87c8 >>>>> The builds on x86 arch take 3-4 mins, but for aarch64 (emulation!) >>>>> ~40mins >>>>> For now the jobs just build the .deb & .rpm packages for CentOS 7 and >>>>> Ubuntu 18.04, both amd64 and aarch64. >>>>> TODOs: >>>>> - migrate Alpine >>>>> >>>> >> Build on Alpine aarch64 fails with: >> ... >> automake: this behaviour will change in future Automake versions: they >> will >> automake: unconditionally cause object files to be placed in the same >> subdirectory >> automake: of the corresponding sources. >> automake: project, to avoid future incompatibilities. >> parallel-tests: installing 'build-aux/test-driver' >> lib/libvmod_debug/Makefile.am:12: warning: libvmod_debug_la_LDFLAGS >> multiply defined in condition TRUE ... >> lib/libvmod_debug/automake_boilerplate.am:19: ... >> 'libvmod_debug_la_LDFLAGS' previously defined here >> lib/libvmod_debug/Makefile.am:9: 'lib/libvmod_debug/ >> automake_boilerplate.am' included from here >> + autoconf >> + CONFIG_SHELL=/bin/sh >> + export CONFIG_SHELL >> + ./configure '--prefix=/opt/varnish' '--mandir=/opt/varnish/man' >> --enable-maintainer-mode --enable-developer-warnings >> --enable-debugging-symbols --enable-dependency-tracking >> --with-persistent-storage --quiet >> configure: WARNING: dot not found - build will fail if svg files are out >> of date. >> configure: WARNING: No system jemalloc found, using system malloc >> configure: error: Could not find backtrace() support >> >> Does anyone know a workaround ? >> I use multiarch/alpine:aarch64-edge as a base Docker image >> >> Martin >> >> >> >>> - store the packages as CircleCI artifacts >>>>> - anything else that is still missing >>>>> >>>>> Adding more architectures would be as easy as adding a new Dockerfile >>>>> with a base image from the respective type. >>>>> >>>>> Martin >>>>> >>>>> >>>>>> 2) Travis CI >>>>>> 2.1) problems >>>>>> 2.1.1) generally Travis is slower than Circle! 
>>>>>> Although if we use the CircleCI 'machine' executor it will be slower
>>>>>> than the current 'Docker' executor!
>>>>>> 2.1.2) Travis supports only Ubuntu
>>>>>> Current setup at CircleCI uses CentOS 7.
>>>>>> I guess the build steps won't have problems on Ubuntu.
>>>>>>
>>>>>> 3) GitHub Actions
>>>>>> GH Actions does not support ARM64 but it supports self-hosted ARM64
>>>>>> runners
>>>>>> 3.1) The problem is that there is no way to make a self-hosted runner
>>>>>> really private. I.e. if someone forks Varnish Cache, any commit in the fork
>>>>>> will trigger builds on the arm64 node. There is no way to reserve the
>>>>>> runner only for commits against
>>>>>> https://github.com/varnishcache/varnish-cache
>>>>>>
>>>>>> Do you see other problems or maybe different ways?
>>>>>> Do you have preferences which way to go?
>>>>>>
>>>>>> Regards,
>>>>>> Martin
>>>>>>
>>>>>>>
>>>>>>> Regards,
>>>>>>> Martin
>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Guillaume Quintard
>>>>>>>> _______________________________________________
>>>>>>>> varnish-dev mailing list
>>>>>>>> varnish-dev at varnish-cache.org
>>>>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev
>>>>>>>>
>>>>>>> -------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guillaume at varnish-software.com  Wed Mar 25 00:39:06 2020
From: guillaume at varnish-software.com (Guillaume Quintard)
Date: Tue, 24 Mar 2020 17:39:06 -0700
Subject: Support for AARCH64
In-Reply-To: 
References: <8156.1583910935@critter.freebsd.dk>
Message-ID: 

Hi,

So, you are pointing at the `dist` job, whose sole role is to provide us
with a dist tarball, so we don't need that command line to work for
everyone, just for that specific platform.

On the other hand,
https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L168 is
closer to what you want, `distcheck` will be called on all platforms, and you
can see that it has the `--with-unwind` argument.
--
Guillaume Quintard


On Tue, Mar 24, 2020 at 3:05 PM Martin Grigorov
wrote:

>
>
> On Tue, Mar 24, 2020, 17:19 Guillaume Quintard <
> guillaume at varnish-software.com> wrote:
>
>> Compare your configure line with what's currently in use (or the apkbuild
>> file), there are a few options (with-unwind, without-jemalloc, etc.) that
>> need to be set
>>
>
> The configure line comes from "./autogen.des":
> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/autogen.des#L35-L42
> It is called at:
>
> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L40
> In my branch at:
>
> https://github.com/martin-g/varnish-cache/blob/4b4626ee9cc366b032a45f27b54d77176125ef03/.circleci/make-apk-packages.sh#L26
>
> It fails only on aarch64 for Alpine Linux. The x86_64 build for Alpine is
> fine.
> AARCH64 for CentOS 7 and Ubuntu 18.04 are also fine.
>
> Martin
>
>
>> On Tue, Mar 24, 2020, 08:05 Martin Grigorov
>> wrote:
>>
>>> Hi,
>>>
>>> On Tue, Mar 24, 2020 at 11:00 AM Martin Grigorov <
>>> martin.grigorov at gmail.com> wrote:
>>>
>>>> Hi Guillaume,
>>>>
>>>> On Mon, Mar 23, 2020 at 8:01 PM Guillaume Quintard <
>>>> guillaume at varnish-software.com> wrote:
>>>>
>>>>> Hi Martin,
>>>>>
>>>>> Thank you for that.
>>>>> A few remarks and questions:
>>>>> - how much time does the "docker build" step take? We can possibly
>>>>> speed things up by pushing images to Docker Hub, as they don't need to
>>>>> change very often.
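
One way to realize that speed-up is to publish the builder images once and
have CI pull them; a sketch, with a hypothetical Docker Hub repository name:

    # Done once (or whenever the build dependencies change):
    docker build -t varnishcache/pkg-builder:aarch64-bionic \
        -f .circleci/Dockerfile.deb-aarch64 .
    docker push varnishcache/pkg-builder:aarch64-bionic

    # In the CI job, a pull then replaces the slow emulated image build:
    docker pull varnishcache/pkg-builder:aarch64-bionic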
>>>>> >>>> >>>> Definitely such optimization would be a good thing to do! >>>> At the moment, with 'machine' executor it fetches the base image and >>>> then builds all the Docker layers again and again. >>>> Here are the timings: >>>> 1) Spinning up a VM - around 10secs >>>> 2) prepare env variables - 0secs >>>> 3) checkout code (varnish-cache) - 5secs >>>> 4) activate QEMU - 2secs >>>> 5) build packages >>>> 5.1) x86 deb - 3m 30secs >>>> 5.2) x86 rpm - 2m 50secs >>>> 5.3) aarch64 rpm - 35mins >>>> 5.4) aarch64 deb - 45mins >>>> >>>> >>>>> - any reason why you clone pkg-varnish-cache in each job? The idea was >>>>> to have it cloned once in tar-pkg-tools for consistency and >>>>> reproducibility, which we lose here. >>>>> >>>> >>>> I will extract the common steps once I see it working. This is my first >>>> CircleCI project and I still find my ways in it! >>>> >>>> >>>>> - do we want to change things for the amd64 platforms for the sake of >>>>> consistency? >>>>> >>>> >>>> So far there is nothing specific for amd4 or aarch64, except the base >>>> Docker images. >>>> For example make-deb-packages.sh is reused for both amd64 and aarch64 >>>> builds. Same for -rpm- and now for -apk- (alpine). >>>> >>>> Once I feel the change is almost finished I will open a Pull Request >>>> for more comments! >>>> >>>> Martin >>>> >>>> >>>>> >>>>> -- >>>>> Guillaume Quintard >>>>> >>>>> >>>>> On Mon, Mar 23, 2020 at 6:25 AM Martin Grigorov < >>>>> martin.grigorov at gmail.com> wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> >>>>>> On Wed, Mar 18, 2020 at 5:31 PM Martin Grigorov < >>>>>> martin.grigorov at gmail.com> wrote: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> On Thu, Mar 12, 2020 at 4:35 PM Martin Grigorov < >>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>> >>>>>>>> Hi Guillaume, >>>>>>>> >>>>>>>> On Thu, Mar 12, 2020 at 3:23 PM Guillaume Quintard < >>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>> >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> Offering arm64 packages requires a few things: >>>>>>>>> - arm64-compatible code (all good in >>>>>>>>> https://github.com/varnishcache/varnish-cache) >>>>>>>>> - arm64-compatible package framework (all good in >>>>>>>>> https://github.com/varnishcache/pkg-varnish-cache) >>>>>>>>> - infrastructure to build the packages (uhoh, see below) >>>>>>>>> - infrastructure to store and deliver ( >>>>>>>>> https://packagecloud.io/varnishcache) >>>>>>>>> >>>>>>>>> So, everything is in place, expect for the third point. At the >>>>>>>>> moment, there are two concurrent CI implementations: >>>>>>>>> - travis: >>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.travis.yml It's >>>>>>>>> the historical one, and currently only runs compilation+test for OSX >>>>>>>>> >>>>>>>> >>>>>>>> Actually it tests Linux AMD64 and ARM64 too. >>>>>>>> >>>>>>>> >>>>>>>>> - circleci: >>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.circleci/config.yml the >>>>>>>>> new kid on the block, that builds all the packages and distchecks for all >>>>>>>>> the packaged platforms >>>>>>>>> >>>>>>>>> The issue is that cirecleci doesn't support arm64 containers (for >>>>>>>>> now?), so we would need to re-implement the packaging logic in Travis. It's >>>>>>>>> not a big problem, but it's currently not a priority on my side. >>>>>>>>> >>>>>>>>> However, I am totally ready to provide help if someone wants to >>>>>>>>> take that up. 
The added benefit it that Travis would be able to handle >>>>>>>>> everything and we can retire the circleci experiment >>>>>>>>> >>>>>>>> >>>>>>>> I will take a look in the coming days and ask you if I need help! >>>>>>>> >>>>>>> >>>>>>> I've took a look at the current setup and here is what I've found as >>>>>>> problems and possible solutions: >>>>>>> >>>>>>> 1) Circle CI >>>>>>> 1.1) problem - the 'machine' and 'Docker' executors run on x86_64, >>>>>>> so there is no way to build the packages in a "native" environment >>>>>>> 1.2) possible solutions >>>>>>> 1.2.1) use multiarch cross build >>>>>>> 1.2.2) use 'machine' executor that registers QEMU via >>>>>>> https://hub.docker.com/r/multiarch/qemu-user-static/ and then >>>>>>> builds and runs a custom Docker image that executes a shell script with the >>>>>>> build steps >>>>>>> It will look something like >>>>>>> https://github.com/yukimochi-containers/alpine-vpnserver/blob/69bb0a612c9df3e4ba78064d114751b760f0df9d/.circleci/config.yml#L19-L38 but >>>>>>> instead of uploading the Docker image as a last step it will run it. >>>>>>> The RPM and DEB build related code from current config.yml will be >>>>>>> extracted into shell scripts which will be copied in the custom Docker >>>>>>> images >>>>>>> >>>>>>> From these two possible ways I have better picture in my head how to >>>>>>> do 1.2.2, but I don't mind going deep in 1.2.1 if this is what you'd prefer. >>>>>>> >>>>>> >>>>>> I've decided to stay with Circle CI and use 'machine' executor with >>>>>> QEMU. >>>>>> >>>>>> The changed config.yml could be seen at >>>>>> https://github.com/martin-g/varnish-cache/tree/feature/aarch64-packages/.circleci and >>>>>> the build at >>>>>> https://app.circleci.com/pipelines/github/martin-g/varnish-cache/71/workflows/3a275d79-62a9-48b4-9aef-1585de1c87c8 >>>>>> The builds on x86 arch take 3-4 mins, but for aarch64 (emulation!) >>>>>> ~40mins >>>>>> For now the jobs just build the .deb & .rpm packages for CentOS 7 and >>>>>> Ubuntu 18.04, both amd64 and aarch64. >>>>>> TODOs: >>>>>> - migrate Alpine >>>>>> >>>>> >>> Build on Alpine aarch64 fails with: >>> ... >>> automake: this behaviour will change in future Automake versions: they >>> will >>> automake: unconditionally cause object files to be placed in the same >>> subdirectory >>> automake: of the corresponding sources. >>> automake: project, to avoid future incompatibilities. >>> parallel-tests: installing 'build-aux/test-driver' >>> lib/libvmod_debug/Makefile.am:12: warning: libvmod_debug_la_LDFLAGS >>> multiply defined in condition TRUE ... >>> lib/libvmod_debug/automake_boilerplate.am:19: ... >>> 'libvmod_debug_la_LDFLAGS' previously defined here >>> lib/libvmod_debug/Makefile.am:9: 'lib/libvmod_debug/ >>> automake_boilerplate.am' included from here >>> + autoconf >>> + CONFIG_SHELL=/bin/sh >>> + export CONFIG_SHELL >>> + ./configure '--prefix=/opt/varnish' '--mandir=/opt/varnish/man' >>> --enable-maintainer-mode --enable-developer-warnings >>> --enable-debugging-symbols --enable-dependency-tracking >>> --with-persistent-storage --quiet >>> configure: WARNING: dot not found - build will fail if svg files are out >>> of date. >>> configure: WARNING: No system jemalloc found, using system malloc >>> configure: error: Could not find backtrace() support >>> >>> Does anyone know a workaround ? 
>>> I use multiarch/alpine:aarch64-edge as a base Docker image
>>>
>>> Martin
>>>
>>>
>>>> - store the packages as CircleCI artifacts
>>>>>> - anything else that is still missing
>>>>>>
>>>>>> Adding more architectures would be as easy as adding a new Dockerfile
>>>>>> with a base image of the respective architecture.
>>>>>>
>>>>>> Martin
>>>>>>
>>>>>>> 2) Travis CI
>>>>>>> 2.1) problems
>>>>>>> 2.1.1) generally Travis is slower than Circle!
>>>>>>> Although if we use the CircleCI 'machine' executor it will be slower
>>>>>>> than the current 'Docker' executor!
>>>>>>> 2.1.2) Travis supports only Ubuntu
>>>>>>> Current setup at CircleCI uses CentOS 7.
>>>>>>> I guess the build steps won't have problems on Ubuntu.
>>>>>>>
>>>>>>> 3) GitHub Actions
>>>>>>> GH Actions does not support ARM64 but it supports self-hosted ARM64
>>>>>>> runners
>>>>>>> 3.1) The problem is that there is no way to make a self-hosted
>>>>>>> runner really private. I.e. if someone forks Varnish Cache, any commit in
>>>>>>> the fork will trigger builds on the arm64 node. There is no way to reserve
>>>>>>> the runner only for commits against
>>>>>>> https://github.com/varnishcache/varnish-cache
>>>>>>>
>>>>>>> Do you see other problems or maybe different ways?
>>>>>>> Do you have preferences which way to go?
>>>>>>>
>>>>>>> Regards,
>>>>>>> Martin
>>>>>>>
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Martin
>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Guillaume Quintard
>>>>>>>>> _______________________________________________
>>>>>>>>> varnish-dev mailing list
>>>>>>>>> varnish-dev at varnish-cache.org
>>>>>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev
>>>>>>>>>
>>>>>>>> -------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From martin.grigorov at gmail.com  Wed Mar 25 09:30:00 2020
From: martin.grigorov at gmail.com (Martin Grigorov)
Date: Wed, 25 Mar 2020 11:30:00 +0200
Subject: Support for AARCH64
In-Reply-To: 
References: <8156.1583910935@critter.freebsd.dk>
Message-ID: 

Hi,

I've moved the 'dist' job to be executed in parallel with 'tar_pkg_tools'
and the results from both are shared in the workspace for the actual
packaging jobs.
Now the new error for the aarch64-apk job is:

abuild: varnish >>> varnish: Updating the sha512sums in APKBUILD...
 DEBUG: 4
abuild: varnish >>> varnish: Building /varnish 6.4.0-r1 (using abuild
3.5.0-r0) started Wed, 25 Mar 2020 09:22:02 +0000
>>> varnish: Checking sanity of /package/APKBUILD...
>>> WARNING: varnish: No maintainer
>>> varnish: Analyzing dependencies...
 0% %
############################################>>> varnish: Installing for
build: build-base gcc libc-dev libgcc pcre-dev ncurses-dev libedit-dev
py-docutils linux-headers libunwind-dev python py3-sphinx
Waiting for repository lock
ERROR: Unable to lock database: Bad file descriptor
ERROR: Failed to open apk database: Bad file descriptor
>>> ERROR: varnish: builddeps failed
>>> varnish: Uninstalling dependencies...
Waiting for repository lock
ERROR: Unable to lock database: Bad file descriptor
ERROR: Failed to open apk database: Bad file descriptor

Google suggested to do this:
rm -rf /var/cache/apk
mkdir /var/cache/apk

It fails at 'abuild -r' -
https://github.com/martin-g/varnish-cache/blob/b62c357b389c0e1e31e9c001cbffb55090c2e49f/.circleci/make-apk-packages.sh#L61

Any hints?
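
One hedged guess about this failure: "Unable to lock database: Bad file
descriptor" from apk under emulation has been reported as a
qemu-user-static shortcoming (its handling of the fcntl() file locking
that apk uses) rather than a problem in the APKBUILD itself. Refreshing
the registered QEMU binaries before the build may therefore be worth a
try; this is an assumption to verify, not a confirmed fix:

    # Pull the latest qemu-user-static build and re-register the binfmt
    # handlers with it before starting the aarch64 container.
    docker pull multiarch/qemu-user-static
    docker run --rm --privileged multiarch/qemu-user-static --reset -p yes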
Martin On Wed, Mar 25, 2020 at 2:39 AM Guillaume Quintard < guillaume at varnish-software.com> wrote: > Hi, > > So, you are pointing at the `dist` job, whose sole role is to provide us > with a dist tarball, so we don't need that command line to work for > everyone, just for that specific platform. > > On the other hand, > https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L168 is > closer to what you want, `distcheck` will be call on all platform, and you > can see that it has the `--with-unwind` argument. > -- > Guillaume Quintard > > > On Tue, Mar 24, 2020 at 3:05 PM Martin Grigorov > wrote: > >> >> >> On Tue, Mar 24, 2020, 17:19 Guillaume Quintard < >> guillaume at varnish-software.com> wrote: >> >>> Compare your configure line with what's currently in use (or the >>> apkbuild file), there are a few options (with-unwind, without-jemalloc, >>> etc.) That need to be set >>> >> >> The configure line comes from "./autogen.des": >> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/autogen.des#L35-L42 >> It is called at: >> >> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L40 >> In my branch at: >> >> https://github.com/martin-g/varnish-cache/blob/4b4626ee9cc366b032a45f27b54d77176125ef03/.circleci/make-apk-packages.sh#L26 >> >> It fails only on aarch64 for Alpine Linux. The x86_64 build for Alpine is >> fine. >> AARCH64 for CentOS 7 and Ubuntu 18.04 are also fine. >> >> Martin >> >> >>> On Tue, Mar 24, 2020, 08:05 Martin Grigorov >>> wrote: >>> >>>> Hi, >>>> >>>> On Tue, Mar 24, 2020 at 11:00 AM Martin Grigorov < >>>> martin.grigorov at gmail.com> wrote: >>>> >>>>> Hi Guillaume, >>>>> >>>>> On Mon, Mar 23, 2020 at 8:01 PM Guillaume Quintard < >>>>> guillaume at varnish-software.com> wrote: >>>>> >>>>>> Hi Martin, >>>>>> >>>>>> Thank you for that. >>>>>> A few remarks and questions: >>>>>> - how much time does the "docker build" step takes? We can possibly >>>>>> speed things up by push images to the dockerhub, as they don't need to >>>>>> change very often. >>>>>> >>>>> >>>>> Definitely such optimization would be a good thing to do! >>>>> At the moment, with 'machine' executor it fetches the base image and >>>>> then builds all the Docker layers again and again. >>>>> Here are the timings: >>>>> 1) Spinning up a VM - around 10secs >>>>> 2) prepare env variables - 0secs >>>>> 3) checkout code (varnish-cache) - 5secs >>>>> 4) activate QEMU - 2secs >>>>> 5) build packages >>>>> 5.1) x86 deb - 3m 30secs >>>>> 5.2) x86 rpm - 2m 50secs >>>>> 5.3) aarch64 rpm - 35mins >>>>> 5.4) aarch64 deb - 45mins >>>>> >>>>> >>>>>> - any reason why you clone pkg-varnish-cache in each job? The idea >>>>>> was to have it cloned once in tar-pkg-tools for consistency and >>>>>> reproducibility, which we lose here. >>>>>> >>>>> >>>>> I will extract the common steps once I see it working. This is my >>>>> first CircleCI project and I still find my ways in it! >>>>> >>>>> >>>>>> - do we want to change things for the amd64 platforms for the sake of >>>>>> consistency? >>>>>> >>>>> >>>>> So far there is nothing specific for amd4 or aarch64, except the base >>>>> Docker images. >>>>> For example make-deb-packages.sh is reused for both amd64 and aarch64 >>>>> builds. Same for -rpm- and now for -apk- (alpine). >>>>> >>>>> Once I feel the change is almost finished I will open a Pull Request >>>>> for more comments! 
>>>>> >>>>> Martin >>>>> >>>>> >>>>>> >>>>>> -- >>>>>> Guillaume Quintard >>>>>> >>>>>> >>>>>> On Mon, Mar 23, 2020 at 6:25 AM Martin Grigorov < >>>>>> martin.grigorov at gmail.com> wrote: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> >>>>>>> On Wed, Mar 18, 2020 at 5:31 PM Martin Grigorov < >>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> On Thu, Mar 12, 2020 at 4:35 PM Martin Grigorov < >>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hi Guillaume, >>>>>>>>> >>>>>>>>> On Thu, Mar 12, 2020 at 3:23 PM Guillaume Quintard < >>>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>>> >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> Offering arm64 packages requires a few things: >>>>>>>>>> - arm64-compatible code (all good in >>>>>>>>>> https://github.com/varnishcache/varnish-cache) >>>>>>>>>> - arm64-compatible package framework (all good in >>>>>>>>>> https://github.com/varnishcache/pkg-varnish-cache) >>>>>>>>>> - infrastructure to build the packages (uhoh, see below) >>>>>>>>>> - infrastructure to store and deliver ( >>>>>>>>>> https://packagecloud.io/varnishcache) >>>>>>>>>> >>>>>>>>>> So, everything is in place, expect for the third point. At the >>>>>>>>>> moment, there are two concurrent CI implementations: >>>>>>>>>> - travis: >>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.travis.yml It's >>>>>>>>>> the historical one, and currently only runs compilation+test for OSX >>>>>>>>>> >>>>>>>>> >>>>>>>>> Actually it tests Linux AMD64 and ARM64 too. >>>>>>>>> >>>>>>>>> >>>>>>>>>> - circleci: >>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.circleci/config.yml the >>>>>>>>>> new kid on the block, that builds all the packages and distchecks for all >>>>>>>>>> the packaged platforms >>>>>>>>>> >>>>>>>>>> The issue is that cirecleci doesn't support arm64 containers (for >>>>>>>>>> now?), so we would need to re-implement the packaging logic in Travis. It's >>>>>>>>>> not a big problem, but it's currently not a priority on my side. >>>>>>>>>> >>>>>>>>>> However, I am totally ready to provide help if someone wants to >>>>>>>>>> take that up. The added benefit it that Travis would be able to handle >>>>>>>>>> everything and we can retire the circleci experiment >>>>>>>>>> >>>>>>>>> >>>>>>>>> I will take a look in the coming days and ask you if I need help! >>>>>>>>> >>>>>>>> >>>>>>>> I've took a look at the current setup and here is what I've found >>>>>>>> as problems and possible solutions: >>>>>>>> >>>>>>>> 1) Circle CI >>>>>>>> 1.1) problem - the 'machine' and 'Docker' executors run on x86_64, >>>>>>>> so there is no way to build the packages in a "native" environment >>>>>>>> 1.2) possible solutions >>>>>>>> 1.2.1) use multiarch cross build >>>>>>>> 1.2.2) use 'machine' executor that registers QEMU via >>>>>>>> https://hub.docker.com/r/multiarch/qemu-user-static/ and then >>>>>>>> builds and runs a custom Docker image that executes a shell script with the >>>>>>>> build steps >>>>>>>> It will look something like >>>>>>>> https://github.com/yukimochi-containers/alpine-vpnserver/blob/69bb0a612c9df3e4ba78064d114751b760f0df9d/.circleci/config.yml#L19-L38 but >>>>>>>> instead of uploading the Docker image as a last step it will run it. 
>>>>>>>> The RPM and DEB build related code from current config.yml will be >>>>>>>> extracted into shell scripts which will be copied in the custom Docker >>>>>>>> images >>>>>>>> >>>>>>>> From these two possible ways I have better picture in my head how >>>>>>>> to do 1.2.2, but I don't mind going deep in 1.2.1 if this is what you'd >>>>>>>> prefer. >>>>>>>> >>>>>>> >>>>>>> I've decided to stay with Circle CI and use 'machine' executor with >>>>>>> QEMU. >>>>>>> >>>>>>> The changed config.yml could be seen at >>>>>>> https://github.com/martin-g/varnish-cache/tree/feature/aarch64-packages/.circleci and >>>>>>> the build at >>>>>>> https://app.circleci.com/pipelines/github/martin-g/varnish-cache/71/workflows/3a275d79-62a9-48b4-9aef-1585de1c87c8 >>>>>>> The builds on x86 arch take 3-4 mins, but for aarch64 (emulation!) >>>>>>> ~40mins >>>>>>> For now the jobs just build the .deb & .rpm packages for CentOS 7 >>>>>>> and Ubuntu 18.04, both amd64 and aarch64. >>>>>>> TODOs: >>>>>>> - migrate Alpine >>>>>>> >>>>>> >>>> Build on Alpine aarch64 fails with: >>>> ... >>>> automake: this behaviour will change in future Automake versions: they >>>> will >>>> automake: unconditionally cause object files to be placed in the same >>>> subdirectory >>>> automake: of the corresponding sources. >>>> automake: project, to avoid future incompatibilities. >>>> parallel-tests: installing 'build-aux/test-driver' >>>> lib/libvmod_debug/Makefile.am:12: warning: libvmod_debug_la_LDFLAGS >>>> multiply defined in condition TRUE ... >>>> lib/libvmod_debug/automake_boilerplate.am:19: ... >>>> 'libvmod_debug_la_LDFLAGS' previously defined here >>>> lib/libvmod_debug/Makefile.am:9: 'lib/libvmod_debug/ >>>> automake_boilerplate.am' included from here >>>> + autoconf >>>> + CONFIG_SHELL=/bin/sh >>>> + export CONFIG_SHELL >>>> + ./configure '--prefix=/opt/varnish' '--mandir=/opt/varnish/man' >>>> --enable-maintainer-mode --enable-developer-warnings >>>> --enable-debugging-symbols --enable-dependency-tracking >>>> --with-persistent-storage --quiet >>>> configure: WARNING: dot not found - build will fail if svg files are >>>> out of date. >>>> configure: WARNING: No system jemalloc found, using system malloc >>>> configure: error: Could not find backtrace() support >>>> >>>> Does anyone know a workaround ? >>>> I use multiarch/alpine:aarch64-edge as a base Docker image >>>> >>>> Martin >>>> >>>> >>>> >>>>> - store the packages as CircleCI artifacts >>>>>>> - anything else that is still missing >>>>>>> >>>>>>> Adding more architectures would be as easy as adding a new >>>>>>> Dockerfile with a base image from the respective type. >>>>>>> >>>>>>> Martin >>>>>>> >>>>>>> >>>>>>>> 2) Travis CI >>>>>>>> 2.1) problems >>>>>>>> 2.1.1) generally Travis is slower than Circle! >>>>>>>> Althought if we use CircleCI 'machine' executor it will be slower >>>>>>>> than the current 'Docker' executor! >>>>>>>> 2.1.2) Travis supports only Ubuntu >>>>>>>> Current setup at CircleCI uses CentOS 7. >>>>>>>> I guess the build steps won't have problems on Ubuntu. >>>>>>>> >>>>>>>> 3) GitHub Actions >>>>>>>> GH Actions does not support ARM64 but it supports self hosted ARM64 >>>>>>>> runners >>>>>>>> 3.1) The problem is that there is no way to make a self hosted >>>>>>>> runner really private. I.e. if someone forks Varnish Cache any commit in >>>>>>>> the fork will trigger builds on the arm64 node. 
There is no way to reserve >>>>>>>> the runner only for commits against >>>>>>>> https://github.com/varnishcache/varnish-cache >>>>>>>> >>>>>>>> Do you see other problems or maybe different ways ? >>>>>>>> Do you have preferences which way to go ? >>>>>>>> >>>>>>>> Regards, >>>>>>>> Martin >>>>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> Regards, >>>>>>>>> Martin >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> Guillaume Quintard >>>>>>>>>> _______________________________________________ >>>>>>>>>> varnish-dev mailing list >>>>>>>>>> varnish-dev at varnish-cache.org >>>>>>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev >>>>>>>>>> >>>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at varnish-software.com Wed Mar 25 18:14:48 2020 From: guillaume at varnish-software.com (Guillaume Quintard) Date: Wed, 25 Mar 2020 11:14:48 -0700 Subject: Support for AARCH64 In-Reply-To: References: <8156.1583910935@critter.freebsd.dk> Message-ID: is that script running as root? -- Guillaume Quintard On Wed, Mar 25, 2020 at 2:30 AM Martin Grigorov wrote: > Hi, > > I've moved 'dist' job to be executed in parallel with 'tar_pkg_tools' and > the results from both are shared in the workspace for the actual packing > jobs. > Now the new error for aarch64-apk job is: > > abuild: varnish >>> varnish: Updating the sha512sums in APKBUILD... > ]0; DEBUG: 4 > ]0;abuild: varnish >>> varnish: Building /varnish 6.4.0-r1 (using abuild > 3.5.0-r0) started Wed, 25 Mar 2020 09:22:02 +0000 > >>> varnish: Checking sanity of /package/APKBUILD... > >>> WARNING: varnish: No maintainer > >>> varnish: Analyzing dependencies... > 0% % > ############################################>>> varnish: Installing for > build: build-base gcc libc-dev libgcc pcre-dev ncurses-dev libedit-dev > py-docutils linux-headers libunwind-dev python py3-sphinx > Waiting for repository lock > ERROR: Unable to lock database: Bad file descriptor > ERROR: Failed to open apk database: Bad file descriptor > >>> ERROR: varnish: builddeps failed > ]0; >>> varnish: Uninstalling dependencies... > Waiting for repository lock > ERROR: Unable to lock database: Bad file descriptor > ERROR: Failed to open apk database: Bad file descriptor > > Google suggested to do this: > rm -rf /var/cache/apk > mkdir /var/cache/apk > > It fails at 'abuild -r' - > https://github.com/martin-g/varnish-cache/blob/b62c357b389c0e1e31e9c001cbffb55090c2e49f/.circleci/make-apk-packages.sh#L61 > > Any hints ? > > Martin > > On Wed, Mar 25, 2020 at 2:39 AM Guillaume Quintard < > guillaume at varnish-software.com> wrote: > >> Hi, >> >> So, you are pointing at the `dist` job, whose sole role is to provide us >> with a dist tarball, so we don't need that command line to work for >> everyone, just for that specific platform. >> >> On the other hand, >> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L168 is >> closer to what you want, `distcheck` will be call on all platform, and you >> can see that it has the `--with-unwind` argument. >> -- >> Guillaume Quintard >> >> >> On Tue, Mar 24, 2020 at 3:05 PM Martin Grigorov < >> martin.grigorov at gmail.com> wrote: >> >>> >>> >>> On Tue, Mar 24, 2020, 17:19 Guillaume Quintard < >>> guillaume at varnish-software.com> wrote: >>> >>>> Compare your configure line with what's currently in use (or the >>>> apkbuild file), there are a few options (with-unwind, without-jemalloc, >>>> etc.) 
That need to be set >>>> >>> >>> The configure line comes from "./autogen.des": >>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/autogen.des#L35-L42 >>> It is called at: >>> >>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L40 >>> In my branch at: >>> >>> https://github.com/martin-g/varnish-cache/blob/4b4626ee9cc366b032a45f27b54d77176125ef03/.circleci/make-apk-packages.sh#L26 >>> >>> It fails only on aarch64 for Alpine Linux. The x86_64 build for Alpine >>> is fine. >>> AARCH64 for CentOS 7 and Ubuntu 18.04 are also fine. >>> >>> Martin >>> >>> >>>> On Tue, Mar 24, 2020, 08:05 Martin Grigorov >>>> wrote: >>>> >>>>> Hi, >>>>> >>>>> On Tue, Mar 24, 2020 at 11:00 AM Martin Grigorov < >>>>> martin.grigorov at gmail.com> wrote: >>>>> >>>>>> Hi Guillaume, >>>>>> >>>>>> On Mon, Mar 23, 2020 at 8:01 PM Guillaume Quintard < >>>>>> guillaume at varnish-software.com> wrote: >>>>>> >>>>>>> Hi Martin, >>>>>>> >>>>>>> Thank you for that. >>>>>>> A few remarks and questions: >>>>>>> - how much time does the "docker build" step takes? We can possibly >>>>>>> speed things up by push images to the dockerhub, as they don't need to >>>>>>> change very often. >>>>>>> >>>>>> >>>>>> Definitely such optimization would be a good thing to do! >>>>>> At the moment, with 'machine' executor it fetches the base image and >>>>>> then builds all the Docker layers again and again. >>>>>> Here are the timings: >>>>>> 1) Spinning up a VM - around 10secs >>>>>> 2) prepare env variables - 0secs >>>>>> 3) checkout code (varnish-cache) - 5secs >>>>>> 4) activate QEMU - 2secs >>>>>> 5) build packages >>>>>> 5.1) x86 deb - 3m 30secs >>>>>> 5.2) x86 rpm - 2m 50secs >>>>>> 5.3) aarch64 rpm - 35mins >>>>>> 5.4) aarch64 deb - 45mins >>>>>> >>>>>> >>>>>>> - any reason why you clone pkg-varnish-cache in each job? The idea >>>>>>> was to have it cloned once in tar-pkg-tools for consistency and >>>>>>> reproducibility, which we lose here. >>>>>>> >>>>>> >>>>>> I will extract the common steps once I see it working. This is my >>>>>> first CircleCI project and I still find my ways in it! >>>>>> >>>>>> >>>>>>> - do we want to change things for the amd64 platforms for the sake >>>>>>> of consistency? >>>>>>> >>>>>> >>>>>> So far there is nothing specific for amd4 or aarch64, except the base >>>>>> Docker images. >>>>>> For example make-deb-packages.sh is reused for both amd64 and aarch64 >>>>>> builds. Same for -rpm- and now for -apk- (alpine). >>>>>> >>>>>> Once I feel the change is almost finished I will open a Pull Request >>>>>> for more comments! 
>>>>>> >>>>>> Martin >>>>>> >>>>>> >>>>>>> >>>>>>> -- >>>>>>> Guillaume Quintard >>>>>>> >>>>>>> >>>>>>> On Mon, Mar 23, 2020 at 6:25 AM Martin Grigorov < >>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> >>>>>>>> On Wed, Mar 18, 2020 at 5:31 PM Martin Grigorov < >>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> On Thu, Mar 12, 2020 at 4:35 PM Martin Grigorov < >>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Hi Guillaume, >>>>>>>>>> >>>>>>>>>> On Thu, Mar 12, 2020 at 3:23 PM Guillaume Quintard < >>>>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>>>> >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> Offering arm64 packages requires a few things: >>>>>>>>>>> - arm64-compatible code (all good in >>>>>>>>>>> https://github.com/varnishcache/varnish-cache) >>>>>>>>>>> - arm64-compatible package framework (all good in >>>>>>>>>>> https://github.com/varnishcache/pkg-varnish-cache) >>>>>>>>>>> - infrastructure to build the packages (uhoh, see below) >>>>>>>>>>> - infrastructure to store and deliver ( >>>>>>>>>>> https://packagecloud.io/varnishcache) >>>>>>>>>>> >>>>>>>>>>> So, everything is in place, expect for the third point. At the >>>>>>>>>>> moment, there are two concurrent CI implementations: >>>>>>>>>>> - travis: >>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.travis.yml It's >>>>>>>>>>> the historical one, and currently only runs compilation+test for OSX >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Actually it tests Linux AMD64 and ARM64 too. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> - circleci: >>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.circleci/config.yml the >>>>>>>>>>> new kid on the block, that builds all the packages and distchecks for all >>>>>>>>>>> the packaged platforms >>>>>>>>>>> >>>>>>>>>>> The issue is that cirecleci doesn't support arm64 containers >>>>>>>>>>> (for now?), so we would need to re-implement the packaging logic in Travis. >>>>>>>>>>> It's not a big problem, but it's currently not a priority on my side. >>>>>>>>>>> >>>>>>>>>>> However, I am totally ready to provide help if someone wants to >>>>>>>>>>> take that up. The added benefit it that Travis would be able to handle >>>>>>>>>>> everything and we can retire the circleci experiment >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> I will take a look in the coming days and ask you if I need help! >>>>>>>>>> >>>>>>>>> >>>>>>>>> I've took a look at the current setup and here is what I've found >>>>>>>>> as problems and possible solutions: >>>>>>>>> >>>>>>>>> 1) Circle CI >>>>>>>>> 1.1) problem - the 'machine' and 'Docker' executors run on x86_64, >>>>>>>>> so there is no way to build the packages in a "native" environment >>>>>>>>> 1.2) possible solutions >>>>>>>>> 1.2.1) use multiarch cross build >>>>>>>>> 1.2.2) use 'machine' executor that registers QEMU via >>>>>>>>> https://hub.docker.com/r/multiarch/qemu-user-static/ and then >>>>>>>>> builds and runs a custom Docker image that executes a shell script with the >>>>>>>>> build steps >>>>>>>>> It will look something like >>>>>>>>> https://github.com/yukimochi-containers/alpine-vpnserver/blob/69bb0a612c9df3e4ba78064d114751b760f0df9d/.circleci/config.yml#L19-L38 but >>>>>>>>> instead of uploading the Docker image as a last step it will run it. 
>>>>>>>>> The RPM and DEB build related code from current config.yml will be >>>>>>>>> extracted into shell scripts which will be copied in the custom Docker >>>>>>>>> images >>>>>>>>> >>>>>>>>> From these two possible ways I have better picture in my head how >>>>>>>>> to do 1.2.2, but I don't mind going deep in 1.2.1 if this is what you'd >>>>>>>>> prefer. >>>>>>>>> >>>>>>>> >>>>>>>> I've decided to stay with Circle CI and use 'machine' executor with >>>>>>>> QEMU. >>>>>>>> >>>>>>>> The changed config.yml could be seen at >>>>>>>> https://github.com/martin-g/varnish-cache/tree/feature/aarch64-packages/.circleci and >>>>>>>> the build at >>>>>>>> https://app.circleci.com/pipelines/github/martin-g/varnish-cache/71/workflows/3a275d79-62a9-48b4-9aef-1585de1c87c8 >>>>>>>> The builds on x86 arch take 3-4 mins, but for aarch64 (emulation!) >>>>>>>> ~40mins >>>>>>>> For now the jobs just build the .deb & .rpm packages for CentOS 7 >>>>>>>> and Ubuntu 18.04, both amd64 and aarch64. >>>>>>>> TODOs: >>>>>>>> - migrate Alpine >>>>>>>> >>>>>>> >>>>> Build on Alpine aarch64 fails with: >>>>> ... >>>>> automake: this behaviour will change in future Automake versions: they >>>>> will >>>>> automake: unconditionally cause object files to be placed in the same >>>>> subdirectory >>>>> automake: of the corresponding sources. >>>>> automake: project, to avoid future incompatibilities. >>>>> parallel-tests: installing 'build-aux/test-driver' >>>>> lib/libvmod_debug/Makefile.am:12: warning: libvmod_debug_la_LDFLAGS >>>>> multiply defined in condition TRUE ... >>>>> lib/libvmod_debug/automake_boilerplate.am:19: ... >>>>> 'libvmod_debug_la_LDFLAGS' previously defined here >>>>> lib/libvmod_debug/Makefile.am:9: 'lib/libvmod_debug/ >>>>> automake_boilerplate.am' included from here >>>>> + autoconf >>>>> + CONFIG_SHELL=/bin/sh >>>>> + export CONFIG_SHELL >>>>> + ./configure '--prefix=/opt/varnish' '--mandir=/opt/varnish/man' >>>>> --enable-maintainer-mode --enable-developer-warnings >>>>> --enable-debugging-symbols --enable-dependency-tracking >>>>> --with-persistent-storage --quiet >>>>> configure: WARNING: dot not found - build will fail if svg files are >>>>> out of date. >>>>> configure: WARNING: No system jemalloc found, using system malloc >>>>> configure: error: Could not find backtrace() support >>>>> >>>>> Does anyone know a workaround ? >>>>> I use multiarch/alpine:aarch64-edge as a base Docker image >>>>> >>>>> Martin >>>>> >>>>> >>>>> >>>>>> - store the packages as CircleCI artifacts >>>>>>>> - anything else that is still missing >>>>>>>> >>>>>>>> Adding more architectures would be as easy as adding a new >>>>>>>> Dockerfile with a base image from the respective type. >>>>>>>> >>>>>>>> Martin >>>>>>>> >>>>>>>> >>>>>>>>> 2) Travis CI >>>>>>>>> 2.1) problems >>>>>>>>> 2.1.1) generally Travis is slower than Circle! >>>>>>>>> Althought if we use CircleCI 'machine' executor it will be slower >>>>>>>>> than the current 'Docker' executor! >>>>>>>>> 2.1.2) Travis supports only Ubuntu >>>>>>>>> Current setup at CircleCI uses CentOS 7. >>>>>>>>> I guess the build steps won't have problems on Ubuntu. >>>>>>>>> >>>>>>>>> 3) GitHub Actions >>>>>>>>> GH Actions does not support ARM64 but it supports self hosted >>>>>>>>> ARM64 runners >>>>>>>>> 3.1) The problem is that there is no way to make a self hosted >>>>>>>>> runner really private. I.e. if someone forks Varnish Cache any commit in >>>>>>>>> the fork will trigger builds on the arm64 node. 
There is no way to reserve >>>>>>>>> the runner only for commits against >>>>>>>>> https://github.com/varnishcache/varnish-cache >>>>>>>>> >>>>>>>>> Do you see other problems or maybe different ways ? >>>>>>>>> Do you have preferences which way to go ? >>>>>>>>> >>>>>>>>> Regards, >>>>>>>>> Martin >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> Regards, >>>>>>>>>> Martin >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> Guillaume Quintard >>>>>>>>>>> _______________________________________________ >>>>>>>>>>> varnish-dev mailing list >>>>>>>>>>> varnish-dev at varnish-cache.org >>>>>>>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev >>>>>>>>>>> >>>>>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.grigorov at gmail.com Wed Mar 25 19:55:18 2020 From: martin.grigorov at gmail.com (Martin Grigorov) Date: Wed, 25 Mar 2020 21:55:18 +0200 Subject: Support for AARCH64 In-Reply-To: References: <8156.1583910935@critter.freebsd.dk> Message-ID: Hi, On Wed, Mar 25, 2020, 20:15 Guillaume Quintard < guillaume at varnish-software.com> wrote: > is that script running as root? > Yes. I also added 'USER root' to its Dockerfile and '-u 0' to 'docker run' arguments but it still doesn't work. The x86 build is OK. It must be something in the base docker image. I've disabled the Alpine aarch64 job for now. I'll send a PR tomorrow! Regards, Martin > -- > Guillaume Quintard > > > On Wed, Mar 25, 2020 at 2:30 AM Martin Grigorov > wrote: > >> Hi, >> >> I've moved 'dist' job to be executed in parallel with 'tar_pkg_tools' and >> the results from both are shared in the workspace for the actual packing >> jobs. >> Now the new error for aarch64-apk job is: >> >> abuild: varnish >>> varnish: Updating the sha512sums in APKBUILD... >> ]0; DEBUG: 4 >> ]0;abuild: varnish >>> varnish: Building /varnish 6.4.0-r1 (using abuild >> 3.5.0-r0) started Wed, 25 Mar 2020 09:22:02 +0000 >> >>> varnish: Checking sanity of /package/APKBUILD... >> >>> WARNING: varnish: No maintainer >> >>> varnish: Analyzing dependencies... >> 0% % >> ############################################>>> varnish: Installing for >> build: build-base gcc libc-dev libgcc pcre-dev ncurses-dev libedit-dev >> py-docutils linux-headers libunwind-dev python py3-sphinx >> Waiting for repository lock >> ERROR: Unable to lock database: Bad file descriptor >> ERROR: Failed to open apk database: Bad file descriptor >> >>> ERROR: varnish: builddeps failed >> ]0; >>> varnish: Uninstalling dependencies... >> Waiting for repository lock >> ERROR: Unable to lock database: Bad file descriptor >> ERROR: Failed to open apk database: Bad file descriptor >> >> Google suggested to do this: >> rm -rf /var/cache/apk >> mkdir /var/cache/apk >> >> It fails at 'abuild -r' - >> https://github.com/martin-g/varnish-cache/blob/b62c357b389c0e1e31e9c001cbffb55090c2e49f/.circleci/make-apk-packages.sh#L61 >> >> Any hints ? >> >> Martin >> >> On Wed, Mar 25, 2020 at 2:39 AM Guillaume Quintard < >> guillaume at varnish-software.com> wrote: >> >>> Hi, >>> >>> So, you are pointing at the `dist` job, whose sole role is to provide us >>> with a dist tarball, so we don't need that command line to work for >>> everyone, just for that specific platform. >>> >>> On the other hand, >>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L168 is >>> closer to what you want, `distcheck` will be call on all platform, and you >>> can see that it has the `--with-unwind` argument. 
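
Put together, the fix hinted at here could look like the following inside
the Alpine aarch64 container - a sketch assuming that libunwind stands in
for the glibc backtrace() that musl lacks, mirroring the --with-unwind
flag already used by the distcheck jobs:

    # musl has no backtrace(), so build against libunwind instead.
    apk add --no-cache libunwind-dev

    ./autogen.sh
    ./configure --prefix=/opt/varnish --with-unwind
    make -j"$(nproc)"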
>>> -- >>> Guillaume Quintard >>> >>> >>> On Tue, Mar 24, 2020 at 3:05 PM Martin Grigorov < >>> martin.grigorov at gmail.com> wrote: >>> >>>> >>>> >>>> On Tue, Mar 24, 2020, 17:19 Guillaume Quintard < >>>> guillaume at varnish-software.com> wrote: >>>> >>>>> Compare your configure line with what's currently in use (or the >>>>> apkbuild file), there are a few options (with-unwind, without-jemalloc, >>>>> etc.) That need to be set >>>>> >>>> >>>> The configure line comes from "./autogen.des": >>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/autogen.des#L35-L42 >>>> It is called at: >>>> >>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L40 >>>> In my branch at: >>>> >>>> https://github.com/martin-g/varnish-cache/blob/4b4626ee9cc366b032a45f27b54d77176125ef03/.circleci/make-apk-packages.sh#L26 >>>> >>>> It fails only on aarch64 for Alpine Linux. The x86_64 build for Alpine >>>> is fine. >>>> AARCH64 for CentOS 7 and Ubuntu 18.04 are also fine. >>>> >>>> Martin >>>> >>>> >>>>> On Tue, Mar 24, 2020, 08:05 Martin Grigorov >>>>> wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> On Tue, Mar 24, 2020 at 11:00 AM Martin Grigorov < >>>>>> martin.grigorov at gmail.com> wrote: >>>>>> >>>>>>> Hi Guillaume, >>>>>>> >>>>>>> On Mon, Mar 23, 2020 at 8:01 PM Guillaume Quintard < >>>>>>> guillaume at varnish-software.com> wrote: >>>>>>> >>>>>>>> Hi Martin, >>>>>>>> >>>>>>>> Thank you for that. >>>>>>>> A few remarks and questions: >>>>>>>> - how much time does the "docker build" step takes? We can possibly >>>>>>>> speed things up by push images to the dockerhub, as they don't need to >>>>>>>> change very often. >>>>>>>> >>>>>>> >>>>>>> Definitely such optimization would be a good thing to do! >>>>>>> At the moment, with 'machine' executor it fetches the base image and >>>>>>> then builds all the Docker layers again and again. >>>>>>> Here are the timings: >>>>>>> 1) Spinning up a VM - around 10secs >>>>>>> 2) prepare env variables - 0secs >>>>>>> 3) checkout code (varnish-cache) - 5secs >>>>>>> 4) activate QEMU - 2secs >>>>>>> 5) build packages >>>>>>> 5.1) x86 deb - 3m 30secs >>>>>>> 5.2) x86 rpm - 2m 50secs >>>>>>> 5.3) aarch64 rpm - 35mins >>>>>>> 5.4) aarch64 deb - 45mins >>>>>>> >>>>>>> >>>>>>>> - any reason why you clone pkg-varnish-cache in each job? The idea >>>>>>>> was to have it cloned once in tar-pkg-tools for consistency and >>>>>>>> reproducibility, which we lose here. >>>>>>>> >>>>>>> >>>>>>> I will extract the common steps once I see it working. This is my >>>>>>> first CircleCI project and I still find my ways in it! >>>>>>> >>>>>>> >>>>>>>> - do we want to change things for the amd64 platforms for the sake >>>>>>>> of consistency? >>>>>>>> >>>>>>> >>>>>>> So far there is nothing specific for amd4 or aarch64, except the >>>>>>> base Docker images. >>>>>>> For example make-deb-packages.sh is reused for both amd64 and >>>>>>> aarch64 builds. Same for -rpm- and now for -apk- (alpine). >>>>>>> >>>>>>> Once I feel the change is almost finished I will open a Pull Request >>>>>>> for more comments! 
>>>>>>> >>>>>>> Martin >>>>>>> >>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Guillaume Quintard >>>>>>>> >>>>>>>> >>>>>>>> On Mon, Mar 23, 2020 at 6:25 AM Martin Grigorov < >>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> >>>>>>>>> On Wed, Mar 18, 2020 at 5:31 PM Martin Grigorov < >>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> On Thu, Mar 12, 2020 at 4:35 PM Martin Grigorov < >>>>>>>>>> martin.grigorov at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> Hi Guillaume, >>>>>>>>>>> >>>>>>>>>>> On Thu, Mar 12, 2020 at 3:23 PM Guillaume Quintard < >>>>>>>>>>> guillaume at varnish-software.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> Hi, >>>>>>>>>>>> >>>>>>>>>>>> Offering arm64 packages requires a few things: >>>>>>>>>>>> - arm64-compatible code (all good in >>>>>>>>>>>> https://github.com/varnishcache/varnish-cache) >>>>>>>>>>>> - arm64-compatible package framework (all good in >>>>>>>>>>>> https://github.com/varnishcache/pkg-varnish-cache) >>>>>>>>>>>> - infrastructure to build the packages (uhoh, see below) >>>>>>>>>>>> - infrastructure to store and deliver ( >>>>>>>>>>>> https://packagecloud.io/varnishcache) >>>>>>>>>>>> >>>>>>>>>>>> So, everything is in place, expect for the third point. At the >>>>>>>>>>>> moment, there are two concurrent CI implementations: >>>>>>>>>>>> - travis: >>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.travis.yml It's >>>>>>>>>>>> the historical one, and currently only runs compilation+test for OSX >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Actually it tests Linux AMD64 and ARM64 too. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> - circleci: >>>>>>>>>>>> https://github.com/varnishcache/varnish-cache/blob/master/.circleci/config.yml the >>>>>>>>>>>> new kid on the block, that builds all the packages and distchecks for all >>>>>>>>>>>> the packaged platforms >>>>>>>>>>>> >>>>>>>>>>>> The issue is that cirecleci doesn't support arm64 containers >>>>>>>>>>>> (for now?), so we would need to re-implement the packaging logic in Travis. >>>>>>>>>>>> It's not a big problem, but it's currently not a priority on my side. >>>>>>>>>>>> >>>>>>>>>>>> However, I am totally ready to provide help if someone wants to >>>>>>>>>>>> take that up. The added benefit it that Travis would be able to handle >>>>>>>>>>>> everything and we can retire the circleci experiment >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> I will take a look in the coming days and ask you if I need help! >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> I've took a look at the current setup and here is what I've found >>>>>>>>>> as problems and possible solutions: >>>>>>>>>> >>>>>>>>>> 1) Circle CI >>>>>>>>>> 1.1) problem - the 'machine' and 'Docker' executors run on >>>>>>>>>> x86_64, so there is no way to build the packages in a "native" environment >>>>>>>>>> 1.2) possible solutions >>>>>>>>>> 1.2.1) use multiarch cross build >>>>>>>>>> 1.2.2) use 'machine' executor that registers QEMU via >>>>>>>>>> https://hub.docker.com/r/multiarch/qemu-user-static/ and then >>>>>>>>>> builds and runs a custom Docker image that executes a shell script with the >>>>>>>>>> build steps >>>>>>>>>> It will look something like >>>>>>>>>> https://github.com/yukimochi-containers/alpine-vpnserver/blob/69bb0a612c9df3e4ba78064d114751b760f0df9d/.circleci/config.yml#L19-L38 but >>>>>>>>>> instead of uploading the Docker image as a last step it will run it. 
>>>>>>>>>> The RPM and DEB build related code from current config.yml will >>>>>>>>>> be extracted into shell scripts which will be copied in the custom Docker >>>>>>>>>> images >>>>>>>>>> >>>>>>>>>> From these two possible ways I have better picture in my head how >>>>>>>>>> to do 1.2.2, but I don't mind going deep in 1.2.1 if this is what you'd >>>>>>>>>> prefer. >>>>>>>>>> >>>>>>>>> >>>>>>>>> I've decided to stay with Circle CI and use 'machine' executor >>>>>>>>> with QEMU. >>>>>>>>> >>>>>>>>> The changed config.yml could be seen at >>>>>>>>> https://github.com/martin-g/varnish-cache/tree/feature/aarch64-packages/.circleci and >>>>>>>>> the build at >>>>>>>>> https://app.circleci.com/pipelines/github/martin-g/varnish-cache/71/workflows/3a275d79-62a9-48b4-9aef-1585de1c87c8 >>>>>>>>> The builds on x86 arch take 3-4 mins, but for aarch64 (emulation!) >>>>>>>>> ~40mins >>>>>>>>> For now the jobs just build the .deb & .rpm packages for CentOS 7 >>>>>>>>> and Ubuntu 18.04, both amd64 and aarch64. >>>>>>>>> TODOs: >>>>>>>>> - migrate Alpine >>>>>>>>> >>>>>>>> >>>>>> Build on Alpine aarch64 fails with: >>>>>> ... >>>>>> automake: this behaviour will change in future Automake versions: >>>>>> they will >>>>>> automake: unconditionally cause object files to be placed in the same >>>>>> subdirectory >>>>>> automake: of the corresponding sources. >>>>>> automake: project, to avoid future incompatibilities. >>>>>> parallel-tests: installing 'build-aux/test-driver' >>>>>> lib/libvmod_debug/Makefile.am:12: warning: libvmod_debug_la_LDFLAGS >>>>>> multiply defined in condition TRUE ... >>>>>> lib/libvmod_debug/automake_boilerplate.am:19: ... >>>>>> 'libvmod_debug_la_LDFLAGS' previously defined here >>>>>> lib/libvmod_debug/Makefile.am:9: 'lib/libvmod_debug/ >>>>>> automake_boilerplate.am' included from here >>>>>> + autoconf >>>>>> + CONFIG_SHELL=/bin/sh >>>>>> + export CONFIG_SHELL >>>>>> + ./configure '--prefix=/opt/varnish' '--mandir=/opt/varnish/man' >>>>>> --enable-maintainer-mode --enable-developer-warnings >>>>>> --enable-debugging-symbols --enable-dependency-tracking >>>>>> --with-persistent-storage --quiet >>>>>> configure: WARNING: dot not found - build will fail if svg files are >>>>>> out of date. >>>>>> configure: WARNING: No system jemalloc found, using system malloc >>>>>> configure: error: Could not find backtrace() support >>>>>> >>>>>> Does anyone know a workaround ? >>>>>> I use multiarch/alpine:aarch64-edge as a base Docker image >>>>>> >>>>>> Martin >>>>>> >>>>>> >>>>>> >>>>>>> - store the packages as CircleCI artifacts >>>>>>>>> - anything else that is still missing >>>>>>>>> >>>>>>>>> Adding more architectures would be as easy as adding a new >>>>>>>>> Dockerfile with a base image from the respective type. >>>>>>>>> >>>>>>>>> Martin >>>>>>>>> >>>>>>>>> >>>>>>>>>> 2) Travis CI >>>>>>>>>> 2.1) problems >>>>>>>>>> 2.1.1) generally Travis is slower than Circle! >>>>>>>>>> Althought if we use CircleCI 'machine' executor it will be slower >>>>>>>>>> than the current 'Docker' executor! >>>>>>>>>> 2.1.2) Travis supports only Ubuntu >>>>>>>>>> Current setup at CircleCI uses CentOS 7. >>>>>>>>>> I guess the build steps won't have problems on Ubuntu. >>>>>>>>>> >>>>>>>>>> 3) GitHub Actions >>>>>>>>>> GH Actions does not support ARM64 but it supports self hosted >>>>>>>>>> ARM64 runners >>>>>>>>>> 3.1) The problem is that there is no way to make a self hosted >>>>>>>>>> runner really private. I.e. 
>>>>>>>>> - store the packages as CircleCI artifacts
>>>>>>>>> - anything else that is still missing
>>>>>>>>>
>>>>>>>>> Adding more architectures would be as easy as adding a new
>>>>>>>>> Dockerfile with a base image of the respective type.
>>>>>>>>>
>>>>>>>>> Martin
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> 2) Travis CI
>>>>>>>>>> 2.1) problems
>>>>>>>>>> 2.1.1) generally Travis is slower than Circle!
>>>>>>>>>> Although if we use the CircleCI 'machine' executor it will be slower
>>>>>>>>>> than the current 'Docker' executor!
>>>>>>>>>> 2.1.2) Travis supports only Ubuntu
>>>>>>>>>> The current setup at CircleCI uses CentOS 7.
>>>>>>>>>> I guess the build steps won't have problems on Ubuntu.
>>>>>>>>>>
>>>>>>>>>> 3) GitHub Actions
>>>>>>>>>> GH Actions does not support ARM64 but it supports self-hosted
>>>>>>>>>> ARM64 runners
>>>>>>>>>> 3.1) The problem is that there is no way to make a self-hosted
>>>>>>>>>> runner really private, i.e. if someone forks Varnish Cache, any commit in
>>>>>>>>>> the fork will trigger builds on the arm64 node. There is no way to reserve
>>>>>>>>>> the runner only for commits against
>>>>>>>>>> https://github.com/varnishcache/varnish-cache
>>>>>>>>>>
>>>>>>>>>> Do you see other problems or maybe different ways?
>>>>>>>>>> Do you have preferences on which way to go?
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>> Martin
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Regards,
>>>>>>>>>>> Martin
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Guillaume Quintard
>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>> varnish-dev mailing list
>>>>>>>>>>>> varnish-dev at varnish-cache.org
>>>>>>>>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev
>>>>>>>>>>>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From martin.grigorov at gmail.com  Thu Mar 26 08:15:21 2020
From: martin.grigorov at gmail.com (Martin Grigorov)
Date: Thu, 26 Mar 2020 10:15:21 +0200
Subject: Support for AARCH64
In-Reply-To: 
References: <8156.1583910935@critter.freebsd.dk>
Message-ID: 

Hello,

Here is the PR: https://github.com/varnishcache/varnish-cache/pull/3263
I will add some more documentation about the new setup.
Any feedback is welcome!

Regards,
Martin

On Wed, Mar 25, 2020 at 9:55 PM Martin Grigorov <martin.grigorov at gmail.com>
wrote:

> Hi,
>
> On Wed, Mar 25, 2020, 20:15 Guillaume Quintard <
> guillaume at varnish-software.com> wrote:
>
>> is that script running as root?
>>
>
> Yes.
> I also added 'USER root' to its Dockerfile and '-u 0' to 'docker run'
> arguments but it still doesn't work.
> The x86 build is OK.
> It must be something in the base docker image.
> I've disabled the Alpine aarch64 job for now.
> I'll send a PR tomorrow!
>
> Regards,
> Martin
>
>
>> --
>> Guillaume Quintard
>>
>>
>> On Wed, Mar 25, 2020 at 2:30 AM Martin Grigorov <
>> martin.grigorov at gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I've moved the 'dist' job to be executed in parallel with 'tar_pkg_tools'
>>> and the results from both are shared in the workspace for the actual
>>> packaging jobs.
>>> Now the new error for the aarch64-apk job is:
>>>
>>> abuild: varnish
>>> varnish: Updating the sha512sums in APKBUILD...
>>> abuild: varnish
>>> varnish: Building /varnish 6.4.0-r1 (using abuild
>>> 3.5.0-r0) started Wed, 25 Mar 2020 09:22:02 +0000
>>> varnish: Checking sanity of /package/APKBUILD...
>>> WARNING: varnish: No maintainer
>>> varnish: Analyzing dependencies...
>>> varnish: Installing for build: build-base gcc libc-dev libgcc pcre-dev
>>> ncurses-dev libedit-dev py-docutils linux-headers libunwind-dev python
>>> py3-sphinx
>>> Waiting for repository lock
>>> ERROR: Unable to lock database: Bad file descriptor
>>> ERROR: Failed to open apk database: Bad file descriptor
>>> ERROR: varnish: builddeps failed
>>> varnish: Uninstalling dependencies...
>>> Waiting for repository lock
>>> ERROR: Unable to lock database: Bad file descriptor
>>> ERROR: Failed to open apk database: Bad file descriptor
>>>
>>> Google suggested doing this:
>>> rm -rf /var/cache/apk
>>> mkdir /var/cache/apk
>>>
>>> It fails at 'abuild -r' -
>>> https://github.com/martin-g/varnish-cache/blob/b62c357b389c0e1e31e9c001cbffb55090c2e49f/.circleci/make-apk-packages.sh#L61
>>>
>>> Any hints?
>>>
>>> Martin
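One plausible angle on the lock error, stated as an assumption since the thread never confirms the cause: abuild is designed to run as an unprivileged user in the abuild group (it uses sudo apk internally to install build dependencies), so running the whole script as root, especially under QEMU emulation, is off the beaten path. A rough sketch of the conventional setup inside the container (user name and sudoers entry are illustrative):

    # alpine-sdk provides abuild; create an unprivileged build user
    apk add alpine-sdk sudo
    adduser -D builder
    addgroup builder abuild
    echo 'builder ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/builder

    # generate and install a package signing key, then build
    su builder -c 'abuild-keygen -n -a -i'
    su builder -c 'abuild -r'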
>>>
>>> Martin
>>>
>>> On Wed, Mar 25, 2020 at 2:39 AM Guillaume Quintard <
>>> guillaume at varnish-software.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> So, you are pointing at the `dist` job, whose sole role is to provide
>>>> us with a dist tarball, so we don't need that command line to work for
>>>> everyone, just for that specific platform.
>>>>
>>>> On the other hand,
>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L168 is
>>>> closer to what you want, `distcheck` will be called on all platforms, and you
>>>> can see that it has the `--with-unwind` argument.
>>>> --
>>>> Guillaume Quintard
>>>>
>>>>
>>>> On Tue, Mar 24, 2020 at 3:05 PM Martin Grigorov <
>>>> martin.grigorov at gmail.com> wrote:
>>>>
>>>>>
>>>>> On Tue, Mar 24, 2020, 17:19 Guillaume Quintard <
>>>>> guillaume at varnish-software.com> wrote:
>>>>>
>>>>>> Compare your configure line with what's currently in use (or the
>>>>>> apkbuild file); there are a few options (with-unwind, without-jemalloc,
>>>>>> etc.) that need to be set
>>>>>>
>>>>>
>>>>> The configure line comes from "./autogen.des":
>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/autogen.des#L35-L42
>>>>> It is called at:
>>>>>
>>>>> https://github.com/varnishcache/varnish-cache/blob/4f9d8bed6b24bf9ee900c754f37615fdba1c44db/.circleci/config.yml#L40
>>>>> In my branch at:
>>>>>
>>>>> https://github.com/martin-g/varnish-cache/blob/4b4626ee9cc366b032a45f27b54d77176125ef03/.circleci/make-apk-packages.sh#L26
>>>>>
>>>>> It fails only on aarch64 for Alpine Linux. The x86_64 build for Alpine
>>>>> is fine.
>>>>> AARCH64 for CentOS 7 and Ubuntu 18.04 is also fine.
>>>>>
>>>>> Martin
>>>>>
>>>>>
>>>>>> On Tue, Mar 24, 2020, 08:05 Martin Grigorov <
>>>>>> martin.grigorov at gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> On Tue, Mar 24, 2020 at 11:00 AM Martin Grigorov <
>>>>>>> martin.grigorov at gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi Guillaume,
>>>>>>>>
>>>>>>>> On Mon, Mar 23, 2020 at 8:01 PM Guillaume Quintard <
>>>>>>>> guillaume at varnish-software.com> wrote:
>>>>>>>>
>>>>>>>>> Hi Martin,
>>>>>>>>>
>>>>>>>>> Thank you for that.
>>>>>>>>> A few remarks and questions:
>>>>>>>>> - how much time does the "docker build" step take? We can
>>>>>>>>> possibly speed things up by pushing images to the dockerhub, as they don't
>>>>>>>>> need to change very often.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Definitely, such an optimization would be a good thing to do!
>>>>>>>> At the moment, with the 'machine' executor it fetches the base image
>>>>>>>> and then builds all the Docker layers again and again.
>>>>>>>> Here are the timings:
>>>>>>>> 1) Spinning up a VM - around 10secs
>>>>>>>> 2) prepare env variables - 0secs
>>>>>>>> 3) checkout code (varnish-cache) - 5secs
>>>>>>>> 4) activate QEMU - 2secs
>>>>>>>> 5) build packages
>>>>>>>> 5.1) x86 deb - 3m 30secs
>>>>>>>> 5.2) x86 rpm - 2m 50secs
>>>>>>>> 5.3) aarch64 rpm - 35mins
>>>>>>>> 5.4) aarch64 deb - 45mins
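Guillaume's docker-build remark above boils down to building the packaging images once, pushing them to a registry, and having every CI run pull instead of rebuild. A minimal sketch, with a made-up repository name for illustration:

    # one-off, or whenever the Dockerfile changes: publish the builder image
    docker build -t example/varnish-pkg:ubuntu18.04-aarch64 -f Dockerfile.aarch64 .
    docker push example/varnish-pkg:ubuntu18.04-aarch64

    # in each CI run: pull the prebuilt image instead of rebuilding its layers
    docker pull example/varnish-pkg:ubuntu18.04-aarch64
    docker run --rm -v "$PWD:/work" example/varnish-pkg:ubuntu18.04-aarch64 /work/make-deb-packages.sh

This would shave the image-build time off every run; the emulated compile itself would of course stay slow.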
>>>>>>>>>
>>>>>>>>> - any reason why you clone pkg-varnish-cache in each job? The idea
>>>>>>>>> was to have it cloned once in tar-pkg-tools for consistency and
>>>>>>>>> reproducibility, which we lose here.
>>>>>>>>>
>>>>>>>>
>>>>>>>> I will extract the common steps once I see it working. This is my
>>>>>>>> first CircleCI project and I am still finding my way in it!
>>>>>>>>
>>>>>>>>
>>>>>>>>> - do we want to change things for the amd64 platforms for the sake
>>>>>>>>> of consistency?
>>>>>>>>>
>>>>>>>>
>>>>>>>> So far there is nothing specific to amd64 or aarch64, except the
>>>>>>>> base Docker images.
>>>>>>>> For example make-deb-packages.sh is reused for both amd64 and
>>>>>>>> aarch64 builds. Same for -rpm- and now for -apk- (alpine).
>>>>>>>>
>>>>>>>> Once I feel the change is almost finished I will open a Pull
>>>>>>>> Request for more comments!
>>>>>>>>
>>>>>>>> Martin
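The arch-independence Martin describes can be kept explicit by parameterizing only the base image, for example via a build argument; a sketch assuming a Dockerfile that begins with 'ARG BASE' / 'FROM ${BASE}' (image and tag names are illustrative):

    # same Dockerfile and packaging script, different base image per architecture
    docker build --build-arg BASE=ubuntu:18.04         -t pkg-amd64   .
    docker build --build-arg BASE=arm64v8/ubuntu:18.04 -t pkg-aarch64 .
    docker run --rm -v "$PWD:/work" pkg-aarch64 /work/make-deb-packages.sh

Adding a new architecture then means adding one line with a new base image rather than a whole new Dockerfile.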
>>>>>>>>
>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Guillaume Quintard
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Mar 23, 2020 at 6:25 AM Martin Grigorov <
>>>>>>>>> martin.grigorov at gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From phk at phk.freebsd.dk  Mon Mar 30 09:41:08 2020
From: phk at phk.freebsd.dk (Poul-Henning Kamp)
Date: Mon, 30 Mar 2020 09:41:08 +0000
Subject: Summertime moves the bugwash!
Message-ID: <51742.1585561268@critter.freebsd.dk>

For those participating in bugwash from outside the EU-hegemony:

Please note that we Europeans have moved to summertime now, so the
bugwash at 1500 EU-time is now at 1300 UTC.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.