02:22:41  <Trott>Oh, yeah, definite thanks on it. I'm just wondering if the label needs to be updated. (Not sure to what.)
02:24:13  <refack>`test-internet+benchmarks`
04:08:52  <Trott>👍
18:24:29  <MylesBorins>hey all... the v8.13.0 release just went out and we realized that linux-x86 tarballs didn't end up being included
18:31:36  <MylesBorins>it looks like centos 32-bit is being skipped for 8.x now
18:31:37  <MylesBorins>:(
18:32:27  <refack>I can fix the job and you can trigger just that platform
18:32:51  <refack>(I thought the tarballs came from the mac machine)
18:40:42  <MylesBorins>will do
18:40:50  <MylesBorins>lmk when done
18:41:14  <MylesBorins>and if you are responsible for the new matrix stuff, thanks v much, it is v nice
18:42:20  <refack>Yeah, we've been playing with that in the public CI, so I promoted it
18:42:29  <refack>Check if this is Ok https://ci-release.nodejs.org/job/iojs+release/3947/console
18:42:47  <MylesBorins>and ftr the osx machine makes the generic tarball
18:42:51  <MylesBorins>with src
18:42:56  <MylesBorins>but the linux specific builds come from centos
18:43:32  <MylesBorins>what did you end up having to change?
18:44:03  <MylesBorins>looks like the groovy script matrix thingy
18:44:25  <refack>Yeah
18:44:44  <refack>change exclude on `gte(8)` to exclude on `gte(9)`
18:45:40  <refack>Ahh, ok, the `*-headers` tarball, what I call "The SDK"
18:48:13  <MylesBorins>not just -headers
18:48:17  <MylesBorins>(which it also does)
18:48:35  <MylesBorins>https://ci-release.nodejs.org/job/iojs+release/3943/nodes=osx1010-release-tar/
18:48:56  <MylesBorins>it does darwin binary tarball, headers tarball, and source code tarball
18:54:18  <MylesBorins>oh shit
18:54:19  <MylesBorins>it is failing to compile on centos32
18:54:21  <MylesBorins>:(
18:54:45  <MylesBorins>did libuv drop support for 32-bit OSes?
18:56:17  <MylesBorins>did we drop testing for 32-bit OSes on master?
18:56:18  <MylesBorins>well upstream that is
18:56:34  <refack>I think spesificly it's centos5
18:56:45  <MylesBorins>yeah...
18:56:46  <MylesBorins>ugh
18:56:56  <MylesBorins>looks like the same code for skipping centos testing hit the main CI as well
18:56:57  <MylesBorins>:(
18:56:58  <refack>For master we downgraded to experimental
18:57:05  <MylesBorins>so on debian8 32-bit it is building
18:57:05  <refack>And dropped support for centos5
18:57:10  <MylesBorins>but we never tested on centos5
18:57:17  <MylesBorins>we can't drop support for it while we are still building for 8.x
18:57:23  <MylesBorins>8.13.0 might be DOA now
18:57:24  <MylesBorins>:(
18:58:02  <refack>There's a whole thread where the Red Hat people state that centos6 is backward compatible
18:58:17  <MylesBorins>but we are not testing on centos6 either
18:58:19  <refack>We should test centos5 for 8
18:58:19  <MylesBorins>for 8.x
18:58:37  <MylesBorins>well no centos6 32
18:58:38  <refack>The bug in the matrix was happening on the Public CI
18:59:03  <MylesBorins>yup
18:59:20  <refack>You could pull out the libuv bump ¯\_(ツ)_/¯
18:59:23  <MylesBorins>so now we have a problem in that the matrix bug not only blocked the building of the asset... but it blocked the testing of the release
18:59:32  <MylesBorins>and now we can't build it
18:59:37  <MylesBorins>this is a kind of big deal :(
18:59:42  <refack>yeah
19:00:00  <refack>Only thing is centos5 has been EOL for a year now
19:00:15  <MylesBorins>we are going to have to cut an 8.13.1
19:00:16  <MylesBorins>ugh
19:00:51  <refack>Or you can build on the centos6-32
19:03:29  <refack>We are "covered" by https://github.com/nodejs/node/blob/v8.x/BUILDING.md#supported-platforms-1
19:05:02  <MylesBorins>https://github.com/nodejs/build/issues/1580
19:05:27  <refack>https://github.com/nodejs/build/pull/1579
19:05:30  <MylesBorins>wanna try kicking off the build with centos 6?
19:05:37  <refack>🙂
19:05:54  <refack>Let me tweak the job so it's possible
19:07:05  <refack>You'll need to kick it https://ci-release.nodejs.org/computer/release-digitalocean-centos6-x86-1/
19:07:10  <refack>The machine
19:12:38  <MylesBorins>literally?
19:17:56  <refack>Well I think they are in Texas, so it's a bit of a commute. But if it's responding to ssh, make sure the Jenkins service is running
19:18:17  <MylesBorins>I have no idea how to do that
19:18:21  <MylesBorins>and I don't have access to those machines
19:20:39  <refack>We need some one from https://github.com/nodejs-private/secrets/tree/master/build/release/.gpg
19:21:40  <refack>rvagg: mhdawson__ joaocgreis_ jbergstroem
19:22:06  <refack>And I pinged Gib
19:31:17  <MylesBorins>kk thanks
19:39:23  <mhdawson__>ok I'm on a call but will try to look at it at the same time
19:40:08  <mhdawson__>Is there a particular machine that needs to be kicked?
19:40:24  <mhdawson__>So I don't need to read through all of the chat
19:44:14  <mhdawson__>ok looks like it was digitalocean-centos6-x86-1, restarted jenkins agent
20:48:51  <refack>Thanks mhdawson__
20:49:05  <refack>MylesBorins: test job completed https://ci-release.nodejs.org/job/iojs+release/nodes=centos6-32-gcc48/3948/
20:52:16  <MylesBorins>ok so qq
20:52:17  <MylesBorins>should we be moving the 64 bit release to centos6 as well?
20:52:21  <MylesBorins>obviously we are not going to rebuild... but I am a bit concerned with things building before and not building now
20:52:29  <MylesBorins>and potentially introducing breakages in the LTS release
20:53:30  <refack>From what I understand from the RedHat and glibc people the binaries should be compatible, as in they explicitly test and validate that
20:54:43  <refack>My personal opinion is do the necessary minimal change, i.e centos6 for 32bit, and 64 as is was ¯\_(ツ)_/¯
21:11:19  <mhdawson__>So I remember that Rod had posted that he'd updated the amd 64 bit, but this is x86, right?
21:12:17  <MylesBorins>yup
21:12:20  <mhdawson__>So I'm not quite sure on the context with respect to your question on moving the 64 bit release to centos6.
21:12:31  <mhdawson__>Refael
21:12:36  <MylesBorins>this is the failure that happened trying to compile on centos5 x86
21:12:36  <MylesBorins>https://ci-release.nodejs.org/job/iojs+release/3947/nodes=centos5-release-32/console
21:12:37  <mhdawson__>Refael's question that is
21:12:59  <MylesBorins>the first and most important question is if it is ok to promote the release being built with centos-5 instead
21:13:26  <mhdawson__>versus what ?
21:14:48  <MylesBorins>so the build explodes on centos5
21:14:52  <MylesBorins>but builds with centos6
21:15:02  <MylesBorins>we have never released 8.x with the centos5 builds
21:15:30  <mhdawson__>sigh. I assume that we have no centos5 in our test CI?
21:16:16  <MylesBorins>we had it... it got disabled due to a configuration bug
21:16:18  <MylesBorins>so the 8.x release got built, signed off on, and promoted before we realized the x86 binaries were missing
21:16:29  <MylesBorins>so our options are to promote the x86 build with centos5 and see what happens
21:16:38  <MylesBorins>or to back out libuv changes and do a quick 8.13.1 or 8.13.0
21:16:41  <MylesBorins>8.14.0
21:16:42  <mhdawson__>So that is just the 32 bit binaries right?
21:16:47  <MylesBorins>yup
21:16:50  <MylesBorins>the linux 32 bit binary
21:16:58  <MylesBorins>it is running on many systems, not just centos
21:17:06  <mhdawson__>right
21:17:11  <MylesBorins>and I believe that it fits within our support matrix for gcc and libc
21:17:31  <MylesBorins>as per https://github.com/nodejs/node/blob/v8.x/BUILDING.md#supported-platforms-1
21:20:01  <mhdawson__>Do you mean that the other way around? That we have never released 8.x being built on centos6
21:20:22  <MylesBorins>that's what I meant
21:20:33  <MylesBorins>sorry for confusing things
21:20:40  <MylesBorins>we've always done 8.x built with centos 5
21:21:25  <mhdawson__>I'm not comfortable with releasing having built on centos 6 without more due diligence. Looking at the error it seems like something will not work on some platforms.
21:22:04  <mhdawson__>It seems to me like we would have either not done the libuv update if we had the centos5 machines online or found a work around
21:22:17  <MylesBorins>me neither... but now we have an issue of timing
21:22:19  <MylesBorins>exactly re: above
21:22:20  <MylesBorins>but now we don't have the time to figure it out
21:22:26  <MylesBorins>and have to come up with a very quick solution
21:22:39  <mhdawson__>The simplest solution is no release today
21:23:28  <MylesBorins>8.13.0 has already been released
21:23:29  <MylesBorins>:(
21:23:36  <MylesBorins>we didn't notice the missing platform until after the release
21:24:06  <mhdawson__>Is there any way to "un-release"
21:24:16  <MylesBorins>because the bug in the build infra didn't run centos at all and failed silently
21:26:18  <MylesBorins>nope
21:26:20  <MylesBorins>that's why I'm kind of ringing the firebells here
21:27:07  <mhdawson__>If there is no other option my take would be an 8.13.1 with the libuv change backed out
21:28:03  <mhdawson__>I assume the libuv change was considered non-breaking
21:28:39  <mhdawson__>If it was considered a minor though, then SemVer might be a killer if it's considered breaking to back it out
21:29:33  <MylesBorins>we landed them as non semver-minor this time
21:29:55  <mhdawson__>We probably still do need to think about how we can "cancel" or remove a build in the future
21:30:17  <MylesBorins>it has been debatable in the past if the libuv changes were semver-major
21:30:18  <MylesBorins>but backing them out is an entirely different thing right now
21:30:19  <MylesBorins>should we back all of them out?
21:30:21  <MylesBorins>a huge reason for doing the semver-minor was to get those changes
21:30:21  <mhdawson__>Either way in this case there will be a "gap" since there will be no builds for x86
21:30:29  <MylesBorins>TBH, the problem here is that changes were made to both the release and public CI
21:30:43  <MylesBorins>and there was no due diligence done to ensure there were not regressions
21:30:58  <MylesBorins>this is how we got here
21:31:49  <mhdawson__>I guess we'll need to look at the history of those changes. It's why I'm always a bit afraid of when we change the release infra
21:32:06  <refack>BTW it wasn't a bug https://github.com/nodejs/build/issues/1153
21:32:10  <refack>It was miscommunication
21:32:23  <refack>Rod thought we do node8 on centos6
21:32:35  <refack>It was by design
21:34:16  <MylesBorins>we did a release 2 months ago without issue
21:34:17  <MylesBorins>but I think with that being said we are likely better to be focusing on a blameless post-mortem afterwards
21:34:21  <refack>And FTR we did a huge amount of due diligence, we discussed this for almost a year, we solicited feedback from RedHat and the glibc people
21:34:54  <refack>We missed aligning this with you
21:34:57  <mhdawson__>But we never switched the Release machines to build on centos6?
21:35:11  <refack>They were ready
21:35:35  <refack>But the issue didn't come to a head until libuv actively dropped support
21:35:42  <mhdawson__>But the key issue is that we dropped testing in test CI, but never switched over the Release machines
21:36:10  <refack>So we (BuildWG) thought everything is hunky dory WRT centos6
21:36:16  <mhdawson__>Does that mean you believe that the minimum levels we specify in the building doc are satisfied by building on centos6?
21:36:27  <refack>And Myles thought we're still supporting centos5
21:36:35  <refack>Yes
21:36:36  <mhdawson__>and if so do we build 8.x for any other platforms on centOS6?
21:38:20  <MylesBorins>I'm talking with ofrobots about this in a chat right now
21:39:08  <MylesBorins>btw the problem specifically is that on centos 5 " error: ‘EPOLL_CLOEXEC’ undeclared (first use in this function)"
21:40:17  <MylesBorins>refack can you fix the build scripts to continue to build using centos 5
21:40:18  <MylesBorins>?
21:40:22  <refack>It was introduced in https://github.com/libuv/libuv/pull/1940#issuecomment-414017699
21:40:44  <refack>For test or release?
21:40:57  <refack>for the release I did it
21:41:12  <refack>For test I opened https://github.com/nodejs/build/pull/1579
21:42:12  <refack>But it's part of the bigger conversation and that is centos5 has been EoL for a year, and RedHat said "Please build on centos6"
21:42:17  <MylesBorins>for both test + release
21:42:18  <MylesBorins>CI and ci-release
21:42:19  <MylesBorins>I'm going to try a patch for libuv that might be able to fix this and I want to be able to run the test in CI and then attempt a build
21:43:19  <refack>Ok, so libuv is (1), test CI is (2), and ci-release is done
21:44:50  <MylesBorins>so to be clear, they will all default to centos5 now for both x86 and x64?
21:45:33  <refack>default might not be the right word, but both will be included in the test matrix
21:45:36  <refack>for libuv
21:45:38  <refack>and node8
21:45:56  <refack>https://ci.nodejs.org/view/libuv/job/libuv-test-commit-linux/1158/
21:46:35  <MylesBorins>https://ci.nodejs.org/job/node-test-commit/23465/
21:46:47  <MylesBorins>I added an ifdef to define the missing symbol
21:46:49  <refack>wait
21:47:10  <mhdawson__>refack so if the selection script says
21:47:12  <mhdawson__> [ /^centos5/, anyType, gte(8) ],
21:47:23  <MylesBorins>the selection script is inverted
21:47:28  <MylesBorins>it is exclusion not inclusion
21:47:33  <mhdawson__>That is don't build on centos5 for 8 or lower
21:47:56  <refack>https://ci.nodejs.org/job/node-test-commit/23466/
21:48:16  <mhdawson__>@Myles
21:48:26  <refack>mhdawson__: yes that is by design as per https://github.com/nodejs/build/issues/1153
21:48:28  <MylesBorins>mhdawson__ if this patch works we can move forward with the build on centos 5
21:48:34  <mhdawson__>correct but I think that line still says exclude if greater than or equal
21:48:45  <mhdawson__>right ?
21:48:46  <MylesBorins>that means don't run on anything higher than 8
21:48:53  <refack>yes we didn't want to build node8 on centos5
21:48:54  <MylesBorins>it should maybe be gt?
21:49:01  <MylesBorins>but we need to be
21:49:07  <refack>that's the fix
21:49:08  <MylesBorins>changing from 5 to 6 is going to be semver major
21:49:13  <MylesBorins>(this breakage is a good example of that)
21:49:19  <MylesBorins>we can't change the system mid LTS
21:49:26  <MylesBorins>without extensive testing
21:49:58  <mhdawson__>Just looking through the parts to understand why it is still building on 5
21:49:59  <refack>Not necessarily, because of the first line in https://github.com/nodejs/node/blob/v8.x/BUILDING.md#supported-platforms-1
21:50:03  <refack>It's just not nice
21:50:46  <MylesBorins>mhdawson__ why is it still building, as in "why we decided to continue releasing via 5" or as in "it is still able to build?"
21:50:53  <MylesBorins>we've been building with 5 the entire LTS cycle
21:51:14  <mhdawson__>If the selection script says don't build on gte 8, why is it still building?
21:51:27  <refack>Myles asked me to force it
21:52:01  <mhdawson__>ok, then why was it not before?
21:52:12  <mhdawson__>The last change to the selection script was september
21:52:17  <MylesBorins>refack not force it, I want it changed back
21:52:18  <MylesBorins>to use 5 not 6
21:52:19  <MylesBorins>permanently
21:52:20  <MylesBorins>until we have consensus about changing 8.x to be releasing via centos6
21:52:22  <MylesBorins>as of a month ago it was building with 5
21:52:23  <MylesBorins>oh 2 months ago
21:52:31  <MylesBorins>that was likely after the last 8.x release is my guess
21:52:35  <MylesBorins>(last 8.x release was in sept)
21:53:59  <mhdawson__>hmm, the portion with [ /^centos5/, anyType, gte(8) ],
21:54:02  <mhdawson__>was changed 7 months ago
21:54:04  <refack>So I've "force" enabled centos5 to be available for ci-release, and included in the test matrix
21:54:15  <MylesBorins>if this build works I think we should float the patch and immediately release 8.13.1
21:54:16  <MylesBorins>thoughts?
21:54:24  <MylesBorins>can we dig back into the history of the build matrix and see which machine we used for 8.12.0?
21:54:26  <mhdawson__>Still trying to catch up on the context
21:54:45  <refack>When was it?
21:54:48  <refack>8.12?
21:54:56  <mhdawson__>how did you "force" it?
21:55:08  <refack>mhdawson__: does https://github.com/nodejs/build/issues/1580 help?
21:55:25  <refack>I copied the script inline and changed the 8 to 7
21:55:31  <MylesBorins>do we have any logs about which machines we used for 8.12.0
21:55:42  <refack>what was the date?
21:56:13  <mhdawson__>ok, was not thinking you had access to release jenkins
21:56:17  <MylesBorins>sept 11
21:56:29  <refack>Bad date
21:56:37  <MylesBorins>it is possible that the change was made far enough back that we were indeed building on centos 6
21:57:10  <refack>yes
21:57:20  <refack>It was like that forever
21:57:31  <mhdawson__>Right, the script seems to have had that for a long time; the question is when the selector script went into use, but I have a feeling it was before September
21:57:34  <refack>Well ever since we did the groovymatrix
21:58:17  <MylesBorins>so what made it regress and not have the build happen at all today?
21:58:18  <MylesBorins>if we can pinpoint that and see that it is unrelated to that matrix
21:58:19  <MylesBorins>then it is likely safe to assume that we built the last 8.x on centos6
21:59:11  <MylesBorins>ali is digging into the glibc versions of past releases to get us a bit more info
21:59:32  <refack>The machine was offline
21:59:53  <refack>Myles it was implemented in May
21:59:54  <refack>We implemented the matrix in May
22:00:16  <MylesBorins>that would have at least had the release waiting on that machine, right?
22:00:17  <MylesBorins>it should have been grey correct?
22:00:19  <MylesBorins>the release shouldn't have been green if a machine that was part of the matrix was offline
22:00:28  <refack>It was gray
22:00:49  <refack>I think
22:01:06  <refack>Unless you unchecked it
22:01:08  <MylesBorins>https://ci-release.nodejs.org/job/iojs+release/3943/console
22:01:11  <MylesBorins>it didn't even show up
22:01:15  <MylesBorins>I didn't check / uncheck anything
22:01:26  <MylesBorins>so I think that we are safe to assume the prior release was made using centos6 at this point
22:01:52  <refack>which you did https://ci-release.nodejs.org/job/iojs+release/3943/parameters/
22:02:08  <refack>So you take back the "matrix is nice" ;)
22:02:17  <MylesBorins>apologies super odd
22:02:20  <MylesBorins>I didn't touch the matrix when kicking off the build
22:02:22  <MylesBorins>v odd
22:02:30  <refack>Well the machine was offline
22:02:37  <refack>So clusterf**k
22:02:45  <MylesBorins>yeah... this silent failure if the machine is offline is bad news
22:02:47  <MylesBorins>ok
22:02:53  <MylesBorins>so I think I have my head wrapped around all the things now
22:03:03  <MylesBorins>centos6 machine was offline and then failed silently
22:03:19  <MylesBorins>it never built and it deselected itself from the matrix
22:03:36  <refack>And we've been building node8 on centos6 at least since May
22:03:37  <MylesBorins>the centos5 stuff was a total fools errand... and we are actually v lucky that it broke
22:03:45  <mhdawson__>Agreed, those selection boxes are new, I wonder if it's a new feature that un-selects them if there is no online machine?
22:04:16  <refack>It's not supposed to; it is a feature, but I disabled it. I think
22:04:17  <MylesBorins>refack are the defaults still to build on centos6 for 8.x now?
22:04:18  <MylesBorins>I think the proper action now, which thankfully is the simplest
22:04:19  <mhdawson__>Which is a very bad "feature" if that is true
22:04:20  <MylesBorins>is to rebuild on centos6 and call it a day
22:04:27  <refack>No I need to revert
22:04:31  <MylesBorins>kk
22:04:36  <MylesBorins>lmk when things are "in their right place"
22:04:38  <MylesBorins>and thanks for digging in
22:06:14  <refack>So default are back
22:07:56  <rvagg>oh good, you figured it out .. i was about to come in here raging about putting 8 on centos5, that's our centos6 cutover version
22:08:29  <mhdawson__>Umm, I don't see the restoration in the release CI
22:08:58  <rvagg>ok, so this offline thing, we get this a lot with both centos6 and centos5 across our infra, I've had to do so many manual restarts
22:09:02  <mhdawson__>but maybe thats because a new job has not been kicked off
22:09:06  <rvagg>probably need to put monit on these things
22:09:47  <refack>I think I added a cron to the test-ci centos workers
22:09:48  <mhdawson__>Ok never mind see the change is back in the release CI now
22:10:13  <mhdawson__>Ok, glad the solution turns out to be straightforward.
22:10:53  <refack>MylesBorins: let's call it a fire drill
22:11:00  <refack>I'd give us 8/10
22:11:14  <refack>We had multiple workarounds ready within a few hours
22:11:36  <MylesBorins>rvagg sorry about the confusion
22:12:14  <rvagg>MylesBorins: did we get RCs for 8.x? I don't recall seeing any, but maybe I'm not listening to the right places
22:12:17  <MylesBorins>😇
22:12:19  <MylesBorins>perhaps we should document the build matrix for each version somewhere?
22:12:22  <MylesBorins>we did
22:12:29  <MylesBorins>but the issue was this was all failing silently
22:12:38  <MylesBorins>there was no notification that the centos32 build was not running
22:12:51  <MylesBorins>and without hand checking all the assets it wasn't clear we were missing something until the release
22:12:56  <rvagg>yeah, those stupid grey balls on Jenkins don't help at all
22:13:06  <MylesBorins>especially since we have other platforms that are skipped
22:13:21  <MylesBorins>we've talked in the past about introducing a thing in CI to confirm assets
22:13:30  <MylesBorins>we could potentially do this in the release script before promoting
22:13:31  <rvagg>maybe ... we should build this matrix into the promotion script so that it knows what should be there and can tell you, instead of giving you a big list
22:13:41  <rvagg>yeah .. that might be a good way of "documenting" it too
22:14:01  <refack>In the IDF we say "bust your a** on the training grounds, and it'll be a piece of cake on the battlefield"
22:14:47  <MylesBorins>so one other bit
22:14:55  <MylesBorins>the pi1 build is taking FORRREEEEEVVFVVVVEEEERRR
22:14:56  <MylesBorins>90c7bfa579
22:15:06  <MylesBorins>https://ci-release.nodejs.org/job/iojs+release/3943/nodes=pi1-docker/
22:15:10  <MylesBorins>been running for 6 hours now
22:15:16  <MylesBorins>did we change anything?
22:16:16  <MylesBorins>also ftr, I've kicked off the centos6 build again
22:16:17  <MylesBorins>will promote it as soon as it is done
22:16:29  <refack>If I'm reading the build output correctly we're at ~40%
22:18:17  <MylesBorins>If I remember correctly pi1 was building really slowly until you made a change, potentially to -j1 I think, and then it sped up quite a bit
22:18:51  <refack>Me or Rod?
22:19:32  <refack>Anyway Rod rebuilt the whole cluster, so it's not unexpected
22:25:40  <rvagg>will check, it would have taken ages to get started because the workspace was cleaned out
22:25:49  <rvagg>but if there were RCs then the ccache should be primed
22:29:52  <rvagg>mm, it's all cache miss I think, so the RCs didn't help, that's a bit odd
22:30:55  <refack>MylesBorins: was the V8 patch that Ali made in the RC?
22:32:17  <MylesBorins>refack yup
22:32:18  <MylesBorins>we got it in and tested at the last minute
22:32:19  <MylesBorins>that ABI breakage was no bueno
22:32:21  <MylesBorins>rvagg the rc build timed out
22:32:22  <MylesBorins>so it could be that
22:32:39  <refack>Ok because it would have invalidated the cache
22:33:08  <refack>Yeah I was watching that ABI thing
22:35:06  <rvagg>https://ci-release.nodejs.org/job/iojs+release/nodes=pi1-docker/ RC builds were cancelled, not letting them finish, so cache is dry
22:35:18  <rvagg>RCs are helpful for priming cache on Pi if nothing else
22:40:29  <MylesBorins>true
22:40:40  <MylesBorins>well thankfully I doubt we'll have a fire drill like this again 😅
22:40:52  <MylesBorins>I'm so happy it turned out to be way less of an issue than I initially thought
22:41:26  <refack>I'm happy the BuildWG was able to be responsive