00:20:06  * apapirovskijoined
00:26:02  * apapirovskiquit (Quit: Leaving...)
00:37:25  <mylesborins>has anyone taken a look into the various ci-release failures?
00:37:47  <mylesborins>https://github.com/nodejs/build/issues/1112
00:38:00  * node-ghjoined
00:38:00  * node-ghpart
00:39:12  <mylesborins>the 6.x release is DOA
00:39:13  <mylesborins>6.13.0 is planned to go out tomorrow, am not likely to delay for aix
00:42:41  * node-ghjoined
00:42:41  * node-ghpart
01:02:45  * BridgeARquit (Ping timeout: 264 seconds)
01:34:12  * node-ghjoined
01:34:12  * node-ghpart
01:52:35  * node-ghjoined
01:52:35  * node-ghpart
01:54:52  * BridgeARjoined
02:06:24  <maclover7>mylesborins: I would take a look but I don't have access to ci-release (see https://github.com/nodejs/build/issues/1107)
02:09:37  * node-ghjoined
02:09:37  * node-ghpart
02:11:21  * node-ghjoined
02:11:22  * node-ghpart
02:12:09  * node-ghjoined
02:12:09  * node-ghpart
02:13:59  * node-ghjoined
02:13:59  * node-ghpart
02:14:20  * node-ghjoined
02:14:20  * node-ghpart
02:15:47  * node-ghjoined
02:15:47  * node-ghpart
02:16:30  * node-ghjoined
02:16:30  * node-ghpart
02:17:49  * node-ghjoined
02:17:50  * node-ghpart
02:19:04  * node-ghjoined
02:19:04  * node-ghpart
02:19:24  * node-ghjoined
02:19:24  * node-ghpart
02:22:04  * node-ghjoined
02:22:04  * node-ghpart
02:23:52  * node-ghjoined
02:23:52  * node-ghpart
02:24:11  * node-ghjoined
02:24:11  * node-ghpart
02:25:22  * node-ghjoined
02:25:22  * node-ghpart
02:25:29  * node-ghjoined
02:25:29  * node-ghpart
02:42:41  * node-ghjoined
02:42:41  * node-ghpart
02:47:29  * node-ghjoined
02:47:29  * node-ghpart
02:49:16  * node-ghjoined
02:49:16  * node-ghpart
02:49:42  * node-ghjoined
02:49:42  * node-ghpart
03:12:57  * node-ghjoined
03:12:57  * node-ghpart
04:48:17  * node-ghjoined
04:48:17  * node-ghpart
06:15:05  * BridgeARquit (Ping timeout: 240 seconds)
06:18:26  * node-ghjoined
06:18:26  * node-ghpart
06:19:52  * node-ghjoined
06:19:52  * node-ghpart
06:24:59  * node-ghjoined
06:24:59  * node-ghpart
06:31:18  * node-ghjoined
06:31:18  * node-ghpart
06:40:32  <rvagg>I'm on the node-test-commit-linux-containered failures ... .ccache/tmp/ write errors
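The log doesn't record what rvagg actually changed on the workers; purely as an illustration, a cleanup of the kind that resolves `.ccache/tmp/` write errors on a container host might look like the sketch below (the `iojs` build user and the cache path are assumptions, not taken from the log):

```bash
# Hypothetical cleanup on an affected node-test-commit-linux-containered host;
# the user name and cache location are assumptions, adjust to the machine's setup.
sudo rm -rf /home/iojs/.ccache/tmp            # drop stale/partial temp files
sudo chown -R iojs:iojs /home/iojs/.ccache    # make sure the build user can write again
sudo -u iojs ccache -s                        # confirm ccache can read/write its own stats
```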
07:40:59  * seishunjoined
08:15:47  * node-ghjoined
08:15:47  * node-ghpart
08:16:24  * node-ghjoined
08:16:24  * node-ghpart
08:17:28  <rvagg>ok, I think that's got it ... 💤
08:33:00  * node-ghjoined
08:33:01  * node-ghpart
08:48:28  * seishunquit (Ping timeout: 256 seconds)
09:04:25  * node-ghjoined
09:04:25  * node-ghpart
09:08:18  * node-ghjoined
09:08:18  * node-ghpart
09:09:01  * node-ghjoined
09:09:01  * node-ghpart
09:10:29  * gibfahnjoined
09:11:14  <gibfahn>rvagg: do you know what's up with https://github.com/nodejs/build/issues/1116#issuecomment-365195791 ? If misclick then no worries, but I thought the job might have issues or something.
10:55:48  * node-ghjoined
10:55:49  * node-ghpart
10:59:49  * node-ghjoined
10:59:49  * node-ghpart
11:00:08  * node-ghjoined
11:00:08  * node-ghpart
11:01:10  * node-ghjoined
11:01:10  * node-ghpart
11:01:55  * williamkapkequit
11:02:12  * williamkapkejoined
11:20:02  * node-ghjoined
11:20:02  * node-ghpart
11:25:11  * mylesborinsquit (Quit: farewell for now)
11:25:41  * mylesborinsjoined
11:47:21  * BridgeARjoined
13:12:59  * targosjoined
13:13:16  <targos>gibfahn: hey
13:13:25  <gibfahn>Hi
13:14:34  <targos>I would like to make a change to the node-update-v8-canary job on jenkins
13:14:42  <targos>could you help me with that?
13:18:02  <targos>gibfahn: basically I'd like to build and test after the update and stop if that fails
13:18:43  <targos>can I just do it in the same shell script?
13:19:38  <gibfahn>Don't see why not
13:19:42  <gibfahn>what does the job currently do?
13:20:40  <targos>two reasons for doing it: 1. I receive an e-mail if the job fails. 2. No need to trigger the full test-commit-v8 if it doesn't work on 64-bit Linux
13:20:57  <gibfahn>Okay
13:20:59  <gibfahn>How about this
13:21:07  <targos>it updates V8 and pushes to the canary repo
13:21:15  <gibfahn>I add the existing script to the node build repo and make the job run from there
13:21:20  <gibfahn>Then you can just PR an update and test it out
13:21:27  <gibfahn>That way it's easy to roll back
13:21:31  <gibfahn>If we have to
13:21:51  <targos>ok
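A rough sketch of the flow targos is asking for, assuming the job's current shell step does the V8 update and then pushes to the canary repo (the real commands aren't shown in the log; the script names below are placeholders). With `bash -ex`, any build or test failure aborts the job before the push, which both triggers the failure e-mail and avoids kicking off the full test-commit-v8 run:

```bash
#!/bin/bash -ex
# Sketch only: update V8, then build and smoke-test on this 64-bit Linux worker
# before pushing to the canary repo. The update/push steps are placeholders for
# whatever the node-update-v8-canary job currently runs.

./update-v8-canary.sh              # placeholder: existing V8 update step

./configure
make -j"$(nproc)"                  # -e aborts here if the build breaks
make -j"$(nproc)" test-only        # quick test pass before publishing anything

./push-to-canary.sh                # placeholder: existing push to the canary repo
```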
14:12:57  * node-ghjoined
14:12:57  * node-ghpart
14:13:27  <gibfahn>targos: take a look at https://github.com/nodejs/build/pull/1117 , if it LGTY then I'll land and change the job
14:15:28  * refackquit
14:15:48  * refackjoined
14:40:31  * evanlucasjoined
14:51:54  * mmarchiniquit
14:52:07  * mmarchinijoined
14:58:41  * gibfahnquit
14:58:59  * gibfahnjoined
15:00:18  <targos>gibfahn: thanks! lgtm
15:03:54  * node-ghjoined
15:03:54  * node-ghpart
15:32:37  * node-ghjoined
15:32:37  * node-ghpart
15:46:51  <targos>gibfahn: I have no idea how to change the config to use this new file
16:02:13  * BridgeARquit (Remote host closed the connection)
16:11:35  * BridgeARjoined
16:18:52  * chorrelljoined
16:18:58  <gibfahn>targos: I'm looking at it now
16:20:35  * BridgeARquit (Ping timeout: 240 seconds)
16:23:19  <Trott>Is there some way to tell why https://ci.nodejs.org/job/node-stress-single-test-pi1-binary/44/ hasn't kicked off the subjob yet?
16:23:53  <gibfahn>Trott: look at the console output: https://ci.nodejs.org/job/node-stress-single-test-pi1-binary/44/console
16:24:05  <gibfahn>The parent job has to run first, to clone the repo, then each child job just copies that clone (to save time)
16:24:20  <gibfahn>The parent job is waiting for an executor
16:24:21  <Trott>"queue: Waiting for next available executor on pi1-raspbian-wheezy"
16:24:25  <targos>https://ci.nodejs.org/job/node-test-binary-arm/13596/
16:24:32  <targos>this job is never ending
16:24:51  <targos>might be blocking everything
16:25:33  <gibfahn>Is it not making progress?
16:25:48  <gibfahn>i.e. has it been like that for ages, or is it just that there's a backlog that hasn't cleared?
16:26:11  <gibfahn>If the former then I'll kill it
16:26:30  <targos>I don't know
16:26:53  <targos>for example this one is stuck at `git checkout`: https://ci.nodejs.org/job/node-test-binary-arm/13596/RUN_SUBSET=0,label=pi1-raspbian-wheezy/console
16:26:53  <Trott>Seems like they're all stuck on `git checkout -f refs/remotes/jenkins_tmp`
16:27:27  <Trott>Oh, not all, just most. :-D
16:32:50  * BridgeARjoined
16:33:33  <Trott>Should we ask BridgeAR if it's OK to cancel that job or is someone actively troubleshooting?
16:38:51  <gibfahn>I wouldn't bother asking, just kill the job and restart the node-test-pull-request job
16:44:33  <Trott>Might not even restart it given that three other sub-tasks failed. Hey, BridgeAR, I'm going to cancel https://ci.nodejs.org/job/node-test-binary-arm/13596/ in the hopes that it frees up some workers. As you can see above, it's stalled out. Sorry for any inconvenience!
16:57:04  * chorrellquit (Quit: My MacBook has gone to sleep. ZZZzzz…)
17:00:46  * chorrelljoined
17:05:11  * BridgeARquit (Remote host closed the connection)
17:07:22  <mylesborins>gibfahn aix is working now
17:07:50  <gibfahn>Hurray!
17:08:03  <gibfahn>Fix is documented, so anyone with release access should be able to fix it next time
17:08:17  <mylesborins>I should probably consider asking for release access
17:08:24  <mylesborins>so I can fix these things
17:08:32  <mylesborins>that being said I'm more than happy to not have that responsibility
17:08:36  <mylesborins>😅
17:14:32  <gibfahn>I guess it depends how often things go wrong
17:14:46  <gibfahn>I feel like things have been pretty good up till recently with ci-release
17:14:51  * chorrellquit (Quit: My MacBook has gone to sleep. ZZZzzz…)
17:30:20  * seishunjoined
18:06:46  <rvagg>I've had to take arm-fanned offline, something wrong with the network
18:09:50  * node-ghjoined
18:09:50  * node-ghpart
18:16:15  * node-ghjoined
18:16:15  * node-ghpart
18:23:13  <mylesborins>yeah was noticing that
18:23:14  <mylesborins>I'm getting some really weird periodic flakes on ppcle now
18:23:15  <mylesborins>like different random tests failing each CI run
18:26:38  <gibfahn>Uhh, that doesn't sound good
18:26:44  <gibfahn>Links?
18:27:32  * chorrelljoined
18:35:40  <mylesborins>same thing with deb
18:35:57  <mylesborins>https://github.com/nodejs/node/pull/18623
18:36:04  <mylesborins>https://github.com/nodejs/node/pull/18751
18:36:12  <mylesborins>been trying to get these two prs to be green forever
18:36:22  <mylesborins>multiple test runs with a variety of flakes in the comments
18:37:54  <mylesborins>we are also getting lots of citgm failures that seem to be exclusive to our infra
18:38:00  <mylesborins>specifically leveldown on ubuntu 1404 and 1604
19:02:33  * evanlucasquit (Remote host closed the connection)
19:20:25  * srl295joined
19:27:59  * node-ghjoined
19:28:00  * node-ghpart
19:28:21  * apapirovskijoined
20:08:42  * chorrellquit (Quit: My MacBook has gone to sleep. ZZZzzz…)
20:11:38  * chorrelljoined
20:48:27  * seishunquit (Ping timeout: 240 seconds)
21:01:17  * chorrellquit (Quit: My MacBook has gone to sleep. ZZZzzz…)
21:02:18  * chorrelljoined
21:02:21  * chorrellquit (Client Quit)
22:07:48  * node-ghjoined
22:07:48  * node-ghpart
22:09:40  * apapirovskiquit (Remote host closed the connection)
22:46:25  * node-ghjoined
22:46:25  * node-ghpart
22:48:06  * node-ghjoined
22:48:06  * node-ghpart
22:48:31  * node-ghjoined
22:48:31  * node-ghpart
22:50:37  * node-ghjoined
22:50:37  * node-ghpart
23:03:44  * node-ghjoined
23:03:45  * node-ghpart