00:19:50  * node-gh joined
00:19:50  * node-gh part
01:27:13  * juggernaut451 joined
01:43:51  * juggernaut451 quit (Remote host closed the connection)
01:54:15  * apapirovski joined
01:58:35  * apapirovski quit (Ping timeout: 245 seconds)
02:12:00  * node-gh joined
02:12:01  * node-gh part
02:51:40  * sgimeno quit (Ping timeout: 265 seconds)
03:45:34  * apapirovski joined
03:45:44  * apapirovski quit (Remote host closed the connection)
03:46:03  * apapirovski joined
04:47:44  * node-gh joined
04:47:44  * node-gh part
04:50:04  * node-gh joined
04:50:04  * node-gh part
04:51:38  * apapirovski quit (Remote host closed the connection)
04:53:22  * node-gh joined
04:53:22  * node-gh part
04:54:29  * apapirovski joined
04:59:34  * apapirovski quit (Ping timeout: 260 seconds)
05:06:25  * apapirovski joined
05:28:57  * node-gh joined
05:28:57  * node-gh part
05:46:08  * apapirovski quit (Remote host closed the connection)
06:06:27  * apapirovski joined
06:11:05  * apapirovski quit (Ping timeout: 252 seconds)
06:30:52  * apapirovski joined
06:38:55  * apapirovski quit (Quit: Leaving...)
07:52:56  * apapirovski joined
08:20:12  * seishun joined
08:36:34  * apapirovski quit (Ping timeout: 260 seconds)
09:00:07  * BridgeAR2 joined
10:13:32  * node-gh joined
10:13:33  * node-gh part
10:25:07  * mylesborins quit (Quit: farewell for now)
10:25:38  * mylesborins joined
10:57:27  * BridgeAR2 quit (Ping timeout: 252 seconds)
11:19:54  * node-gh joined
11:19:55  * node-gh part
11:44:56  * juggernaut451 joined
11:55:35  * juggernaut451 quit (Remote host closed the connection)
12:40:20  * apapirovski joined
13:26:08  <Trott>This is an interesting one: test-macstadium-macos10.10-x64-1 is running a stress-single-test on test-fs-readfile-tostring-fail.js but it's not showing up in the Jenkins interface and it's happily running node-test-commit jobs at the same time, making those jobs fail.
13:27:20  <Trott>I'm not sure what to do about that so I'm going to take it out of the rotation.
13:27:50  <refack>Jenkins be Janky ¯\_(ツ)_/¯
13:28:48  <Trott>Except that it's running a CITGM job right now so I don't want to terminate that.
13:29:04  <Trott>So, harumph.
13:29:42  <Trott>Maybe I'll remove the test from the node-stress-single-test job so it can't run it. I can see the actual job was canceled (by me!) in the Jenkins interface.
13:30:54  <Trott>Done.
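To double-check what an agent like that is actually running, one option is Jenkins' own JSON API rather than the job pages. The sketch below is a minimal example, assuming anonymous read access to ci.nodejs.org and reusing the agent name mentioned above:

```python
# Minimal sketch: ask Jenkins which builds a specific agent is executing,
# independent of what the job pages show. Assumes anonymous read access to
# ci.nodejs.org; the agent name comes from the conversation above.
import json
import urllib.request

AGENT = "test-macstadium-macos10.10-x64-1"
URL = ("https://ci.nodejs.org/computer/%s/api/json"
       "?tree=executors[currentExecutable[fullDisplayName,url]]" % AGENT)

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

for i, executor in enumerate(data.get("executors", [])):
    build = executor.get("currentExecutable")
    if build:
        print("executor %d: %s (%s)" % (i, build["fullDisplayName"], build["url"]))
    else:
        print("executor %d: idle" % i)
```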
14:03:27  * seishun quit (Ping timeout: 240 seconds)
14:09:57  * seishun joined
14:17:17  * seishun quit (Ping timeout: 245 seconds)
14:27:37  * BridgeAR2 joined
14:50:40  * node-gh joined
14:50:41  * node-gh part
15:15:36  * seishun joined
15:31:11  <Trott>Can't run Windows stress tests due to a 3-minute timeout. See https://github.com/nodejs/node/issues/20836#issuecomment-391390203. Not sure who to ping but going to guess joaocgreis and maybe refack
15:32:53  * node-gh joined
15:32:53  * node-gh part
15:33:45  * node-gh joined
15:33:45  * node-gh part
15:33:56  * node-gh joined
15:33:56  * node-gh part
15:34:40  * node-gh joined
15:34:40  * node-gh part
15:34:59  * node-gh joined
15:34:59  * node-gh part
15:36:00  * node-gh joined
15:36:00  * node-gh part
15:36:41  * node-gh joined
15:36:41  * node-gh part
15:37:18  <maclover7>Trott: thank you for helping go through the issue tracker
15:37:44  * node-gh joined
15:37:44  * node-gh part
15:38:50  * node-gh joined
15:38:50  * node-gh part
15:39:55  * node-gh joined
15:39:55  * node-gh part
15:40:53  * node-gh joined
15:40:53  * node-gh part
15:42:10  * node-gh joined
15:42:10  * node-gh part
15:44:48  * node-gh joined
15:44:49  * node-gh part
15:57:46  * node-gh joined
15:57:46  * node-gh part
16:12:11  * node-gh joined
16:12:11  * node-gh part
16:14:57  * seishun quit (Ping timeout: 240 seconds)
16:16:48  * node-gh joined
16:16:49  * node-gh part
16:18:58  * node-gh joined
16:18:59  * node-gh part
16:19:56  * apapirovski quit (Remote host closed the connection)
16:20:15  <joaocgreis>Trott: stress tests should be fixed, but Jenkins is running 7 node-test-commit jobs right now, so only one of my rebuilds started. Please start the other one later, or I can do it
16:20:41  <joaocgreis>I actually tried to start it twice, so they might still decide to appear there...
16:32:00  * seishun joined
16:33:35  * juggernaut451 joined
17:08:43  * juggernaut451 quit (Remote host closed the connection)
17:20:52  * juggernaut451 joined
17:30:31  * apapirovski joined
17:34:47  * apapirovski quit (Ping timeout: 245 seconds)
17:38:47  * node-gh joined
17:38:47  * node-gh part
17:40:01  * juggernaut451 quit (Remote host closed the connection)
17:53:12  * node-gh joined
17:53:12  * node-gh part
18:04:36  * node-gh joined
18:04:36  * node-gh part
18:15:48  * apapirovski joined
18:19:26  * apapirov_ joined
18:21:29  * node-gh joined
18:21:30  * node-gh part
18:23:07  * apapirovski quit (Ping timeout: 245 seconds)
18:24:51  <mmarchini>CI Health Dashboard (proof-of-concept): https://nodejs-ci-health.mmarchini.me/
18:24:53  <mmarchini>Still need a lot of refinement though
18:27:13  <Trott>mmarchini: maclover7 also created a different type of dashboard that might be worth looking at. Yours looks cool to me. 72% failure rate is making me sad, though.
18:27:52  <Trott>Also, hi, everyone! I can't start jobs on CI. They just vanish. Build queue full, I guess?
18:28:26  * seishun quit (Ping timeout: 252 seconds)
18:38:53  * srl295 joined
18:40:47  * seishun joined
18:49:20  * ryzokuke_ joined
18:51:58  <refack>mmarchini: RE CI health http://node-build-monitor.herokuapp.com/
18:54:11  <mmarchini>maclover7 mentioned node-build-monitor on https://github.com/nodejs/build/issues/1232, but it displays machine health and not job health
18:55:45  <mmarchini>Trott: I also got sad when I saw 72% failure :(
18:55:50  <refack>Yeah, it's more for us build folk. AFAICT your tool shows the combination of code health and CI cluster health (which is 🤧 🛏️)
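For the job-health side, a failure rate like the 72% above can be approximated straight from the Jenkins JSON API. This is only a rough sketch under assumptions (job name, a 100-build window), not necessarily how the linked dashboard computes its numbers:

```python
# Rough sketch: estimate a recent failure rate for one Jenkins job from its
# JSON API. Job name and 100-build window are illustrative assumptions; the
# dashboard linked above may compute its numbers differently.
import json
import urllib.request

JOB = "node-test-commit"
URL = ("https://ci.nodejs.org/job/%s/api/json"
       "?tree=builds[number,result]{0,100}" % JOB)

with urllib.request.urlopen(URL) as resp:
    builds = json.load(resp)["builds"]

finished = [b for b in builds if b["result"] is not None]   # result is null while running
failed = [b for b in finished if b["result"] != "SUCCESS"]  # FAILURE, UNSTABLE, ABORTED

if finished:
    rate = 100.0 * len(failed) / len(finished)
    print("%d/%d recent builds failed (%.0f%%)" % (len(failed), len(finished), rate))
```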
19:04:36  * node-gh joined
19:04:37  * node-gh part
19:06:18  * node-gh joined
19:06:18  * node-gh part
19:16:01  * node-gh joined
19:16:02  * node-gh part
19:17:38  * node-gh joined
19:17:39  * node-gh part
19:20:14  * node-gh joined
19:20:14  * node-gh part
19:20:28  * node-gh joined
19:20:29  * node-gh part
19:23:44  * node-gh joined
19:23:44  * node-gh part
19:25:40  * apapirov_ quit (Ping timeout: 245 seconds)
19:26:59  * node-gh joined
19:26:59  * node-gh part
19:36:04  <maclover7>mmarchini: hi!
19:36:23  <maclover7>my suggestion would be to work on the job health type stuff
19:36:30  <maclover7>there's a lot of good info there that could be mined
19:38:06  <maclover7>very nice initial work
19:38:17  <ryzokuke_>hmm, is there a guide for inspecting flaky tests?
19:38:37  <ryzokuken>sorry, this client is the better one 😅
19:39:06  <maclover7>ryzokuken: Do you mean the logs from the test?
19:39:16  <maclover7>Not completely sure what you mean by client
19:39:31  <ryzokuken>sorry, please ignore the message.
19:39:42  <ryzokuken>I just meant that I sent the message from the wrong IRC user
19:39:56  <ryzokuken>anyway, my point being: I think I could look up all the people working on fixing flaky tests lately and ask them for advice/tips that could help make a document
19:40:32  <maclover7>ah yeah
19:40:38  <ryzokuken>fixing flaky tests seems like the right kind of "good second task" I've been thinking of.
19:40:42  <maclover7>I think that there's a part in the collaborator guide about this?
19:40:48  <ryzokuken>things we could use to retain contributors
19:40:57  <maclover7>In general people will open an issue in nodejs/node with backtraces
19:41:05  <ryzokuken>idk if there's anything regarding flaky tests?
19:41:16  <maclover7>Like take a look at these https://github.com/nodejs/node/issues?q=is%3Aopen+is%3Aissue+label%3A%22CI+%2F+flaky+test%22
19:42:14  * apapirovski joined
19:42:17  <ryzokuken>I know there's a ton of them
19:42:31  <ryzokuken>and tbqh it's one of the best things a contributor can fix
19:42:43  <maclover7>Yeah I agree
19:42:53  <ryzokuken>maybe we could jot down a document to get them started with it
19:43:10  <maclover7>I mean a lot of it is usually just debugging
19:43:22  <maclover7>Figuring out what (usually an edge case) is causing the problem
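In practice that debugging usually starts with reproducing the flake locally. A minimal sketch of that loop, assuming a locally built ./node and using the test file named earlier in the log (binary path, test path, and iteration count are illustrative):

```python
# Minimal sketch: re-run a single test many times against a locally built
# node binary and keep the output of any failing run. The binary path, test
# path, and iteration count are illustrative assumptions.
import subprocess

NODE = "./node"
TEST = "test/parallel/test-fs-readfile-tostring-fail.js"
RUNS = 100

failures = 0
for i in range(RUNS):
    proc = subprocess.run([NODE, TEST], capture_output=True, text=True)
    if proc.returncode != 0:
        failures += 1
        with open("flaky-run-%d.log" % i, "w") as f:  # save output for later debugging
            f.write(proc.stdout + proc.stderr)

print("%d/%d runs failed" % (failures, RUNS))
```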
19:45:59  * ryzokuke_ changed nick to ryzokuken[zzz]
19:46:06  * ryzokuken[zzz] changed nick to ryzokuke_
19:46:16  <ryzokuken>agreed
19:46:32  <ryzokuken>I wonder if we could make a comprehensive guide for new contributors though
19:46:32  * apapirovski quit (Ping timeout: 252 seconds)
19:58:41  <maclover7>ryzokuken: yeah, seems reasonable
19:58:48  <maclover7>I can try and take a look at something
19:58:57  <ryzokuken>thanks :)
19:59:00  <maclover7>rvagg: can you please take a look at the pi cluster, a ton of machines are offline right now
19:59:15  <ryzokuken>ping me whenever you work on it
20:05:04  * seishun quit (Ping timeout: 260 seconds)
20:37:33  * ryzokuke_ quit (Quit: Textual IRC Client: www.textualapp.com)
20:40:43  <maclover7>Can someone please check in on test-osuosl-ubuntu1404-ppc64_le-3 and test-packetnet-centos7-arm64-2, neither is responding to ssh/pings
20:44:33  * apapirovski joined
20:49:00  * apapirovski quit (Ping timeout: 245 seconds)
21:22:57  * node-gh joined
21:22:57  * node-gh part
21:27:30  <Trott>maclover7: test-osuosl-ubuntu1404-ppc64_le-3 would be the IBM folks, but mhdawson is away at a conference. Not sure who else in here is from IBM and/or knows/understands the OSU setup.
21:31:33  <Trott>Well, this isn't good...
21:31:41  <Trott>https://www.irccloud.com/pastebin/I2zznrUX/
21:32:25  <Trott>(Or is it? I'm under the impression that we generally are not supposed to have system binaries for `node` installed on the machines, and I'm wondering if that's what's causing a problem for me right now.)
21:46:28  * apapirovski joined
21:49:51  <maclover7>Trott: yeah, `node` shouldn't be there
21:49:58  <maclover7>Might have leaked in from node-inspect CI runs
21:50:29  <Trott>Anything I can/should do to correct that?
21:50:37  * apapirovski quit (Ping timeout: 245 seconds)
21:51:52  * richardlau joined
21:52:26  <maclover7>addaleax: apapirovski: Does this screen in Jenkins help at all with resolving that flaky test --> https://ci.nodejs.org/job/node-test-commit-linuxone/lastFailedBuild/nodes=rhel72-s390x/testReport/junit/(root)/test/async_hooks_test_zlib_zlib_binding_deflate/
21:52:43  <maclover7>Trott: not sure, probably ignore it unless/until it becomes a problem?
21:52:48  <maclover7>In theory it shouldn't
21:54:49  <richardlau>If node-inspect CI runs can leak installed node binaries it might be worth noting so in https://github.com/nodejs/build/issues/1253
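A quick way to audit for that kind of leaked binary is to check each test host for a `node` on the default PATH over ssh. The sketch below is illustrative only: the host names and working ssh access are assumptions, and a hit only says a binary exists, not where it came from:

```python
# Hedged sketch: look for a stray `node` on the default PATH of a few test
# hosts via ssh. The host names are hypothetical placeholders and ssh access
# is assumed; substitute real test hosts.
import subprocess

HOSTS = [
    "test-host-a.example.org",  # hypothetical host name
    "test-host-b.example.org",  # hypothetical host name
]

for host in HOSTS:
    try:
        proc = subprocess.run(
            ["ssh", "-o", "BatchMode=yes", host,
             "command -v node || echo 'no node on PATH'"],
            capture_output=True, text=True, timeout=30,
        )
        output = proc.stdout.strip() or proc.stderr.strip()
    except subprocess.TimeoutExpired:
        output = "timed out"
    print("%s: %s" % (host, output))
```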
21:56:12  * node-gh joined
21:56:13  * node-gh part
22:07:05  * apapirovski joined
22:11:30  * apapirovski quit (Ping timeout: 245 seconds)
22:22:25  <addaleax>maclover7: a link to the test output of the "last failed build" is not ideal because by now another build has failed :/ assuming you’re talking about job #1552 or #1556, the answer is 'probably not'
22:23:37  <addaleax>like, chances are 90% that the real bug occurs a (possibly long) while before that crash happens
22:34:16  <Trott>According to mcollina, that zlib.deflate async-hooks thing is a bug in V8 and is fixed in the pull request to update V8 to (I think) 6.7.
23:37:54  * apapirovski joined
23:42:25  * apapirovski quit (Ping timeout: 248 seconds)