01:53:36  * srl295 quit (Quit: Connection closed for inactivity)
02:05:04  * apapirovski joined
02:09:22  * apapirovski quit (Ping timeout: 245 seconds)
02:28:07  * apapirovski joined
02:28:40  * apapirov_ joined
02:32:26  * apapirovski quit (Ping timeout: 252 seconds)
02:33:05  * node-gh joined
02:33:05  * node-gh part
02:44:49  * Ruben joined
02:47:42  * BridgeAR2 quit (Ping timeout: 245 seconds)
02:53:10  * Ruben quit (Ping timeout: 245 seconds)
03:42:54  <Trott>rvagg: If you have a moment to reboot or otherwise reset test-requireio_williamkapke-debian9-arm64_pi3-1, it has three build failures in a row. Everything else (AFAIK) is OK with the Pi devices, so fixing that one would clear up a lot of spurious failures. Here's what I *think* is the relevant stuff from the console:
03:42:58  <Trott>https://www.irccloud.com/pastebin/ZeKhaIJl/
03:43:21  <Trott>I'm going to take test-requireio_williamkapke-debian9-arm64_pi3-1 offline in Jenkins for now.
04:46:02  * apapirov_ quit (Ping timeout: 245 seconds)
05:00:33  * apapirovski joined
05:01:37  * apapirovski quit (Remote host closed the connection)
05:02:10  * apapirovski joined
05:02:19  * apapirovski quit (Remote host closed the connection)
05:02:31  * apapirovski joined
07:38:49  * seishun joined
07:45:25  * apapirovski quit (Remote host closed the connection)
09:01:15  * apapirovski joined
09:14:24  * richardlau quit (*.net *.split)
10:06:44  * apapirovski quit (Remote host closed the connection)
10:25:04  * mylesborins quit (Quit: farewell for now)
10:25:35  * mylesborins joined
10:35:52  * Ruben joined
10:47:49  * apapirovski joined
10:57:59  * juggernaut451 joined
11:07:30  * juggernaut451 quit (Remote host closed the connection)
12:14:39  <apapirovski>Anyone from build got a moment to help out with https://github.com/nodejs/node/issues/20907, perchance?
12:21:08  <mmarchini>shouldn't we mark this test as flaky while it's not fixed?
12:23:52  <apapirovski>Yeah, we should. Someone needs to open a PR... I'm trying to debug it with my limited access lol.
12:24:14  <mmarchini>I'll open a PR
12:26:45  <apapirovski>Puzzled why I can't seem to replicate the failure on any of the 5 machines I've set up now... tried ubuntu & fedora, with several different gcc versions. :/
12:27:59  <rvagg>ok, pi cluster has been cleaned up, only one machine that won't come back online so it needs an sd card inspection. Trott & maclover7
12:34:03  <mmarchini>is the failure happening on arm as well?
12:35:14  <apapirovski>mmarchini: if you're referencing the test i mentioned, i don't think so? just linux & linuxone, afaik
12:46:42  <mmarchini>PR to mark test-zlib.zlib-binding.deflate as flaky: https://github.com/nodejs/node/pull/20935
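[Editor's note: for context on the PR above — JavaScript test flakes in Node core are normally recorded in the `test/*.status` files, which the test runner consults so a known-flaky failure doesn't fail the whole CI run. A hypothetical fragment of that format follows; the platform selector and test name are illustrative, not the contents of the PR linked above.]

```
# test/parallel/parallel.status -- illustrative fragment only
[$system==linux]
test-example-case: PASS,FLAKY    # hypothetical test name
```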
13:21:51  <maclover7>rvagg: thank you! For the future, what did you do to get it back up again?
13:22:08  <maclover7>apapirovski: I will take a look at 20907 a little later today
13:22:20  <apapirovski>thanks maclover7!!
13:22:24  <maclover7>Also, what machine(s) do you want the gcc version for?
13:22:29  <maclover7>That's much easier to get :)
13:22:51  <apapirovski>the fedora ones on commit-linux
13:23:15  <maclover7>do you have a machine ID
13:23:29  <apapirovski>one sec
13:23:31  <maclover7>like test-digitalocean-freebsd10-x64-1, for example
13:23:40  <maclover7>jenkins should show what machine the job ran on
13:24:18  <apapirovski>i think it's test-rackspace-fedora26-x64-1 and test-rackspace-fedora27-x64-1
13:24:23  <apapirovski>lmk if you're looking for something different
13:25:56  <maclover7>nope that's what I need
13:25:59  <maclover7>let me ssh in and get that info
13:26:48  <maclover7>https://www.irccloud.com/pastebin/myCMhdsh/
13:27:27  <maclover7>https://www.irccloud.com/pastebin/IozaGAD7/
13:27:34  <maclover7>^ lmk if you need anything else from the machines
13:34:38  * apapirovski quit (Remote host closed the connection)
14:01:28  * apapirovski joined
14:48:20  <apapirovski>thanks for the info maclover7
14:49:02  <apapirovski>sadly that just confirms i'm running the same. no clue what makes those systems special enough to cause those failures
14:50:03  * node-gh joined
14:50:03  * node-gh part
15:07:51  <mmarchini>maybe it's something in the provider's environment? apapirovski are you trying it on bare-metal, VM or cloud?
15:08:05  <apapirovski>tried vm & cloud
15:08:15  <mmarchini>rackspace?
15:08:25  <apapirovski>tried that and DO... *shrug*
15:08:32  <mmarchini>uh
15:08:38  <apapirovski>lol i know...
15:09:04  <apapirovski>i think i might know what it is...
15:09:21  <apapirovski>(the bug i mean, not sure why I can't replicate)
15:43:35  * node-gh joined
15:43:35  * node-gh part
16:00:29  * apapirovski quit (Remote host closed the connection)
16:34:42  * apapirovski joined
16:38:07  * apapirovski quit (Remote host closed the connection)
16:43:06  * node-gh joined
16:43:06  * node-gh part
17:05:48  * node-gh joined
17:05:48  * node-gh part
17:05:56  * node-gh joined
17:05:56  * node-gh part
17:06:16  * node-gh joined
17:06:16  * node-gh part
17:06:23  * node-gh joined
17:06:23  * node-gh part
17:06:48  * node-gh joined
17:06:48  * node-gh part
17:34:41  * juggernaut451 joined
17:39:57  * apapirovski joined
17:44:51  * apapirovski quit (Ping timeout: 256 seconds)
17:44:55  * juggernaut451 quit (Remote host closed the connection)
17:44:56  * node-gh joined
17:44:56  * node-gh part
17:50:19  <mylesborins>Super emergency right now
17:50:24  <mylesborins>vs2017 build machine is not working
17:50:25  * node-gh joined
17:50:25  * node-gh part
17:50:34  <mylesborins>ping @mhdawson @rvagg
17:50:46  <mylesborins>joaocgreis
17:50:53  <refack>I'll see what I can do
17:53:40  <joaocgreis>locked from a previous job
17:53:52  <joaocgreis>refack: are you working on it or can I clean the machine?
17:54:10  <refack>I cleaned the WS
17:54:15  <refack>You can retry
17:55:55  <maclover7>mylesborins: I know this release was put together quickly, but for future releases, can you please give as much heads-up as possible to the build wg
17:56:09  <maclover7>so we can be ready to help the release get out as needed
17:56:17  <mylesborins>maclover7 assume a release every thursday
17:56:18  <mylesborins>s/thursday/tuesday
17:56:38  <mylesborins>but we open release PRs that have a target date days in advance
17:56:41  <mylesborins>to weeks in advance
17:56:45  <mylesborins>I can just /cc build in those prs
17:56:48  <mylesborins>would that be helpful?
17:56:56  <mylesborins>looks like aix is failing now
17:57:10  <maclover7>hmm that cc can often be a bit overused, causing people to ignore notifications
17:57:15  <maclover7>maybe post here in #node-build too?
17:57:23  <maclover7>putting something on the node foundation calendar might be good too
17:57:25  <mylesborins>I can try my best
17:57:32  <mylesborins>I don't think we should put it in the calendar
17:58:16  <mylesborins>releases are "targets" but not promises
17:58:17  <mylesborins>and lots of different things block it
17:58:18  <mylesborins>I can join the next wg meeting and we can discuss better options for this
17:58:20  <mylesborins>or at collab summit
17:58:21  <refack>I guess those jobs are sensitive to rapid start+abort
17:58:40  <maclover7>Whatever works, just so we all know that there is something important that'll be happening, so we know to try our best to be around
17:58:42  <refack>locks the git workspace
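[Editor's note: the failure mode refack describes above — an aborted job leaving the git workspace locked — typically comes down to a stale `index.lock` file that git leaves behind when it is killed mid-operation. A minimal cleanup sketch follows; the workspace path is a hypothetical example, not the actual CI layout.]

```shell
# Sketch: clearing a stale git lock left behind by an aborted Jenkins job.
# The workspace path below is a made-up example, not the real CI layout.
WORKSPACE=/tmp/demo-workspace
mkdir -p "$WORKSPACE/.git"
touch "$WORKSPACE/.git/index.lock"   # simulate the lock an aborted job leaves

# After confirming no git process is still running against the repo,
# removing the lock file lets the next build's git step proceed.
rm -f "$WORKSPACE/.git/index.lock"
test ! -e "$WORKSPACE/.git/index.lock" && echo "lock cleared"
```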
17:58:47  <maclover7>While also acknowledging that we are all volunteers :)
17:59:23  <mylesborins>absolutely
17:59:31  <mylesborins>I mean a huge thing that would be helpful is more folks working on build
17:59:39  <mylesborins>potentially some folks at orgs that are compensating them for the work
18:00:16  <mylesborins>I don't think it is fair to rely on volunteers for stuff like this
18:00:17  <mylesborins>although with that I 100% appreciate all the help
18:00:18  <mylesborins>and think y'all are awesome
18:00:21  <refack>devops is a thankless job 🤷
18:01:20  <refack>AIX had the same issue, and should now be resolved.
18:02:08  <refack>mylesborins: if you are cancelling jobs 3432 and 3433, try to stagger it so that they at least complete the git step
18:04:16  <mylesborins>ok
18:04:19  <mylesborins>:D
18:04:21  <mylesborins>staggering
18:09:13  <mylesborins>things are now building
18:09:15  <mylesborins>thanks y'all
18:09:48  <refack>there was also a hiccup with the centos6 bot, but it also seems to work now
18:17:04  <mylesborins>ugh
18:17:05  <mylesborins>and yay
18:18:47  <mylesborins>did someone make changes to citgm recently?
18:18:53  <mylesborins>lots of stuff is broken
18:18:53  <mylesborins>https://ci.nodejs.org/view/Node.js-citgm/job/citgm-smoker/1437/
18:19:32  <mylesborins>seems like something to do with new compilers that certain machines don't have
18:20:09  <refack>it's possible that the new `node-inspector` job is changing the baseline of the machines
18:20:25  <refack>But I'm speculating
18:21:23  * apapirovski joined
18:25:01  <refack>Nope, it seems like ordinary bit-rot. The `node-commit` job was updated for the PPC AIX and OS390 machines, but the CITGM job wasn't
18:25:40  * apapirovski quit (Ping timeout: 245 seconds)
18:26:07  * node-gh joined
18:26:07  * node-gh part
18:26:26  <mylesborins>I opened an issue and documented the machines failing due to infra
18:37:48  * apapirovski joined
18:41:31  * Ruben quit (Ping timeout: 256 seconds)
18:53:45  * BridgeAR joined
18:56:10  * seishun quit (Ping timeout: 264 seconds)
18:56:57  * jaywon joined
19:03:49  * seishun joined
19:04:45  * node-gh joined
19:04:45  * node-gh part
19:06:18  * node-gh joined
19:06:18  * node-gh part
19:07:20  * node-gh joined
19:07:21  * node-gh part
19:25:18  * refack quit
19:25:26  * node-gh joined
19:25:26  * node-gh part
19:25:43  * refack joined
19:27:47  * jaywon quit (Remote host closed the connection)
19:34:01  * orangemocha changed nick to Guest11409
19:34:05  * rvagg changed nick to Guest53322
19:34:06  * devsnek changed nick to Guest16779
19:34:14  * ljharb changed nick to Guest82130
19:34:15  * qbit changed nick to Guest53447
19:35:24  * Guest82130 changed nick to LJHarb
19:46:59  * Guest53447 quit (Quit: WeeChat 2.0.1)
20:01:10  * jaywon joined
20:05:40  * jaywon quit (Ping timeout: 245 seconds)
20:07:42  * Guest16779 changed nick to devsnek
20:17:56  * node-gh joined
20:17:57  * node-gh part
20:22:33  <maclover7>rvagg: how do you start docker daemon on the pis?
20:22:38  <maclover7>systemctl requires root password
20:27:27  * seishun quit (Ping timeout: 240 seconds)
20:42:12  * node-gh joined
20:42:12  * node-gh part
20:44:20  <maclover7>Marked as many of the broken Pis offline as possible, to avoid failing builds
20:44:32  <maclover7>BridgeAR: when you have a minute, can you take a look at some of your old issues in nodejs/build?
20:44:55  <BridgeAR>maclover7: sure, I'll do that in a bit
20:46:30  * node-gh joined
20:46:31  * node-gh part
20:47:26  * node-gh joined
20:47:26  * node-gh part
20:50:34  * node-gh joined
20:50:34  * node-gh part
20:58:21  * node-gh joined
20:58:22  * node-gh part
21:03:20  * node-gh joined
21:03:21  * node-gh part
21:05:35  * node-gh joined
21:05:36  * node-gh part
21:06:28  * node-gh joined
21:06:28  * node-gh part
21:07:43  * node-gh joined
21:07:43  * node-gh part
21:08:41  * node-gh joined
21:08:41  * node-gh part
21:11:55  * qbit joined
21:19:35  * node-gh joined
21:19:35  * node-gh part
21:33:31  * jaywon joined
21:34:50  * node-gh joined
21:34:50  * node-gh part
21:36:28  * node-gh joined
21:36:28  * node-gh part
21:37:23  * node-gh joined
21:37:24  * node-gh part
21:41:58  * node-gh joined
21:41:58  * node-gh part
21:43:18  * node-gh joined
21:43:18  * node-gh part
21:46:57  * seishun joined
21:46:59  * node-gh joined
21:46:59  * node-gh part
21:49:09  * node-gh joined
21:49:09  * node-gh part
21:50:41  * node-gh joined
21:50:41  * node-gh part
21:53:24  * node-gh joined
21:53:24  * node-gh part
21:55:52  * seishun quit (Ping timeout: 252 seconds)
21:55:53  * apapirovski quit (Read error: Connection reset by peer)
21:56:10  * apapirovski joined
21:56:50  * apapirov_ joined
21:58:17  * apapirovski quit (Read error: Connection reset by peer)
21:58:18  * apapiro__ joined
22:01:55  * apapirov_ quit (Ping timeout: 245 seconds)
22:03:25  * apapiro__ quit (Remote host closed the connection)
22:04:15  * node-gh joined
22:04:16  * node-gh part
22:06:21  * node-gh joined
22:06:21  * node-gh part
22:06:58  * jaywon quit (Remote host closed the connection)
22:07:34  <maclover7>Sorry for lots of pings, but rvagg mhdawson joaocgreis can you ssh into ci.nodejs.org? It looks like the host is down
22:07:51  <maclover7>CI down for anyone else?
22:23:56  * apapirovski joined
22:23:59  * jaywon joined
22:28:30  * apapirovski quit (Ping timeout: 252 seconds)
22:28:39  <joaocgreis>maclover7: down for me as well, let me see what I can do. Nothing to be sorry about!
22:35:34  <joaocgreis>no ssh, power cycled
22:38:25  <joaocgreis>back up
22:46:36  * jaywon quit (Remote host closed the connection)
22:47:13  * jaywon joined
22:51:30  * jaywon quit (Ping timeout: 245 seconds)
22:54:38  <maclover7>joaocgreis: back up for me, thanks!!
22:56:42  * jaywon joined
22:57:01  * node-gh joined
22:57:01  * node-gh part
23:05:20  * apapirovski joined
23:05:20  * node-gh joined
23:05:20  * node-gh part
23:09:53  * apapirovski quit (Ping timeout: 248 seconds)
23:34:08  * node-gh joined
23:34:08  * node-gh part
23:46:50  * apapirovski joined
23:51:29  * apapirovski quit (Ping timeout: 248 seconds)