00:01:26  * qbit quit (Ping timeout: 276 seconds)
00:06:40  * qbit joined
00:06:57  * qbit changed nick to Guest12961
00:07:00  * lanceball quit (*.net *.split)
00:07:01  * lucalanziani quit (*.net *.split)
00:07:02  * devsnek quit (*.net *.split)
00:09:44  * mylesborins quit (Ping timeout: 260 seconds)
00:11:02  * ryzokuken_ quit (Ping timeout: 256 seconds)
00:11:31  * mylesborins joined
00:13:24  * Guest12961 quit (Ping timeout: 268 seconds)
00:15:23  * Guest12961 joined
00:15:49  * lanceball joined
00:15:49  * lucalanziani joined
00:15:49  * devsnek joined
00:17:56  * srl295 quit (Quit: Connection closed for inactivity)
00:46:29  * Guest12961 changed nick to qbit
00:50:32  * ryzokuken[m] joined
04:09:48  <Trott_> osx1012 git checkout has timed out twice in a row on CI. Anything to do when we start seeing something like that?
04:09:56  <Trott_> Example: https://ci.nodejs.org/job/node-test-commit-osx/19928/nodes=osx1012/console
04:10:11  <Trott_> https://www.irccloud.com/pastebin/jws45u8Y/
04:27:03  <Trott_> Seems to be working again. I'll chalk it up to network issues between the host and GitHub, and hope it doesn't recur.
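
A minimal Python sketch of the kind of retry wrapper that can ride out transient GitHub timeouts like the one above; the `fetch_with_retry` helper, paths, and limits are hypothetical, not taken from the actual CI configuration:

    # Hypothetical retry wrapper, assuming the failures are transient
    # network trouble between the host and GitHub.
    import subprocess
    import time

    def fetch_with_retry(repo_dir, attempts=3, delay=30):
        for attempt in range(1, attempts + 1):
            try:
                subprocess.run(
                    ["git", "fetch", "origin"],
                    cwd=repo_dir,
                    check=True,
                    timeout=600,  # cap each attempt at ten minutes
                )
                return
            except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
                if attempt == attempts:
                    raise  # give up after the last attempt
                time.sleep(delay)  # back off before retrying
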
07:01:24  * seishun joined
09:20:41  * seishun quit (Ping timeout: 256 seconds)
09:42:52  * ryzokuken[m] quit (Read error: Connection reset by peer)
09:50:27  * ryzokuken[m] joined
10:25:11  * mylesborins quit (Quit: farewell for now)
10:25:21  * mylesborins joined
10:50:02  * seishun joined
13:41:05  <maclover7> If any infra admins are around, can you please try and restart `test-digitalocean-freebsd11-x64-2`? Can't seem to ssh into that machine
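
For reference, a droplet that no longer answers ssh can be power-cycled through the DigitalOcean API. A minimal sketch, assuming an API token with write access; the droplet ID and environment-variable name are illustrative, not real values from this infrastructure:

    # Hypothetical reboot via the DigitalOcean API v2; the droplet ID
    # is a placeholder, not the real one for this machine.
    import os
    import requests

    DROPLET_ID = 123456  # placeholder for test-digitalocean-freebsd11-x64-2
    token = os.environ["DO_API_TOKEN"]  # assumed env var name

    resp = requests.post(
        f"https://api.digitalocean.com/v2/droplets/{DROPLET_ID}/actions",
        headers={"Authorization": f"Bearer {token}"},
        json={"type": "reboot"},  # "power_cycle" is the harder reset
    )
    resp.raise_for_status()
    print(resp.json()["action"]["status"])
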
13:42:53  * node-gh joined
13:42:53  * node-gh part
13:43:41  * node-gh joined
13:43:41  * node-gh part
13:44:36  * lanceball quit (Changing host)
13:44:37  * lanceball joined
14:33:28  * node-gh joined
14:33:28  * node-gh part
15:05:29  * Fishrock123 joined
15:06:14  * Fishrock123 quit (Client Quit)
15:35:15  <Trott_> Any idea what would have caused all three macOS hosts to fail simultaneously with this?:
15:35:17  <Trott_> https://www.irccloud.com/pastebin/LdQ3E80W/
15:35:33  <Trott_> Check out all three subtasks of https://ci.nodejs.org/job/node-test-commit-osx/19931/nodes=osx1012/
15:36:10  <refack> If it's all three, I'd say network partition
15:37:04  <Trott_> Since it was off hours, I'm guessing it's not something anyone did (or at least not anyone on our side) and just something that kind of happened? Ignore it and hope it doesn't happen again? (I'm totally OK with that.)
15:37:53  <Trott_> Also, here's one I've not seen before: https://ci.nodejs.org/job/node-test-commit-linuxone/3165/nodes=rhel72-s390x/console
15:38:10  <Trott_> https://www.irccloud.com/pastebin/FZkGCmbj/
15:38:23  <Trott_> Probably also something transient, perhaps network, and to be ignored if it doesn't recur?
15:38:37  <refack> https://www.irccloud.com/pastebin/NmynuVNV/
15:39:06  <refack> Also same time
15:39:30  <Trott_> The LinuxONE thing seems to be a half hour earlier.
15:39:33  <refack> Ohh no, the linuxONE is 30 minutes prior
15:39:50  <Trott_> Gotta step AFK for a bit. Back soon, I hope.
15:49:49  <refack> We have this https://ci.nodejs.org/monitoring/nodes/test-softlayer-ubuntu1604-x64-1? but I don't see anything interesting there.
15:50:13  <refack> What might be related is that it's the same time the internet suite runs: https://ci.nodejs.org/job/node-test-commit-custom-suites/125/default/console
16:04:20  <Trott_> The internet suite runs as part of node-daily-master, and the other jobs I'm posting about are *also* node-daily-master subtasks. So that should be the case every day. I guess we'll see if it happens again tomorrow.
16:04:53  <Trott_> (Or we can manually kick off a node-daily-master job if we're eager to know now now now.)
16:05:04  <refack> https://ci.nodejs.org/job/node-daily-master/1227/
16:05:52  <Trott_> Ah, I see you already had that thought several minutes ago. :-D
16:06:05  <refack> ;)
16:06:29  <refack> Well, it seems that there's no (direct) connection...
16:07:55  <Trott_> linuxone and internet both passed, so that's good, I guess.
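
A manual run like refack's can be queued through Jenkins' remote-access API. A rough sketch, assuming an account with build permission and an API token (token auth sidesteps the CSRF crumb); the credential names are assumptions:

    # Hypothetical manual trigger of node-daily-master via the Jenkins API.
    import os
    import requests

    JENKINS = "https://ci.nodejs.org"
    auth = (os.environ["JENKINS_USER"], os.environ["JENKINS_API_TOKEN"])

    resp = requests.post(f"{JENKINS}/job/node-daily-master/build", auth=auth)
    resp.raise_for_status()  # HTTP 201 means the build was queued
    print("queued at:", resp.headers.get("Location"))
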
16:08:57  <Trott_> Pretty delighted that we run the internet suite once a day. We should add pummel too if we can.
16:10:05  <Trott_> Being able to move some of the long-running sequential tests to pummel and know that they'll still be run from time to time in CI might go a long way towards speeding up our test suite.
16:11:46  <refack> Small caveat: that suite runs on only one platform... Better than nothing, but something to keep in mind when considering moving other tests there
16:12:15  <Trott_> Yeah, absolutely.
16:12:28  <Trott_> Have to choose the tests carefully.
16:13:35  <refack> Or use the daily-master to run more suites on all platforms... that will need some tweaking though
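
For local experimentation, both suites discussed above can be driven by Node's Python test runner. An illustrative sketch only; the flags and checkout path are assumptions and may differ across Node versions:

    # Hypothetical local run of the internet and pummel suites.
    import subprocess

    NODE_DIR = "/path/to/node"  # placeholder for a local checkout

    for suite in ("internet", "pummel"):
        subprocess.run(
            ["python", "tools/test.py", "-J", "--mode=release", suite],
            cwd=NODE_DIR,
            check=True,  # stop if a suite fails
        )
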
17:26:27  * seishun quit (Ping timeout: 240 seconds)
17:57:11  * node-gh joined
17:57:11  * node-gh part
18:17:10  * seishun joined
21:16:52  * ryzokuken[m] quit (Ping timeout: 240 seconds)
22:06:49  * ryzokuken[m] joined
22:11:57  * seishun quit (Ping timeout: 240 seconds)