04:04:35  <jbergstroem>evanlucas: ping me too. mhdawson is the boss but i can occasionally help
04:44:35  * jalcine changed nick to jacky
04:53:01  <evanlucas>jbergstroem cool thanks!
08:01:53  * Fishrock123 joined
08:13:47  * Fishrock123 quit (Remote host closed the connection)
08:18:48  * Fishrock123 joined
09:17:34  * Fishrock123 quit (Quit: Leaving...)
10:41:14  * thealphanerd quit (Remote host closed the connection)
10:41:35  * thealphanerd joined
12:10:15  * Fishrock123 joined
12:25:02  * Fishrock123 quit (Remote host closed the connection)
12:34:24  * Fishrock123 joined
12:43:39  * Fishrock123 quit (Remote host closed the connection)
13:04:34  * lanceball changed nick to lance|afk
13:04:37  * lance|afk changed nick to lanceball
13:05:12  * Fishrock123 joined
13:05:13  * chorrell joined
13:16:10  * chorrell quit (Quit: My Mac has gone to sleep. ZZZzzz…)
13:17:40  * chorrell joined
14:03:18  * chorrell quit (Read error: Connection reset by peer)
14:04:27  * chorrell joined
14:11:56  * Fishrock123 quit (Remote host closed the connection)
14:54:06  * Fishrock123 joined
15:44:33  * chorrell quit (Quit: My Mac has gone to sleep. ZZZzzz…)
15:54:07  <Trott>Two questions:
15:55:09  <Trott>What's up with FIPS running the tests twice? At least here: https://ci.nodejs.org/job/node-test-commit-linux-fips/2842/nodes=ubuntu1404-64/consoleFull Look for "ok 1 parallel": the tests are run twice. Wha? (And with different results. Look at test 960. Double wha?)
15:57:42  <Trott>And Raspberry Pi, so I guess /cc rvagg: There seems to be a much higher probability of build failures at 5AM EDT (9AM UTC), when the daily master job runs. There have been build failures the last several days that then seem to mostly or completely go away afterwards. Not sure what to make of that, or even if there's anything that can reasonably be done, but it's a
15:57:42  <Trott>noticeable pattern. (And it's the primary cause of node-daily-master being red. :-| )
15:59:06  <Trott>(Back to FIPS: Other FIPS builds don't seem to have the double test run, so maybe it's peculiar to a failure today or to node-daily-master...)
16:44:16  * Fishrock123 quit (Remote host closed the connection)
17:22:11  <jbergstroem>note: I added a freebsd host at rackspace; it's intended to replace one at DO but i found some issues so I took it out of rotation
17:28:35  * chorrell joined
17:33:48  <Trott>joaocgreis: Is there an easy way to get a FIPS build test on the stress test job? I want to look more closely at stuff like https://ci.nodejs.org/job/node-test-commit-linux-fips/2848/nodes=ubuntu1404-64/console
17:38:55  <joaocgreis>Trott: https://ci.nodejs.org/view/All/job/node-stress-single-test-fips/ is this one good?
17:39:08  <Trott>Ah, I suspect that'll do it. Thanks!
17:41:54  * Fishrock123 joined
17:57:48  * Fishrock123 quit (Remote host closed the connection)
18:03:46  * Fishrock123 joined
18:07:44  <Trott>So, it looks like the FIPS host that runs at Digital Ocean resolves `localhost` reliably via `dns.lookup()` but the FIPS build at SoftLayer does not, so it's causing two tests to be flaky.
18:08:03  <Trott>mhdawson maybe? ^^^^^^
18:08:57  <Trott>How bad would it be to temporarily take the SoftLayer hosts out of the mix until that's sorted out? Because it's causing a lot of CI failures....
18:09:05  <Trott>jbergstroem, maybe? ^^^^^^^^
18:09:36  <jbergstroem>that's more than a few hosts
18:09:41  <jbergstroem>let me read up
18:09:56  <jbergstroem>missing localhost entries in hosts?
18:11:15  <jbergstroem>nope
18:13:56  <Trott>It's IPv6-specific, if that helps.
18:14:07  <jbergstroem>oh yeah
18:14:08  <jbergstroem>this is
18:14:56  <jbergstroem>https://github.com/nodejs/build/issues/415
18:16:14  <Trott>Yeah, but it's weirder than that. The result of `dns.lookup('localhost', {family:6, all:true}, callbackFunction)` is checked to see whether it includes `::1`. If it doesn't, the test is skipped.
18:16:27  <Trott>Sometimes the test is being skipped, and sometimes not. ???
18:16:36  <Trott>As far as I'm aware, no other hosts are exhibiting that behavior.
18:16:41  <Trott>Although now I want to go check...
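(For context, the check Trott describes amounts to something like the following minimal sketch. This is an illustrative reconstruction, not the actual test harness code, and the TAP-style skip message is just an example.)

    'use strict';
    const dns = require('dns');

    // Resolve 'localhost' over IPv6; skip the test when ::1 is not among the results.
    dns.lookup('localhost', { family: 6, all: true }, (err, addresses) => {
      const hasIPv6Localhost =
        !err && addresses.some((addr) => addr.address === '::1');
      if (!hasIPv6Localhost) {
        console.log('1..0 # Skipped: localhost does not resolve to ::1');
        return;
      }
      // ...run the IPv6 localhost test here...
    });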
18:16:58  * node-gh joined
18:16:58  * node-gh part
18:17:43  * node-gh joined
18:17:43  * node-gh part
18:25:20  * chorrell quit (Quit: Textual IRC Client: www.textualapp.com)
18:27:28  <Fishrock123>ci.nodejs.org’s server DNS address could not be found.
18:27:29  <Fishrock123>uh
18:27:59  <Fishrock123>did jenkins crash
18:28:37  <Trott>Looks up and happily running a 24-minute-old job from here...
18:29:12  <Fishrock123>hmm back up now
18:30:51  <Trott>Maybe I'm misunderstanding what you mean, but that sounds more like a momentary DNS blip somewhere (and probably closer to your network's end than nodejs.org's end).
18:33:39  <jbergstroem>Trott: i just know that the ::1 line is missing from softlayer
18:33:44  <Fishrock123>probably yeah
18:33:54  <jbergstroem>and potentially some other hosts
18:35:06  * Fishrock123 quit (Remote host closed the connection)
18:35:20  <Trott>jbergstroem: Yeah, but the tests are written to check for and account for that. The only place it seems not to be working is the SoftLayer FIPS hosts. Like, it checks, and it's there, so it runs the test, but then it's not there! Whee! Regardless, I suspect adding that line to /etc/hosts on the SoftLayer machines will resolve the issue. Is that doable?
18:36:06  <jbergstroem>Trott: I can add it manually to the missing hosts and then put it on our todo list for refactoring ansible so it's added for all hosts
18:36:32  <Trott>👍 ✨
18:36:53  <jbergstroem>hm strange
18:42:27  <Trott>So there are two issues here. One is the larger "not all things that have IPv6 configured are using ::1 for localhost in /etc/hosts", which is something to fix with ansible, I guess, and isn't causing *too* many large problems. But then there's this SoftLayer FIPS-specific wrinkle of "sometimes it's including ::1 in the list of all IPv6
18:42:27  <Trott>addresses that 'localhost' resolves to, and sometimes it's not!" And it's that thing that's weird. I imagine adding ::1 to /etc/hosts will get rid of that.
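(The fix under discussion is adding an IPv6 loopback entry to /etc/hosts on the affected hosts, typically a line along the lines of `::1 localhost ip6-localhost ip6-loopback`. The intermittent behaviour Trott describes could be observed with a rough check like the sketch below, which repeatedly resolves 'localhost' over IPv6 and reports whether `::1` shows up each time; the iteration count is arbitrary.)

    'use strict';
    const dns = require('dns');

    // Repeatedly resolve 'localhost' over IPv6 and report whether ::1 appears,
    // to see whether the result is stable on a given host.
    let remaining = 20;
    (function check() {
      dns.lookup('localhost', { family: 6, all: true }, (err, addresses) => {
        const found = !err && addresses.some((a) => a.address === '::1');
        console.log(`::1 present: ${found}`);
        if (--remaining > 0) check();
      });
    })();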
18:43:25  * node-gh joined
18:43:25  * node-gh part
18:43:36  <jbergstroem>^ plus fixed a few "start java at boot plez"
18:43:44  <jbergstroem>Trott: keep me posted if this persists
18:44:01  <Trott>Running a stress test right now to confirm it fixes the problem.
18:44:12  <jbergstroem>perf
18:50:59  <Trott>jbergstroem: That fixed it!
18:51:17  <Trott>Without the change: https://ci.nodejs.org/job/node-stress-single-test-fips/7/nodes=ubuntu1404-64/console
18:51:26  <Trott>With the change: https://ci.nodejs.org/job/node-stress-single-test-fips/8/nodes=ubuntu1404-64/console
18:51:28  <Trott>Thanks!!!!!
18:51:48  <jbergstroem>🍷👌🏻
18:52:02  <Trott>Now to go off to the issue tracker and elsewhere to see if anyone was tripped up by this in the last 24 hours. (It was uncovered by a change I landed yesterday.)
18:53:27  <jbergstroem>this has been known for a bit
18:53:37  <jbergstroem>i recall ben pinging me about it
18:53:43  <jbergstroem>ben noordhuis -- we have multiple Bens nowadays
18:56:20  <Trott>jbergstroem: Yeah, it's been an issue, but one we've papered over in the test setup...
19:13:41  <jbergstroem>Trott: one less issue to care about (until we add more softlayer hosts, since I still haven't gotten around to automating it..)
19:36:00  * Fishrock123 joined
19:41:30  * Fishrock123 quit (Ping timeout: 272 seconds)
20:19:37  * Fishrock123 joined
20:48:11  * lanceball changed nick to lance|afk
21:04:56  <evanlucas>hm, something weird is going on with ci-release jenkins
21:05:21  <evanlucas>the centos5 32-bit one is from a few days ago and not building
21:05:59  <evanlucas>ping jbergstroem?
21:06:04  <jbergstroem>pong
21:06:09  <jbergstroem>checking
21:06:12  <evanlucas>thanks!!
21:06:21  <evanlucas>there were two release builds running
21:06:27  <jbergstroem>ah
21:06:29  <evanlucas>they had been running for like 22 hours
21:06:33  <jbergstroem>they did maintainance work
21:06:34  <jbergstroem>maintenance
21:06:36  <evanlucas>I just cancelled them
21:07:42  <jbergstroem>fixed, give it a minute
21:07:46  <evanlucas>ah cool
21:07:47  <evanlucas>thanks!
21:08:02  <jbergstroem>np
21:16:26  <jbergstroem>evanlucas: it's building
21:16:35  <evanlucas>yay
22:19:55  * Fishrock123 quit (Quit: Leaving...)
22:52:35  <evanlucas>hmmm looks like the v6.3.1 binaries did not get promoted properly?
22:52:43  <evanlucas>https://nodejs.org/dist/
22:54:16  <evanlucas>nevermind...just took a lot longer than it normally does
23:02:40  <evanlucas>nope, the v6.3.1 directory exists, but the index.{json,tab} files were not updated and v6.3.1 does not appear on https://nodejs.org/dist/
23:49:19  <jbergstroem>evanlucas: i see files there
23:49:27  <jbergstroem>likely due to the extra hour in between
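(For reference, whether a release has been picked up by the index can be checked against https://nodejs.org/dist/index.json. A minimal sketch, using the version discussed above:)

    'use strict';
    const https = require('https');

    // Fetch the release index and report whether v6.3.1 is listed yet.
    https.get('https://nodejs.org/dist/index.json', (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; });
      res.on('end', () => {
        const releases = JSON.parse(body);
        console.log(`v6.3.1 in index.json: ${releases.some((r) => r.version === 'v6.3.1')}`);
      });
    }).on('error', (err) => { console.error(err); });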