00:24:52  * Fishrock123 joined
00:42:04  * Fishrock123 quit (Remote host closed the connection)
00:47:44  * Fishrock123 joined
01:27:35  * Fishrock123 quit (Quit: Leaving...)
02:17:49  * joyee_ quit (Remote host closed the connection)
02:18:04  * joyee joined
03:34:53  * joyee quit (Remote host closed the connection)
03:35:25  * joyee joined
03:37:51  * joyee quit (Remote host closed the connection)
03:38:27  * joyee joined
03:52:36  * joyee quit (Remote host closed the connection)
03:53:11  * joyee joined
04:23:06  * joyee_ joined
04:26:41  * joyee quit (Ping timeout: 252 seconds)
05:16:30  <Trott>Took test-requireio_securogroup-debian7-arm_pi1p-1 offline for continuous failures. rvagg joaocgreis
05:17:13  <Trott>Sample failure:
05:17:19  <Trott>https://www.irccloud.com/pastebin/7T9gBy5i/
05:17:27  <Trott>That's from https://ci.nodejs.org/job/node-test-binary-arm/11013/RUN_SUBSET=5,label=pi1-raspbian-wheezy/console
05:38:36  <rvagg>Trott you took that offline only the other day didn't you? Must have been put back online by the upgrade / restart. I'll unplug when I get back to the office.
06:46:43  * seishun joined
08:08:10  * seishun quit (Ping timeout: 264 seconds)
09:53:52  * node-gh joined
09:53:52  * node-gh part
10:02:40  <rvagg>Trott: I've brought test-requireio_securogroup-debian7-arm_pi1p-1 back online, there was a zombie `node` process in there that must have been locking the filesystem, preventing `git clean -fdx` from removing it. `kill -KILL <pid>` did the trick and it's cleaning properly again now
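A rough sketch of the recovery rvagg describes, assuming a leftover `node` process from a previous run is holding files open in the Jenkins workspace; the grep pattern and workspace path are illustrative, not taken from the machine itself:

```
# list leftover node processes from a previous test run
# (the [n] trick keeps grep from matching itself)
ps -ef | grep '[n]ode'

# force-kill the stuck process; $stuck_pid is the pid from the ps output above
kill -KILL "$stuck_pid"

# once the process is gone, the usual workspace cleanup succeeds again
# (workspace path is hypothetical)
cd /home/iojs/build/workspace && git clean -fdx
```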
10:18:40  * joyee_ quit (Remote host closed the connection)
10:19:16  * joyee joined
10:25:08  * mylesborins quit (Quit: farewell for now)
10:25:24  * joyee quit (Remote host closed the connection)
10:25:39  * mylesborins joined
10:56:31  * joyee joined
10:56:49  * joyee quit (Remote host closed the connection)
11:09:00  * joyee joined
11:14:04  * joyee quit (Ping timeout: 258 seconds)
11:17:26  <refack>rvagg: test-requireio_bengl-debian7-arm_pi1p-2 has an NFS problem, there's an undeletable file there - https://ci.nodejs.org/job/node-test-binary-arm/11024/RUN_SUBSET=0,label=pi1-raspbian-wheezy/console
11:17:57  <refack>Yesterday I moved `out` to a different path, but that apparently was only temporary
11:52:59  * node-gh joined
11:52:59  * node-gh part
11:54:43  <rvagg>refack: same problem that Rich was having above ^^, fixed in the same way: https://www.dropbox.com/s/faxrdwcqg2gcxvh/Screen%20Shot%202017-10-18%20at%2010.52.50%20pm.png?dl=0
11:55:21  <rvagg>although two `node <defunct>` instances on one machine is a bit of a concern
11:55:29  <rvagg>can't recall seeing that before
11:55:32  <rvagg>maybe I haven't looked
11:57:44  <refack>ohh, I probably missed the `[node] defuncts` signature. Was looking for the long `.../node test/...` one
11:58:02  <refack>P.S. is it safe to just `restart 0` the PIs?
11:59:40  <rvagg>refack: not so much. If you do restart one, then log back in afterward and run `sudo mount -a`. Some of them (all?) have a weird nfs problem on boot where they wait on the network for ~10 minutes trying to mount nfs, fail to do so, and end up coming back up without the mounts. If they are allowed to join back in the cluster without NFS mounts then (a) they are slower and (b) they will fill up their disks quicker, so beware!
12:00:30  <rvagg>refack: you're welcome to try and figure out the nfs problem if you're inclined, though; it's a real annoyance but I've been avoiding fixing it. Just take a machine out of the cluster and reboot it, and be patient for it to come back online
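A minimal sketch of the reboot procedure described above, assuming the NFS shares are listed in /etc/fstab so `mount -a` picks them up; the exact shares and paths are not shown in the log:

```
# after taking the Pi out of the cluster and rebooting it, log back in once
# it is reachable and remount everything by hand, since the NFS mounts often
# fail during boot while waiting for the network
sudo mount -a

# sanity-check that the NFS shares are really mounted before letting the
# machine rejoin the cluster, otherwise it runs slower and fills its disk
mount -t nfs
```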
12:00:58  <refack>Ok. I was afraid of something like that...
12:01:07  <rvagg>found 2 more `node <defunct>` processes on 2 different machines
12:01:29  <refack>I'll give it a try...
12:03:03  <rvagg>fwiw I remember having this nfs boot problem at least 20 years ago, it's not a new thing and I'm not sure I ever understood it properly then, and I have no idea why it persists today. Even Michael Dawson is experiencing it now with the backup Pi he's set up in his office.
12:04:33  <rvagg>ugh, two `[bash] <defunct>` on one of the release Pis, and they're unkillable
12:05:05  * joyee joined
12:05:37  * refack googling "ubuntu ps <defunct>"
12:06:53  <refack>https://askubuntu.com/questions/201303/what-is-a-defunct-process-and-why-doesnt-it-get-killed
12:08:32  <refack>Don't like Zombies, not in my process table, nor on TV
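For context: a `<defunct>` entry is a zombie, a process that has already exited and only has its exit status left in the process table, so signalling it directly does nothing; it disappears when its parent reaps it, or when the parent dies and init adopts it. A quick way to spot zombies and their parents (a generic sketch, not a command from the log):

```
# zombies show up with state Z and the <defunct> marker; print them together
# with their parent pid, since the parent is what actually matters
ps axo pid,ppid,stat,comm | awk 'NR == 1 || $3 ~ /^Z/'

# the zombie itself cannot be killed; restarting (or, as a last resort,
# killing) the hung parent process lets init adopt and reap it
```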
12:09:09  * node-gh joined
12:09:09  * node-gh part
12:13:12  <rvagg>I think I killed a critical jenkins process getting rid of that defunct bash ... ooops, had to restart the build it was working on (6.12.0-rc1, about the third time I've had to restart this one to get all green)
12:13:27  <rvagg>oh well, time for bed and time to stop making a mess
12:13:53  <refack>Night
12:31:35  * node-gh joined
12:31:35  * node-gh part
12:57:20  * lanceball changed nick to lance|afk
12:57:23  * lance|afk changed nick to lanceball
13:01:23  * evanlucas joined
13:27:22  * sgimeno joined
13:31:28  * chorrell joined
14:14:10  * node-gh joined
14:14:10  * node-gh part
14:19:10  * node-gh joined
14:19:10  * node-gh part
14:31:07  * node-gh joined
14:31:07  * node-gh part
14:37:17  * chorrell quit (Quit: Textual IRC Client: www.textualapp.com)
14:40:13  * chorrell joined
14:41:40  * node-gh joined
14:41:40  * node-gh part
14:44:32  * node-gh joined
14:44:32  * node-gh part
14:46:30  * node-gh joined
14:46:30  * node-gh part
15:24:22  * sgimeno quit (Remote host closed the connection)
16:04:32  <Trott>Can we get ubuntu1604-32 added as an option in the stress test? joaocgreis
16:12:53  <joaocgreis>Trott: done (not tested but should be good, let me know if not)
16:13:07  <Trott>joaocgreis: Thanks!
16:26:30  * Fishrock123 joined
16:26:41  * chorrell quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
17:00:55  * seishun joined
17:06:05  <seishun>refack: any progress with https://github.com/nodejs/build/pull/809 and https://github.com/nodejs/build/pull/797 ?
17:26:28  * chorrell joined
17:51:02  * chorrell quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
17:57:04  * chorrell joined
18:00:25  * chorrell quit (Client Quit)
18:02:20  * chorrell joined
18:10:40  * chorrell quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
18:11:16  * chorrell joined
18:31:43  * joyee quit (Remote host closed the connection)
18:41:50  * chorrell quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
18:41:54  * joyee joined
18:44:46  * chorrell joined
18:46:01  * joyee quit (Ping timeout: 240 seconds)
18:51:32  * Fishrock123 quit (Remote host closed the connection)
18:52:09  * Fishrock123 joined
18:56:47  * Fishrock123 quit (Ping timeout: 252 seconds)
19:02:32  * Fishrock123 joined
19:11:24  <Trott>What's the Makefile task (if any) for a cross-compile? Like, what generates the binaries we use on the Raspberry Pi devices in CI? rvagg
19:29:42  <seishun>I thought it builds on a raspberry pi too
19:31:59  * lanceball changed nick to lance|afk
20:04:21  * Fishrock123 quit (Remote host closed the connection)
20:04:54  <Trott>seishun: Probably (so maybe "cross-compile" isn't the right word) but I thought it would run a particular command. I'm guessing it runs `make build-ci` rather than `make build` but I'm not sure. Trying to fix test-make-doc broken-ness on Raspberry Pis in CI.
20:04:59  * Fishrock123 joined
20:06:19  <Trott>Now that I'm thinking about it, I realize I can probably find the command in the console output of the appropriate Jenkins task.
20:09:12  * Fishrock123 quit (Ping timeout: 258 seconds)
20:15:09  * Fishrock123 joined
20:21:19  * Fishrock123 quit (Remote host closed the connection)
20:40:07  * seishun quit (Ping timeout: 260 seconds)
20:44:47  * Fishrock123 joined
20:52:15  * lance|afk changed nick to lanceball
20:53:16  * Fishrock123 quit (Remote host closed the connection)
20:53:50  * Fishrock123 joined
20:58:16  * Fishrock123 quit (Ping timeout: 258 seconds)
21:04:47  * Fishrock123 joined
21:08:30  * chorrell quit (Quit: Textual IRC Client: www.textualapp.com)
21:09:03  * Fishrock123 quit (Remote host closed the connection)
21:18:21  * joyee joined
21:19:59  <Trott>I removed a test in a pull request, but it's still running on the Raspberry Pi devices. Any ideas? rvagg
21:20:34  <rvagg>Hang on. Just getting up here, will look.
21:22:29  <refack>Trott: check out http://logs.libuv.org/node-build/2017-10-17#12:23:36.674
21:22:39  * joyee quit (Ping timeout: 248 seconds)
21:24:33  <Trott>refack: Thanks. That explains why that didn't work for me. :-D I went with Plan B which is to move the test to `test/doctool` and enable `test/doctool` in `make test-ci` but not in `make test-ci-js`.
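Roughly the distinction being relied on here, assuming the doctool suite is wired into the full CI target but not into the JS-only target that the ARM binary jobs presumably run (target names are the ones quoted in the chat, everything else is an assumption):

```
# full CI target: runs the native/doctool suites as well as the JS suites
make test-ci

# JS-only CI target: what the Raspberry Pi binary jobs would run, so a test
# living under test/doctool is not executed there
make test-ci-js
```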
21:24:50  <Trott>Still puzzled why the Raspberry Pi devices are running a test that doesn't exist.
21:25:57  <refack>I've seen something strange in jobs started from `node-test-pr`: https://ci.nodejs.org/job/node-test-commit-arm-fanned/11835/parameters/
21:25:59  <rvagg>mylesborins: FYI https://ci-release.nodejs.org/job/iojs+release/2131/ got a full green for your 6.12.0-rc1 except for a smartos one (complaining about REPLACEME, not sure why it's unique on that). Had a bunch of trouble with ci-release yesterday but we seem to be back into the flow
21:26:02  * evanlucas quit (Remote host closed the connection)
21:26:04  <refack>I thought I was reading things wrong
21:26:26  <mylesborins>rvagg I thought I updated the replaceme
21:26:28  <mylesborins>sigh
21:26:44  <refack>But I think Jenkins is not passing the right branch to `node-test-commit-arm-fanned`
21:26:44  <rvagg>refack: what's strange in there?
21:26:49  <rvagg>oh
21:27:38  <refack>Which correlates with the `jenkins-multijob` update
21:28:03  <Trott>It seems like for my pull request that has two commits, it is only using the second one? Is that possible?
21:28:03  <rvagg>mylesborins: yeah, the fact that _only_ smartos14 has a problem with this is weird, might be something to look into, either git weirdness or make is doing something different there
21:28:14  <mylesborins>oh wait.. I didn't update those tags yet
21:28:24  <mylesborins>I think smartos is the only one that fails on that for rc releases
21:28:57  <rvagg>gonna have to investigate that, that's too weird
21:29:09  <rvagg>Trott: which job is this?
21:29:23  <Trott>https://ci.nodejs.org/job/node-test-binary-arm/11049/RUN_SUBSET=3,label=pi3-raspbian-jessie/console
21:29:44  <rvagg>goodness, the jenkins queue is pretty big atm
21:30:24  <Trott>Well, I'll terminate one of my runs....
21:34:47  <rvagg>acting strange, see all the grey icons on ci.nodejs.org yet the queue is huge and there's supposed to be a ton of running jobs. ci-release was doing this yesterday and I ended up killing as much as I could manually and restarting
21:35:00  <rvagg>according to the icons, there's only one job actually running
21:36:22  <rvagg>Trott: can confirm that your job is indeed weird, that test shouldn't be in there, you were right to go back to the cross-compile because that'd be where it comes from
22:09:53  * Fishrock123 joined
22:15:12  * Fishrock123 quit (Ping timeout: 260 seconds)
22:17:15  <rvagg>Trott: check this out: https://ci.nodejs.org/job/node-cross-compile/11690/label=cc-armv7/consoleFull is a child of your job @ https://ci.nodejs.org/job/node-test-pull-request/10831/ which says pr_id=16301 (your test-make-doc one), yet it's pulling "6dcc37d0ed": "src: combine loops in CopyJsStringArray()"
22:17:47  <rvagg>that's PR 16247
22:21:08  <rvagg>ah, which is probably HEAD
22:21:45  <refack>So I wasn't seeing ghosts
22:23:34  <refack>So for 11 days we only ran 70 tests on the Pis, and even those were all HEAD 🤣
22:36:07  <rvagg>mm, I don't get it at all. It's fine until it passes from node-test-pull-request to node-test-commit-arm-fanned, then suddenly it's working on a different commit. My guess is that there's an out-of-order thing happening and _jenkins_local_branch is being used by a different test in between when it should be exclusive
22:37:26  <rvagg>joaocgreis: I'm going to have to defer to you on this one, too strange but perhaps the new Jenkins is doing some out-of-order execution that breaks our assumptions. Follow https://ci.nodejs.org/job/node-test-pull-request/10831/ down to the arm-fanned job and see the HEAD commit change (I think it switches from the PR HEAD to the parent/master HEAD)
22:47:16  <joaocgreis>That reminds me of one particular issue... Let me check.
22:51:58  * Fishrock123 joined
23:00:25  * joyee joined
23:04:33  * joyee quit (Ping timeout: 248 seconds)
23:31:19  * node-gh joined
23:31:19  * node-gh part
23:31:54  * node-gh joined
23:31:54  * node-gh part
23:38:10  * Fishrock123 quit (Remote host closed the connection)
23:38:38  * Fishrock123 joined
23:46:30  <joaocgreis>Trott, refack, rvagg: Passing properties to subjobs was missing on aix and arm-fanned, so it used the default (master). Should be fixed now
23:47:19  <rvagg>joaocgreis: whoops, that sounds like my fault. Those are the tickboxes on subjobs aren't they?
23:47:40  <joaocgreis>rvagg: it was not your fault
23:48:49  <joaocgreis>it is in the details under "advanced": the properties file must be passed to subjobs. The tickboxes were good
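In other words, the fix amounts to handing the subjobs a parameters/properties file instead of letting them fall back to the default (master). A hypothetical example of what such a file might contain is below; only `_jenkins_local_branch` and the PR number 16301 come from the discussion above, the other names and values are assumptions:

```
# hypothetical key=value properties file passed from the parent job down to
# the arm-fanned/aix subjobs so they build the PR's commit, not master
_jenkins_local_branch=test-pr-16301
GIT_REMOTE_REF=refs/pull/16301/head
```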
23:49:30  <joaocgreis>so.. I expect failing tests now, since we haven't been testing for some time on those platforms
23:50:27  <joaocgreis>mylesborins: this might be particularly relevant for LTS, since aix and arm-fanned were testing master instead of whatever branch they were supposed to
23:51:38  <Trott>Glad I said something. :-D
23:51:42  <Trott>Thanks for fixing it.
23:55:31  <joaocgreis>Trott: rebuilding your job from above as a test: https://ci.nodejs.org/job/node-test-pull-request/10838/ (not sure if it's still useful to you, if it is then here it is)
23:55:49  <Trott>Heh, I just restarted it at https://ci.nodejs.org/job/node-test-commit/13287/
23:55:55  <Trott>https://ci.nodejs.org/job/node-test-pull-request/10837/
23:56:14  <Trott>Although seeing the results of two runs might not be a bad thing....
23:57:07  <joaocgreis>Trott: ok, I'll let it run. Feel free to abort mine, if CI gets crowded or something
23:57:29  <joaocgreis>just checked, aix and arm-fanned look good, so I'm done with it
23:58:12  <Trott>Awesome, thanks.