00:05:04  * jenkins-monitor quit (Remote host closed the connection)
00:05:14  * jenkins-monitor joined
00:58:20  <Trott>test-npm-install is failing all the time on Raspberry Pi now. Timing out. It seems to be taking far longer to run than it did in the past. On Pi 1 devices, it used to take about 30s. Now it's taking longer than 180s. Not sure what's up, but if anyone has any ideas or if there's already an open issue, hey, let me know.
00:58:42  <Trott>Example successful run: https://ci.nodejs.org/job/node-test-binary-arm/6039/RUN_SUBSET=4,label=pi1-raspbian-wheezy/console
00:58:50  <Trott>https://www.irccloud.com/pastebin/rDQUfaNq/
00:59:21  <Trott>Nearly all runs since then look more like https://ci.nodejs.org/job/node-test-binary-arm/6056/RUN_SUBSET=5,label=pi1-raspbian-wheezy/console
00:59:31  <Trott>https://www.irccloud.com/pastebin/AsmmGNTd/
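A hedged way to sanity-check the regression Trott describes (roughly 30s before, now past the 180s timeout) would be to time that one test by itself on a pi1 host. The paths and invocations below are assumptions about the usual node workspace layout, not the exact commands the CI job runs:

    # in the checked-out node workspace on the Pi (path assumed)
    time ./node test/parallel/test-npm-install.js
    # or via the test runner, which applies the normal per-test timeout
    time python tools/test.py parallel/test-npm-install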
01:11:52  <Trott>Hmmm, looks like I may have oversold how much of the problem is that specific test. Never mind, I think.
01:15:41  <Trott>Whatever the case, CI has been failing almost every time today. :-(
01:15:59  <Trott>Not sure if it's infra-related or unrelated, haven't had much time to look today...
01:16:49  <Trott>Raspberry Pi seems to be unresponsive, though. Like the cross-compile ends and then the task just hangs....
01:18:43  <Trott>Examples of it happening RIGHT NOW if anyone is around and wants to look: https://ci.nodejs.org/job/node-test-commit-arm-fanned/6710/ and https://ci.nodejs.org/job/node-test-commit-arm-fanned/6711/. (Maybe I just need to be super patient. I see a previous Raspberry Pi job has been running for five-and-a-half hours and is chugging along past that point.)
01:25:35  <jbergstroem>hey
01:25:37  * node-gh joined
01:25:37  * node-gh part
01:26:18  <jbergstroem>hm
01:26:37  <jbergstroem>might be internet related
01:26:41  <jbergstroem>let me check the nfs host
01:31:34  <jbergstroem>i think it's related to the clean stuff
01:31:36  <jbergstroem>those hosts look busy
01:33:08  <jbergstroem>yeah something's going on with the disk
01:43:20  <jbergstroem>Trott: it seems to be making progress, albeit slowly
01:43:25  <jbergstroem>not sure if killing it will help
01:43:30  <jbergstroem>i'll leave it for a bit longer
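Since jbergstroem suspects disk load on the NFS host, a minimal sketch of what one might run to confirm it (standard sysstat/nfs-utils tools; the split between host and Pi client is an assumption about where to look, not a record of what was actually run):

    # on the NFS host: per-device utilization and wait times, plus NFS op counters
    iostat -x 5 3
    nfsstat -s
    # on a Pi client: check whether the runs are stuck in I/O wait
    vmstat 5 3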
02:24:11  * rmg quit (Ping timeout: 240 seconds)
02:26:58  * rmg joined
03:29:06  <jbergstroem>looks done
03:32:38  * not-an-aardvark joined
04:00:21  <Trott>jbergstroem: It looks to me like /home/iojs/build is an NFS mount. Which means that the test tmp directory and the fixtures directory are also NFS mounts. So if there are NFS issues, anything creating or watching files in tests is likely to be affected. Is that broadly true? If so, the results aren't alarming. (All the affected tests seem to be writing files to the test tmp dir.)
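For what it's worth, Trott's premise (that /home/iojs/build, and with it the test tmp and fixtures directories, sits on NFS) is easy to verify directly on a Pi; a small sketch using standard tools:

    df -T /home/iojs/build               # filesystem type backing the path
    stat -f -c %T /home/iojs/build       # prints "nfs" for an NFS-backed path
    grep iojs /proc/mounts               # mount source and options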
07:36:08  * evanlucas quit (Read error: Connection reset by peer)
07:36:50  * evanlucas joined
07:47:32  * mscdex joined
07:47:59  <mscdex>is there some network or similar issue going on with the arm nodes? they're taking an extraordinarily long time to git fetch
07:48:27  <mscdex>pi1, pi2, and pi3
07:48:56  <mscdex>looks like the git operations are taking like 15-20 mins each?
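One hedged way to see where those 15-20 minute fetches spend their time is git's built-in performance tracing; the workspace path below is a guess, not the real job configuration:

    cd /home/iojs/build/workspace/node-test-binary-arm   # assumed location
    GIT_TRACE_PERFORMANCE=1 git fetch -v origin 2>&1 | tail -n 20
    # with the object store on the NFS mount, the slow steps tend to show up in
    # local object/ref access rather than in the network transfer itself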
09:04:23  * not-an-aardvark quit (Quit: Connection closed for inactivity)
09:11:05  * sgimeno quit (Ping timeout: 256 seconds)
09:25:12  * sgimeno joined
09:51:19  <jbergstroem>Trott: we're talking raspberry pi's; using usb storage is super slow. all of them are using ssd mounted over nfs
09:51:33  <jbergstroem>Trott: they all have their own ccache/storage mounted
09:52:23  <jbergstroem>Trott: we had disk issues a few months ago which is why i mentioned checking it; but it was likely tied to temperature for the host. i couldn't see any apparent disk issues when i checked yesterday
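To illustrate the layout jbergstroem describes (per-Pi storage served from an SSD-backed NFS host, with ccache living on that mount), a hypothetical sketch; the host and export names are made up:

    mount -t nfs -o rw,noatime nfs-host:/export/pi1-01 /home/iojs
    export CCACHE_DIR=/home/iojs/.ccache
    ccache -s    # cache hit/miss statistics, useful for spotting a cold or broken cache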
09:58:40  * node-gh joined
09:58:40  * node-gh part
10:00:48  * node-gh joined
10:00:48  * node-gh part
10:01:04  <jbergstroem>mhdawson: is benchmarking sync/rsync working as intended now?
11:25:07  * mylesborins quit (Quit: farewell for now)
11:25:37  * mylesborins joined
14:02:30  * lance|afk changed nick to lanceball
14:07:15  * targos quit (Quit: Leaving)
14:22:07  * node-gh joined
14:22:07  * node-gh part
14:24:12  * node-gh joined
14:24:13  * node-gh part
15:09:42  * node-gh joined
15:09:42  * node-gh part
15:13:16  * node-gh joined
15:13:16  * node-gh part
15:22:44  * node-gh joined
15:22:44  * node-gh part
15:44:43  * node-gh joined
15:44:43  * node-gh part
15:47:50  * node-gh joined
15:47:50  * node-gh part
16:13:49  * mscdex part ("Leaving")
16:24:51  * node-gh joined
16:24:51  * node-gh part
17:04:51  * node-gh joined
17:04:51  * node-gh part
17:05:11  * node-gh joined
17:05:11  * node-gh part
17:56:48  * node-gh joined
17:56:48  * node-gh part
18:18:39  * node-gh joined
18:18:39  * node-gh part
18:20:21  * node-gh joined
18:20:21  * node-gh part
18:41:28  * node-gh joined
18:41:28  * node-gh part
18:56:41  * node-gh joined
18:56:41  * node-gh part
19:01:11  * node-gh joined
19:01:11  * node-gh part
19:12:17  * node-gh joined
19:12:17  * node-gh part
19:14:05  * node-gh joined
19:14:05  * node-gh part
19:18:12  * node-gh joined
19:18:12  * node-gh part
19:48:41  * node-gh joined
19:48:41  * node-gh part
19:50:17  * node-gh joined
19:50:17  * node-gh part
19:53:24  * node-gh joined
19:53:24  * node-gh part
20:13:10  * lanceball changed nick to lance|afk
21:13:49  <phillipj>any pointers as to how docs are generated and promoted to nodejs.org?
21:14:04  <phillipj>I assume it's build related somehow
21:15:02  <phillipj>just landed a commit in core adding GA tracking to docs, but that needs an env variable set when generating the docs
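A hedged sketch of what phillipj is asking for, run on whichever machine generates the published docs; the variable name is assumed to be the Makefile's DOCS_ANALYTICS knob that his commit appears to introduce, and the tracking id is a placeholder:

    # variable name assumed from the commit; the GA id here is a placeholder
    make doc-only DOCS_ANALYTICS=UA-00000000-1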
21:33:02  * node-gh joined
21:33:02  * node-gh part
21:42:37  * node-gh joined
21:42:37  * node-gh part
22:17:03  * node-gh joined
22:17:03  * node-gh part
22:19:51  * node-gh joined
22:19:51  * node-gh part
22:26:52  * node-gh joined
22:26:52  * node-gh part
23:02:51  * node-gh joined
23:02:52  * node-gh part
23:07:13  * node-gh joined
23:07:14  * node-gh part
23:28:12  * node-gh joined
23:28:12  * node-gh part
23:52:17  * node-gh joined
23:52:17  * node-gh part