00:53:30  * Fishrock123 quit (Remote host closed the connection)
01:00:15  * Fishrock123 joined
01:15:11  * jenkins-monitor quit (Remote host closed the connection)
01:15:12  * jenkins-monitor1 quit (Remote host closed the connection)
01:15:21  * jenkins-monitor joined
01:15:21  * jenkins-monitor1 joined
01:20:03  * jenkins-monitor1 quit (Remote host closed the connection)
01:20:03  * jenkins-monitor quit (Read error: Connection reset by peer)
01:20:12  * jenkins-monitor1 joined
01:20:12  * jenkins-monitor joined
02:28:57  <rvagg>I've taken care of iojs+release fwiw, ci-release was completely clogged with build jobs, very strange, unsure yet how it's happened
02:29:27  <rvagg>also, sourcing Pi v1 B+ boards is becoming difficult; I still have 4 on back-order with RS, and when I tried to put another one in an order yesterday they rejected it
02:29:55  <rvagg>I found a local company that has some in stock still, but I've gone ahead and emailed the Raspberry Pi Foundation to see if we can come to some kind of mutually beneficial relationship
02:45:56  * Fishrock123 quit (Quit: Leaving...)
04:56:22  <jbergstroem>rvagg: i think Fishrock123 did that (see comment above)
05:24:04  <rvagg>jbergstroem: I can't see how that could create so many jobs, there was over 100 of them
05:24:44  <jbergstroem>sorry, didn't imply any blame; but if you want to spend time debugging you might find some info in his first job
06:20:08  * node-gh joined
06:20:08  * node-gh part
09:05:39  * targos joined
10:44:53  * thealphanerd quit (Quit: farewell for now)
10:45:24  * thealphanerd joined
12:04:30  * rmg quit (Remote host closed the connection)
13:46:06  <ofrobots>I'm seeing an internal compiler error on one of the arm bots: https://ci.nodejs.org/job/node-test-commit-arm/3466/nodes=armv7-ubuntu1404/console
13:46:35  <ofrobots>any ideas?
13:47:22  <ofrobots>I suspect it might be because of 'make -j 8' and the machine running out of memory
13:47:24  <rvagg>ofrobots: has any of the code at that point changed or is it failing on code that it compiled fine before?
13:47:49  <ofrobots>rvagg: this is failing to build v8_inspector dependency, so this would be new code
13:48:22  <ofrobots>I guess next step might be for me to debug on the machine to test my hypothesis
13:49:47  <rvagg>oh, ofrobots: you're probably right, $JOBS is missing on that machine; it's got 8 cores, but I've restricted it to 4 now
13:49:53  <rvagg>give it another go
13:50:09  <ofrobots>thanks!
13:50:17  <rvagg>explains why I've had a bit of trouble with that machine lately actually ..
14:02:01  <rvagg>ofrobots: I'm still seeing a `-j 8` on there I think, not sure why, ping here if it fails again .. I'm investigating `JOBS`
14:24:55  <ofrobots>rvagg: seems to have worked this time! thanks.
14:41:59  <ofrobots>So many 504s! Jenkins!!!!
16:01:25  * rmg joined
16:45:31  * bret_ quit
16:46:06  * bret_ joined
16:46:23  * bret_ changed nick to bret
18:44:38  * Fishrock123 joined
18:57:52  <jbergstroem>at least this time jenkins didn't run out of memory
18:58:01  <jbergstroem>but i'm getting seriously concerned about 28GB not being enough
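Whether the 28GB here refers to the machine's RAM or the Jenkins JVM heap isn't stated; if it is the heap, a minimal sketch of where such a limit typically lives on a Debian-style install follows. The path, variable, and sizes are assumptions, not this host's actual configuration.

```sh
# Sketch of /etc/default/jenkins on a Debian-style install (assumed, not the real host config).
# JAVA_ARGS is passed to the Jenkins JVM by the init script.
JAVA_ARGS="-Xms4g -Xmx28g"   # heap capped at 28 GB; raise -Xmx if the master keeps running out
```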
19:10:22  <ofrobots>rvagg: getting more ICEs on the armv7-ubuntu1404 bot: https://ci.nodejs.org/job/node-test-commit-arm/3480/nodes=armv7-ubuntu1404/console. This code did compile fine on a previous build, so it looks like a memory issue. The job still runs make with `-j 8`.
19:10:37  <jbergstroem>ofrobots: let me check the jenkins build stuff
19:11:12  <jbergstroem>we don't win much by doing -j 8 seeing how we use ccache
19:11:54  <jbergstroem>yeah, it's using getconf _NPROCESSORS_ONLN
19:12:15  <jbergstroem>ofrobots: using -j2 now
19:12:53  <ofrobots>jbergstroem: thanks! I will provide feedback after the next build if it still fails.
19:12:58  <jbergstroem>ofrobots: cool ill be around
19:15:39  <jbergstroem>yeah, those errors are OOMs
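A minimal sketch of the kind of memory-aware job capping discussed above, assuming roughly 1 GB of RAM per parallel compile job; the script and the heuristic are illustrative, not the actual Jenkins job configuration:

```sh
#!/bin/sh
# Sketch: pick a `make -j` value that respects both core count and installed
# memory, assuming ~1 GB per parallel compile job (an illustrative heuristic).

CORES=$(getconf _NPROCESSORS_ONLN)

# MemTotal is reported in kB on Linux; fall back to the core count if
# /proc/meminfo is not available (e.g. on non-Linux workers).
MEM_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo 2>/dev/null)
if [ -n "$MEM_KB" ]; then
  MEM_JOBS=$((MEM_KB / 1048576))        # ~1 GB of RAM per job
  [ "$MEM_JOBS" -lt 1 ] && MEM_JOBS=1
else
  MEM_JOBS=$CORES
fi

# Use the smaller of the two limits, so an 8-core board with 2 GB gets -j2.
JOBS=$CORES
[ "$MEM_JOBS" -lt "$JOBS" ] && JOBS=$MEM_JOBS

make -j "$JOBS"
```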
19:57:39  * node-gh joined
19:57:39  * node-gh part
21:18:08  * Fishrock123 quit (Remote host closed the connection)
21:28:54  * Fishrock123 joined
21:31:10  * Fishrock123 quit (Remote host closed the connection)
21:40:26  <ofrobots>What's the verdict when 'make cpplint' conflicts with the CI?
21:40:50  <ofrobots>VS2013 doesn't like snprintf. 'make cpplint' asserts that one *must* use snprintf
21:42:14  * Fishrock123 joined
21:42:43  * Fishrock123 quit (Remote host closed the connection)
21:53:48  <Trott>Looks like a stray process hogging common.PORT on freebsd on CI. Can someone (jbergstroem?) kill the process or give the machine a reboot or whatever the standard procedure is? https://ci.nodejs.org/job/node-test-commit-freebsd/2675/nodes=freebsd10-64/console https://ci.nodejs.org/job/node-test-commit-freebsd/2674/nodes=freebsd10-64/console
21:53:48  <Trott>https://ci.nodejs.org/job/node-test-commit-freebsd/2673/nodes=freebsd10-64/console
21:53:58  <jbergstroem>which one?
21:54:36  * Fishrock123 joined
21:55:13  <jbergstroem>-1
21:56:04  <jbergstroem>done
22:07:42  <Trott>thx
22:23:15  <Trott>jbergstroem: What I said about the FreeBSD box? Seems to be happening on test-joyent-smartos14-x64-1 as well. (This has *got* to be some test doing Bad Things.) https://ci.nodejs.org/job/node-test-commit-smartos/2693/nodes=smartos14-64/consoleFull https://ci.nodejs.org/job/node-test-commit-smartos/2691/nodes=smartos14-64/consoleFull
22:23:32  <Trott>Any chance you or someone else can give that host a nudge too?
22:34:32  <Trott>And, while you're at it, maybe test-softlayer-centos5-x64-2 too? https://ci.nodejs.org/job/node-test-commit-linux/3581/nodes=centos5-64/consoleFull
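A rough sketch of the kind of manual cleanup being requested: find whatever is still bound to the test port and kill it. The port value is a placeholder for the test suite's common.PORT, and the commands assume sockstat on FreeBSD with lsof as a fallback where it is installed:

```sh
#!/bin/sh
# Sketch: find and kill a stray process still listening on a test port.
# PORT is a placeholder for the test suite's common.PORT value.
PORT=12346

# FreeBSD: sockstat prints USER COMMAND PID FD PROTO LOCAL FOREIGN,
# so the PID is column 3 and the local address is column 6.
PID=$(sockstat -4 -l 2>/dev/null | awk -v p=":$PORT" '$6 ~ p"$" {print $3}' | head -n1)

# Fallback for hosts that have lsof instead of sockstat.
[ -z "$PID" ] && PID=$(lsof -t -i TCP:"$PORT" -s TCP:LISTEN 2>/dev/null | head -n1)

if [ -n "$PID" ]; then
  echo "killing PID $PID listening on port $PORT"
  kill "$PID" || kill -9 "$PID"
else
  echo "nothing found listening on port $PORT"
fi
```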
22:42:02  * Fishrock123 quit (Quit: Leaving...)