00:00:03  * jenkins-monitor1 quit (Remote host closed the connection)
00:00:14  * jenkins-monitor joined
00:48:34  <jbergstroem>Trott: no oom while compiling, no segfaults. the gmake process shows as "defunct"
00:49:29  <Trott>OK, so next time, before terminating the task, I should log on and look for evidence of those possibilities?
00:49:49  <Trott>("terminating the task" meaning stopping the job in the Jenkins interface)
00:50:21  <jbergstroem>just poke around and see if you find anything out of the ordinary
01:54:45  <Trott>Thanks. Will do! Considering that the above job was launched by node-daily-master, a core file (if one was generated) might still be lying around....
02:12:48  <Trott>Oh, wait, no it's not, that's not the way it works....
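A minimal sketch of what "poking around" on a stalled worker could look like, assuming a FreeBSD host and the iojs build user (the /home/iojs path is an assumption based on the workspace layout pasted later in the log):

    # check the kernel log for out-of-memory kills; FreeBSD logs "killed: out of swap space"
    dmesg | grep -i swap
    # look for recent core dumps left behind by a crashed compile or test run
    find /home/iojs/build -name '*.core' -mtime -1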
04:28:28  <Trott>jbergstroem: test-digitalocean-freebsd10-x64-1 had the Jenkins stalling/not-communicating thing, or at least what looked like that.
04:28:48  <Trott>I logged on and there's a bunch of gmake processes from various times just kind of hanging around. Maybe deadlocked somehow?
04:29:01  <Trott>https://www.irccloud.com/pastebin/8IFQozj0/
04:33:26  * node-gh joined
04:33:26  * node-gh part
04:34:37  <Trott>I'm going to kill those processes.
04:36:55  * node-gh joined
04:36:55  * node-gh part
04:39:21  <Trott>OK, more information but moving it to https://github.com/nodejs/build/issues/525 instead of IRC
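A sketch of the cleanup described here, again assuming a FreeBSD worker and the iojs build user; the PID is a placeholder to be read off a listing like the pastebin above:

    # list the stray gmake processes with their parents, start times and state
    ps -U iojs -o pid,ppid,lstart,state,command | grep gmake
    # terminate a specific hung process by PID; fall back to SIGKILL only if SIGTERM is ignored
    kill <pid>
    kill -9 <pid>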
06:24:08  * not-an-aardvark quit (Quit: Connection closed for inactivity)
11:51:49  * thealphanerd quit (Quit: farewell for now)
11:52:20  * thealphanerd joined
11:57:33  * node-gh joined
11:57:33  * node-gh part
12:34:59  * node-gh joined
12:34:59  * node-gh part
12:53:13  * node-gh joined
12:53:13  * node-gh part
13:10:17  <jbergstroem>joaocgreis: hey, the rpi clean job has been stalling a lot lately. any ideas? https://ci.nodejs.org/job/git-rpi-clean/56/
13:10:46  <jbergstroem>ah, i think i found it: Configuration git-rpi-clean » test-nodesource_svincent-debian7-arm_pi2-3 is still in the queue: There are no nodes with the label ‘test-nodesource_svincent-debian7-arm_pi2-3’
13:13:30  * node-gh joined
13:13:30  * node-gh part
13:19:12  * node-gh joined
13:19:12  * node-gh part
13:20:06  <jbergstroem>joaocgreis: i can't figure out how you populate the list of workers/labels :\
13:36:55  <joaocgreis>jbergstroem: Just added all that were online at the time. If any of the slaves is no longer online, the job will hang, but that should not be a problem because it doesn't block anything
13:37:14  <jbergstroem>joaocgreis: i've cancelled it a fair number of times over the last week
13:37:57  <joaocgreis>jbergstroem: you can just remove the slave when that's the cause, or ping me
13:38:12  <jbergstroem>joaocgreis: how does the list of workers get populated?
13:38:20  <jbergstroem>joaocgreis: i can't find it through the job config
13:44:20  <joaocgreis>found the problem, fixing
13:47:33  <joaocgreis>jbergstroem: they're all under individual nodes, but disappeared because of the nodesource->requireio rename. I just had to change something there and all the old ones were automatically removed
13:47:55  <joaocgreis>checking if it's all good now with a test run, should be back to green
13:48:23  <jbergstroem>ah, that explains it.
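A sketch of one way to cross-check which labels still have nodes behind them, using the standard Jenkins remote API (the ci.nodejs.org URL and the label come from the messages above; authentication is omitted):

    # dump every node with its offline flag and assigned labels
    curl -s 'https://ci.nodejs.org/computer/api/json?tree=computer[displayName,offline,assignedLabels[name]]' | python -m json.tool
    # quick presence check for the label from the stalled job; 0 means nothing carries it any more
    curl -s 'https://ci.nodejs.org/computer/api/json?tree=computer[assignedLabels[name]]' | grep -c 'test-nodesource_svincent-debian7-arm_pi2-3'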
13:55:21  <jbergstroem>Trott: these tests still seem to linger on all the smartos machines: /home/iojs/build/workspace/node-stress-single-test/nodes/smartos14-32/out/Release/node --call-graph-size=10 /home/iojs/tmp/tmp.0/tick-processor.log /home/iojs/tmp/tmp.0/tick-processor.log
13:55:31  <jbergstroem>or perhaps thealphanerd ^
13:55:44  <jbergstroem>i've cleaned all machines
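A sketch of the cleanup on one of the smartos hosts, using illumos pgrep/pkill (the pattern comes from the command line pasted above; escalating to SIGKILL is a last resort):

    # show any stray node processes still running the tick-processor stress job
    pgrep -fl tick-processor
    # ask them to exit, then force-kill anything that ignores SIGTERM
    pkill -f tick-processor
    pkill -9 -f tick-processor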
14:33:45  <Trott>That test itself is mostly the work of mattloring and (more recently) indutny. It's a tricky thing. I've contemplated suggesting that we move the three tick-processor tests to their own test directory, the way we did with the inspector tests. They then wouldn't be part of regular CI runs, though.
14:34:01  <Trott>I'll open an issue to discuss that.
14:35:52  <Trott>Actually, I'm loath to do that because I'm afraid it will hide issues with the tick-processor stuff, and it doesn't actually solve the problem anyway.
14:36:47  <Trott>But yeah, that was the issue when one of the freebsd nodes had a load of, like, 18, and all the timer-dependent tests were failing.
15:00:03  * jenkins-monitor quit (Remote host closed the connection)
15:00:19  * jenkins-monitor joined
15:41:35  * Trott quit (Ping timeout: 250 seconds)
15:42:28  * rvagg quit (Ping timeout: 250 seconds)
15:42:29  * mhdawson quit (Ping timeout: 250 seconds)
15:42:29  * starefossen quit (Ping timeout: 250 seconds)
15:42:30  * phillipj quit (Ping timeout: 250 seconds)
15:42:30  * bzoz quit (Ping timeout: 250 seconds)
15:42:31  * orangemocha quit (Ping timeout: 250 seconds)
15:42:53  * mattloring quit (Ping timeout: 250 seconds)
15:42:53  * joaocgreis quit (Ping timeout: 250 seconds)
15:42:53  * zkat quit (Ping timeout: 250 seconds)
15:44:24  * zkat joined
15:44:55  * orangemocha joined
15:45:15  * phillipj joined
15:45:28  * joaocgreis joined
15:45:54  * rvagg joined
15:46:05  * Trott joined
15:46:41  * mattloring joined
15:47:59  * mhdawson joined
15:48:04  * starefossen joined
15:49:12  * bzoz joined
15:58:29  * rvagg quit (Ping timeout: 260 seconds)
16:02:33  * rvagg joined
16:06:25  * orangemocha quit (Ping timeout: 260 seconds)
16:07:39  * orangemocha joined
16:16:41  * Trott quit (Ping timeout: 260 seconds)
16:20:46  * Trott joined
16:22:17  * phillipj quit (Ping timeout: 260 seconds)
16:25:05  * phillipj joined
17:26:07  * not-an-aardvark joined
17:26:19  * node-gh joined
17:26:19  * node-gh part
22:24:32  * node-gh joined
22:24:32  * node-gh part
22:26:59  * node-gh joined
22:26:59  * node-gh part
22:27:19  * node-gh joined
22:27:19  * node-gh part
22:45:32  * node-gh joined
22:45:32  * node-gh part