07:20:31  * sxa quit (Ping timeout: 258 seconds)
07:25:45  * sxa joined
07:36:18  * node-gh joined
07:36:18  * node-gh part
11:14:19  * not-an-aardvark quit (Quit: Connection closed for inactivity)
11:24:34  <phillipj>what's the usual fix when Jenkins slaves have permgen space issues? increase JVM args?
11:24:37  <phillipj>https://ci.nodejs.org/job/node-test-commit-plinux/6073/nodes=ppcbe-ubuntu1404/console
11:25:09  * thealphanerd quit (Quit: farewell for now)
11:25:40  * thealphanerd joined
11:28:27  <phillipj>there have been several build failures on that slave today because of permgen :/
12:28:46  * node-gh joined
12:28:47  * node-gh part
12:39:30  <jbergstroem>phillipj: thinking it might be on the client end
12:39:48  <jbergstroem>phillipj: did you log in to the machine
12:45:42  <phillipj>jbergstroem: yupp
12:46:41  <phillipj>the machine seemed to be okay to me
12:46:50  <phillipj>didn't restart anything
13:03:14  <jbergstroem>phillipj: no lingering processes?
13:03:27  <jbergstroem>phillipj: perhaps try increasing the memory limit on the java process in the init script
13:03:35  <jbergstroem>128 seems to do us well but some OSes might need more
13:12:19  <phillipj>jbergstroem: processes look alright, one single java process
13:14:15  <phillipj>jbergstroem: it's already running with a 192mb limit, "someone" has already tried that trick I guess
13:14:43  <phillipj>should I just restart the jenkins service then?
13:21:12  <jbergstroem>try 256
13:21:21  <jbergstroem>and restart when idle?
13:33:58  <phillipj>okay, done
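For reference, "increasing the memory limit in the init script" on a Java 7 agent would mean editing the JVM flags the agent is launched with; on Java 7 the PermGen cap is separate from the heap cap. A minimal sketch (the jar path and exact flag set are assumptions, not taken from this log):

```shell
# Hypothetical Jenkins agent init-script fragment for Java 7.
# -Xmx caps the heap; -XX:MaxPermSize caps PermGen, which is the
# region that was actually filling up in the failures above.
JAVA_OPTS="-Xmx256m -XX:MaxPermSize=256m"
java $JAVA_OPTS -jar /home/iojs/slave.jar  # agent jar path is an assumption
```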
13:43:15  * node-gh joined
13:43:15  * node-gh part
14:09:51  * evanlucas joined
15:02:41  <jbergstroem>what java version is it?
15:02:55  <jbergstroem>perhaps we can force a gc periodically or something
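One hedged way to force a periodic GC, assuming the full JDK (not just a JRE) is installed on the agent: `jcmd` ships with JDK 7 and can ask a running JVM for a full collection, which is also when classes in PermGen become eligible for unloading. The process pattern below is an assumption about these hosts:

```shell
# Hypothetical periodic-GC sketch: find the agent JVM and request a full GC.
# Could be run from cron, e.g. hourly. 'slave.jar' as the process pattern
# is an assumption, not confirmed by this log.
pid=$(pgrep -f slave.jar)
jcmd "$pid" GC.run
```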
16:27:56  * not-an-aardvark joined
17:51:37  * not-an-aardvark quit (Ping timeout: 240 seconds)
17:51:37  * mattloring quit (Ping timeout: 240 seconds)
17:51:53  * italoacasas quit (Ping timeout: 260 seconds)
17:52:14  * mattloring joined
17:52:37  * ofrobots quit (Ping timeout: 258 seconds)
17:52:59  * othiym23 quit (Ping timeout: 245 seconds)
17:53:01  * orangemocha quit (Ping timeout: 258 seconds)
17:53:12  * mhdawson quit (Ping timeout: 246 seconds)
17:53:23  * Trott quit (Ping timeout: 258 seconds)
17:55:19  * not-an-aardvark joined
17:56:57  * mhdawson joined
18:07:07  * ofrobots joined
18:08:00  * italoacasas joined
18:08:03  * Trott joined
18:09:58  * orangemocha_ joined
19:27:48  <phillipj>jbergstroem: java7
19:57:55  <Trott>Stalled test processes on test-osuosl-aix61-ppc64_be-1. Just terminated them. Should be back to normal now.
20:01:19  <Trott>Same on test-digitalocean-freebsd11-x64-2 and test-digitalocean-freebsd10-x64-1
20:01:36  <Trott>Must have been one bad commit somewhere that caused some cluster test to not exit and hog ports.
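A sweep like the one Trott describes could be scripted rather than done by hand. A minimal sketch, assuming the runaway runners show up in `ps` matching a `node .*test` pattern and that anything older than an hour is stuck (both the pattern and the threshold are guesses, not from this log; `etimes` support also varies by ps implementation):

```shell
# is_stalled AGE_SECONDS: succeeds if a process has been running longer
# than the one-hour threshold assumed here.
is_stalled() {
    [ "$1" -gt 3600 ]
}

# Terminate node test runners older than the threshold. 'node .*test' is
# an assumption about how these runners appear in the process table.
sweep_stalled_tests() {
    pgrep -f 'node .*test' | while read -r pid; do
        # etimes = elapsed seconds since the process started (procps keyword)
        age=$(ps -o etimes= -p "$pid")
        if is_stalled "$age"; then
            kill "$pid"
        fi
    done
}
```

Invoking `sweep_stalled_tests` manually (or from cron) would free the ports the stuck cluster tests were hogging.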
20:04:51  * othiym23joined
20:05:01  <jbergstroem>perhaps the things you fixed in 7.x earlier but haven't been ported to 6.x?