00:20:23  * Fishrock123 quit (Remote host closed the connection)
00:27:16  <rvagg>ofrobots: you need to dig down into the bot itself: https://ci.nodejs.org/job/node-test-commit-freebsd/2640/nodes=freebsd10-64/console
00:27:50  <rvagg>for ones like this where there is only one machine, it's listed as "default" down the bottom of the page, i.e. https://ci.nodejs.org/job/node-test-commit-freebsd/2640
00:27:58  <rvagg>go to that and then poke at the console
00:49:17  <ofrobots>thanks
00:54:46  * Fishrock123 joined
00:55:42  * Fishrock123 quit (Remote host closed the connection)
00:56:23  <rvagg>4 new pi3's coming online soon
01:02:36  * Fishrock123 joined
01:04:26  * Fishrock123 quit (Remote host closed the connection)
01:12:37  * Fishrock123 joined
01:12:48  * Fishrock123 quit (Remote host closed the connection)
01:22:57  * Fishrock123 joined
01:24:51  * Fishrock123 quit (Remote host closed the connection)
01:29:33  <rvagg>actually dead
01:35:43  * Fishrock123 joined
01:36:56  * Fishrock123 quit (Remote host closed the connection)
01:47:43  * Fishrock123 joined
01:49:45  * Fishrock123 quit (Remote host closed the connection)
01:55:42  <rvagg>💥 4 new pi3's, total of 10, 1 more pending
01:55:58  <rvagg>(physically broken microsd card waiting for replacement)
01:56:19  * Fishrock123 joined
01:58:24  * Fishrock123 quit (Remote host closed the connection)
02:04:59  * Fishrock123 joined
02:07:26  * Fishrock123 quit (Remote host closed the connection)
02:16:39  * Fishrock123 joined
02:24:08  * Fishrock123 quit (Remote host closed the connection)
03:56:20  <jbergstroem>rvagg: sweet!
05:15:04  <rvagg>totally unrelated to my last comment ... if you force a Pi board into a case without checking to see if a MicroSD is in place, you'll snap it off and it won't work anymore
05:15:11  <rvagg>just FYI
05:17:17  <Trott>Any chance we can get the ppcbe-ubuntu1404 issue resolved somehow? /cc michael___ Looks like it's been failing for the last nine hours or so. Latest is https://ci.nodejs.org/job/node-test-commit-plinux/2626/nodes=ppcbe-ubuntu1404/console
05:17:47  <rvagg>`Caused by: java.lang.OutOfMemoryError: PermGen space`
05:17:55  <rvagg>this really triggers my Java PTSD
05:18:17  <rvagg>-Xmx256m -Xms256m or something like that
05:18:23  <rvagg>I'll see if I can poke around
05:18:46  <Trott>Sorry/not-sorry/thanks/sorry.
05:19:18  <Trott>Uh, when you're done with that, freebsd seems to be doing only slightly better: https://ci.nodejs.org/job/node-test-commit-freebsd/
05:20:06  <Trott>Looks like maybe freebsd just has a bunch of node processes that need to be killed? https://ci.nodejs.org/job/node-test-commit-freebsd/2643/nodes=freebsd10-64/consoleFull
05:20:45  <rvagg>will look at that first, I think it'll be easier than trying to remember how to get in to osuosl
05:23:05  <Trott>And if you need additional procrastinate-on-ppc material, Raspberry Pi 1 builds look like they're still having fits. https://ci.nodejs.org/job/node-test-binary-arm/ or for a specific example: https://ci.nodejs.org/job/node-test-binary-arm/RUN_SUBSET=1,label=pi1-raspbian-wheezy/2278/console
05:24:03  <rvagg>Trott: I did a restart of all of the java processes on them all a few hours ago, is this recent?
05:24:11  <Trott>No disrespect to anyone, but the next time jbergstroem goes traveling, I might choose to take a Node.js hiatus myself. (looks at sky, clenches fist, "JENKINSSSSSSS")
05:24:20  <Trott>That last one is from a few minutes ago.
05:24:22  <rvagg>freebsd has been nuked, lots of node processes hanging on both
05:24:55  <rvagg>we need some more system nerds to help out with all of this, if you know of anyone with passion for *nix etc. then send them our way
05:24:58  <Trott>I'm trying to pester you (and whoever else is around) about it because I want yorkie's first CI run to go smoothly. :-)
05:25:25  <rvagg>whoa, that rpi error is an odd one
05:25:35  <jbergstroem>hi ho
05:25:38  <jbergstroem>we have stalling processes now? great!
05:26:01  <rvagg>I tend to talk about these machines (privately) by the name of the person that donated them, and ceej has been giving me a ton of grief in the last few months, bengl too
05:26:20  <rvagg>tho bengl-2 got a new microsd yesterday so should be very happy
05:27:08  <Trott>I have <mumble mumble> years of Unix systems administration experience in my past, but it was a long time ago and I wouldn't exactly say I have a passion for it. But uh, if someone is willing to mentor me, I'm willing to dive in. But if you're looking for what a recruiter might call a "seasoned pro", I'm not your guy.
05:27:10  <rvagg>yeah, ceej is messed up, I need to take it offline and reprovision that whole thing, too much drama on that machine (this is pi1p-7)
05:27:35  <Trott>I will be sure never to donate hardware. "Man, trott is KILLING me today."
05:27:43  <rvagg>Trott: I think I'd rather you be spending more of your time cleaning up code than cleaning up servers since you're doing a good job there
05:28:22  <Trott>I think I'd prefer that too, but (continuing with recruiter-speak) I'm a TEAM PLAYER willing to DO WHAT IT TAKES.
05:28:25  <rvagg>heh, it's ceej and bengl that I moan the most about, crossing my fingers on the latter, former still keeps on causing me grief
05:28:28  <bengl>all i saw right there at first was "ceej has been giving me a ton of grief in the last few months, bengl too" and i was confused, haha
05:28:43  <rvagg>bengl: heh, I saw you in here and knew that'd catch you
05:28:59  <rvagg>bengl: but rest assured, you have a brand new MicroSD so you're all good now .. I hope
05:29:00  <Trott>No troll like an rvagg troll.
05:31:16  <rvagg>taken that pi offline, let me know if you come across another pi bork, I'm hoping it's just that machine now
05:33:20  <jbergstroem>btw i've learnt my lesson will never sleep or go offline again
05:33:23  <jbergstroem>sorry :'(
05:33:54  <Trott>It's awfully rude and selfish of you.
05:35:08  <jbergstroem>seems like rod's done a great job though
05:36:40  <Trott>Hooray for Rod! Great job *and* great bengl-trolling.
05:38:03  <rvagg>jbergstroem, michael___: I dunno what the reasoning behind -Xmx=128m on the osuosl ppcbe machine was but I've replaced it with -XX:MaxPermSize=512m -Xms256m -Xmx512m, I don't see a reason to be stingy on there and hopefully I'm not missing context
05:38:09  <rvagg>Trott: restarted ppcbe
05:38:15  <Trott>rvagg Are you sure you took the right one offline? p-7-ceejbot still seems to be blowing up.
05:38:20  <Trott>https://ci.nodejs.org/job/node-test-binary-arm/2279/RUN_SUBSET=1,label=pi1-raspbian-wheezy/
05:38:25  <jbergstroem>rvagg: because we have 2G ram and gcc likes ~1.5g every now and then
05:38:49  <jbergstroem>rvagg: and i hadn't seen any issues wrt jenkins slave and 128m up to here i guess
05:38:54  <jbergstroem>rvagg: was that why?
05:38:57  <Trott>(Or maybe that's a second one that's blowing up in addition to whatever one you got rid of already?)
05:39:02  <rvagg>jbergstroem: looks like 4G on this one
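[editor's note] The fix rvagg describes above amounts to raising the slave JVM's memory limits. A sketch of what the launch-line change might look like (the `java -jar slave.jar` wrapper shown is an assumption; only the flags themselves are quoted in the log, and note `-XX:MaxPermSize` only exists through Java 7 -- Java 8 replaced PermGen with Metaspace):

```shell
# Before: the agent capped at 128 MB heap, and PermGen eventually overflowed
#   java -Xmx128m -jar slave.jar
# After: explicit PermGen cap plus a larger heap, per the flags in the log
java -XX:MaxPermSize=512m -Xms256m -Xmx512m -jar slave.jar
```

With 4 GB of RAM on the box and gcc peaking around 1.5 GB, a 512 MB ceiling for the agent still leaves comfortable headroom.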
05:39:32  <Trott>freebsd looks good now too
05:39:47  <jbergstroem>rvagg: which test was hanging on fbsd?
05:42:06  <rvagg>Trott: jenkins still thinks it's online, but it's sitting in front of me with nothing plugged in to it https://ci.nodejs.org/computer/node-nodesource-raspbian-wheezy-pi1p-7-ceejbot/
05:42:26  <rvagg>manually disconnecting
05:42:31  <rvagg>stupid piece of java
05:42:45  <rvagg>jbergstroem: whatever runs /usr/home/iojs/build/workspace/node-test-commit-freebsd/nodes/freebsd10-64/test/fixtures/clustered-server/app.js, there was a ton of those
05:43:50  <rvagg>like, a TON of them
05:43:56  <rvagg>literally 1000kg of them
05:44:13  <rvagg>second machine had ~10 of /usr/home/iojs/build/workspace/node-test-commit-freebsd/nodes/freebsd10-64/test/parallel/test-cluster-disconnect-handles.js hanging
05:44:40  <jbergstroem>rvagg: should probs file issues on nodejs/node
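[editor's note] The manual "nuke" rvagg performed can be sketched as a couple of shell commands; the workspace path is the one quoted above, and matching on it (rather than on `node` generally) avoids killing unrelated processes:

```shell
# Workspace whose test runs left processes behind (path from the log above)
WORKSPACE=/usr/home/iojs/build/workspace/node-test-commit-freebsd

# List anything still running out of that workspace
pgrep -fl "$WORKSPACE" || echo "no stray processes"

# Ask politely first, then force-kill whatever is truly wedged
pkill -f "$WORKSPACE" || true
sleep 2
pkill -9 -f "$WORKSPACE" || true
```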
06:00:15  <Trott>We're still running CI tests sequentially, even in the parallel directory, right?
06:09:45  <jbergstroem>no
06:09:47  <jbergstroem>we're improving
06:09:53  <jbergstroem>most slaves have JOBS=$n
06:25:48  <Trott>Any chance the value of n for freebsd is 10 or greater?
06:27:25  <jbergstroem>yes it's the number of cores
06:27:26  <jbergstroem>=2
06:28:29  <jbergstroem>most of the farm have JOBS=$cores
06:28:35  <jbergstroem>i haven't done windows or arm
06:29:23  <Trott>Wait, is it 2 or greater than 10?
06:29:36  <Trott>(on freebsd)
06:30:29  <Trott>Oh, wait, it's 2.
06:30:35  <jbergstroem>it is the same as the amount of cores and for freebsd that would be 2.
06:30:37  <Trott>It's just whatever gets passed to -j
06:30:41  <jbergstroem>yes
06:30:43  <Trott>OK, got it, thanks.
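[editor's note] The `JOBS=$cores` convention discussed above maps straight onto the test runner's `-j` flag. A minimal sketch of the per-slave setting (the `getconf` call is one portable way to count cores; FreeBSD's native spelling is `sysctl -n hw.ncpu`):

```shell
# One test job per online core -- 2 on the FreeBSD box in question
JOBS=$(getconf _NPROCESSORS_ONLN)
export JOBS
echo "running tests with JOBS=$JOBS"
# The Jenkins job then invokes the suite along the lines of:
#   make test -j "$JOBS"
```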
10:54:43  * thealphanerd quit (Quit: farewell for now)
10:55:13  * thealphanerd joined
12:26:35  * rmg quit (Remote host closed the connection)
12:27:11  * rmg joined
12:29:32  * targos joined
12:31:21  * rmg quit (Ping timeout: 240 seconds)
13:58:57  * Fishrock123 joined
15:17:20  * rmg joined
15:38:56  <jbergstroem>it's the new! https://www.redhat.com/en/about/press-releases/red-hat-debuts-ansible-21-network-automation-containers-microsoft-windows-and-azure
15:56:53  * targos quit (Quit: Leaving)
16:13:05  <Trott>Judging from failures like https://ci.nodejs.org/job/node-test-commit-arm/3454/nodes=armv7-ubuntu1404/consoleFull it seems that some test somewhere is not cleaning up after itself reliably, even when it succeeds. I imagine it would manifest as a stray node process running. I imagine this is a possible source of the issues rvagg reported several hours ago. Is there some build-infra way to identify the problem test?
16:20:49  <Trott>Actually, maybe one or both of the tests rvagg opened issues for are culprits rather than victims/symptoms....
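[editor's note] One low-tech answer to Trott's question that needs no special build-infra support: snapshot node PIDs before and after a suspect run and diff them. `run_suspect_tests` below is a hypothetical placeholder for the real invocation (e.g. `python tools/test.py parallel/`):

```shell
# Snapshot node PIDs before the run (sorted, as comm requires)
pgrep -x node | sort > /tmp/node-pids.before

run_suspect_tests() { :; }   # placeholder for the actual test invocation
run_suspect_tests

# Snapshot again afterwards
pgrep -x node | sort > /tmp/node-pids.after

# Lines only in the "after" file are node processes the run left behind
comm -13 /tmp/node-pids.before /tmp/node-pids.after
```

Bisecting the test list while watching that diff would point at the specific test that fails to clean up.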
16:45:37  * chorrell joined
16:53:46  <jbergstroem>Trott: well there's always access if you're keen
16:54:02  <jbergstroem>can't blame the travel thing much longer, so i might have to get my hands dirty
17:15:29  * chorrell quit (Quit: Textual IRC Client: www.textualapp.com)
20:03:52  * chorrell joined
20:18:53  * chorrell quit (Quit: Textual IRC Client: www.textualapp.com)
23:29:50  <Fishrock123>UMMMMM
23:29:56  <Fishrock123>I think I did something very wrong https://ci-release.nodejs.org/job/iojs+release/
23:30:26  <Fishrock123>I accidentally pressed enter after only having one field filled out to make an RC and the CI went wild
23:30:46  <Fishrock123>jbergstroem: maybe ^
23:30:49  <Fishrock123>brb
23:37:52  <Fishrock123>do I just cancel all of them?