00:15:00  <rmg>I happen to be playing around with it right now
00:15:15  <jbergstroem>oh, sweet
00:16:03  <rmg>I've got key based auth working for regular clients, but I'm not having much luck getting Jenkins to connect
00:17:04  <jbergstroem>what client
00:18:00  <rmg>works with ssh that ships with OS X
00:18:17  <rmg>Jenkins can connect and auths fine, but then the connection gets reset
00:18:19  <jbergstroem>what I mean is what ip/slave are you trying this out with
00:18:28  <jbergstroem>just locally?
00:18:40  <rmg>oh, sorry.. this is my own server and slave
00:18:45  <jbergstroem>gotcha
01:56:29  <jbergstroem>rvagg: regarding our gcc41 stuff -- is there possibly a better way to expose it than a new slave?
02:11:47  <jbergstroem>rvagg: this should be ready to go now: https://ci.nodejs.org/computer/nodejs-release-ibm-centos5-32-1/
02:11:51  <jbergstroem>i will retire the old one
03:01:38  <jbergstroem>rvagg: gcc41 -- how about adding an environmental thing that just adds to path when including the bot?
03:03:14  <rvagg>It needs to be a pristine environment, same old libs to compile against. If you can make it so libuv gets compiled as if it were a standard EL5 box with no devtools-2 on it then be my guest.
03:03:50  <rvagg>We have to build old Node with these too, old gcc for those as well I think so we get a stable environment compared to previous releases.
03:05:36  <jbergstroem>yes, all of that is fine
03:06:08  <jbergstroem>all we do now if devtools is enabled is override path
03:06:16  <jbergstroem>(which is pretty much what scl does)
03:29:57  <jbergstroem>rvagg: can you test baking a release on the new centos5 32-bit machine at ibm?
03:30:57  <rvagg>Sure, give me a bit, trying to relax for a change.
03:32:05  <jbergstroem>request denied
03:32:35  <jbergstroem>when's the next nodejs global meetup? it'd be nice to have one of those 8-hour hackathons to get stuff done
03:34:36  <jbergstroem>rvagg: we also need to test 0.12/0.120 releases on base13.3.1-release
03:36:09  <jbergstroem>and by that i actually meant 0.10/0.12
08:11:28  * dawsonmquit (Read error: Connection reset by peer)
08:12:03  * dawsonmjoined
10:44:45  <rvagg>joaocgreis: what's the status of the windows release machines? anything I should be doing?
10:49:41  <joaocgreis>rvagg: just finished release-1, signing is ok now. starting with release-2 now
10:51:18  <rvagg>nice, thanks, I'm testing builds now
10:53:20  <rvagg>had power problems today and there are 3 dodgy pi1p's that are not sorting themselves out
10:58:36  <rvagg>make that 2
10:59:45  <rvagg>jbergstroem: centos5-32 doesn't have the env vars set properly: https://ci.nodejs.org/job/iojs+release/258/nodes=centos5-release-32/console
10:59:49  <rvagg>ARCH is missing at least
11:00:36  <jbergstroem>you're right, not in here: https://github.com/nodejs/build/blob/master/setup/centos5/resources/jenkins.initd
11:00:52  <jbergstroem>we need more work on splitting the release jobs with test jobs
11:01:23  <jbergstroem>(or use https://github.com/nodejs/build/issues/255 to derive more info)
11:04:11  <joaocgreis>I moved the ARCH var to jenkins for the windows servers
11:09:47  <rvagg>joaocgreis: in the slave configuration?
11:10:07  <joaocgreis>yes
11:11:09  <rvagg>ok, maybe not a bad approach, there's a lot of places we're hiding config though
11:11:14  <rvagg>I'll do that for centos5 for now
11:13:48  <joaocgreis>this way we define ARCH in the same page where we define the labels (both must match in release servers), and the jenkins.bat in the slave is the same for all
11:18:08  <joaocgreis>jbergstroem rvagg when I rebuild the test servers from rackspace, can I delete the old vs2015 test server (is it being used?) and create the other servers more powerful?
11:19:21  <rvagg>joaocgreis: I think you can remove it, not sure about creating them more powerful, however, I get the impression that it's the linker that's slow regardless of how much power we throw at it and all we end up doing is stretching our friendship with rackspace on our spend there
11:20:55  <joaocgreis>I'll test it. Would be nice to bring compile time for vs2013 down, vs2015 takes half the time currently
11:45:20  <rvagg>joaocgreis: https://ci.nodejs.org/job/iojs+release/nodes=win2008r2-release-x64/260/console `Host key verification failed.` - can you do an `ssh -F \config node-www ` to make sure it logs in properly as 'staging' on the release machines please?
11:54:58  <joaocgreis>rvagg: fixed, had to add to known_hosts
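[editor's note: the fix above amounts to seeding `known_hosts` so a non-interactive Jenkins session doesn't abort with "Host key verification failed." A minimal offline sketch, assuming `node-www` as the host alias from the chat; the key line is a placeholder, not the real host key. In practice you'd use `ssh-keyscan node-www >> ~/.ssh/known_hosts` instead.]

```shell
#!/bin/sh
# Pre-seed known_hosts for a host so ssh won't prompt interactively.
# "node-www" is the alias from the chat; the key is a placeholder so this
# sketch runs anywhere without network access.
KNOWN_HOSTS=${KNOWN_HOSTS:-./known_hosts.demo}
HOST=node-www

# Real fix: ssh-keyscan "$HOST" >> ~/.ssh/known_hosts
printf '%s ssh-rsa AAAAB3Nza...placeholder\n' "$HOST" >> "$KNOWN_HOSTS"

# Confirm the host now has an entry:
grep -c "^$HOST " "$KNOWN_HOSTS"
```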
11:55:31  <joaocgreis>jbergstroem: can you add to the firewall on master? iojs-iad-win2008r2-release-2
11:55:36  <rvagg>joaocgreis, jbergstroem: I've dobbed you both in here https://github.com/nodejs/node/pull/3736 to report on status when you have news, otherwise Jeremiah will just be hassling me and you guys are doing the majority of the real work
11:56:26  <rvagg>joaocgreis: I've added it
11:56:45  <joaocgreis>thanks!
12:56:35  <joaocgreis>windows-release servers ready!
12:56:51  <joaocgreis>secrets updated
12:57:17  <jbergstroem>rvagg: i will bring the old bot up and see what's in start.sh
21:19:00  <jbergstroem>i've added the missing environment variables to the 32-bit centos5 bot now
21:36:37  <rvagg>jbergstroem: nodejs-release-digitalocean-centos5-64-1 appears to be down
21:36:53  <jbergstroem>Perform task: General | Install required packages (y/n/c): y
21:36:56  <jbergstroem>50% there
21:37:33  <rvagg>oh, looks like iojs-digitalocean-centos5-release-64-1 is still around so this isn't really a problem?
21:38:34  <jbergstroem>exactly. i'm retiring it as quick as possible
21:38:52  <jbergstroem>btw the reason architecture was missing is because it was set through jenkins. will fix that.
21:38:53  <rvagg>ok, nevermind then, looks all good
21:39:24  <jbergstroem>hopefully we can use the new naming to figure all of that out. also looking at using the jenkins api to create a node
21:39:53  <jbergstroem>so you'd essentially pass a name and ip to a script and it would create everything for you
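[editor's note: a hypothetical sketch of the script idea above. Jenkins' remote API really does create nodes via a POST to `/computer/doCreateItem`, but the JSON fields here (`remoteFS`, `labelString`, etc.) are illustrative, not a verified complete schema; the sketch prints the request rather than sending it, so it has no side effects.]

```shell
#!/bin/sh
# Hypothetical "pass a name and an ip, get a node" helper.
NAME=${1:-test-node}
IP=${2:-10.0.0.1}
JENKINS_URL=${JENKINS_URL:-https://ci.nodejs.org}

# Illustrative payload for hudson.slaves.DumbSlave; field set not verified.
JSON=$(printf '{"name":"%s","remoteFS":"/home/iojs","numExecutors":"1","labelString":"%s"}' "$NAME" "$NAME")

# Print the request instead of running it, so the sketch is side-effect free:
CMD="curl -X POST '$JENKINS_URL/computer/doCreateItem?name=$NAME&type=hudson.slaves.DumbSlave' --data-urlencode json='$JSON'"
echo "$CMD  # target host: $IP"
```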
21:40:42  <rvagg>https://ci.nodejs.org/job/iojs+release/261/nodes=centos5-release-32/console
21:40:49  <rvagg>g++: No such file or directory
21:41:48  <jbergstroem>ansible blew up halfway through packages, looking at it
21:41:58  <jbergstroem>(that state thing in ansible is worthless)
21:45:04  <jbergstroem>fixed
21:45:20  <jbergstroem>it apparently died halfway through installing devtoolset
21:47:28  <jbergstroem>the new 64-bit machine reported no errors
21:50:10  <jbergstroem>ok, it's up and running
21:56:22  <jbergstroem>retiring the old one.
22:02:53  <jbergstroem>rvagg: can you give them a go?
22:45:17  <jbergstroem>rvagg: are you ok with using a 64-bit smartos to cross-compile the 32-bit release?
22:45:29  <rvagg>jbergstroem: yeah, for release that's fine
22:45:41  <jbergstroem>rvagg: let me bake another joyent image for release builds then
22:45:51  <jbergstroem>rvagg: we have one for 0.12 that i'd like to see trialled
22:46:23  <rvagg>yeah, will get to that, that's the next priority beyond just getting CI working for v4+
22:47:13  <jbergstroem>arm taking its time -- any ccache issues? https://ci.nodejs.org/job/node-test-commit/1101/
22:48:23  <jbergstroem>rvagg: btw, so far we've been using gcc4.9 at smartos but what i've read up so far is that solaris (and smartos in general) seems to default around 4.8. Should we give that a go? jgi has been pushing going to 4.7 instead but I'm not sure what I feel about that.
22:48:41  <jbergstroem>(slightly ref https://github.com/nodejs/build/issues/222)
22:48:45  <rvagg>I saw something about that, I didn't think we could even use 4.7
22:49:03  <rvagg>I'm happy with using the lowest possible
22:49:48  <jbergstroem>well, v8 still says lowest is 4.8 and I wouldn't want to open that can.
22:50:16  <jbergstroem>what if they do some other c++ template magic in next v8 and break it again? also, downgrading compiler abi is probably a major in nodejs release terms?
22:50:45  <jbergstroem>perhaps do a test build on 4.8 and ping the people that have been involved in the discussion
22:54:13  <jbergstroem>the cpu load is still uncomfortably high at the jenkins host. spins up to ~90% cpu usage (over all cores) the second a job lands. feels like java is killing irq or something
22:55:22  <jbergstroem>i read somewhere that job history could affect it, but yeah. try googling "jenkins slow" and you get a million "insightful" answers (disable antivirus, clean up your disk, buy more ram)
23:04:40  <rvagg>so much instability on the windows slaves on azure https://ci.nodejs.org/job/node-test-binary-windows/262/
23:06:48  <joaocgreis>windows slaves are looking really bad
23:06:57  <joaocgreis>isn't this worse than last week?
23:13:42  <jbergstroem>what's happening? :/
23:15:26  <joaocgreis>I don't think I saw multiple windows failures like this before
23:15:43  <joaocgreis>in almost all jobs 1 or 2 slaves failed
23:17:01  <joaocgreis>but looking at the last few in https://ci.nodejs.org/job/node-test-binary-windows there are many where only a few slaves have green
23:17:41  <jbergstroem>rvagg: nodejs-release-joyent-smartos153-64-1 is now up and running, its intended purpose is to do v4+ smartos/solaris releases.
23:18:10  <rvagg>nice, thanks jbergstroem
23:19:27  <joaocgreis>just updated slave.jar in all azure machines
23:19:37  <jbergstroem>joaocgreis: is there a 2.53 out? :/
23:19:54  <jbergstroem>rvagg: i'd like to find a way to retire the gcc41 nodes. it's basically path juggling and we just need to find a way of telling the node when to add a path or not; can one for instance check if a slave is called through a specific label?
23:19:58  <joaocgreis>maybe we should always update when we update jenkins
23:20:10  <jbergstroem>joaocgreis: that's a pretty monumental task unfortunately
23:20:23  <jbergstroem>joaocgreis: i downloaded 2.52 on all machines ~2w ago
23:20:53  <jbergstroem>ah there is indeed a 2.53.
23:20:55  <rvagg>jbergstroem: maybe .. it's tricky if we want to run both gcc41 and modern gcc on the same machine, do we do that with libuv at all? I don't recall
23:21:36  <jbergstroem>rvagg: i don't know if libuv uses it. all i'm saying is that if we use devtools; all the init script does is prepend path with /opt/rh/devtoolset-2/usr/bin
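[editor's note: a minimal sketch of what the devtoolset/SCL enable step amounts to, per the discussion above: prepend the devtoolset bin directory so its gcc/g++ shadow the system compilers. The path is the one quoted in the chat; everything else about the environment is left untouched.]

```shell
#!/bin/sh
# Prepend the devtoolset toolchain to PATH (what `scl enable devtoolset-2`
# effectively does for the compiler lookup).
DEVTOOLSET_BIN=/opt/rh/devtoolset-2/usr/bin
PATH="$DEVTOOLSET_BIN:$PATH"
export PATH

# The first PATH entry is now the devtoolset toolchain:
echo "${PATH%%:*}"
```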
23:22:24  <joaocgreis>jbergstroem: https://github.com/janeasystems/build/blob/1813df4b4cbd2136da9815e8eb6b4d12e915f910/setup/windows/update-slave-jar-playbook.yaml
23:22:55  <jbergstroem>joaocgreis: that's true. would make it easier.
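[editor's note: the core of the slave.jar update step the linked playbook automates. Jenkins serves the agent jar matching the master's version at `/jnlpJars/slave.jar` (a real Jenkins endpoint); the destination path is illustrative, and the sketch prints the fetch command rather than running it so it works offline.]

```shell
#!/bin/sh
# Fetch the agent jar that matches the running Jenkins master.
JENKINS_URL=${JENKINS_URL:-https://ci.nodejs.org}
DEST=${DEST:-./slave.jar}

# Print instead of executing, to keep the sketch network-free:
UPDATE_CMD="curl -fsSL $JENKINS_URL/jnlpJars/slave.jar -o $DEST"
echo "$UPDATE_CMD"
```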
23:23:20  <jbergstroem>our ansible is so due a refactor its ridiculous :P
23:23:38  <jbergstroem>It's. My engrish is A++ today.
23:26:48  <joaocgreis>mines beat yous
23:27:13  <joaocgreis>restarted jenkins on all azure slaves
23:27:27  <joaocgreis>going zzz, let's see what happens during the night
23:30:10  <jbergstroem>sleep tight
23:30:36  <jbergstroem>rvagg: could you perhaps go through jenkins and remove all iojs- labels? I'm a bit confused as what to add to each job/slave.
23:31:01  <jbergstroem>I'm not very familiar with how labels are used through all jobs.
23:31:13  <rvagg>I think the libuv job(s) still use those labels
23:38:13  <jbergstroem>ok i'll get to replacing centos6 and 7 next.
23:41:22  <jbergstroem>we should look at testing x32 as well (since its part of our configure architectures)
23:42:07  <jbergstroem>very unfortunate of digitalocean to name their 32-bit architectures x32.. or is it actually x32?
23:44:02  <jbergstroem>answer: nope.
23:44:44  <jbergstroem>can't find x32 on do, joyent, softlayer, rackspace or linode