03:16:25  * orangemocha quit (Read error: Connection reset by peer)
03:18:40  * orangemocha joined
04:55:11  * rmg joined
05:00:08  * jbergstroem quit (Ping timeout: 246 seconds)
05:00:09  * rmg_ quit (Ping timeout: 246 seconds)
05:01:12  * jbergstroem joined
05:18:47  * joaocgreis quit (Ping timeout: 264 seconds)
05:40:29  * joaocgreis joined
10:04:48  <rvagg>joaocgreis: well done on that jenkins job that runs individual tests multiple times, great idea!
10:15:52  <joaocgreis>rvagg: thanks!
10:31:47  <jbergstroem>joaocgreis: agree
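The Jenkins job praised above is not shown in this log; the following is only a minimal sketch of the underlying idea (re-running a single test many times to expose flaky failures). The node binary path, test file and run count are purely illustrative assumptions, not details of the actual job.

    #!/usr/bin/env python
    # Minimal sketch only -- not the actual Jenkins job joaocgreis set up.
    # Re-run one test repeatedly and count failures to spot flakiness.
    # The node path, test file and run count below are illustrative assumptions.
    import subprocess
    import sys

    NODE = "out/Release/node"                           # hypothetical binary path
    TEST = "test/parallel/test-net-connect-options.js"  # hypothetical test file
    RUNS = 100

    failures = 0
    for i in range(RUNS):
        rc = subprocess.call([NODE, TEST])
        if rc != 0:
            failures += 1
            print("run %d failed (exit code %d)" % (i + 1, rc))

    print("%d/%d runs failed" % (failures, RUNS))
    sys.exit(1 if failures else 0)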
17:43:25  * orangemocha_ joined
17:44:53  * orangemocha quit (Ping timeout: 244 seconds)
17:52:55  * joaocgreis quit (Ping timeout: 240 seconds)
18:14:44  * thealphanerd joined
19:37:33  * joaocgreis joined
20:28:32  <jbergstroem>looks like we get compile failures while doing release builds on ppc; let's delay it to get 5.1 out.
20:50:41  <jbergstroem>rvagg: this seems to happen with the smartos13 slave: https://ci.nodejs.org/job/iojs+release/288/nodes=smartos13-release/console
20:51:01  <jbergstroem>rvagg: is that a result of some logic that avoids building for newer releases?
21:00:49  * jgi joined
21:01:08  <jbergstroem>jgi: 15 was latest stable from smartos, is all
21:01:30  <jbergstroem>jgi: so we should be using 14 series?
21:01:54  <jgi>jbergstroem: I’m not saying we shouldn’t have moved, I’m just trying to gather information :)
21:02:19  <jbergstroem>jgi: ok. btw, i'm really keen to sort this smartos vs solaris thing if possible.
21:02:23  <jgi>jbergstroem: so recently we moved to what image exactly?
21:02:31  <jbergstroem>jgi: smartos15.3.1
21:02:37  <jbergstroem>15.3.0 sorry
21:02:38  <jgi>jbergstroem: yes, I have a bit of time today to spend on this, maybe a couple hours :)
21:02:48  <jbergstroem>jgi: that'd be awesome. i'm around as well.
21:03:27  <jgi>jbergstroem: ok, and this was done in the hope that it would fix node sunos binaries not running on Solaris 11.2, or just so that we’d use reasonably recent software on our smartos build machines?
21:04:09  <jbergstroem>jgi: so far, this is our smartos stack: the two old smartos test slaves you set up that need to be redeployed. they're running 14-something. we have a 13.3.1 intended for use with 0.10/0.12 releases (similar to your old machine) and a 15.3.0 (with gcc4.8) intended for newer releases (that requires 4.8 or newer).
21:04:25  <jbergstroem>jgi: it was done because we needed a smartos release machine
21:05:14  <jbergstroem>jgi: i'm happy to explore different routes with the release machines -- i just don't have enough vm's to test on (solaris, illumos, etc)
21:05:39  <jbergstroem>jgi: guessing solaris isn't available on joyent cloud?
21:05:48  <jgi>jbergstroem: I mean, did you upgrade the smartos build machine that built v4.2.2 at some point? Or has it been always the same machine (same pkgsrc, same image)?
21:06:35  <jbergstroem>jgi: i played around with one of the testers but reverted after we came to the conclusion that it wouldn't help.
21:06:56  <jbergstroem>jgi: after the ci hiccup we just had to get a new release slave on board.
21:07:24  <jbergstroem>hiccup -> security measure (having logging enabled in here and all)
21:07:26  <jgi>jbergstroem: ok so that machine, the one using the 15.3.0 image has been used to build v4.2.1 and v4.2.2?
21:07:41  <jbergstroem>jgi: no, that was deployed only last week.
21:08:14  <jgi>jbergstroem: and what was used to build v4.2.1?
21:08:24  <jbergstroem>jgi: afaik one of the older smartos test machines
21:09:09  <jgi>ok, and I will probably sound dumb, because you may already have answered that question, but I’m still confused: why did we use a new machine instead of the old ones that we used to build v4.2.1?
21:09:10  <jbergstroem>jgi: would a solaris toolchain-compiled node work on illumos?
21:09:37  <jbergstroem>jgi: because they are potentially compromised and additionally were used for testing
21:10:04  <jgi>again, not questioning the choices that have been made, just trying to understand the differences I’m seeing between builds :)
21:10:15  <jbergstroem>jgi: of course! we want the same thing :)
21:10:42  <jgi>jbergstroem: OK I understand, and these “older smartos test machines” are using which image(s)?
21:10:47  <jbergstroem>jgi: tbh i'm not sure why 4.2.1 and 4.2.2 came out different. it'd be great if i could blame it on me messing around but it's probably not :/
21:11:16  <jbergstroem>14.3.0
21:11:21  <jgi>ok
21:11:41  <jgi>and we always use the default compilers when building releases on these machines (the older and new ones)?
21:13:18  <jbergstroem>jgi: new one: default (gcc48), old test ones are using gcc49
21:14:16  <jgi>ok, that’s very useful information, thank you :)
21:14:26  <jbergstroem>keep it coming :)
21:14:36  <jgi>jbergstroem: also to answer your question about Solaris VMs on Joyent’s Cloud, indeed they’re not available
21:14:54  <jgi>jbergstroem: what I’ve done is set up a Solaris 11.2 VM with VMware
21:15:03  <jbergstroem>are there any vm providers that provide solaris?
21:15:32  <jgi>jbergstroem: Oracle most certainly, but I don’t know how to access their cloud. Maybe others, it would be worth it to investigate.
21:16:18  <jbergstroem>jgi: if you have a set of different smartos versions accessible to test on, could you try this binary? https://ci.nodejs.org/job/iojs+release/nodes=smartos15-release/
21:17:10  <jgi>jbergstroem: is it the binary for the upcoming v5.x release?
21:17:48  <jbergstroem>jgi: correct. built on smartos15.3.0 with gcc4.8. it'd be good to know if there are issues with older releases or similar. Happy to redeploy to older base versions if that improves the situation.
21:18:11  <jbergstroem>jgi: at this stage, I feel we kind of need to rename -solaris to -smartos.
21:19:19  <jgi>jbergstroem: I would be surprised if binaries built with gcc-4.8 worked on SmartOS machines that do not have gcc-4.8 installed. However I did that test quickly a couple weeks ago, and it seemed to work. The problem is I don’t know why it works :)
21:20:01  <jbergstroem>jgi: reckon libc/libstdc doesn't change that much
21:20:06  <jgi>jbergstroem: re: renaming -solaris to -smartos, I think it’s probably the best way forward. It would probably break nvm though, and maybe other popular tools, so we’d need a good plan to achieve that.
21:20:43  <jbergstroem>jgi: i can speak to ljharb. i owe him a favour regarding sni and ssl though -- should probably fix that first :X
21:21:04  <jgi>jbergstroem: in that case it’s not necessarily a version issue (although there would be that problem too, since it would likely change the C++ runtime version needed), but more an issue with the path to the c/c++ runtime.
21:21:30  <jgi>jbergstroem: so yeah, C runtime doesn’t change that often, but c++ runtime does
21:21:58  <jgi>jbergstroem: that was the point of this PR: https://github.com/nodejs/node/pull/3391
21:22:14  <jbergstroem>jgi: apparently not enough then :D
21:22:19  <jgi>jbergstroem: but anyway, let’s not consider that for now, it’ll only confuse our discussion :)
21:22:31  <jgi>jbergstroem: not enough?
21:22:32  <jbergstroem>jgi: yeah i get you; that's just a pretty big step and will probably be a big sink
21:22:55  <jbergstroem>jgi: by not enough i was referring to the c++ runtime changes.
21:23:46  <jgi>jbergstroem: IIRC the binaries definitely advertise they require a newer C++ runtime when they’re built with gcc-4.8; it’s just that I think last time I tried to run them they still ran
21:24:11  <jgi>jbergstroem: anyway, I’m going to do more testing and report my findings to you
21:24:14  <jbergstroem>jgi: perfect
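For context on the C++ runtime discussion above: one way to see which runtime versions a binary advertises it needs is to read its version requirements with the illumos/Solaris `pvs -r` tool. The sketch below is only an illustration of that check under the assumption of an illumos/Solaris host; it is not part of the build or CI tooling, and the binary path is made up.

    #!/usr/bin/env python
    # Rough sketch, assuming an illumos/Solaris host with /usr/bin/pvs available.
    # Prints the libstdc++ version requirements (GLIBCXX_* / CXXABI_*) recorded
    # in a node binary; a gcc 4.8 build will typically list newer GLIBCXX_3.4.x
    # entries than one built with an older base compiler.
    import subprocess

    BINARY = "/home/iojs/build/node"  # hypothetical path to the built binary

    output = subprocess.check_output(["pvs", "-r", BINARY]).decode()
    for line in output.splitlines():
        if "libstdc++" in line or "GLIBCXX" in line or "CXXABI" in line:
            print(line.strip())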
21:39:11  <jbergstroem>cloudsigma seems to be one provider that has solaris with a 7 day trial. i'll sign up and see if i can produce a binary.
22:47:48  * dawsonm joined
22:48:15  <dawsonm>@jbergstroem you around ?
22:48:22  <jbergstroem>dawsonm: sure am
22:48:37  <jbergstroem>dawsonm: have an idea as to why the ppc bots exploded?
22:48:54  <dawsonm>no
22:48:57  <dawsonm>let me take a look
22:49:41  <dawsonm>I see one is offline, the others are online
22:50:36  <jbergstroem>dawsonm: failures here https://ci.nodejs.org/job/iojs+release/287/
22:55:06  <jbergstroem>joaocgreis: have you looked at the binary stuff we were talking about at the WG yet? (not passing the entire source through the gh repo)
22:55:17  <dawsonm>One of the release machines is not accessible so soft restart that one
22:55:36  <dawsonm>I'm guessing there was some sort of incident at osu-osl that might have affected the machines
22:57:49  <dawsonm>jbergstroem: do you think it's the ip address limiting that is keeping the benchmark machine from connecting?
22:58:06  <dawsonm>I'd sent an email with some details earlier
22:58:30  <jbergstroem>dawsonm: no, we need to reprovision it and give it a new secret (the secret changed when we moved to the new ci host)
22:59:01  <dawsonm>I had changed the secret in the start file
22:59:28  <jbergstroem>dawsonm: ah, well then it probably is the firewall. what's the ip?
22:59:35  <dawsonm>just a sec
23:00:05  <dawsonm>50.23.85.254
23:00:05  <joaocgreis>jbergstroem: iirc, the problem was testing security fixes. Won't changing the temporary repo solve that? It's a parameter in both *-fanned jobs. It's simple to expose it in test-commit or even (if needed) test-pr
23:00:14  <dawsonm>I also plan to add another one to store data at least for now
23:00:19  <jbergstroem>joaocgreis: yeah that would probably be the quickest solution.
23:00:29  <dawsonm>so if you can add 50.97.245.4 as well that would be good
23:00:44  <jbergstroem>dawsonm: sure. what's the name of it?
23:01:03  <dawsonm>have not created it yet but was going to call it iojs-softlayer-benchmark-data.
23:01:06  <joaocgreis>jbergstroem: should I add it to test-pr also or is test-commit enough?
23:01:30  <jbergstroem>joaocgreis: not sure, check with rvagg or perhaps bnoordhuis since they've handled it prior.
23:03:23  <jbergstroem>dawsonm: done.
23:06:06  <jbergstroem>dawsonm: try restarting jenkins slave on iojs-softlayer-benchmark
23:06:37  <dawsonm>will do
23:07:59  <dawsonm>seems to be online now, thanks
23:13:05  <joaocgreis>rvagg: I see you added GIT_ORIGIN_SCHEME to many jobs but not test-commit nor test-pr. So I added the TEMP_REPO parameter only to test-commit, let me know if you need it in test-pr (or just add it with the same description and pass it unchanged)
23:17:44  <joaocgreis>I like the jobs as they are now because reading everything from the same place keeps the jobs simpler. Someday I'll try to use git-rebase directly from test-commit, so that the test-* jobs only run tests and none of the rebases they do now. Of course this can all change if there is a need!
23:36:15  * michael_ joined