03:08:41  * jgi quit (Quit: jgi)
04:08:08  * rmg_ joined
04:08:55  * rmg quit (Ping timeout: 244 seconds)
04:16:38  * rmg_ quit (Remote host closed the connection)
04:17:12  * rmg joined
04:22:07  * rmg quit (Ping timeout: 272 seconds)
04:46:22  * rmg joined
04:58:23  <rvagg>thealphanerd: I was talking to Jeremiah about this today, the nightly release builder can build custom branches, just point it at your own fork and it'll make a nightly for it
04:58:46  <rvagg>thealphanerd: the problem with that PR is that it requires additional tooling on the release machines, so I'll need to figure out what to do in order to make it ready
05:15:14  <jbergstroem>we still have the "old data" warnings in jenkins
05:16:43  <jbergstroem>isn't this great? http://i.imgur.com/9ZYrYl6.png
05:18:02  <rmg>jbergstroem: what "old data" warnings?
05:18:29  <jbergstroem>rvagg: ah -- case sensitive
05:18:42  <jbergstroem>rmg: something about matrix plugin and windows fanned jobs for instance
05:19:30  <rmg>sounds like a plugin was removed
05:19:40  <jbergstroem>rmg: can you relog on ci.nodejs.org and try starting a job?
05:19:44  <jbergstroem>it's there
05:21:16  <rmg>jbergstroem: I see no build buttons - was that what you were checking?
05:21:40  <jbergstroem>rmg: no build with parameters here? https://ci.nodejs.org/job/node-test-pull-request/
05:21:47  <jbergstroem>rmg: btw are you in the nodejs collaborators?
05:22:09  <rmg>I've seen similar sounding "old data" warnings when restoring job configs from one Jenkins server to another where the newer Jenkins instance didn't have the same plugins installed
05:22:29  <jbergstroem>rmg: yeah we had a ton of those when we migrated, but it was fixed by installing the plugins we used to have
05:22:38  <jbergstroem>these popped up a while after for no reason
05:22:43  <rmg>jbergstroem: I was nominated in the latest batch, but I don't think I have the collaborator bit yet
05:22:58  <jbergstroem>rmg: ok
05:34:24  * jgi joined
08:17:18  * jgi quit (Quit: jgi)
09:08:41  * rmg quit (Remote host closed the connection)
09:09:17  * rmg joined
09:13:30  * rmg quit (Ping timeout: 250 seconds)
13:11:17  * rmg joined
13:16:03  * rmg quit (Ping timeout: 265 seconds)
15:05:22  <orangemocha_>no CI alerts since Tuesday :)
15:12:20  * rmg joined
15:16:45  * rmg quit (Ping timeout: 250 seconds)
16:15:07  * rmg joined
17:06:34  * jgi joined
17:06:59  * jgi quit (Client Quit)
17:33:53  * jgi joined
18:47:53  <jbergstroem>nothing short of amazing
18:55:34  <jgi>jbergstroem: ?
18:55:56  <jbergstroem>jgi: [02:05:20] <orangemocha_> no CI alerts since Tuesday :)
18:56:08  <jgi>ah ok :)
19:02:46  <jgi>jbergstroem: did you have the time to take a look at what I wrote about SmartOS binaries not running on Solaris?
19:03:31  <jbergstroem>jgi: yeah planning on doing it this weekend (6am right now)
19:03:48  <jgi>jbergstroem: haha ok :) Well let me know if you have any questions
19:04:02  <jbergstroem>jgi: sure
19:04:49  <jgi>jbergstroem: also, I think it would be great to have a better understanding of our runtime requirements for all binaries, and to document that
19:04:59  <jgi>jbergstroem: that’s something I mentioned here: https://github.com/nodejs/node/pull/3391#issuecomment-158296902
19:05:32  <jgi>jbergstroem: unless that’s something we already have, I’ll create an issue in nodejs/build to propose something
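A minimal sketch of the kind of check that documentation of runtime requirements could be based on, assuming an ELF platform with GNU binutils available; the file name and helper below are hypothetical, not existing nodejs/build tooling:

    // check-runtime-reqs.ts -- hypothetical helper, not part of nodejs/build.
    // Lists the versioned libc/libstdc++ symbols a binary depends on, as a
    // rough proxy for its minimum runtime requirements on ELF platforms.
    import { execSync } from "child_process";

    function requiredSymbolVersions(binary: string): string[] {
      // `objdump -T` prints the dynamic symbol table, including version tags
      // such as GLIBC_2.2.5 or GLIBCXX_3.4.18.
      const out = execSync(`objdump -T ${binary}`, { encoding: "utf8" });
      const versions = new Set<string>();
      for (const match of out.matchAll(/\b(?:GLIBC|GLIBCXX|CXXABI)_[\d.]+/g)) {
        versions.add(match[0]);
      }
      return [...versions].sort();
    }

    console.log(requiredSymbolVersions(process.argv[2] || "./node").join("\n"));

Run against a release binary (e.g. via ts-node), a sketch like this would print the GLIBC/GLIBCXX symbol versions the binary needs, which maps onto the minimum libc/libstdc++ runtime a platform has to provide.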
19:18:35  * jgi quit (Quit: jgi)
19:23:30  * jgi joined
21:05:47  <jbergstroem>jgi: would your recommendation be to let smartos handle its own releases through pkgsrc?
21:06:04  <jbergstroem>i mean, this is bigger than just releases. npm/node-gyp would probably barf on the same issues, no?
21:07:53  <orangemocha_>any idea why jenkins pages are so slow to load?
21:07:57  <jgi>jbergstroem: what do you mean by “npm/node-gyp would probably barf on the same issues”? aren’t native modules built on the machine on which they’re installed? That should guarantee that the proper runtime is present for these binaries, shouldn’t it?
21:08:04  <orangemocha_>I am getting a few timeouts
21:10:30  <jbergstroem>orangemocha_: jenkins being jenkins
21:11:27  <jbergstroem>(using 1000% cpu)
21:12:33  <jbergstroem>wonder why this isn't alerted by the monitor? https://ci.nodejs.org/computer/iojs-ibm-ppcbe-fedora20-release-64-1/
21:14:11  <jbergstroem>i wonder if this is related: org.apache.http.NoHttpResponseException: github.com:443 failed to respond
21:19:49  <jbergstroem>fyi -- just confirmed that the new acl is working as intended.
21:23:53  <jbergstroem>jgi: yes but if you download a node binary that "happens" to work, then use a different toolchain to build your npm modules..
21:26:50  <jgi>jbergstroem: in this case I would think that unless the compilers used to build node and the native modules are incompatible, there shouldn’t be a problem
21:26:59  <jgi>jbergstroem: but that is valid for any platform/setup
21:27:44  <jbergstroem>jgi: yeah -- it's been working pretty well with an old glibc on linux (centos5)
21:27:49  <jgi>jbergstroem: the issue I described in my document is different: it’s about running binary that was built with a newer version of the toolchain in mind
21:28:03  <jgi>s/running binary/running a binary/
21:28:09  <jbergstroem>jgi: yes, i'm aware.
21:28:27  <jbergstroem>jgi: now we just need to set a path for execution
21:28:51  <jbergstroem>jgi: would you still consider introducing -smartos and building it on $oldest_possible_toolchain being the way forward (like we mentioned the other day)?
21:29:03  <jbergstroem>(then exploring doing -sunos on a solaris machine)
21:29:32  <jgi>jbergstroem: that’s a possibility, although it’s unlikely the project will move from g++ 4.8 to g++ 4.7 as a minimum requirement
21:29:58  <jgi>jbergstroem: so for users of SmartOS’ pkgsrc LTS version (2014Q4), they would still need to install g++ 4.8’s runtime
21:30:21  <jbergstroem>jgi: yeah. i think that's slightly better than base15 with native 48 though?
21:30:24  <jgi>jbergstroem: we could still build smartos binaries with a different compiler than for other platforms
21:30:38  <jgi>jbergstroem: but I’m not sure I like that
21:31:29  <jgi>jbergstroem: so I would lean towards maybe just deprecating standalone binaries and trying to have SmartOS users install node only from packages
21:32:20  <jgi>jbergstroem: but I’m currently discussing with SmartOS devs to get their feedback on that
21:32:27  <jgi>jbergstroem: and I’ll keep you updated
21:32:27  <jbergstroem>jgi: i'm not against that; would need more people signing on though (as well as probably replacing current -sunos with "real" sunos builds)
21:32:33  <jbergstroem>jgi: great
21:32:49  <jgi>jbergstroem: oh yeah, I don’t plan to make that decision just myself :)
21:35:22  <jbergstroem>jgi: of course not :) just saying. the philosophy of blessing releases outside of nodejs camp is probably not shared amongst all collaborators (see the similar freebsd discussion).
21:41:26  * orangemocha joined
21:42:35  * orangemocha_ quit (Ping timeout: 265 seconds)
21:58:19  * jgi quit (Quit: jgi)
22:48:31  * michael_ quit (Ping timeout: 252 seconds)
22:50:27  * joaocgreis quit (Ping timeout: 272 seconds)
22:55:25  * jgi joined
22:57:56  <thealphanerd>hey all. I just pushed up a PR to citgm. There is a new command citgm-all which will iterate across the entire lookup table. It is currently doing everything in series and as such is very slow. I am just running through a bunch of suites to make sure stuff is working locally and I’ll publish a decent json to run it with
22:57:59  <thealphanerd>https://github.com/nodejs/citgm/pull/27
23:01:11  <thealphanerd>it also has a fairly robust unit test suite now, to make sure citgm itself runs as expected on different versions
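A rough sketch of the series behaviour described above, assuming a lookup table of module entries and a hypothetical runCitgm() helper; this is illustrative only, not citgm's actual internals:

    // Hypothetical outline of iterating a citgm lookup table in series.
    interface LookupEntry { name: string }

    async function runAll(
      lookup: LookupEntry[],
      runCitgm: (name: string) => Promise<boolean>  // hypothetical per-module runner
    ): Promise<string[]> {
      const failures: string[] = [];
      // Each module only starts after the previous one finishes, which is why
      // a full pass over the lookup table is slow; a later improvement could
      // run a bounded number of modules concurrently instead.
      for (const entry of lookup) {
        const ok = await runCitgm(entry.name);
        if (!ok) failures.push(entry.name);
      }
      return failures;
    }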