00:01:55  <hueniverse>finally have two servers with slightly different config that are leaking memory twice as fast, so we have our first real clue where to look...
00:02:04  <hueniverse>never been this happy about faster leaking
00:02:12  <MI6>libuv-master-gyp: #199 FAILURE windows-x64 (3/195) windows-ia32 (3/195) http://jenkins.nodejs.org/job/libuv-master-gyp/199/
00:04:57  <tjfontaine>hueniverse: what's the context?
00:05:06  <tjfontaine>hueniverse: I'll be doing a new 0.11 release tomorrow
00:09:17  <hueniverse>tjfontaine: our production servers are leaking about 20mb a day of RSS (they go up about 40mb during the day and come down about 20mb at night, every day)
00:09:30  <hueniverse>tjfontaine: could not figure it out for a long time
00:10:03  <tjfontaine>oh, how related is that to concurrent connections?
00:10:14  <hueniverse>tjfontaine: we changed 2 servers to use our http client code more frequently and those servers are leaking 43mb a day
00:10:47  <tjfontaine>interesting
00:10:59  <hueniverse>tjfontaine: we cannot find any obvious correlation
00:11:12  <hueniverse>not to connections, disconnects, etc
00:11:17  <hueniverse>we know more traffic makes it worse
00:11:31  <hueniverse>but little traffic keeps it flat or rss goes down
00:12:13  <tjfontaine>hueniverse: do you have any dtrace running watching for gc-start/gc-stop as well, to get some ideas how often it's actually firing?
00:14:58  <hueniverse>tjfontaine: we count gc runs
00:15:52  <hueniverse>they peak at 4 collections a second
00:16:14  <tjfontaine>presumably during peak traffic
00:16:32  <hueniverse>yes
00:16:39  <hueniverse>total heap used is never over 80mb
00:16:55  <hueniverse>servers start at around 150mb
00:17:09  <hueniverse>after about a week, they are now all at 350-450mb
00:18:01  <tjfontaine>interesting
00:22:01  <hueniverse>the offending code is most likely around here: https://github.com/spumko/hapi/blob/master/lib/client.js
00:22:42  <tjfontaine>are these predominately tls or http?
00:23:09  <hueniverse>looking
00:24:36  <hueniverse>so on the slower leaking machines, 28rps http, 15rps tls
00:25:38  <hueniverse>on the faster leaking machines, same ratio but with additional http calls (not sure how many more)
00:26:30  <tjfontaine>Collector looks interesting
00:26:51  <tjfontaine>are you able to gcore these and watch object counts that way?
00:28:00  <hueniverse>tjfontaine: yeah
00:28:02  <hueniverse>we can
00:28:12  <hueniverse>so far we were not able to make sense of it
00:28:26  <hueniverse>but now we got numbers showing how we can speed up leaking
00:37:01  <tjfontaine>hueniverse: ya, I mean, I would just core on the hour, and cache the output of "echo '::findjsobjects ! sort -k2' | mdb core > object_counts"
00:41:50  <hueniverse>tjfontaine: can't imagine this will be super useful. I'm pretty sure we're leaking slabs for buffers
00:41:56  <hueniverse>since heap is totally stable
00:42:38  <tjfontaine>hueniverse: there's no slab in 0.11
00:43:00  <tjfontaine>hueniverse: there's a buffer pool for buffers made in js, but that's only 8k at a time
00:43:47  <hueniverse>we're still 0.10 in production
00:44:00  <tjfontaine>oh, I thought we were talking about 2 servers running 0.11
00:44:00  <hueniverse>cause your 0.11 keeps giving me a hard time!! :-)
00:44:11  <hueniverse>tjfontaine: sorry, no
00:44:15  <hueniverse>these are all .10
00:44:15  <tjfontaine>hehe, well I was waiting to release a new 0.11 until I had your uglify issue fixed
00:44:30  <hueniverse>the two servers just do more client calls
00:45:00  <tjfontaine>I see, ok, and the average response times for these calls are pretty quick, and request sizes are how large?
00:51:20  <tjfontaine>anyway if they are slabs, we should see some amount of similar increase by the end of the week in buffer counts, we could also inspect and aggregate the buffer counts by length and offsets
01:29:23  <hueniverse>tjfontaine: that will just prove we are leaking some buffers... once we prove that, how do we go about finding those fuckers?
05:56:31  <othiym23>trevnorris: I was able to get all your asyncListener error test cases to pass against the polyfill with https://gist.github.com/othiym23/6724661
05:56:57  <othiym23>it's actually the non-error tests that are a pain in the butt, and I'm getting *more* events tracked than I expect, rather than less
06:43:14  <MI6>nodejs-v0.10-windows: #230 UNSTABLE windows-ia32 (7/600) windows-x64 (7/600) http://jenkins.nodejs.org/job/nodejs-v0.10-windows/230/
07:18:10  <MI6>joyent/node: Jeff Switzer master * 2e13d0c : fs: remove duplicate !options case - http://git.io/PhVDKg
07:27:40  <MI6>nodejs-master: #582 UNSTABLE smartos-x64 (6/643) http://jenkins.nodejs.org/job/nodejs-master/582/
07:37:25  <MI6>nodejs-master-windows: #375 UNSTABLE windows-x64 (22/643) windows-ia32 (20/643) http://jenkins.nodejs.org/job/nodejs-master-windows/375/
09:20:41  <bnoordhuis>https://github.com/joyent/node/issues/6271#issuecomment-25232635 <- looks like we hit a linux kernel bug
09:39:19  <rendar>bnoordhuis: i saw that, i think using EPOLLET/EPOLLONESHOT for 0,1,2 fds would solve the problem, at least for libuv, right?
10:38:49  <indutny>heya
10:40:56  <rendar>does libuv have only UV_CHANGE and UV_RENAME for file notifications?
10:46:36  <MI6>nodejs-v0.10: #1504 UNSTABLE smartos-x64 (3/600) http://jenkins.nodejs.org/job/nodejs-v0.10/1504/
10:49:33  <rendar>bnoordhuis: i saw that linux bug, using EPOLLET/EPOLLONESHOT for 0,1,2 fds would solve the problem, at least for libuv, right?
10:58:17  <bnoordhuis>rendar: correct
10:59:18  <bnoordhuis>'EPOLLONESHOT and rearm' is probably easiest to implement in v0.10. it's not as if fds 0-2 require ultra-high performance
11:05:10  <indutny>bnoordhuis: another epoll bug
11:05:15  <indutny>and you tell me kqueue is wicked :)
11:07:00  <bnoordhuis>bug-for-bug, event ports is still in the lead
11:09:22  <rendar>bnoordhuis: i see
11:10:09  <rendar>bnoordhuis: as far as I understood, the bug only shows up when someone actually closes fds 0,1,2... so just not closing them would prevent it?
11:29:57  <bnoordhuis>rendar: yes. but libuv needs to be a little more robust than that
11:33:25  <indutny>bnoordhuis: god
11:33:31  <indutny>bnoordhuis: this guy doesn't want to sign CLA :)
11:38:25  <indutny>isaac: yt?
11:41:00  <indutny>isaacs: ^
11:41:15  <indutny>I wonder if github comment may act as a CLA signature
13:28:41  <roxlu>hi guys! does libuv have a function that strips the filename from a filepath?
13:31:09  <rendar>roxlu: dunno, but it seems a pretty trivial operation! you can just take the last '\\' +1 as the filename string
13:32:40  <roxlu>yeah, but would be nice if libuv had it :)
13:43:36  <saghul>roxlu it doesn't
13:43:50  <saghul>also, I guess that doesn't hold true for those weird Windows paths
14:21:12  <mmalecki>indutny: hey, how was that sticky cluster session module of yours called
14:21:43  <indutny>hey man
14:21:47  <indutny>sticky-session
14:21:49  <indutny>I think
14:31:04  <mmalecki>heh, thanks indutny, I found `sticky` on your github which appears to be a game :)
14:32:45  <indutny>ahhaha
15:17:39  <MI6>nodejs-master: #583 UNSTABLE smartos-x64 (6/643) http://jenkins.nodejs.org/job/nodejs-master/583/
15:27:22  <roxlu>I'm trying to compile a project which provides a "libyuv.gyp" file (note the 'y', not libuv ^.- ) but no other working info on how to compile this
15:27:45  <roxlu>As libuv uses gyp too, can someone maybe explain a bit how I can compile this project .. or generate a project file?
16:05:11  <rendar>does libuv have only UV_CHANGE and UV_RENAME for file notifications?
16:05:22  <bnoordhuis>rendar: yes
16:06:09  <rendar>bnoordhuis: why? it seems all other systems also support creation and deletion of files
16:07:05  <bnoordhuis>rendar: solaris and the bsds don't
16:07:19  <rendar>bnoordhuis: hmm i see, kqueue doesn't support that?
16:07:37  <bnoordhuis>rendar: no. it's inode based rather than path based
16:07:47  <rendar>oh...i see
16:08:12  <rendar>so the only way to detect a deletion or a creation of a file is to poll the directory listing?
16:08:18  <rendar>every few seconds..
16:08:41  <bnoordhuis>yep, correct
16:08:55  <rendar>that sucks..
16:08:59  <rendar>:(
16:09:36  <bnoordhuis>lots of things do. a stoic accepts that and moves on
16:09:42  <bnoordhuis>and cries only when other people don't see it
16:09:44  <rendar>eheh
16:09:47  <rendar>great words :)
16:10:34  <rendar>bnoordhuis: btw even if kqueue is inode based instead of path based, you could hash-map the unique inode to a path string, if you receive notifications by inode, or is there more to it?
16:15:56  <bnoordhuis>rendar: yeah, there are tricks you can do, like stat() the path and check if the inode is still the same
16:16:24  <bnoordhuis>we may implement that some day
16:29:08  <trevnorris>morning all
16:30:45  <tjfontaine>hello
16:31:04  <tjfontaine>hueniverse: if you know which buffers are leaking you can `<addr>::findjsobjects -r` to see who is still referencing them
16:31:14  <tjfontaine>hueniverse: I'm going to be doing a blog post this weekend about it
16:33:58  <bnoordhuis>morning tj
16:34:11  <tjfontaine>morning ben
16:34:14  <tjfontaine>how goes?
16:35:28  <bnoordhuis>all's well. lazy friday. how about you?
16:36:17  <tjfontaine>I was hoping for a lazy day, but I woke up to find a meeting invite in my mailbox for a meeting starting 30 mins later, so I am guessing the rest of my day will follow the hectic pattern ... mostly because I'm a pessimist
16:36:31  <tjfontaine>bnoordhuis: how's the latest noordhuis doing?
16:38:54  <bnoordhuis>tjfontaine: he's always hungry but i guess that's natural
16:39:31  <bnoordhuis>re: meetings, i've found that once you start showing up smelling of booze, invitations get fewer and fewer
16:40:15  <tjfontaine>had I been there in person, that effect would have been accomplished :)
16:41:22  <bnoordhuis>apropos nothing, is constructor() a new global in newer versions of js?
16:41:33  <tjfontaine>so on #6214, when do you think we should be doing the readStop?
16:41:37  <bnoordhuis>d8> new constructor()
16:41:38  <bnoordhuis>{print: function print() { [native code] }, write: function write() { [native code] }, read: function read() { [native code] }, readbuffer: function readbuffer() { [native code] }, readline: function readline() { [native code] }, load: function load() { [native code] }, quit: function quit() { [native code] }, version: function version() { [native code] }, Realm: {shared: undefined, current: function current() { [native code]
16:41:45  <bnoordhuis>^ that's with the current HEAD of v8
16:41:53  <tjfontaine>huh, interesting
16:42:23  <bnoordhuis>yeah, i was thinking of upgrading but then all tests started failing because of unknown global 'constructor'
16:42:40  <bnoordhuis>re #6214, i think (but am not sure) it's related to how we do pipelining
16:43:25  <bnoordhuis>but if you put a heap profiler on it, you'll find that the incoming and outgoing arrays keep filling up
16:43:33  <bnoordhuis>well, the incoming array in particular
16:51:00  <trevnorris>othiym23: still haven't completely traced why that one test is failing. i'm going to concentrate on finishing integrating w/ the remaining MakeCallbacks. Should be done by EOD. then I'll focus on that test.
16:53:11  <trevnorris>bnoordhuis: not sure you're going to like how I'm finishing integrating AsyncWrap, but your WeakObject patch is working brilliantly. :)
16:55:27  <bnoordhuis>that's good to hear :)
16:59:31  <hueniverse>bnoordhuis: "i've found that once you start showing up smelling of booze, invitations get fewer and fewer" LOL. you do know this is public :-)
17:15:12  <hueniverse>tjfontaine: when is the new 0.11 build coming?
17:15:19  <hueniverse>I'd like to test it
17:16:11  <tjfontaine>hueniverse: I'm running tests on another v8 upgrade -- once that looks clean and I get a lgtm from bnoordhuis or indutny, I'll cut the new release so you should have it for the weekend
17:17:26  <bnoordhuis>tjfontaine: i was thinking of upgrading to 3.21
17:17:40  <bnoordhuis>now that v8 is putting out 3.22 releases
17:17:48  <bnoordhuis>seems to be running solid apart from the constructor thing
17:17:53  <tjfontaine>bnoordhuis: it feels a little late in the cycle?
17:18:11  <tjfontaine>but, if it takes out some of that perf hit we were seeing it seems reasonable
17:18:15  <bnoordhuis>well, the second-to-last branch is supposed to be the most stable
17:18:31  <tjfontaine>nod
17:18:32  <bnoordhuis>i'm kind of worried we'll get stuck with an unsupported version like the 3.14 in v0.10
17:18:46  <isaacbw>bnoordhuis: how's the babby
17:18:58  <tjfontaine>ya, I just worry about the unknowns
17:19:22  <bnoordhuis>isaacbw: belligerent and always hungry
17:20:25  <isaacbw>proper!
17:20:57  <bnoordhuis>he's also quite cute :)
17:21:02  <bnoordhuis>takes after his dad
17:23:39  <bnoordhuis>tjfontaine: https://github.com/joyent/node/pull/6279 <- for your consideration
17:41:35  <trevnorris>bnoordhuis: think Environment::GetCurrent(isolate); has much overhead? just curious.
17:43:14  <othiym23>trevnorris: sounds good re: finishing the integration before tackling my bug -- should I file an issue on your fork of node for tracking?
17:43:53  <trevnorris>othiym23: sure. or just post a comment in the PR. either way works for me.
17:44:01  <trevnorris>othiym23: then you can let me know if you find anything else out.
17:44:45  <trevnorris>othiym23: I've just been having a problem wrapping my head around how the boilerplate works in conjunction with the async listeners.
17:45:14  <othiym23>trevnorris: https://github.com/othiym23/async-listener/blob/master/glue.js#L5-L38 makes all the #6011 error tests pass under 0.10, and would do the same for 0.8 if I filter out all the setImmediate usage
17:45:46  <othiym23>trevnorris: if I have time (unlikely today, more likely tomorrow) I'll see if I can come up with an example that doesn't depend on all the CLS-related crud
17:46:14  <trevnorris>wow. impressive you're actually getting a shim to resemble async listeners. but guess your use case is mainly cls.
17:46:41  <trevnorris>so it's more focused than what async listeners allow you to do. that's nice.
17:46:46  <othiym23>yeah, the context stuff is all bullshit right now, and I'm not sure how much of that I can jam in without resorting to C++
17:46:55  <trevnorris>hah, yeah.
17:47:51  <othiym23>{create,add,remove}AsyncListener have pretty much the same behavior in the polyfill that they do in the real code, since I ninja'd a bunch of it off the PR
17:48:17  <trevnorris>awesome.
17:48:31  <othiym23>this weekend I'm going to take a closer look at the _asyncStack stuff and see if it makes sense to try to copy that behavior with all the closures I'm creating
17:51:40  <trevnorris>coolio
17:54:15  <MI6>libuv-master: #260 UNSTABLE windows (3/195) smartos (2/194) http://jenkins.nodejs.org/job/libuv-master/260/
18:07:34  <MI6>libuv-node-integration: #243 UNSTABLE smartos-x64 (6/643) http://jenkins.nodejs.org/job/libuv-node-integration/243/
18:17:20  <tjfontaine>bnoordhuis: I'm not opposed to it, how about I do this small bump to to fix this 64bit regression, cut the release, and then immediately land 3.21
18:27:46  <sh1mmer>Sorry for the dumb question. Is a baton just a struct to pass information back and forth between threads and the event loop?
18:27:57  <sh1mmer>I can't find the term anywhere else other than people talking about libuv
18:28:33  <tjfontaine>sh1mmer: a baton, like the thing you use in a relay race :)
18:28:40  <sh1mmer>Ha, I know that
18:29:06  <sh1mmer>I found a few people talking about a "baton pattern"
18:29:09  <tjfontaine>sh1mmer: but yes, it's just a term to describe how the struct is being used: to pass information to/from the async mechanism
18:29:43  <sh1mmer>Ok, I wanted to make sure I wasn't missing some larger context
18:29:49  <tjfontaine>nope
18:29:50  <sh1mmer>Thanks
18:30:32  <tjfontaine>yup
18:46:06  <tjfontaine>trevnorris, bnoordhuis, TooTallNate, piscisaureus_, indutny: with isaac not being reachable at the moment to find and do the npm fix, I'm going to have to cut a new 0.10 with a revert to the version of npm that was shipped in 0.10.18
18:46:29  <piscisaureus_>tjfontaine: ok, noted
18:46:31  <indutny>ok
18:46:37  <piscisaureus_>tjfontaine: what is the NPM problem btw?
18:47:00  <tjfontaine>there are a ton of people unable to `npm install` or `npm upgrade` because of the "cb() not fired" error
18:47:08  <tjfontaine> I haven't actually root caused it at the moment
18:47:41  <tjfontaine>but even if I did, I wouldn't be comfortable floating a patch on npm since we don't have publish rights
18:48:02  <piscisaureus_>tjfontaine: well, we can float patches on externally maintained libraries :)
18:48:30  <tjfontaine>piscisaureus_: yes, but this isn't exactly the same thing, since it's a published module; if this were an internal library we absolutely would
18:48:49  <tjfontaine>piscisaureus_: otoh we'd be shipping a non-canonical version of npm, which makes me feel quite icky
18:49:37  <tjfontaine>especially difficult since there would be a discrepancy between what was published in the registry and what we actually shipped
18:49:45  <Domenic_>when is isaac coming back :(
18:50:42  <tjfontaine>he's supposed to be traveling back next week, but his availability is on the 7th as I understand it
18:53:07  <octetcloud>piscisaureus: is there anything on windows that could be used to get an equivalent to unix domain sockets? client-server IPC, with server named by a path?
18:54:12  <piscisaureus_>octetcloud: you mean, with node, with libuv?
18:54:51  <piscisaureus_>octetcloud: you can use named pipes already
18:55:41  <piscisaureus_>octetcloud: but you won't be able to send FDs over them, except if you spawn a process with an IPC pipe to it
18:56:55  <piscisaureus_>octetcloud: however the pipe name should start with \\.\pipe\
18:56:58  <octetcloud>piscisaureus: with node, having .listen("/some/path") work, but I assume that node doesn't have it because libuv doesn't have it because windows doesn't have it
18:57:11  <piscisaureus_>octetcloud: oh, yes, that works
18:57:16  <octetcloud>aren't pipes one-way?
18:57:35  <piscisaureus_>octetcloud: just .listen("\\\\.\\pipe\\some\\path")
18:57:43  <piscisaureus_>octetcloud: not on windows
18:58:24  <octetcloud>http://nodejs.org/docs/latest/api/net.html#net_server_listen_path_callback
18:58:53  <octetcloud>Are you saying that this is a lie? That it's not necessarily a unix socket server, and that it works on windows?
18:59:30  <piscisaureus_>octetcloud: it is on unix. It's a named pipe on windows.
19:01:25  <octetcloud>piscisaureus: ok, so without the string escaping, a path of \\.\pipe\clusterctl would be an address in the current working dir?
19:01:36  <trevnorris>sweet loving goodness. actually got the thing running and passing all the tests. now, to clean it all up.
19:04:36  <octetcloud>piscisaureus: and \\Apps\That\pipe\clusterctl would be an abs path to a pipe?
19:07:00  <piscisaureus_>octetcloud: it doesn't become an actual file
19:08:01  <piscisaureus_>octetcloud: but you could create a pipe called '\\.\pipe\apps\that\pipe\clusterctl' for sure
19:09:03  <octetcloud>ok, I see, so all path names exist in a special \\.\pipe\ namespace. which you can choose to mirror actual fs namespace, if you want, but don't have to.
19:10:38  <octetcloud>and node doesn't implicitly add the \\.\pipe\ to the front of the listen path? I assume, because my code was EINVALing on .listen("path") on windows, which I assumed just meant there was no unix socket support
19:11:35  <piscisaureus_>octetcloud: true that it doesn't auto-prepend \\.\pipe
19:11:59  <piscisaureus_>octetcloud: we could auto-prepend but it'd hide the fact that the path cannot be relative to the current directory
19:13:02  <octetcloud>path.resolve()?
19:14:34  <octetcloud>seems better to prepend, than to have a fabulous but undocumented feature.
19:15:00  <piscisaureus_>Maybe change the error
19:15:10  <trevnorris>othiym23: can you try running your tests against my latest?
19:15:29  <piscisaureus_>"Error: EINVAL. But if you prepend with \\.\pipe\ it'd be fine"
19:20:02  <MI6>nodejs-master-windows: #376 UNSTABLE windows-x64 (23/643) windows-ia32 (26/643) http://jenkins.nodejs.org/job/nodejs-master-windows/376/
19:24:27  <trevnorris>bnoordhuis: ping
19:24:51  <bnoordhuis>trevnorris: pong
19:25:27  <bnoordhuis>tjfontaine: re v8 upgrade, sure, fine by me
19:25:33  <trevnorris>bnoordhuis: i'm finding my tests are not deterministic. for example, sometimes it runs the "before" callback an extra time.
19:25:54  <bnoordhuis>trevnorris: could be because of quantum
19:26:03  <trevnorris>yeah.
19:26:14  <bnoordhuis>unless you're one of those types who believes we live in a deterministic universe
19:26:26  <trevnorris>haha, sure. why not. :)
19:26:46  <bnoordhuis>what are you testing? asyncwrap?
19:26:52  <trevnorris>yeah.
19:27:17  <bnoordhuis>and you're seeing the before callback getting invoked twice before the actual callback runs?
19:27:44  <trevnorris>right now i'm keeping counters. and yeah. sometimes the counter is > than expected
19:28:06  <trevnorris>but it's only 1/20-30 runs that it happens
19:28:49  <bnoordhuis>okay. then your job is to find out when and why that extra counter tick happens :)
19:28:58  <trevnorris>hah, yeah. working on that. :)
19:29:10  <trevnorris>pretty sure it has to do with the way timers work.
19:29:30  * c4milojoined
19:29:36  * jmar777quit (Remote host closed the connection)
19:29:50  <trevnorris>since, timers that are set to run at the same time are grouped in a loop, like nextTick, but otherwise they call out to ReqWrap.
19:30:09  <bnoordhuis>eh? not sure i follow
19:30:11  * `3E|GONEchanged nick to `3rdEden
19:30:17  <bnoordhuis>node only has a single timer object really
19:31:27  <indutny>tjfontaine: hey man
19:31:31  <indutny>tjfontaine: isaacs is out
19:31:36  <indutny>so probably I may ask you
19:31:56  <indutny>tjfontaine: what can we do in following situation https://github.com/joyent/node/pull/6270#issuecomment-25239052 ?
19:34:23  <bnoordhuis>indutny: just fix it yourself
19:34:50  <bnoordhuis>and do a hat tip in the commit log because we're decent people
19:35:40  <bnoordhuis>admittedly the CLA is kind of silly, esp. for small changes like this
19:36:09  <bnoordhuis>and even for bigger changes it's unlikely that a CLA will grant you more immunity
19:36:27  <bnoordhuis>(IANAL but i spoke to one about that)
19:38:17  * M28quit (Read error: Connection reset by peer)
19:38:30  <indutny>bnoordhuis: ok, I'll do it the other way around then
19:39:25  * M28joined
19:39:45  <bnoordhuis>a regression test would be nice btw :)
19:39:51  <indutny>bnoordhuis: no way :)
19:39:55  <indutny>tjfontaine: please wait for me :)
19:40:02  <indutny>tjfontaine: if you're going to release v0.10 today
19:43:38  * M28quit (Read error: Connection reset by peer)
19:44:45  * M28joined
19:44:51  * wavdedquit (Quit: Hasta la pasta)
19:46:21  <trevnorris>oy, so I get how setImmediate works. i'm too brain dead to see how we're calling out to run setTimeout.
19:46:50  <indutny>trevnorris: pure magic, I told you
19:46:55  <trevnorris>hah, yeah
19:47:19  <trevnorris>ah, freak. ok I see it. thought a couple of those native callbacks were js callbacks.
20:48:30  <trevnorris>bnoordhuis: ok, so if you setTimeout the first time it's going to create a new TimerWrap instance, but any setTimeouts scheduled to run at the same time are going to piggy-back on the first.
19:48:39  <bnoordhuis>trevnorris: correct
19:49:02  <bnoordhuis>even better, other setTimeout timers also use that single TimerWrap
19:49:17  <bnoordhuis>it's only when you start unref'ing timers that new TimerWraps get created
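The piggy-backing described above can be sketched in plain JavaScript. This is an illustrative model only, not Node's actual lib/timers.js: the `lists` Map, the `enqueue` name, and the array-backed lists are stand-ins for the internal per-msec linked lists.

```javascript
// Illustrative sketch (NOT Node's lib/timers.js): timeouts with the same
// delay share one backing timer, keyed by the delay in milliseconds.
const lists = new Map(); // delay (ms) -> pending callbacks for that delay

function enqueue(delay, cb) {
  let list = lists.get(delay);
  if (!list) {
    // First timeout for this delay: create the one backing timer
    // (the analogue of instantiating a new TimerWrap).
    list = [];
    lists.set(delay, list);
    setTimeout(() => {
      lists.delete(delay);
      for (const fn of list) fn(); // drain every piggy-backed callback
    }, delay);
  }
  list.push(cb); // later same-delay timeouts piggy-back on the first
}
```

With this model, `enqueue(50, a); enqueue(50, b)` arms a single timer that later runs both `a` and `b` — which is why two adjacent setTimeout calls normally share one list, and why it is surprising when they sometimes don't.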
19:49:33  <indutny>bnoordhuis: please review https://github.com/joyent/node/pull/6283
19:49:34  <trevnorris>yeah, so I have to detect whether that's happening and either let the async listener be handled by AsyncWrap::AsyncWrap, or if I have to handle it in js
19:52:14  <bnoordhuis>indutny: i don't think that fixes it
19:52:47  <bnoordhuis>what if read==0?
19:54:06  <indutny>bnoordhuis: its ok
19:54:12  <indutny>there's condition for it
19:54:13  <trevnorris>bnoordhuis: so, if I run two setTimeouts immediately, sometimes it makes two calls to MakeCallback, and sometimes it doesn't.
19:54:20  * M28quit (Read error: Connection reset by peer)
19:54:40  * st_lukequit (Remote host closed the connection)
19:55:14  * M28joined
19:55:20  <tjfontaine>indutny: I'm going to spend more time on this npm thing, so I won't release 0.10 without you
19:59:59  <bnoordhuis>indutny: oh right, so there is
20:02:53  <indutny>bnoordhuis: thanks
20:02:57  <indutny>bnoordhuis: waiting for tests
20:03:27  <MI6>joyent/node: Fedor Indutny v0.10 * 671b5be : tls: fix sporadic hang and partial reads - http://git.io/q5cZ2A
20:04:38  * st_lukejoined
20:05:40  <othiym23>trevnorris: sure, just gimme a minute
20:07:22  * piscisaureus_quit (Quit: ~ Trillian Astra - www.trillian.im ~)
20:07:31  <trevnorris>race conditions are just the best to debug!
20:08:27  <othiym23>isn't that why we started using Node in the first place?
20:08:28  * M28quit (Read error: Connection reset by peer)
20:08:46  <tjfontaine>:)
20:09:13  * M28joined
20:09:19  * c4miloquit (Remote host closed the connection)
20:10:05  <trevnorris>othiym23: fyi, running those tests are more just so I know how they're doing. still a lot of cleanup needs to be done there.
20:11:05  <othiym23>trevnorris: it's not a problem, not expecting perfection yet ;)
20:11:10  <trevnorris>bnoordhuis: so, all i've been able to deduce so far is that if you run two setTimeouts next to each other w/ 0 timeout, sometimes TimerWrap::OnTimeout runs once, and sometimes twice.
20:11:10  <MI6>nodejs-v0.10: #1505 UNSTABLE linux-x64 (1/600) smartos-x64 (2/600) osx-ia32 (1/600) osx-x64 (1/600) http://jenkins.nodejs.org/job/nodejs-v0.10/1505/
20:11:41  <trevnorris>bnoordhuis: is that something you might consider a problem? if not, then i'll just have to code around it.
20:11:48  <trevnorris>since I thought it would be more deterministic.
20:12:42  <othiym23>if the system is OK with that it seems like there's some nondeterminism getting papered over
20:12:59  <trevnorris>that's what i'm trying to figure out.
20:13:00  * M28quit (Read error: Connection reset by peer)
20:13:15  <trevnorris>because the actual callback only ever gets run the correct number of times.
20:13:52  <othiym23>right
20:14:10  * M28joined
20:14:17  <indutny>tjfontaine: pushed fix
20:14:23  <tjfontaine>indutny: nod
20:14:28  <indutny>tjfontaine: you're ready to go, once you'll finish your thing ;)
20:14:37  <othiym23>trevnorris: all of CLS's tests pass
20:14:41  <othiym23>trevnorris: let me try the agent
20:14:59  <tjfontaine>indutny: is anything ever done? :)
20:16:50  <othiym23>trevnorris: and the agent's tests are in the same place, with the same mix of crashes due to modules being incompatible with 0.11 and that issue I found yesterday
20:17:38  <othiym23>I'm pretty psyched that it looks like 0.12 isn't going to move my cheese as far as error tracing is concerned
20:17:43  * M28quit (Read error: Connection reset by peer)
20:18:34  * M28joined
20:19:42  * defunctzombie_zzchanged nick to defunctzombie
20:22:01  * M28quit (Read error: Connection reset by peer)
20:23:11  * st_lukequit (Remote host closed the connection)
20:23:12  * M28joined
20:26:09  <bnoordhuis>trevnorris: does that happen with master, i.e. with no asyncwrap? if yes, a test case would be appreciated
21:27:20  <trevnorris>bnoordhuis: yes, and the test case is simple enough. problem is that to detect it I fprintf in OnTimeout
20:27:37  <trevnorris>bnoordhuis: so i'm still figuring out a way to detect it externally
20:27:52  <othiym23>drop a dtrace probe in there?
20:27:56  <trevnorris>othiym23: coolio. good to hear.
20:29:33  * M28quit (Read error: Connection reset by peer)
20:30:11  <trevnorris>bnoordhuis: ah, ok. so it seems like sometimes they're on the same linked list, and other times they're not.
20:30:20  <trevnorris>no idea why though
20:30:40  * M28joined
20:31:09  <bnoordhuis>trevnorris: let's talk about it more tomorrow. i'm signing off for tonight
20:31:18  <trevnorris>bnoordhuis: sounds good. night.
20:31:30  <bnoordhuis>sleep tight :)
20:31:35  * mikealquit (Quit: Leaving.)
20:31:53  <tjfontaine>trevnorris: you mean the same node bucket or libuv linked list?
20:32:42  <trevnorris>tjfontaine: from lib/_linkedlist
20:32:55  * st_lukejoined
20:33:37  <tjfontaine>trevnorris: ok, so you're seeing two setTimeout()'s scheduled on the same turn in separate linked lists?
20:34:53  <MI6>nodejs-v0.10-windows: #231 UNSTABLE windows-ia32 (8/600) windows-x64 (9/600) http://jenkins.nodejs.org/job/nodejs-v0.10-windows/231/
20:34:55  <trevnorris>tjfontaine: no. that is, you run setTimeout twice, right next to each other: sometimes they are on the same linked list, and other times they both instantiate a new Timer. at least that's the way it looks. still debugging it though.
20:35:29  <tjfontaine>trevnorris: yes, setTimeout() setTimeout() two scheduled in the same turn
20:35:51  * bnoordhuisquit (Ping timeout: 245 seconds)
20:36:24  * mikealjoined
20:36:26  * M28quit (Read error: Connection reset by peer)
20:36:47  * st_lukequit (Remote host closed the connection)
20:37:34  * M28joined
20:39:28  * M28quit (Read error: Connection reset by peer)
20:40:29  * defunctzombiechanged nick to defunctzombie_zz
20:40:36  * M28joined
20:40:42  <trevnorris>tjfontaine: ah, think I have it. so in the while() in listOnTimeout there's a var diff = now - first._idleStart. then it does an if (diff < msec) { }. I think that's supposed to be an if (diff < msec && diff > 0) { }
20:40:56  <trevnorris>on the later if() it fixes the problem.
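For readers without the source at hand, here is a reduced sketch of the listOnTimeout decision being discussed. It is hedged: `_idleStart`, the array-backed list, and the returned rearm value are simplifications of the real internals, kept only to show why a diff of zero matters.

```javascript
// Reduced model of listOnTimeout's timing decision (not the real code).
// `list` holds timers in insertion order; `now` is the loop's notion of
// the current time in milliseconds.
function drainExpired(list, msecs, now) {
  const fired = [];
  while (list.length > 0) {
    const diff = now - list[0]._idleStart;
    if (diff < msecs) {
      // Not yet expired: the real code re-arms the backing timer for
      // (msecs - diff) and returns. When diff === 0 the re-arm is for the
      // full delay again, so a same-delay timer scheduled in the same
      // millisecond can end up firing on a separate uv_run turn.
      return { fired, rearm: msecs - diff };
    }
    fired.push(list.shift());
  }
  return { fired, rearm: null };
}
```

The proposed `diff > 0` guard changes which branch a diff-of-zero timer takes, which is why that one-line tweak rippled into other tests.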
20:41:10  * st_lukejoined
20:41:20  * st_lukequit (Remote host closed the connection)
20:41:34  * mikealquit (Quit: Leaving.)
20:43:45  * M28quit (Read error: Connection reset by peer)
20:44:01  <trevnorris>wtf. but that's causing test-child-process-fork-dgram to fail. how... what....
20:44:30  <trevnorris>ooh, and a lot of other problems as well.
20:45:33  * M28joined
20:46:35  <trevnorris>tjfontaine: what do you think? if diff == 0 in listOnTimeout then it'll cause two setTimeout's to run in separate uv_run loops. but the simple fix in the if () causes a bunch of other stuff to break.
20:48:17  * M28quit (Read error: Connection reset by peer)
20:48:20  <tjfontaine>I'm sorry I can't context switch out atm
20:48:31  * c4milojoined
20:48:35  <trevnorris>ah yeah. npm.
20:48:39  <trevnorris>good luck w/ that :)
20:49:18  * wwicksquit (Ping timeout: 264 seconds)
20:51:25  * M28joined
20:53:24  * wwicksjoined
20:53:48  * wwickschanged nick to Guest94840
20:54:32  <trevnorris>ah, I see. so my patch is doing exactly what it was supposed to be doing. I just didn't realize it. :P
20:55:33  <trevnorris>othiym23: so the timer has its own callback, which gets registered to asyncListener, but so does each callback passed to the timer. so when it has to make a second, unexpected, call to MakeCallback it adds one to the counter.
20:55:42  * Guest94840quit (Client Quit)
20:56:14  * M28quit (Read error: Connection reset by peer)
20:56:42  <othiym23>trevnorris: that makes sense, but that seems like that could be hazardous for listeners if they're not aware of it
20:57:14  <trevnorris>yeah. i'm trying to write a test case. kinda hard though.
20:59:23  * M28joined
21:00:10  <trevnorris>othiym23: because even running nextTick in setTimeout will add just enough time to run the next setTimeout in the same tick.
21:01:16  <othiym23>is there a reason we're using ms accuracy there instead of hrtime?
21:01:27  <trevnorris>i dunno
21:01:49  <othiym23>I'll admit I'm not super familiar with that chunk of the code
21:05:28  * mikealjoined
21:06:01  <trevnorris>ah suck it. was wondering why I couldn't reproduce. it was because my patch for the fix was still in there. :P
21:07:35  * defunctzombie_zzchanged nick to defunctzombie
21:08:25  <trevnorris>yes!!!!
21:08:32  <trevnorris>was finally able to produce a test case.
21:16:56  <trevnorris>tjfontaine, othiym23: https://github.com/joyent/node/issues/6285
21:17:00  * julianduquejoined
21:17:14  <trevnorris>ircretary: tell bnoordhuis finally was able to create a test case: https://github.com/joyent/node/issues/6285
21:17:15  <ircretary>trevnorris: I'll be sure to tell bnoordhuis
21:19:12  * M28quit (Read error: Connection reset by peer)
21:19:13  <trevnorris>only took me 3 hours, but i'm proud of that one :)
21:19:48  <othiym23>nice bug d00d
21:20:34  <trevnorris>thanks
21:20:36  <othiym23>couldn't you just add an assert to the end to make sure tracking == [1, 2, 3, 4]?
21:20:58  <trevnorris>guess so.
21:21:22  <trevnorris>problem is it still wouldn't do much as an actual test, because it's very very racey.
21:21:39  <othiym23>yeah, but your little shell loop takes care of that
21:21:52  <othiym23>there are way more subtle race conditions out there
21:22:02  * EhevuTovjoined
21:22:03  <trevnorris>heh, don't remind me :P
21:22:09  * wwicks_joined
21:22:19  * M28joined
21:22:21  <othiym23>was reading in aphyr's latest post (on Cassandra) that the chubby team came up with a test case that took 3 weeks of running under heavy load to fail
21:22:24  <othiym23>but it eventually failed!
21:22:25  * EhevuTovquit (Remote host closed the connection)
21:22:38  <trevnorris>othiym23: seriously though. if you want to see how much async is really going on in the background, create a counter in the async listener, run some stuff then print the counter value on process exit.
21:22:47  <tjfontaine>I'm not really sure I care about the determinism of this :/
21:23:11  <trevnorris>othiym23: hahaha, that's insane.
21:23:44  <trevnorris>tjfontaine: only reason it's bothering me is because it's causing my async listener tests to fail. since I can't be deterministic about how many times the listener is going to fire.
21:24:19  <trevnorris>tjfontaine: i mean, I agree that if you're writing async code, then the ordering should be non-deterministic in nature
21:24:19  * M28quit (Read error: Connection reset by peer)
21:24:29  <tjfontaine>I will need to think on it more
21:24:36  <tjfontaine>for now back into npm world
21:24:48  * wwicks_quit (Client Quit)
21:24:54  <trevnorris>yeah. you focus on that. much more important to actually release something.
21:25:05  <trevnorris>othiym23: how did they even figure out there was a race condition to test?
21:25:09  * M28joined
21:25:56  <othiym23>trevnorris: they didn't, but they were testing a quorum protocol and knew it needed to be tested hard
21:26:19  <trevnorris>interesting.
21:26:34  <othiym23>it's in here, which is a fascinating paper: http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/paxos_made_live.pdf
21:27:12  <trevnorris>tjfontaine: well, even if we decide not to fix it (which I'm fine with btw) i'd at least like to add a comment to the code. but you focus on npm!
21:27:37  <trevnorris>othiym23: thanks. i'll take a read once I finish this patch. ;)
21:27:46  <othiym23>:thumbsup:
21:28:09  <tjfontaine>mother fucking ansi color codes
21:28:22  <trevnorris>haha, what's up?
21:28:23  <tjfontaine>dtrace -Z -wn '::write:entry/execname == "node" && strstr(this->str = copyinstr(arg1), "npm ERR! cb() never called!") == this->str/{ printf("stopping process %d\n", pid); stop(); }'
21:28:34  <tjfontaine>I was going to stop the node process when it hits the error
21:28:47  <tjfontaine>but that doesn't work IF NPM WANTS TO DRAW FUCKING COLORS
21:28:59  <tjfontaine>LOUDBOT: FUCK IT ALL
21:29:22  <trevnorris>tjfontaine: and don't tell me, there's no option to run w/o colors. :-/
21:29:27  * M28quit (Read error: Connection reset by peer)
21:30:20  <tjfontaine>there is I just have to find it
21:30:22  <tjfontaine>pissing me off
21:31:22  <TooTallNate>tjfontaine: pipe to cat
21:31:41  <tjfontaine>TooTallNate: not for what I'm trying to achieve, I am using dtrace to stop the process when it's at that state so I can gcore
21:31:52  <tjfontaine>TooTallNate: so I need the exact string that npm is sending to the write syscall
21:31:55  <trevnorris>tjfontaine: --color=false
21:32:07  <tjfontaine>trevnorris: npm set color false
21:32:16  <trevnorris>npm <comment> --color=false
21:32:21  <trevnorris>*<command>
21:32:29  * M28joined
21:32:40  <TooTallNate>same diff guise :p
21:32:51  <trevnorris>:)
21:32:55  <tjfontaine>not really, one is ephemeral
21:32:55  <tjfontaine>anyway
21:33:24  <trevnorris>tjfontaine: or if you do npm config set color false
21:33:28  <trevnorris>then it'll always be off
21:33:48  <tjfontaine>thanks for catching up trevor :)
21:34:13  * rendarquit
21:34:44  <trevnorris>tjfontaine: yeah... do that quite often. see a question, find the answer and don't bother to check if it's been answered already.
21:39:50  * M28_joined
21:39:50  * M28quit (Read error: Connection reset by peer)
21:40:15  * M28_quit (Read error: Connection reset by peer)
21:43:22  * M28joined
21:44:39  <trevnorris>tjfontaine: don't know if you're getting this issue on master, but right now when I run npm on master when downloading a lot of packages it'll do
21:44:39  <trevnorris>../src/node_zlib.cc:101: void node::ZCtx::Close(): Assertion `!write_in_progress_ && "write in progress"' failed.
21:44:39  <trevnorris>Aborted (core dumped)
21:44:46  <trevnorris>is that part of the problem you're trying to fix?
21:45:24  <tjfontaine>that's happening on 0.10.19?
21:45:59  <trevnorris>tjfontaine: whatever version of npm is in master
21:46:45  <tjfontaine>that error you're seeing there has more to do master than with npm
21:46:50  <trevnorris>yeah, figured.
21:48:05  * M28quit (Read error: Connection reset by peer)
21:48:08  * defunctzombiechanged nick to defunctzombie_zz
21:49:41  * wolfeidau_joined
21:52:19  * M28joined
21:52:43  * LeftWing__joined
21:52:50  * Ralt_joined
21:53:01  * LeftWingquit (Read error: Connection reset by peer)
21:54:23  * AvianFluquit (Ping timeout: 260 seconds)
21:56:09  * wwicksjoined
21:57:57  * M28_joined
21:57:57  * M28_quit (Read error: Connection reset by peer)
21:57:57  * Raltquit (Quit: Bye)
22:01:32  * st_lukejoined
22:02:21  * M28quit (Read error: Connection reset by peer)
22:02:46  * c4miloquit (Remote host closed the connection)
22:03:27  * c4milojoined
22:05:02  * M28joined
22:06:23  * defunctzombie_zzchanged nick to defunctzombie
22:07:15  * M28quit (Read error: Connection reset by peer)
22:08:24  * M28joined
22:13:01  * ecrjoined
22:13:02  * ecrquit (Client Quit)
22:14:49  * c4miloquit (Remote host closed the connection)
22:15:13  * st_lukequit (Remote host closed the connection)
22:15:34  * st_lukejoined
22:16:02  * st_lukequit (Remote host closed the connection)
22:16:18  * c4milojoined
22:24:45  <trevnorris>othiym23: btw, you might notice that the async listener fires >= the number of before/after runs. that's because not all async requests result in a MakeCallback.
22:26:16  * ecrjoined
22:26:32  <othiym23>yeah, that makes sense
22:26:48  <othiym23>my goal now is to make the polyfill run the listener the correct number of times
22:26:59  <othiym23>it has a tendency to overshoot now, and I'm not sure why
22:27:16  <trevnorris>mine or yours?
22:33:58  * hzquit
22:50:12  <othiym23>mine, yours works fine
22:50:22  <othiym23>there's a lot that could be going on with the polyfill
22:51:01  <othiym23>I actually do need to figure out if the polyfill has bugs that affect CLS, because I need to nail down all the ways CLS and New Relic's transaction tracer can fail to explain some weirdness I'm seeing
22:52:48  * defunctzombiechanged nick to defunctzombie_zz
22:56:16  * c4miloquit (Remote host closed the connection)
23:04:42  * ecrquit (Quit: ecr)
23:27:15  * octetcloudquit (Ping timeout: 252 seconds)
23:29:09  <trevnorris>othiym23: makes sense. the polyfill you're writing is definitely on the difficult/abstract side.
23:29:23  <trevnorris>to be honest I'm surprised that my code is already passing your cls tests
23:33:34  * AvianFlujoined
23:37:06  * Benviequit (Ping timeout: 245 seconds)
23:38:02  * mikealquit (Quit: Leaving.)
23:38:28  * Kakeraquit (Ping timeout: 246 seconds)
23:38:55  * mikealjoined
23:39:07  * mikealquit (Client Quit)
23:39:32  <othiym23>trevnorris: the tests mostly exercise code paths that touch MakeCallback to ensure that the async listener is pushing CLS across the async gaps
23:39:36  <othiym23>they're comprehensive but not super deep
23:39:50  <othiym23>CLS is a pretty simple application of the asyncListener API
23:39:57  <trevnorris>well, it's a good start :)
23:40:17  * Benviejoined
23:42:05  <trevnorris>othiym23: i'll spend some more time on that test case this weekend. I haven't been able to figure out yet why this.active in Namespace#enter() is just an {}.
23:42:24  <trevnorris>w/o state: { transaction: { id: 1337 } } } set
23:53:09  <trevnorris>tjfontaine: how's npm treating you?