00:00:31  <piscisaureus_>I like to work late again for a change
00:00:57  <piscisaureus_>reminds me of good old times
00:01:05  * perezdjoined
00:01:06  <piscisaureus_>it's sad that ryah and igorzi are no longer with us
00:04:26  <piscisaureus_>tjfontaine: is MI6 dead?
00:07:22  * mikealjoined
00:09:40  <MI6>joyent/node: Bert Belder master * 26a50cb : windows: fix normalization of UNC paths (+1 more commits) - http://git.io/FBcaHw
00:09:46  <piscisaureus_>ah there it is
00:13:27  <piscisaureus_>hmm
00:13:40  <piscisaureus_>now sblom's patch shows my name
00:14:00  <piscisaureus_>probably because I squashed it with some minor fixup
00:17:19  <TooTallNate>piscisaureus_: you forgot --author
00:17:26  <sblom>piscisaureus_: you can have credit for it if you want. :) do you want me to sign another CLA with you instead of Joyent as the licensee? :-p
00:17:28  <TooTallNate>piscisaureus_: not too late for a force-push
00:19:24  * dapquit (Quit: Leaving.)
00:20:00  <piscisaureus_>haha
00:20:03  <piscisaureus_>I suppose I can fp
00:20:26  * dapjoined
00:20:26  * dapquit (Client Quit)
00:20:53  * dapjoined
00:22:12  <MI6>joyent/node: piscisaureus created branch +master - http://git.io/ACd9mw
00:22:43  <MI6>joyent/node: Scott Blomquist master * f657ce6 : windows: add tracing with performance counters Patch by Henry Rawas and (+1 more commits) - http://git.io/iQIs-w
00:22:57  <piscisaureus_>^-- sblom: fixed
00:23:41  * mikealquit (Quit: Leaving.)
00:57:25  <bnoordhuis>piscisaureus created branch +master <- wut?
00:57:38  <piscisaureus_>bnoordhuis: yeah ctrl+left fail
00:57:45  <piscisaureus_>bnoordhuis: fixed it already
00:59:43  <bnoordhuis>piscisaureus_: https://github.com/bnoordhuis/node/commit/551001f <- review?
00:59:53  <piscisaureus_>last thing for the day
01:00:02  <piscisaureus_>bnoordhuis: are you coming to amsterdam at some point this week?
01:00:06  <bnoordhuis>piscisaureus_: yes
01:00:27  <piscisaureus_>bnoordhuis: you can come friday but I have to leave early that day
01:00:31  <piscisaureus_>otherwise it doesn't matter
01:00:48  <bnoordhuis>how is friday in that respect different from other days?
01:00:48  <TooTallNate>what's ForceDelete()?
01:00:53  * piscisaureus_quit (Read error: Connection reset by peer)
01:01:12  * piscisaureus_joined
01:01:14  <bnoordhuis>TooTallNate: forces property deletion, bypasses accessors and whatnot
01:01:52  <TooTallNate>bnoordhuis: does it go up the prototype chain if the prop isn't present on the object?
01:02:14  <bnoordhuis>TooTallNate: yes. it traverses GetPropertyNames()
01:02:23  <bnoordhuis>that includes prototype properties
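(For contrast with the V8 C++ `ForceDelete()` behavior described above: plain JS `delete` never walks the prototype chain. A small sketch, using only standard language semantics:)

```javascript
// Plain JS `delete` only ever removes own properties; it never climbs
// the prototype chain -- the behavioral contrast with ForceDelete(),
// which traverses GetPropertyNames() including prototype properties.
const proto = { foo: 1 };
const obj = Object.create(proto);
obj.foo = 2;

delete obj.foo;       // removes the own property
console.log(obj.foo); // 1 -- the prototype's value shows through

delete obj.foo;       // no-op: `delete` does not touch the prototype
console.log(obj.foo); // still 1
```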
01:05:28  <tjfontaine>piscisaureus_: no?
01:05:50  <piscisaureus_>tjfontaine: nvm. there just seemed to be a hiccup (if this is re: MI6)
01:05:58  <tjfontaine>indeed
01:06:06  <tjfontaine>probably github being awesome
01:06:20  <TooTallNate>they seem a lot slower lately
01:06:51  <TooTallNate>notification emails taking a long time, etc.
01:08:38  <bnoordhuis>okay, signing off for the day
01:08:53  <bnoordhuis>piscisaureus_: i'll probably drop by thursday unless there's a company meeting again this friday
01:08:59  <piscisaureus_>there aint
01:09:08  <bnoordhuis>okay, thursday it is then
01:09:22  <bnoordhuis>sleep tight all
01:09:54  <piscisaureus_>bnoordhuis: patch lgtm
01:10:04  <piscisaureus_>bnoordhuis: I am very curious why this is necessary tho
01:10:12  <piscisaureus_>bnoordhuis: (it passes the tests too)
01:11:35  * abraxasjoined
01:11:42  * bnoordhuisquit (Read error: Operation timed out)
01:12:35  * lohkeypart
01:12:51  * lohkey_joined
01:12:51  * lohkey_quit (Client Quit)
01:19:45  * kristatejoined
01:23:38  <mraleph>circular reference? whaaaa? mark sweep is not susceptible to cycles.
01:27:52  <piscisaureus_>haha
01:28:10  <piscisaureus_>mraleph: the problem was this test case: https://github.com/bnoordhuis/node/blob/551001fc5e473256c0e73e9e0ac8b605b26e514c/test/pummel/test-vm-memleak-circular.js
01:32:55  <piscisaureus_>Ok, heading out too
01:33:04  <mraleph>file a bug against V8. if this happens then there is some hidden bs going on.
01:35:02  <piscisaureus_>goodbye friends
01:35:12  <mraleph>cya
01:35:46  <piscisaureus_>mraleph: you should go to bed
01:35:59  <piscisaureus_>mraleph: your employer awaits you at 9am tomorrow
01:36:11  * piscisaureus_quit (Quit: ~ Trillian Astra - www.trillian.im ~)
01:52:52  * deoxxa[cookies]quit (Ping timeout: 248 seconds)
01:53:52  * deoxxa_joined
01:54:14  * sblomquit
01:54:45  * c4miloquit (Remote host closed the connection)
01:59:54  * piscisaureus_joined
02:01:52  * kazuponquit (Remote host closed the connection)
02:05:10  * kristatequit (Ping timeout: 260 seconds)
02:08:49  * piscisaureus_quit (Ping timeout: 256 seconds)
02:09:18  * kristatejoined
02:25:38  * bradleymeckjoined
02:29:08  * kazuponjoined
02:34:00  * joshthecoderquit (Quit: Leaving...)
02:44:33  * lohkeyjoined
02:52:40  * kazuponquit (Remote host closed the connection)
02:52:41  * joshthecoderjoined
02:55:44  * lohkeyquit (Quit: lohkey)
02:55:54  * brsonquit (Ping timeout: 264 seconds)
02:59:54  * dapquit (Quit: Leaving.)
03:01:59  * TooTallNatequit (Quit: Computer has gone to sleep.)
03:07:02  * lohkeyjoined
03:09:00  * lohkeyquit (Client Quit)
03:23:10  * brsonjoined
03:53:12  * joshthecoderquit (Quit: Leaving...)
03:55:20  * jmar777joined
03:58:14  * xaqjoined
03:58:17  * kazuponjoined
04:00:56  * perezdquit (Quit: perezd)
04:14:16  * TooTallNatejoined
04:17:10  * TooTallNatequit (Client Quit)
04:17:18  * perezdjoined
04:20:41  * bradleymeckquit (Quit: bradleymeck)
04:34:43  * AvianFluquit (Remote host closed the connection)
05:00:51  * xaqquit (Remote host closed the connection)
05:11:47  * jmar777quit (Remote host closed the connection)
05:12:21  * jmar777joined
05:16:41  * warzquit
05:16:54  * jmar777quit (Ping timeout: 264 seconds)
05:18:27  * c4milojoined
05:26:43  * mikealjoined
05:26:52  * jmar777joined
05:43:00  * c4miloquit (Remote host closed the connection)
05:47:40  * jmar777quit (Remote host closed the connection)
05:48:13  * jmar777joined
05:50:47  * mikealquit (Quit: Leaving.)
05:52:14  * jmar777quit (Ping timeout: 240 seconds)
05:53:08  * mikealjoined
06:04:17  * benoitcquit (Excess Flood)
06:04:35  * mikealquit (Quit: Leaving.)
06:06:40  * benoitcjoined
06:49:10  * joshthecoderjoined
06:56:58  * brsonquit (Ping timeout: 246 seconds)
06:57:11  * loladiroquit (Quit: loladiro)
07:02:46  * brsonjoined
07:04:17  * brson_joined
07:04:17  * brsonquit (Read error: Connection reset by peer)
07:07:32  * brson_quit (Client Quit)
07:12:13  * loladirojoined
07:20:02  * perezdquit (Quit: perezd)
07:20:10  * rendarjoined
07:32:33  * `3rdEdenjoined
07:37:31  * loladiroquit (Quit: loladiro)
08:41:19  * kazuponquit (Remote host closed the connection)
08:43:47  * kazuponjoined
08:47:11  * kazuponquit (Remote host closed the connection)
08:48:18  * kazuponjoined
08:52:30  * kazuponquit (Remote host closed the connection)
09:03:18  * c4milojoined
09:04:37  * c4miloquit (Read error: Operation timed out)
09:06:28  * benoitcquit (Excess Flood)
09:06:42  * benoitcjoined
09:19:25  * janjongboomjoined
09:28:58  * `3rdEdenchanged nick to `3rdEDEN|BRB
09:42:23  * janjongboomquit (Remote host closed the connection)
09:42:40  * janjongboomjoined
09:54:28  * `3rdEDEN|BRBchanged nick to `3rdEden
09:56:27  * benoitcquit (Excess Flood)
10:01:36  * kristatequit (Remote host closed the connection)
10:02:42  * benoitcjoined
10:41:21  * joshthecoderquit (Quit: Leaving...)
10:50:34  * abraxasquit (Remote host closed the connection)
11:14:23  * mikealjoined
11:55:39  * `3rdEdenquit (Ping timeout: 240 seconds)
12:13:21  * sgallaghjoined
12:18:19  * `3rdEdenjoined
12:28:41  * Benviequit (Read error: Connection reset by peer)
12:28:59  * Benviejoined
12:45:00  * bnoordhuisjoined
12:51:09  * abraxasjoined
12:56:11  * abraxasquit (Ping timeout: 265 seconds)
13:02:58  * hzjoined
13:07:16  * roxlujoined
13:11:50  <MI6>joyent/node: Shigeki Ohtsu master * 11a5119 : build: disable use of thin archive Thin archive needs binutils >= 2.19, - http://git.io/5-JcwA
13:27:11  * c4milojoined
13:29:32  * chilts_joined
13:30:11  * Gottox_joined
13:32:28  * rje`mac_joined
13:33:15  * rphillips_joined
13:34:30  * philips_quit (*.net *.split)
13:34:30  * dscapequit (*.net *.split)
13:34:30  * rje`macquit (*.net *.split)
13:34:30  * rphillipsquit (*.net *.split)
13:34:30  * Gottoxquit (*.net *.split)
13:34:30  * chiltsquit (*.net *.split)
13:34:32  * c4miloquit (Write error: Broken pipe)
13:36:20  * philips-joined
13:41:10  * loladirojoined
13:41:59  * piscisaureus_joined
13:44:36  * loladiroquit (Client Quit)
13:45:12  * benoitcquit (Read error: Connection reset by peer)
13:46:56  * AvianFlujoined
13:49:19  * benoitcjoined
13:49:57  * bradleymeckjoined
13:51:04  * bradleymeckquit (Client Quit)
13:53:26  * dscapejoined
13:56:14  * jmar777joined
14:02:25  * loladirojoined
14:04:27  * AvianFluquit (Remote host closed the connection)
14:17:45  * loladiroquit (Quit: loladiro)
14:29:36  * loladirojoined
14:30:25  * bnoordhuisquit (Read error: Operation timed out)
14:41:33  * c4milojoined
14:45:14  * bradleymeckjoined
14:48:48  * loladiroquit (Quit: loladiro)
14:48:57  * bnoordhuisjoined
14:52:01  <bnoordhuis>piscisaureus_: yo bertje, https://gist.github.com/4125212
14:59:04  <bnoordhuis>btw, me coming to the office tomorrow is contingent on trains going to 020
15:00:52  * kristatejoined
15:03:33  * loladirojoined
15:03:38  * sergimjoined
15:04:56  <bnoordhuis>sergim: want to do a talk at kings of code?
15:05:12  * loladiroquit (Client Quit)
15:16:36  <piscisaureus_>bnoordhuis: https://github.com/joyent/node/pull/4301 <-- uhhhh ???
15:17:03  <bnoordhuis>i know right?
15:17:27  <piscisaureus_>it's like, WTF this patch has nothing to do with the problem that isn't even a problem
15:18:09  <bnoordhuis>it might be a language barrier thing. let's see what he says
15:18:22  <piscisaureus_>memory barrier
15:19:30  <piscisaureus_>bnoordhuis: v8 test case; nice
15:20:15  <piscisaureus_>bnoordhuis: afaict there are trains tomorrow
15:20:17  <bnoordhuis>piscisaureus_: took me a while to replicate :/ do you see any obviously stupid things in there?
15:20:51  <bnoordhuis>there's apparently a storm coming tonight
15:21:01  <bnoordhuis>and you know how the ns is when there's leaves on the rails
15:21:54  <piscisaureus_>ah right
15:27:53  <piscisaureus_>bnoordhuis: is there no reference from the context_ object to wrapper_?
15:28:30  <piscisaureus_>have to run for a while, bb in 20 minutes
15:29:01  * janjongboomquit (Quit: janjongboom)
15:29:53  <bnoordhuis>piscisaureus_: no. the WrappedContexts get collected as well
15:30:13  <bnoordhuis>actually, the number of created and deleted WrappedContexts matches exactly
15:31:48  * sj26quit (Ping timeout: 245 seconds)
15:32:56  * piscisaureus_quit (Ping timeout: 245 seconds)
15:36:12  * sj26joined
15:37:11  * joshthecoderjoined
15:46:10  * joshthecoderquit (Quit: Leaving...)
15:49:14  * piscisaureus_joined
15:49:56  <piscisaureus_>back
15:51:10  <piscisaureus_>bnoordhuis: then what *does* leak?
15:51:19  <piscisaureus_>bnoordhuis: what are the objects that retain in memory?
15:54:57  * bnoordhuisquit (Ping timeout: 240 seconds)
15:55:12  * bnoordhuisjoined
15:57:21  * TheJHjoined
16:03:05  * janjongboomjoined
16:04:58  <txdv>it's funny, even the code style in these 3 lines is bad
16:06:09  * `3rdEdenchanged nick to `3E|FOODING
16:07:54  * kristatequit (Ping timeout: 240 seconds)
16:16:16  * warzjoined
16:17:23  * `3E|FOODINGquit (Remote host closed the connection)
16:22:47  * stephankquit (Ping timeout: 256 seconds)
16:22:54  * janjongboomquit (Quit: janjongboom)
16:23:45  * stephankjoined
16:26:26  * janjongboomjoined
16:26:48  * janjongboomquit (Read error: Connection reset by peer)
16:27:59  <piscisaureus_>bnoordhuis: weird. A heap dump shows only 2 objects
16:28:13  <piscisaureus_>but in the raw file there are many objects
16:29:42  * janjongboomjoined
16:32:29  * loladirojoined
16:33:40  * janjongboomquit (Client Quit)
16:39:05  * bradleymeckquit (Quit: bradleymeck)
16:53:08  * joshthecoderjoined
16:53:56  * rphillips_changed nick to rphillips
17:00:58  * AvianFlujoined
17:09:36  * TheJHquit (Ping timeout: 245 seconds)
17:22:12  * dapjoined
17:25:45  <piscisaureus_>mraleph: hey
17:36:41  * CoverSlide|TPFRchanged nick to CoverSlide
17:42:07  * stagasjoined
17:57:11  * sgallaghchanged nick to sgallagh_afk
18:01:49  * mmaleckichanged nick to mmalecki[off]
18:05:04  <bnoordhuis>make[1]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule. <- i fscking hate it when projects do that
18:05:31  <bnoordhuis>fix your build deps, don't make me compile at -j1
18:06:02  <bnoordhuis>and to add insult to injury, the build then fails at link time :/
18:06:27  * sergimquit (Quit: Computer has gone to sleep.)
18:07:27  * joshthecoderquit (Quit: Leaving...)
18:12:08  * TooTallNatejoined
18:16:11  * c4miloquit (Remote host closed the connection)
18:21:52  * TooTallNatemisses isaacs
18:26:05  <piscisaureus_>good
18:26:12  <piscisaureus_>it would be too bad if isaacs was shot
18:26:33  <CoverSlide>i missed isaacs too, swerved just in time
18:29:17  <TooTallNate>lulz, you word twisters :p
18:40:07  * stagas_joined
18:51:17  * sergimjoined
18:51:28  * chilts_changed nick to chilts
18:56:19  * Ralt_joined
19:13:39  * sergimquit (Quit: Computer has gone to sleep.)
19:15:17  * lohkeyjoined
19:18:04  * TheJHjoined
19:19:04  * sergimjoined
19:20:56  * brsonjoined
19:25:36  * Ralt_quit (Ping timeout: 252 seconds)
19:26:59  * Ralt_joined
19:28:36  <creationix>I'm not sure this new readable stream interface where we pull data is a good idea
19:29:12  * sergimquit (Quit: Computer has gone to sleep.)
19:29:28  <TooTallNate>creationix: what specifically?
19:29:31  <creationix>after having used it a lot, it's really cumbersome
19:29:57  <creationix>I find myself creating massive stream utility classes just to create simple streams
19:30:05  <creationix>(not node specifically, I'm experimenting in lua too)
19:30:35  <creationix>the main problem with pull is you need a recursive loop to consume a stream
19:30:53  <creationix>with push style, your listener just gets called multiple times, that's a lot simpler to implement
19:31:00  * c4milojoined
19:31:29  <creationix>and a recursive loop isn't enough when the data emits sync, it builds up the stack if you don't implement some sort of trampoline
19:31:42  <creationix>though I guess node's while loop pattern solves that part
19:31:53  <TooTallNate>i've been really liking having a single _read/_write function to implement and having the base class take care of the rest
19:32:55  <TooTallNate>i've found the Transform interface not so great
19:32:58  * Ralt_quit (Ping timeout: 246 seconds)
19:33:01  <TooTallNate>especially when writing a parser
19:33:21  <creationix>yeah, I'm working with parsers with both file based and tcp based inputs
19:33:28  * c4miloquit (Read error: No route to host)
19:33:30  * c4milo_joined
19:33:34  <TooTallNate>specifically because you don't benefit from read() and being able to specify the # of bytes per callback
19:33:35  <creationix>and the parsers change protocols making me need to put some data back in the stream sometimes
19:33:51  <creationix>well, my protocol I don't know how many bytes to pull, so that doesn't help
19:34:11  <TooTallNate>creationix: i've been thinking a peek() function would be beneficial
19:34:28  <TooTallNate>so you can get bytes but they aren't consumed from the stream
19:34:39  <creationix>read, but don't dequeue, and then later pop it off
19:34:43  <creationix>that could be useful
19:34:45  <TooTallNate>right
19:34:48  <TooTallNate>C-style, haha
19:34:54  <creationix>or only pop off part of it
19:34:54  <TooTallNate>this problem has been solved for years
19:35:06  <creationix>right, this is just a data queue
19:35:07  <creationix>nothing more
19:35:11  <creationix>not a new thing
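(A hedged sketch of the "just a data queue" idea above: a buffer queue with a C-style `peek()` that returns bytes without consuming them. `ByteQueue` and its method names are illustrative, not any real node API:)

```javascript
class ByteQueue {
  constructor() {
    this.chunks = [];
  }
  push(buf) {
    this.chunks.push(buf);
  }
  // return up to n bytes without dequeuing them
  peek(n) {
    return Buffer.concat(this.chunks).slice(0, n);
  }
  // return up to n bytes and consume them, keeping the remainder queued
  read(n) {
    const all = Buffer.concat(this.chunks);
    this.chunks = [all.slice(n)];
    return all.slice(0, n);
  }
}
```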
19:35:41  * Ralt_joined
19:36:35  <creationix>ok, so the big gain with pull-style read streams is the implicit pause and resume right?
19:36:40  <creationix>and no lost events
19:36:52  <TooTallNate>right, lost data was the main problem
19:37:07  <creationix>but that could have been easily solved without changing the interface
19:37:15  <creationix>just make pause buffer the couple stray packets
19:38:18  * c4milo_quit (Remote host closed the connection)
19:38:27  <creationix>ok, so remember how write's return value tells the writer if it should pause and the drain event told the writer when to resume
19:38:54  <TooTallNate>ya that's still the case with the new api
19:39:04  * sergimjoined
19:39:07  <TooTallNate>(the Writable interface hasn't changed)
19:39:12  <creationix>that could be mirrored in the ondata handler, the callback's return value would tell it to stop emitting, and some other api would tell it to resume emitting
19:39:44  <creationix>write() method, resume() method, "data" event, "resume" event
19:40:08  <creationix>and write's return value eventually gets "data"'s callback return value
19:40:31  <creationix>and the stream itself could have internal high-water and low-water marks
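(A hedged sketch of the push-style scheme just outlined, where the data handler's return value mirrors `write()`'s and stray chunks are buffered instead of lost; `makePushStream`, `ondata`, `emit`, and `resume` are illustrative names, not a real node API:)

```javascript
function makePushStream() {
  let paused = false;
  const queue = [];
  const stream = {
    ondata: null,
    // producer side: the handler returning false means "pause until
    // resume() is called", just like write()'s return value
    emit(chunk) {
      if (paused) {
        queue.push(chunk); // buffer stray chunks instead of losing them
        return false;
      }
      if (stream.ondata(chunk) === false) paused = true;
      return !paused;
    },
    resume() {
      paused = false;
      while (!paused && queue.length) stream.emit(queue.shift());
    },
  };
  return stream;
}
```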
19:42:03  * c4milojoined
19:42:42  <TooTallNate>ya i mean there's really lots of ways to represent the API… what if "data" events had a callback, and until it was called that's the implicit "pausing"
19:43:10  <TooTallNate>that's what i'm doing for node-ogg actually, since it's not a regular node "stream" in the sense that it doesn't output raw data but framed "packets" instead
19:43:18  <creationix>hmm, that's interesting too
19:43:32  <TooTallNate>https://github.com/TooTallNate/node-ogg/blob/master/t.js#L16-L19
19:44:06  <creationix>so instead of a resume method, there is just a resume function passed to the ondata handler?
19:44:24  <TooTallNate>yup
19:44:28  <creationix>and it won't emit till you call the resume function
19:44:43  <TooTallNate>right, only one *active* packet event at a time
19:45:10  <creationix>I wonder if that's too slow
19:45:20  <TooTallNate>creationix: i wrote this little async-emit helper thingy to do it, haha https://github.com/TooTallNate/node-ogg/blob/master/lib/asyncEmit.js
19:45:21  <creationix>I guess if you call the resume function all the time, it will emit full speed
19:45:38  <TooTallNate>creationix: or you can omit the callback, which means it's sync
19:46:04  <creationix>oh, arity checking right
19:46:07  <creationix>I don't think I have that in lua
19:47:44  <creationix>I'm trying to find the simplest possible interface for streams for a lua project
19:47:55  <creationix>I'd like to avoid a helper library if possible and just have a simple interface
19:48:20  <creationix>though a single generic stream helper would be ok (for the internal low-water, high-water stuff)
19:50:11  * bradleymeckjoined
19:53:31  * AvianFluquit (Remote host closed the connection)
19:54:51  * sergimquit (Quit: Computer has gone to sleep.)
20:00:18  <creationix>My thoughts written out... https://gist.github.com/4127047
20:01:53  <TooTallNate>creationix: be sure to cc isaacs on that… he's kinda the ring-leader with this streams2 stuff
20:02:00  * lohkeyquit (Quit: lohkey)
20:04:24  <TooTallNate>creationix: nice doc though :)
20:05:20  <creationix>I'm trying to keep it abstract
20:06:23  * AndreasMadsenjoined
20:07:01  * perezdjoined
20:08:12  * lohkeyjoined
20:09:28  * TooTallNatequit (Read error: Connection reset by peer)
20:09:51  * Ralt__joined
20:10:46  * Ralt_quit (Ping timeout: 246 seconds)
20:14:24  * tomshredsjoined
20:17:57  * Ralt__quit (Ping timeout: 256 seconds)
20:22:42  <piscisaureus_>bnoordhuis: hey
20:22:57  <bnoordhuis>piscisaureus_: ho
20:23:03  <piscisaureus_>bnoordhuis: about that v8 test case
20:23:12  <piscisaureus_>bnoordhuis: it doesn't matter if I disable all that copying etc
20:23:19  <piscisaureus_>and setting of a ContextScope
20:23:28  <piscisaureus_>bnoordhuis: it just seems that the WeakCallback is never called
20:23:49  <piscisaureus_>bnoordhuis: I wonder if a Context just always is a gc root
20:24:08  <bnoordhuis>piscisaureus_: are you testing it with v8's bleeding_edge?
20:24:19  <piscisaureus_>bnoordhuis: well, not right now but I did before :-)
20:24:26  <piscisaureus_>bnoordhuis: I suppose I can switch back
20:24:42  <piscisaureus_>bnoordhuis: (had to go back because the snapshot format changed again, sigh)
20:26:06  <bnoordhuis>piscisaureus_: because i see the WeakCallback and the destructor getting called
20:26:18  <bnoordhuis>the number of news and deletes matches up
20:26:38  * warzquit
20:27:22  <piscisaureus_>bnoordhuis: latest trunk, still not seeing the WrappedContext destructor called...
20:28:21  <bnoordhuis>piscisaureus_: os/arch?
20:28:26  <piscisaureus_>linux x64
20:28:47  <piscisaureus_>commit f4f9499877ee748c6caf1d4fd7d845bf4f825e00
20:28:58  <piscisaureus_>(from google's git mirror)
20:29:57  <bnoordhuis>ah, i'm using the svn mirror on github
20:30:04  <bnoordhuis>different sha hashes :)
20:30:11  <piscisaureus_>yeah super annoying
20:30:16  * sergimjoined
20:30:20  <piscisaureus_>they should make it deterministic
20:30:27  <piscisaureus_>I suppose git-svn takes patches
20:30:34  <bnoordhuis>i'm at the current HEAD though
20:30:48  <piscisaureus_>which revision?
20:30:55  <bnoordhuis>it's interesting that even an empty Context.prototype.copy method triggers the OOM error
20:31:54  <piscisaureus_>bnoordhuis: I wonder if it would help if I'd the context explicitly
20:32:09  <bnoordhuis>piscisaureus_: i think you accidentally the context
20:32:22  <piscisaureus_>bnoordhuis: I think that's an ellipsis
20:33:17  <indutny>:)
20:38:53  * sergimquit (Quit: Computer has gone to sleep.)
20:41:00  <bnoordhuis>piscisaureus_: ah, found a bug in the test case
20:41:13  <bnoordhuis>context_ = Persistent<Context>::New(Context::New())
20:41:32  <bnoordhuis>Context::New() already returns a Persistent<Context>
20:43:07  <piscisaureus_>ah yes that makes the leaking stop
20:43:22  <indutny>so you guys are messing with contexts
20:43:33  <indutny>:)
20:43:50  <indutny>if this fixes the problem then it looks like the context exists off-heap
20:44:02  <indutny>and a handle should always be held for it
20:44:06  <indutny>either implicitly or explicitly
20:44:56  * bradleymeckquit (Quit: bradleymeck)
20:45:39  <bnoordhuis>indutny: the issue is that contexts seem to leak memory in node but i have a hard time reproducing it in a standalone test
20:45:48  <indutny>well
20:45:52  <indutny>it could be this
20:45:57  <indutny>let me see how it looks from inside
20:47:01  <bnoordhuis>indutny: https://gist.github.com/15bee6a68eaca4f70b10 <- js test case
20:47:13  <indutny>ah
20:47:17  <indutny>so context itself is leaking
20:47:19  <bnoordhuis>if you run it with --max_old_space_size=200, it dies within seconds
20:47:22  <indutny>what if you'll call gc()?
20:47:26  <bnoordhuis>doesn't help
20:47:30  <indutny>ok
20:47:35  <indutny>it's very interesting!
20:48:04  <bnoordhuis>indutny: what does help is this -> https://github.com/bnoordhuis/node/commit/551001f
20:48:56  <bnoordhuis>leak or no leak, v8 wins no prizes for speed
20:49:15  <bnoordhuis>create 100k contexts and all it's doing is gc'ing
20:50:17  <indutny>:)
20:50:49  <indutny>cyclic references?
20:50:51  <indutny>are you kidding
20:51:16  <indutny>is this what helps?
20:51:30  <indutny>or does placing values in Local<> handles?
20:52:02  <bnoordhuis>indutny: yeah. cleaning out the context fixes it
20:52:25  <indutny>wut?!
20:52:33  <indutny>interesting
20:52:41  <indutny>I suppose it triggers some odd edge-case
20:53:30  <bnoordhuis>but try reproducing it outside of node :)
20:54:35  <indutny>what about debug builds?
20:54:41  <indutny>does it throw or anything?
20:56:06  <indutny>for now it seems to be very obscure to me
20:57:17  <bnoordhuis>it's interesting that the nosnapshot build is so much slower...
20:57:18  <indutny>bnoordhuis: there're tons of --verify flags
20:57:27  <bnoordhuis>yeah, i know
20:57:32  <bnoordhuis>lots of --trace flags as well :)
20:57:39  <indutny>--verify-native-context-separation
20:57:43  <indutny>and other stuff with contexts
21:00:34  * tomshredsquit (Quit: Linkinus - http://linkinus.com)
21:00:51  <indutny>btw
21:00:56  <indutny>it may work without snapshots
21:05:01  * AndreasMadsenquit (Remote host closed the connection)
21:05:21  <bnoordhuis>i guess it must be a node bug
21:05:45  <bnoordhuis>without that commit i linked to, the WrappedContext destructor never gets called
21:07:06  <indutny>is it wrapped with ObjectWrap?
21:07:21  <indutny>yeah, it is
21:07:30  <indutny>well, probably you're right
21:08:14  * sergimjoined
21:08:55  <bnoordhuis>also interesting is that vm.Script.runInContext('obj = null', ctx) avoids the OOM
21:09:11  * sergimquit (Client Quit)
21:09:20  <bnoordhuis>and allows the WrappedContext destructor to run
21:10:14  <bnoordhuis>same for 'obj.ctx = null'
21:10:31  <bnoordhuis>it must be a cycle but what exactly happens is anyone's guess...
21:12:08  <indutny>yeah
21:12:11  <indutny>it's interesting
21:12:21  <indutny>actually
21:12:28  <indutny>I suppose it's quite simple
21:12:45  <indutny>i.e. everything in context is always marked as live
21:12:55  <indutny>so nothing should be destroyed if context is live
21:13:07  <indutny>and context itself is checked from native_contexts_list()
21:13:31  <indutny>well, probably not this list
21:13:35  <indutny>mraleph: any comments? ;)
21:14:07  <indutny>yeah
21:14:09  <indutny>it seems like so
21:14:20  <indutny>bnoordhuis: checkout AddToWeakNativeContextList method
21:14:47  <indutny>so, to fix it, one needs to make sure that context won't be marked when traversing from itself
21:14:53  <indutny>i.e. detect loops
21:15:18  <indutny>but I don't really understand why it should be placed in that list, honestly
21:17:04  <mraleph>that list is called weak for a reason, ya know.
21:17:33  <mraleph>it kinda indicates that it is, ahm, weeeeeeak
21:18:02  <indutny>:)
21:18:10  <indutny>well, it visits all items in that list during marking
21:18:14  <mraleph>also there is no special "mark everything in the context" thingy.
21:18:27  <indutny>so, at least, can you confirm that all contexts are getting in that list?
21:18:37  <mraleph>yeah they do
21:18:43  <indutny>ok, good to know
21:18:45  <mraleph>where does it visit it in the marking?
21:18:53  <indutny>mark-compact.cc
21:18:58  <indutny>ProcessMapCaches
21:19:53  <mraleph>well if you read it you will see that it iterates and asks IsMarked(context)
21:20:01  <mraleph>it does not mark contexts itself.
21:20:06  <indutny>hm...
21:20:11  <indutny>right
21:20:20  <indutny>I'm just searching for explanation
21:20:47  <indutny>also
21:20:53  <indutny> updating_visitor.VisitPointer(heap_->native_contexts_list_address());
21:20:56  <mraleph>I don't think anything especially trivial is happening in this case. At least I talked to my guts and they could not guess. That is why I suggest filing an issue.
21:21:04  <indutny>in void MarkCompactCollector::EvacuateNewSpaceAndCandidates()
21:21:10  <mraleph>this visitor is called updating for a reason, ya know.
21:21:14  <mraleph>it *updates*
21:21:18  <indutny>ook
21:21:19  <mraleph>not marks :-)
21:21:20  <bnoordhuis>mraleph: i'm having a hard time replicating it in a standalone test
21:21:21  <indutny>so it just follows marks
21:21:32  <mraleph>bnoordhuis: try filing just as is.
21:21:46  <mraleph>bnoordhuis: it is pretty self contained.
21:22:01  <bnoordhuis>yeah. i guess i'll do that if i can't get it working tonight
21:22:08  <bnoordhuis>where working == replicating
21:22:09  <indutny>:)
21:22:14  <mraleph>it's just beneficial to have this sort of things on file. who can guarantee that it can't happen with iframes e.g.?
21:24:51  <indutny>ok, I give up
21:24:54  <indutny>it's really non-trivial
21:25:02  <indutny>mraleph: ++
21:29:33  * bradleymeckjoined
21:32:14  * TooTallNatejoined
21:33:00  * bradleymeckquit (Client Quit)
21:43:04  * bradleymeckjoined
21:43:41  <creationix>TooTallNate, ok, I think my only real beef with .read() style streams is the need for a trampoline in certain cases. I can solve that in the generic stream library. :)
21:44:17  <creationix>also I don't think node's style needs a trampoline ever, but I'm designing one that's more friendly to coroutines.
21:45:08  <TooTallNate>creationix: so basically that would be the case if a callback is recursively parsing a buffer for many loops?
21:45:43  <creationix>not sure what that means
21:46:08  <creationix>my case was I had mock data and my stream emitted all its data synchronously
21:46:12  <creationix>and it broke all my code
21:46:25  <creationix>I had to complicate everything with trampolines to handle it safely
21:46:39  <creationix>I don't want users having to write trampolines, they are hard
21:47:16  <creationix>and I can't tell people to just use nextTick because it doesn't exist in the minimal prerequisites
21:47:58  <TooTallNate>creationix: can't we just assume that streams, at some point, are going to be async?
21:48:34  <creationix>I'm pretty sure my generic stream library solves that problem. So I'm not worried
21:48:44  <creationix>just know it bit me bad last night when I was implementing websockets for moonslice
21:59:59  <TooTallNate>creationix: but isn't a tcp stream always gonna be async?
22:00:14  <TooTallNate>i.e. for implementing websockets...
22:06:12  * skmpyjoined
22:11:06  * rendarquit
22:12:16  * c4miloquit (Remote host closed the connection)
22:16:18  * hzquit
22:21:16  * TooTallNatequit (Ping timeout: 245 seconds)
22:23:12  * TooTallNatejoined
22:23:29  * skmpyquit (Quit: leaving)
22:34:19  <TooTallNate>creationix: a sync read() is a little strange isn't it… i kinda like the callback version in your gist…
22:34:41  <TooTallNate>damn man, streams are so opinionated… *sigh*
22:37:44  * bradleymeckquit (Quit: bradleymeck)
22:53:28  * TheJHquit (Read error: Operation timed out)
22:54:03  <creationix>yep
22:54:20  <creationix>I really need callback closure style so I can make coroutine sugar optional
22:55:03  <creationix>repeat local chunk = await(stream.read()); await(stream.write(chunk)); until not chunk
22:55:13  <creationix>That's a full pump function complete with back-pressure
22:55:22  <creationix>or the js equivalent
22:56:01  <creationix>do { var chunk = await(stream1.read()); await(stream2.write(chunk)); } while (chunk);
22:56:46  <creationix>await suspends the coroutine and passes a callback to its argument (the closure callback), which is later called on the main thread to resume the coroutine and return the result.
22:57:14  <creationix>I'm pretty sure I can do the same when JS lands harmony generators
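(A hedged sketch of the generator-based sugar anticipated above: a tiny driver where each yielded value is a function taking a node-style callback. `run()` and the thunk shape are illustrative, not a standard API:)

```javascript
function run(gen) {
  const it = gen();
  function step(err, value) {
    const r = err ? it.throw(err) : it.next(value);
    if (r.done) return;
    r.value(step); // resume the coroutine when the callback fires
  }
  step(null);
}

// the pump loop from the conversation, with yield standing in for await:
// run(function* () {
//   let chunk;
//   do {
//     chunk = yield (cb) => stream1.read(cb);
//     yield (cb) => stream2.write(chunk, cb);
//   } while (chunk);
// });
```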
23:11:12  <mraleph>piscisaureus_: no this is not a fortunate side effect, well it is, but the new algorithm is not susceptible to such failure.
23:11:42  <piscisaureus_>mraleph: maybe that was a little strongly worded :-)
23:12:10  <mraleph>piscisaureus_: also it's a duplicate. the issue is already somewhere in the bug tracker; you waited for too long :-)
23:12:23  <mraleph>piscisaureus_: and it is already closed as fixed.
23:12:29  <piscisaureus_>mraleph: oh, I could not find it.
23:12:38  <piscisaureus_>mraleph: I just wanted to post it there to get it off my todo list
23:12:45  <piscisaureus_>mraleph: so they are not going to backport any fixes?
23:13:26  * dedis4joined
23:13:35  <mraleph>well you should ask them :-)
23:14:02  * bnoordhuisquit (Ping timeout: 260 seconds)
23:14:49  <piscisaureus_>mraleph: http://code.google.com/p/v8/issues/detail?id=2401&can=1&sort=-id&colspec=ID%20Type%20Status%20Priority%20Owner%20Summary%20HW%20OS%20Area%20Stars <-- right
23:15:00  <piscisaureus_>apparently yang saw the node issue :-)
23:23:17  <piscisaureus_>mraleph: but I am happy that i can apparently make v8 people look at the node issue tracker :-)
23:29:21  <KiNgMaR>it appears that the uint64_t values for the CPU times in node's os.cpus() wrap around because of JS's missing 64 bit integer support. node_os.cc uses Integer::New, which according to the v8 docs on izs.me does not have a uint64_t constructor. Should that be Number::New(static_cast<float>(...)) instead?
23:29:38  <piscisaureus_>that'd work
23:29:52  <piscisaureus_>use <double> then :-)
23:29:58  <KiNgMaR>I'm just surprised nobody has noticed
23:30:04  <KiNgMaR>oh, yeah, double :)
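[Editor's aside: a minimal sketch of the point just agreed on. v8's `Integer::New` takes a 32-bit value, so a 64-bit tick counter silently keeps only its low 32 bits, while a double (what `Number::New` stores) represents integers exactly up to 2^53. The counter value here is illustrative.]

```javascript
// A 64-bit millisecond counter that has run past the 32-bit range.
const ticks = 5 * 2 ** 32 + 123;

// What squeezing it through a 32-bit integer API keeps: only the low
// 32 bits survive, so the counter appears to wrap back near zero.
const wrapped = ticks % 2 ** 32;

// A double holds integers exactly up to 2^53, so casting the uint64_t
// to double before handing it to JS preserves the full value.
console.log(wrapped);                      // low 32 bits only
console.log(Number.isSafeInteger(ticks));  // still exact as a double
```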
23:30:17  <TooTallNate>KiNgMaR: i think most people don't have that many cpus :D
23:30:17  <piscisaureus_>I would like to remove the counters from os.cpus()
23:30:28  <TooTallNate>oh, it's for cpu times, nvm :)
23:30:30  <piscisaureus_>TooTallNate: no it's about the clock cycle counters
23:30:38  <piscisaureus_>KiNgMaR: do you really use these numbers?
23:30:45  <KiNgMaR>well, I was about to use them
23:30:50  <piscisaureus_>for what?
23:30:58  <piscisaureus_>I would like to kill them
23:30:59  <KiNgMaR>sending them to graphite
23:31:11  <KiNgMaR>I can just read them from /proc though
23:31:35  * mmalecki[off]changed nick to mmalecki
23:32:20  <TooTallNate>mmalecki[on]
23:32:59  <mmalecki>TooTallNate: THIS IS HOW I ROLL SON
23:45:06  * stagas_quit (Ping timeout: 276 seconds)
23:48:20  <creationix>What would cause: luajit: src/unix/stream.c:801: uv_shutdown: Assertion `stream->io_watcher.fd >= 0' failed.
23:48:41  <creationix>I'm implementing a TCP echo server and am telling the handle to shutdown
23:49:22  <piscisaureus_>creationix: the handle is already closed, or no socket has been opened yet (so not bound/connected/accepted)
23:49:30  <creationix>so I'm calling uv_shutdown in response to getting UV_EOF in my on_read
23:50:06  <piscisaureus_>creationix: well, that's not necessary anyway :-)
23:50:17  <creationix>ok, so nc sends EOF to my echo server, I get that in on_read...
23:50:23  <creationix>is my socket now closed or something?
23:50:44  <piscisaureus_>creationix: obviously the libuv backend just changed so there could be bugs in it
23:50:58  <creationix>this is a pretty recent libuv I'm running
23:51:00  <piscisaureus_>creationix: but on UV_EOF the socket should not auto close
23:51:20  <creationix>I'm running 190db15638ef7f9eebf0b5160313991898480d7f from the 19th
23:51:21  <piscisaureus_>creationix: yep. And the libev removal also landed pretty recently :-)
23:52:35  <piscisaureus_>creationix: so - this *could* be a libuv bug
23:52:46  <creationix>let me try 0.8 and see if the problem goes away
23:52:53  <piscisaureus_>creationix: what is the value of stream->io_watcher.fd at that point
23:53:16  <piscisaureus_>creationix: if it is -1 then we probably closed the socket somewhere. If it is like -54235234 then we're looking at memory corruption
23:54:02  <creationix>ok, I'm in gdb and it broke at the assertion
23:54:06  <creationix>how do I check the value?
23:54:21  <piscisaureus_>p handle->io_watcher.fd
23:54:35  <creationix>No symbol "handle" in current context.
23:54:37  <piscisaureus_>ah
23:54:38  <piscisaureus_>heh
23:54:45  <creationix>stream
23:54:46  <piscisaureus_>p stream->io_watcher.fd
23:54:58  <piscisaureus_>maybe also just do p *stream,
23:55:07  <piscisaureus_>and p stream->io_watcher
23:55:11  <piscisaureus_>and show the results
23:55:25  <creationix>can't find stream either, maybe I need to break sooner somehow
23:55:30  <piscisaureus_>no
23:55:35  <piscisaureus_>creationix: enter "bt"
23:55:45  <piscisaureus_>you probably broke somewhere in abort()
23:56:04  <piscisaureus_>creationix: you want to look up the frame number for the topmost uv function (probably uv__stream_io)
23:56:12  <creationix>yep, frame #3
23:56:20  <piscisaureus_>creationix: ok, select that frame with "f 3"
23:56:23  <piscisaureus_>then try again
23:57:35  <creationix>now that's cool
23:57:35  <creationix>https://gist.github.com/ae537b694b4c01aed2ba
23:57:42  * creationixshould learn gdb some day
23:58:00  <piscisaureus_>looks like the fd has been closed
23:58:02  <piscisaureus_>let me look at the flags
23:58:52  <piscisaureus_>flags = 0x2061