00:00:00  * ircretaryquit (Remote host closed the connection)
00:00:07  * ircretaryjoined
00:01:04  <trevnorris>isaacs: refining it more. one sec.
00:01:05  <indutny>isaacs: nvm
00:02:21  <indutny>isaacs: I can't stand anymore :) time to sleep
00:02:45  <indutny>I would be really pleased if you'll try running `NODE_DEBUG=tls ./node test/simple/test-https-drain.js` with https://github.com/indutny/node/compare/feature-crypto-tls-streams2
00:02:56  <indutny>I think some streams2 black magic is happening there
00:03:05  <indutny>but I can't figure it out right now
00:03:42  * indexzerojoined
00:05:56  * paddybyersquit (Ping timeout: 248 seconds)
00:07:19  <isaacs>indutny: ok, i'll dig into it
00:07:35  <isaacs>indutny: sorry, i was supposed to review this weekend, but didn't get to it
00:07:37  <indutny>it looks like it stops reading data
00:07:41  <indutny>np
00:07:42  <isaacs>hm.
00:07:45  <indutny>yeah
00:07:51  <indutny>so it sends data properly
00:07:53  <indutny>but on other side
00:07:55  <indutny>it fails to read it
00:08:03  <indutny>well, it has some queued data
00:08:10  <indutny>but it tries to pull it only a few times
00:08:19  <indutny>and the buffer is still full after those tries
00:09:25  <indutny>emittedReadable is true
00:09:33  <indutny>ok
00:09:36  * indutny&
00:10:12  <TooTallNate>nice one
00:10:15  <TooTallNate>LOUDBOT:
00:10:24  <trevnorris>whoot! got the regression set: https://gist.github.com/b25221576277c907d1a9
00:11:34  <tjfontaine>hm "Move MakeCallback to JS" seems like his gut may have been right
00:12:06  <indutny>hahahaha!
00:12:07  <indutny>I knew it
00:12:13  <indutny>I predicted it
00:12:17  <indutny>no one believed me
00:12:22  <indutny>trevnorris: are you sure?
00:12:28  * joshthecoder_joined
00:12:37  <trevnorris>indutny: ran the test 6 times over every commit in that range.
00:12:45  <trevnorris>those commits are the cause for regression in raw
00:13:02  <indutny>I've seen Array::New() calls in the flamegraph
00:13:07  <indutny>and they lead to the makecallback
00:13:17  <indutny>so I think it's messing with a lot of stuff
00:13:23  <trevnorris>makecallback is really hurting right now. especially using 'splice'
00:13:27  <trevnorris>and the try finally
00:13:34  <trevnorris>both are kicking the crap out of performance.
00:13:39  <indutny>ok
00:13:41  <indutny>time to sleep
00:13:47  <indutny>sorry for spontaneous emotions
00:14:05  * indutny> /dev/null &
00:20:04  * CoverSlide/
00:20:06  <isaacs>trevnorris: i'm suspicious that it's not as big an effect as it seems.
00:20:40  <trevnorris>isaacs: try checkout out and running http_simple at those two commit points and let me know what you find.
00:22:10  <isaacs>kk
00:22:47  <trevnorris>isaacs: now it's totally possible that so much has happened since then, reverting those changes won't do anything.
00:22:52  <isaacs>of course
00:22:57  <isaacs>but it's still useful info
00:23:13  <isaacs>also, if we have multiple regressions, then fixing one might not fix all of it
00:23:17  <isaacs>or might even make the other worse.
00:23:50  <isaacs>trevnorris: what's the incantation you used with tcp_raw?
00:25:00  <trevnorris>isaacs: updated the gist with it
00:25:26  <trevnorris>though those results are with the experimental changes of spawning child processes. but you should still see a difference.
00:26:28  * perezdjoined
00:26:29  * indexzeroquit (Quit: indexzero)
00:28:23  <isaacs>trevnorris: which gist?
00:28:29  <trevnorris>https://gist.github.com/b25221576277c907d1a9
00:28:33  <isaacs>trevnorris: you should use public gists so i can just click your name :)
00:28:47  <isaacs>when i accidentally close tabs :)
00:28:53  <trevnorris>yeah. just feel bad for my followers. freaking github posts every time I update one.
00:29:04  <trevnorris>(and there are a lot)
00:29:20  <isaacs>meh. whatever. that's githubs problem
00:29:25  <trevnorris>lol ok
00:29:32  <isaacs>i paste like a third of all my copies to gist.github.com
00:30:04  <isaacs>they can unfollow you if it's a problem for them.
00:32:16  <isaacs>trevnorris: so... for those commits in that range, a bunch don't build on smartos
00:32:47  <trevnorris>isaacs: can you build 430d94e and 8973c3d
00:32:53  <trevnorris>those are the two important ones
00:33:00  <trevnorris>(and only 5 commits in between.)
00:33:12  * karupaneruraquit (Excess Flood)
00:33:35  * karupanerurajoined
00:35:16  <isaacs>yeah, none of them build
00:35:21  <trevnorris>really? strange.
00:35:27  <isaacs>so, i can try on linux, i guess
00:35:31  <isaacs>just... linux is so weird.
00:35:35  <trevnorris>lol
00:35:45  <tjfontaine>what's the error? it was a gyp thing if I recall?
00:36:02  <trevnorris>well, to save you the trouble, before it did 8767.42 [#/sec] (mean) and after it did 8085.72 [#/sec] (mean)
00:36:04  <isaacs>tjfontaine: it's some ld error
00:36:08  <trevnorris>so not huge, but it's there.
00:36:09  <tjfontaine>right
00:36:10  <isaacs>trevnorris: ok
00:36:16  <isaacs>trevnorris: that's huge
00:36:19  <isaacs>that's 10%
00:37:07  <isaacs>trevnorris: does https://github.com/isaacs/node/tree/make-callback-in-js-revert fix it?
00:39:01  * loladiroquit (Quit: loladiro)
00:39:40  * perezdquit (Quit: perezd)
00:41:32  <trevnorris>isaacs: https://gist.github.com/b25221576277c907d1a9
00:41:36  <trevnorris>fixes it half way
00:42:09  <isaacs>wow, i'm getting super terrible http_simple numbers on ubuntu
00:42:16  <isaacs>like, unbelievably bad
00:42:32  <tjfontaine>isaacs: I bet your smartos fix is 5a5e1281
00:42:34  <isaacs>2781.39
00:42:34  <isaacs>2406.13
00:42:34  <isaacs>2542.47
00:42:34  <isaacs>2779.43
00:42:34  <isaacs>2613.58
00:42:37  <isaacs>2934.24
00:43:21  <isaacs>tjfontaine: that looks likely
00:44:07  <trevnorris>isaacs: now, we should note that the only test that shows serious regression is raw_c2s.js, pipe and s2c are almost the same
00:44:27  <isaacs>trevnorris: if it only fixes it halfway, then i suspect that part of the problem is the immediate-nexttick stuff
00:44:37  <isaacs>trevnorris: ie, calling tickCallback on every makeCallback
00:44:52  <trevnorris>ah, yeah.
00:49:59  <trevnorris>isaacs: i'm having a hard time understanding why only client to server writing is causing such a large regression compared to the others.
00:51:53  * c4miloquit (Remote host closed the connection)
00:51:57  <isaacs>trevnorris: so, in http-simple at least, i'm seeing <1% change from make-callback-in-js-revert. and in either direction randomly.
00:52:10  <isaacs>trevnorris: as for c->s vs s->c, i don't know.
00:52:37  <trevnorris>isaacs: hm, one sec.
00:54:07  <trevnorris>isaacs: think that's because of the js. just ran the same c2s with _net_ and there's only 0.4 Gb/sec difference
00:54:15  * EhevuTovquit (Ping timeout: 260 seconds)
00:54:26  <isaacs>trevnorris: oh, interesting.
00:54:38  <isaacs>trevnorris: so the mc-in-js is causing the regression in tcp-raw
00:54:41  <isaacs>i guess that kinda makes sense.
00:54:44  <isaacs>it doesn't do much else
00:55:14  <isaacs>trevnorris: also, i think my linux http.sh tests are getting limited by some os thing.
00:55:19  <trevnorris>yeah. that does make sense. calling cc functions from js takes longer.
00:55:21  <isaacs>trevnorris: because these numbers are just way way too slow.
00:55:36  <isaacs>trevnorris: this would be a case of calling js functions from cc
00:55:38  <trevnorris>are you running in a vm?
00:55:39  <isaacs>right?
00:55:44  <isaacs>trevnorris: yeah. in kvm
00:55:57  <isaacs>but i mean.. it's crazy slow.
00:56:03  <trevnorris>isaacs: exactly. huge overhead for calling cc from js
00:56:06  <isaacs>2k q/s?
00:56:15  <trevnorris>isaacs: running the vm in mac?
00:56:18  <mmalecki>doesn't kvm try to scale dynamically?
00:56:20  <isaacs>trevnorris: no, on jpc
00:56:23  <mmalecki>that's never good for benchmarks
00:56:39  <isaacs>mmalecki: kvm running inside of a zone on smartos, of course.
00:56:41  <isaacs>mmalecki: jpc
00:56:58  <isaacs>so it has effectively 100% of the "machine" already
00:57:15  <mmalecki>ah.
00:57:33  <mmalecki>isaacs: we're talking about that 80 GB global zone?
00:57:46  <isaacs>mmalecki: no, it's an 8GB zone
00:59:13  <trevnorris>isaacs: don't know if this is possible, but seems the most optimal solution would be to call any method for makeCallback from C (since that's where they're coming from anyways) then send off to js land to process everything in nextTick
00:59:21  <trevnorris>(though that's probably how it was being done before)
01:00:40  <isaacs>trevnorris: before, nextTick *always* came in off the tick spinner
01:00:55  <trevnorris>ah, ok. so wasn't a problem.
01:01:21  <isaacs>we moved MakeCallback to JS specifically to avoid all the C++ ugliness of doing the immediate nextTick stuff
01:02:49  <trevnorris>isaacs: and why was the try finally added since v0.8?
01:06:21  <isaacs>trevnorris: so that our stacks are not stupid
01:06:39  <isaacs>trevnorris: ie, so that you see the actual call site, rather than seeing the place in our catch block where we rethrow
01:06:53  <isaacs>trevnorris: but we sometimes have stuff we have to clean up if you DID throw
01:07:24  <trevnorris>ah, bummer. if I could figure out a work around for that one thing, pretty confident I could make them faster and leave in all the domain stuff.
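The try/finally trade-off isaacs describes above can be sketched roughly like this. This is an illustrative reconstruction, not the actual node.js source; `inMakeCallback` is a stand-in for the real domain/nextTick bookkeeping:

```javascript
// Rough sketch of the JS-side MakeCallback pattern under discussion
// (illustrative only, not the actual node.js source).
// try/finally preserves the user's stack trace on throw -- there is no
// catch+rethrow site to show up in traces -- while still guaranteeing
// that per-call bookkeeping (domains, nextTick processing) is undone
// even when the callback throws. That cleanup guarantee is what the
// finally block buys, and what was hurting the benchmarks.
var inMakeCallback = false;  // stand-in for the real domain/tick state

function makeCallback(obj, fn, args) {
  inMakeCallback = true;
  try {
    return fn.apply(obj, args);
  } finally {
    // runs on both normal return and throw
    inMakeCallback = false;
  }
}
```

With a catch block instead, every user-visible stack trace would point at the rethrow site; with try/finally alone, the original call site survives.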
01:09:01  * bnoordhuisquit (Ping timeout: 240 seconds)
01:10:04  * piscisaureus_joined
01:10:07  * piscisaureus_quit (Client Quit)
01:11:29  * perezdjoined
01:13:13  * sblomjoined
01:14:32  <isaacs>trevnorris: again, i'm not convinced that we're actually spending a considerable amount of time in tickcallback. yes, it's getting optimized and deopted, but so what? on the --prof output, and in the smartos flamegraphs, it doesn't even register as a time sink
01:15:14  <isaacs>so, this is interesting:
01:15:17  <isaacs>on centos:
01:15:23  <isaacs>v0.8.19-pre: tcp-raw-c2s : min: 2.5485 avg: 2.9853 max: 4.0903 med: 2.9217
01:15:42  <isaacs>master: tcp-raw-c2s : min: 2.8480 avg: 3.2122 max: 3.8631 med: 3.2024
01:16:02  <isaacs>mc-in-js-revert: tcp-raw-c2s : min: 2.8834 avg: 3.5522 max: 4.2324 med: 3.7408
01:16:11  <trevnorris>hm. very interesting.
01:16:14  <isaacs>so, master is already faster, but the mc-in-js-revert makes it WAY faster.
01:16:18  <isaacs>this is client-to-server only
01:16:18  <trevnorris>we need to get someone to run these tests on windows.
01:16:35  <trevnorris>so is it something on linux only?
01:20:06  * trevnorrisquit (Quit: Leaving)
01:20:40  * perezdquit (Quit: perezd)
01:20:56  <isaacs>i'm not sure
01:32:51  * sblomquit
01:52:24  * loladirojoined
02:10:18  * lohkey_joined
02:10:18  * lohkey_quit (Client Quit)
02:12:52  * lohkeyquit (Ping timeout: 248 seconds)
02:31:55  * AvianFlujoined
02:35:20  * dapquit (Quit: Leaving.)
02:36:53  * c4milojoined
02:47:16  * toothrotquit (Ping timeout: 248 seconds)
02:50:49  * toothrjoined
02:55:54  * kazuponjoined
02:56:15  * kazuponquit (Remote host closed the connection)
02:56:32  * kazuponjoined
03:08:59  * indexzerojoined
03:13:11  * hzquit
03:21:09  * TooTallNatequit (Quit: Computer has gone to sleep.)
03:24:26  * stagas_joined
03:26:30  * stagasquit (Ping timeout: 264 seconds)
03:26:36  * stagas_changed nick to stagas
03:28:42  <rvagg>isaacs: any idea what causes this with npm: https://travis-ci.org/daleharvey/pouchdb/builds/4585202/#L173
03:29:01  <rvagg>isaacs: second time I've seen this, errno@'errno@~0.0.3' instead of errno@~0.0.3
03:29:30  <rvagg>doesn't happen for me but was reported by dale harvey a while back on his machine, an npm cache cleaned it up, now it's happening on travis!
03:29:54  <rvagg>isaacs: this is a shrinkwrapped package if that makes a difference
03:37:22  * mikealquit (Quit: Leaving.)
03:39:39  * mikealjoined
03:39:40  * lohkeyjoined
03:49:54  <rvagg>publishing without shrinkwrap has made it go away (for now), perhaps something going on with different versions of npm used for publishing vs fetching..
03:51:13  * c4miloquit (Remote host closed the connection)
03:52:31  * brsonquit (Ping timeout: 260 seconds)
03:53:26  * Chip_Zeroquit (*.net *.split)
03:55:42  * toothrchanged nick to toothrot
03:59:23  * lohkeyquit (Quit: lohkey)
04:00:30  * lohkeyjoined
04:02:32  * AvianFluquit (Remote host closed the connection)
04:06:31  * indexzeroquit (Quit: indexzero)
04:22:14  * loladiroquit (Quit: loladiro)
04:23:39  * lohkeyquit (Quit: lohkey)
04:25:01  * loladirojoined
04:37:10  * indexzerojoined
04:39:41  * brucemquit (Ping timeout: 255 seconds)
04:43:43  * brucemjoined
04:43:59  * trevnorrisjoined
05:07:36  * mikealquit (Quit: Leaving.)
05:07:49  * AvianFlujoined
05:08:57  * mikealjoined
05:29:53  * loladiroquit (Quit: loladiro)
05:33:29  * TooTallNatejoined
05:35:19  <trevnorris>TooTallNate: have a moment for a streams2 question?
05:35:35  <TooTallNate>trevnorris: possibly
05:35:38  <TooTallNate>i'm 2 pints deep
05:35:52  <trevnorris>lol ok
05:36:30  <TooTallNate>what's the ?
05:36:55  <trevnorris>in _stream_readable.js, function onread. before it increments the state length by the chunk length it is wrapped in a conditional making sure the chunk doesn't exist.
05:37:07  <trevnorris>so that will never be reached if a chunk is passed.
05:37:10  <trevnorris>am I missing something?
05:38:28  <TooTallNate>trevnorris: can you link me?
05:39:22  <trevnorris>TooTallNate: http://git.io/vLYyrw
05:39:51  <trevnorris>that logic must be wrong.
05:39:54  <trevnorris>isaacs: you home?
05:40:58  <TooTallNate>what's state.decoder.end()?
05:41:05  <TooTallNate>it seems to be overwriting `chunk`
05:41:42  <trevnorris>TooTallNate: it will never get there since the chunk must exist for it to be overwritten, but it can only reach it if the chunk doesn't exist.
05:42:06  * Chip_Zerojoined
05:43:07  <trevnorris>oh wait. mother freaking.
05:43:23  <trevnorris>so it's setting chunk if it doesn't exist.
05:43:52  <TooTallNate>ya i was about to say
05:44:43  <TooTallNate>trevnorris: but i do agree with the guy in that thread
05:44:49  <TooTallNate>trevnorris: i agree with both of you actually
05:45:09  <trevnorris>TooTallNate: how about this:
05:45:10  <trevnorris>https://gist.github.com/trevnorris/4712498
05:45:13  <TooTallNate>trevnorris: but see https://github.com/TooTallNate/node-vorbis/blob/master/test/decoder.js#L25-L26
05:45:28  <trevnorris>you have to trigger .read() before the first readable event is emitted.
05:45:44  <trevnorris>that's because of the conditional logic that I showed you.
05:45:58  <trevnorris>ah, yeah. exactly
05:46:18  <TooTallNate>i mean i definitely think there's a bug right now
05:46:29  <TooTallNate>as denoted by the … in the comment :p
05:46:35  <trevnorris>ok, so it's not just me. thanks for the sanity check.
05:46:51  * wolfeidauquit (Read error: Connection reset by peer)
05:47:02  * wolfeidaujoined
05:47:11  <TooTallNate>trevnorris: like, ditch .push() and place the buffer in the _read() callback function
05:47:16  <TooTallNate>then i think you'll repro it
05:48:08  <trevnorris>well. honestly i'm confused by Readable stream, when you can technically write (push) to it. but that's for another time.
05:49:05  <TooTallNate>trevnorris: ya i'm confused about .push() tbh as well
05:49:16  <TooTallNate>it doesn't make a lot of sense to me
05:52:54  <trevnorris>TooTallNate: thanks. time to get into isaacs head and understand why it was implemented this way.
05:54:10  <TooTallNate>trevnorris: i've brought it up to him before… i've said that Readable streams should have *1* implicit _read() call upon nextTick()
05:54:22  <TooTallNate>trevnorris: but i think that ruined a lot of benchmarks or something
05:54:58  <trevnorris>honestly I don't get the whole _read() thing anyways.
05:55:26  <TooTallNate>trevnorris: that's just the backing implementation
05:56:12  <TooTallNate>trevnorris: like, here's a Transform example https://github.com/TooTallNate/node-lame/blob/master/lib/decoder.js#L71-L151
05:56:19  <TooTallNate>but it's basically the same idea
05:56:26  <TooTallNate>oh actually
05:56:29  <TooTallNate>i have a better example
05:56:54  <TooTallNate>trevnorris: https://github.com/TooTallNate/node-speaker/blob/master/examples/sine.js#L32-L61
05:58:03  <trevnorris>TooTallNate: so it just allows you to do transformations on the data before it's passed to .read()?
05:58:15  <TooTallNate>trevnorris: well that's what Transform is for
05:58:22  <TooTallNate>trevnorris: look at the sine wave example
05:58:39  <TooTallNate>trevnorris: the idea is that that's the callback function that you return the data from the "backing store"
05:58:45  <TooTallNate>whatever that store may be
06:00:15  <TooTallNate>so for the SineWave generator, the "backing store" is the sine wave algorithm
06:02:25  <trevnorris>aaah.... wtf. ok. so .read() calls onread() which may call _read() which calls onread()?
06:08:11  <trevnorris>ok scratch that. it's read() -> _read() -> onread()
06:09:27  <trevnorris>but push() calls onread too, and the state.length is only updated by the chunk.length when read() is called.
06:15:07  * wolfeidauquit (Remote host closed the connection)
06:23:35  <trevnorris>TooTallNate: yeah, so still don't understand the use of _read. https://gist.github.com/trevnorris/4712498
06:23:53  <trevnorris>ran that w/o the counter for the first time...
06:24:47  <TooTallNate>i wish push() wasn't there
06:25:13  <TooTallNate>i don't think intermingling them is a good idea
06:26:05  <TooTallNate>trevnorris: i don't get what's not to get, haha
06:26:20  <TooTallNate>it gets called, you read some data from *something*
06:26:25  <TooTallNate>and then invoke the callback
06:27:25  <trevnorris>so basically _read()/invoke callback should be equivalent to .on('readable')/push()?
06:27:33  <trevnorris>no no.
06:27:36  <TooTallNate>no
06:27:43  <TooTallNate>forget about .push()
06:27:46  <TooTallNate>it fucking sucks
06:27:52  <TooTallNate>idk why it was added
06:27:56  <TooTallNate>it wasn't part of the original spec
06:28:12  <trevnorris>ok. forgotten.
06:28:21  <TooTallNate>the idea is "readable" gets emitted, you call .read(), rinse and repeat
06:28:35  <TooTallNate>now where does this data come from? that's where _read() comes in
06:29:37  <TooTallNate>Readable is an abstract class
06:29:52  <TooTallNate>so the implementor implements _read()
06:29:56  <TooTallNate>and the user calls .read()
06:30:55  <TooTallNate>trevnorris: my only gripe with Readable streams ATM is what this guy is talking about https://github.com/joyent/node/issues/4695#issuecomment-13113979
06:31:59  <trevnorris>TooTallNate: yeah, ok. understand how push() was confusing me. it was allowing me to "write" data from remote locations.
06:32:26  <trevnorris>(basically giving me the ability to define what data is returned in _read())
06:32:50  <TooTallNate>trevnorris: ya, i tried using .push() one time, and then i was like "wtf is _read() for now?!"
06:32:54  <TooTallNate>so i get your confusion
06:33:06  <trevnorris>so can you help explain: https://gist.github.com/trevnorris/4712498
06:33:21  * paddybyersjoined
06:33:37  <trevnorris>if I don't call read() the first time, then nothing gets displayed.
06:33:54  <TooTallNate>trevnorris: well _read() gets called until you do fn(null, null)
06:33:57  <TooTallNate>which emits "end"
06:34:20  <TooTallNate>trevnorris: so it's getting called twice by the time you're calling read() in the "readable" callback function
06:34:32  <TooTallNate>trevnorris: the bug is that the .read(0) call is required at the bottom
06:34:35  <trevnorris>TooTallNate: ok, and I just ran it removing the 0 from stream.read(0), and that caused an infinite output loop thing.
06:34:39  <TooTallNate>in order for the "readable" to happen
06:35:02  <TooTallNate>so we're on the same page i think?
06:35:24  <TooTallNate>trevnorris: keep a counter and call fn(null, null) in the _read() callback after a few times
06:35:40  <TooTallNate>so it's not an infinite stream
06:36:32  * mikealquit (Quit: Leaving.)
06:37:30  <trevnorris>um. in your sine.js example there isn't any logic to return fn(null, null) anywhere. oh, but that's why you have the nextTick event.
06:37:45  <TooTallNate>oh right, i do it manually there
06:37:53  <TooTallNate>so you could do that instead as well
06:37:59  <TooTallNate>i should update that example though
06:38:19  <TooTallNate>cause I don't think i knew about returning null/undefined to "end" the stream
06:38:21  <trevnorris>wow. ok I think I mostly understand. problem i'm having now is understanding the why behind the implementation.
06:38:32  <trevnorris>I like the api. but the implementation is confusing the crap out of me.
06:38:55  <TooTallNate>ya i haven't dug too deep into it
06:39:06  <TooTallNate>it's "black magic" as indutny says
06:39:14  <trevnorris>lol seriously
06:39:18  * wolfeidaujoined
06:39:28  <TooTallNate>but the .read(0) thing *does* bug me
06:39:43  <TooTallNate>ok i'm gonna crash
06:39:47  <TooTallNate>'gnight
06:39:50  * TooTallNatequit (Quit: ["Textual IRC Client: www.textualapp.com"])
06:39:53  <trevnorris>night
06:41:49  * indexzeroquit (Quit: indexzero)
06:47:47  * perezdjoined
07:05:38  * AvianFluquit (Remote host closed the connection)
07:13:07  * stagasquit (Quit: ChatZilla 0.9.89-rdmsoft [XULRunner])
07:23:16  * paddybyersquit (Ping timeout: 248 seconds)
07:24:17  * kazuponquit (Remote host closed the connection)
07:25:15  * felixgejoined
07:25:15  * felixgequit (Changing host)
07:25:15  * felixgejoined
07:27:21  * paddybyersjoined
07:28:49  * kazuponjoined
07:29:39  * rendarjoined
07:30:18  * felixgequit (Ping timeout: 276 seconds)
07:31:15  * indexzerojoined
07:32:52  * indexzeroquit (Client Quit)
07:34:31  * `3rdEdenjoined
07:36:13  * trevnorrisquit (Quit: Leaving)
07:47:14  * Raltjoined
07:51:09  * mikealjoined
07:58:36  * stagasjoined
08:04:45  * toothrotquit (Ping timeout: 248 seconds)
08:06:04  * indexzerojoined
08:06:19  * toothrjoined
08:14:17  * kazuponquit (Remote host closed the connection)
08:17:43  * kazuponjoined
08:19:11  * paddybyersquit (Ping timeout: 245 seconds)
08:34:05  * paddybyersjoined
08:44:15  <indutny>morning
08:57:32  * paddybyers_joined
08:59:57  * indexzeroquit (Quit: indexzero)
09:00:01  * paddybyersquit (Ping timeout: 245 seconds)
09:00:01  * paddybyers_changed nick to paddybyers
09:01:35  * Chip_Zeroquit (Changing host)
09:01:36  * Chip_Zerojoined
09:04:49  * felixgejoined
09:04:50  * felixgequit (Changing host)
09:04:50  * felixgejoined
09:36:00  * felixgequit (Quit: felixge)
09:47:15  * kazuponquit (Remote host closed the connection)
09:52:37  * othiym23quit (Read error: Operation timed out)
09:53:06  * othiym23joined
09:53:07  * Raltquit (Ping timeout: 248 seconds)
10:03:59  * DrPizzaquit (Ping timeout: 260 seconds)
10:04:07  * DrPizzajoined
10:43:40  * perezd_joined
10:44:28  * perezdquit (Read error: Connection reset by peer)
10:44:28  * perezd_changed nick to perezd
11:19:45  * bnoordhuisjoined
11:41:44  <bnoordhuis>tjfontaine: re: https://github.com/joyent/node/pull/3872 - what was your proposal again?
11:54:06  <MI6>joyent/node: Ben Noordhuis v0.8 * 5fe0546 : doc: don't suggest to reuse net.Socket objects Using Socket.prototype.co - http://git.io/6RdRdA
11:56:37  * hzjoined
12:07:52  * abraxasquit (Remote host closed the connection)
12:13:57  * abraxasjoined
12:43:55  * sgallaghjoined
12:50:53  * Raltjoined
12:55:31  * abraxasquit (Remote host closed the connection)
13:19:30  * stagasquit (Ping timeout: 240 seconds)
13:20:32  * loladirojoined
13:22:12  * stagasjoined
13:24:53  * jmar777joined
13:27:28  * AvianFlujoined
13:53:28  * jmar777quit (Remote host closed the connection)
14:06:09  * qmx|awaychanged nick to qmx
14:18:25  * piscisaureus_joined
14:25:09  <tjfontaine>bnoordhuis: https://github.com/joyent/node/pull/3872#issuecomment-7804408
14:33:02  * AvianFluquit (Remote host closed the connection)
14:39:46  * loladiroquit (Quit: loladiro)
14:47:39  * TheJHjoined
14:49:03  * loladirojoined
14:53:45  * stagasquit (Read error: Connection reset by peer)
14:55:24  <isaacs>rvagg: what version of npm?
14:55:58  * AvianFlujoined
14:56:26  * loladiropart
14:59:31  <isaacs>ircretary: tell tootallnate If you're using push(), then _read is "You may now push more data" after push() has returned false. It's important when you have a backing stream that is old-style, like a handle wrap with readStart and readStop and ondata()
14:59:31  <ircretary>isaacs: I'll be sure to tell tootallnate
15:13:41  * bradleymeckjoined
15:17:19  <tjfontaine>bnoordhuis: I can do a proper pr if you'd like
15:18:20  <bnoordhuis>ah right, i remember what the deal was
15:18:28  <bnoordhuis>isaacs, piscisaureus_: https://github.com/joyent/node/pull/3872#issuecomment-8005650 <- comment svp
15:18:44  <isaacs>bnoordhuis: is svp plz in frnch?
15:18:49  <bnoordhuis>isaacs: yes
15:18:51  <isaacs>kewl
15:19:21  <isaacs>hm. i don't really have strong opinions about this at all.
15:19:22  * stagasjoined
15:19:35  <isaacs>setImmediate should be basically what nextTick was before the "immediate nexttick" changes
15:20:09  <tjfontaine>I do agree with shigeki, there was an issue that came through last night that made me wonder if the 1ms delay was to blame
15:21:03  * c4milojoined
15:21:15  <tjfontaine>https://github.com/joyent/node/issues/4699#issuecomment-13112565
15:22:07  <bnoordhuis>i lean towards a check handle, it's easy to reason about
15:22:38  <bnoordhuis>tjfontaine: your opinion?
15:22:46  <tjfontaine>I'm fine with any, I just don't want to see another .9 with the 1ms delay :)
15:23:35  <indutny>uv_check sounds good to me
15:23:53  <indutny>it should be called right after returning from js to libuv, right?
15:24:30  <bnoordhuis>right after leaving the poll syscall
15:24:49  <tjfontaine>his flow chart is quite helpful in that regard
15:24:53  <bnoordhuis>or rather, after uv__io_poll is done
15:25:48  <indutny>ok
15:25:51  <indutny>well
15:26:02  <indutny>if we're in js-land we are either called from uv__io_poll
15:26:10  <indutny>or from uv_prepare/uv_check
15:26:16  <bnoordhuis>indutny: https://github.com/joyent/libuv/blob/master/src/unix/core.c#L288-298
15:26:27  <tjfontaine>indutny: or from a timer
15:26:36  <indutny>tjfontaine: right
15:26:40  <bnoordhuis>for posterity: https://github.com/joyent/libuv/blob/8311390/src/unix/core.c#L288-298
15:26:42  <indutny>or from idle
15:26:45  <indutny>haha
15:26:54  <indutny>or from close callback
15:27:04  <indutny>this shit is so complex
15:27:23  <indutny>yeah, his circle of libuv's life is great
15:27:24  <bnoordhuis>yeah, but the check handles run after timers and i/o
15:27:42  <bnoordhuis>only closing handles come after but i don't think that's something to worry about
15:28:10  <indutny>bnoordhuis: I think we should worry about
15:28:11  <indutny>it
15:28:20  <indutny>it already gave us a lot of pain
15:28:24  <indutny>in its time
15:28:53  <bnoordhuis>hmm, uv-win runs the close cbs before the poll
15:29:16  <bnoordhuis>i guess that should be rectified in libuv
15:29:32  * piscisaureus_quit (Ping timeout: 252 seconds)
15:30:22  <indutny>deja vu
15:30:39  <indutny>I think you should file issue
15:30:52  <indutny>I clearly remember you mentioning it 2-3 months ago
15:31:07  <bnoordhuis>and then i forgot
15:32:14  <indutny>no surprise
15:32:17  <indutny>want me to do it for you?
15:33:42  <bnoordhuis>indutny: https://github.com/joyent/libuv/issues/699
15:35:16  <indutny>:)
15:35:19  <indutny>kewl
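The semantics being settled on above (setImmediate backed by a libuv check handle that runs after the loop's timers and I/O, versus process.nextTick running before control returns to libuv at all) have a directly observable ordering. A small sketch; note the relative order of a setTimeout(0) and a setImmediate scheduled from the main module is not guaranteed, so only the nextTick-first property is relied on:

```javascript
// Observable ordering under a uv_check-based setImmediate, as discussed:
// process.nextTick runs before control returns to the event loop at all,
// while timers and setImmediate run on a later loop turn. (From the main
// module, setTimeout(0) vs setImmediate ordering is not guaranteed --
// only the sync-then-nextTick-then-everything-else property is.)
var order = [];

setImmediate(function () { order.push('immediate'); });
setTimeout(function () { order.push('timeout'); }, 0);
process.nextTick(function () { order.push('tick'); });
order.push('sync');
```

This is also why a check handle avoids the 1ms-delay problem tjfontaine mentions: the callback runs on the very next pass through the loop rather than waiting out a minimum timer interval.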
15:39:50  <indutny>isaacs: yt?
15:39:54  <isaacs>hi
15:40:12  <indutny>hi
15:40:20  <indutny>so I'm still trying to fix this shit
15:40:26  <indutny>:)
15:40:28  <isaacs>right
15:40:31  <isaacs>we should trade projects.
15:40:44  <indutny>haha
15:40:47  <isaacs>i've got three half-working branches tracking down perf regressions now.
15:40:53  <isaacs>i think i've got a handle on the worst offenders.
15:41:27  <indutny>ah, good
15:41:30  <indutny>will not bother you then
15:41:37  <indutny>I'm fighting with http and streams2 :)
15:41:38  <isaacs>basically: MakeCallback should move back to C++, and stream.Writable needs to have a lot of parts removed/chopped up, stream.Readable needs to have its long functions broken up, EE.emit is too big to be inlined.
15:41:43  <isaacs>i hear ya
15:41:51  <indutny>isaacs: cool!
15:41:56  <indutny>isaacs: this is a good news
15:42:07  <indutny>isaacs: how much fixing MakeCallback could give us? (just wondering)
15:42:13  <isaacs>well, depends on the benchmark.
15:42:26  <isaacs>it's about half of the losses in tcp_raw benchmarks (the ones that don't go through lib/*.js)
15:42:36  <isaacs>it's not that big of an impact on net-pipe
15:42:42  <indutny>interesting
15:42:49  <indutny>I bet tcp has a lot of callbacks
15:42:59  <isaacs>but! if you *don't* do that, then refactoring lib/_stream* doesn't make that big of a diff either.
15:43:02  <indutny>but odd that it doesn't happen in net-pipe
15:43:02  <isaacs>you need them both
15:43:10  <indutny>oh
15:43:24  <isaacs>it's like it's setting some kind of upper limit, but so is the stream.Writable, and so if you only fix one, not much changes.
15:43:53  <indutny>so I was thinking about .ondata
15:44:01  <isaacs>the stream.Writable changes as i have them right now change the API pretty dramatically, so i'm going to back out of that and do it a bit differently.
15:44:02  <indutny>should I just call .ondata() before calling ._read() callback?
15:44:04  <isaacs>what about it?
15:44:32  <isaacs>indutny: the way that net.Socket works now, if you have a .ondata fn, then it doesn't even call this.push() at all.
15:44:44  <isaacs>(this.push() ~= the _read cb)
15:44:58  <indutny>but is it guaranteed that ._read() will be called?
15:45:04  <isaacs>eventually, yeah
15:45:18  <indutny>eventually == idk, really? :)
15:45:45  * piscisaureus_joined
15:45:56  <isaacs>actually.. i'm not sure. i think we might be doing some stupid "just call handle.readStart" in http.js
15:46:07  <indutny>indeed
15:46:11  <indutny>but it somehow works
15:46:13  <isaacs>but it *should* be doing it :)
15:46:15  <indutny>on many tests
15:46:18  <isaacs>yeah
15:46:34  <isaacs>that's the problem with tests: you make them work, and then think it's good, because just the tests work :)
15:46:39  <indutny>:)
15:46:44  <indutny>ok, returning back to problem
15:46:44  <indutny>brb
15:50:18  <indutny>isaacs: another question
15:50:20  <isaacs>k
15:50:26  <indutny>isaacs: in stream_readable we're emitting 'end'
15:50:29  <indutny>but not calling .onend
15:50:37  <isaacs>what's .onend?
15:50:38  <indutny>I guess I should call it from tls.js?
15:50:43  <indutny>isaacs: wierd thing
15:50:45  <indutny>don't ask me
15:50:54  <indutny>http is expecting net.Socket to call it
15:51:03  <isaacs>kk. yes, tls.js should just do whatever ending stuff it needs in .on('end')
15:51:19  <indutny>ok
15:57:14  <indutny>isaacs: ah, looks like I see the problem
15:57:20  <indutny>isaacs: http is never calling .read()
15:57:30  <isaacs>right
15:57:33  <indutny>so
15:57:37  <indutny>data accumulates
15:57:43  <indutny>and when high watermark is reached
15:57:48  <indutny>it stops calling ._read()
15:57:56  <indutny>great
15:57:57  <isaacs>i think we had some thing where if the length < lowWaterMark, it'll call read(0)
15:57:57  <indutny>:)
15:58:05  <isaacs>but it did some weird behavior
15:58:07  <indutny>yes, even like this
15:58:22  <indutny>so I think http should consume data that it has read
15:58:23  <indutny>or
15:58:35  <indutny>when calling .ondata tls should consume data automatically
15:58:39  <isaacs>you're saying, that http.js should call socket.read() and pass that into the parser.
15:58:41  <indutny>which one sounds more correct to you?
15:58:45  <indutny>isaacs: yes
15:58:47  <isaacs>rather than socket.ondata = put into parser
15:58:48  <indutny>not exactly
15:58:50  <isaacs>yes, that sounds good
15:58:51  <indutny>well
15:59:02  <indutny>I think there're a couple of problems here
15:59:15  <indutny>we've ondata method for a performance's sake
15:59:22  <isaacs>right, but meh
15:59:27  <indutny>I guess .read() will slice everything together
15:59:32  <indutny>isaacs: meh != benchmarks
15:59:36  <isaacs>the only benefit of ondata is that it doesn't call buffer.slice
15:59:47  <isaacs>but then we call buffer.slice anyway to put it into the parser
15:59:48  <isaacs>right?
16:00:02  <isaacs>and read() will only get slower if you have more chunks built up
16:00:02  <indutny>nope
16:00:11  <isaacs>oh, it's parser.write(chunk, start, length) or something?
16:00:15  <indutny>yes
16:00:17  <isaacs>i see
16:00:18  <isaacs>ok
16:00:23  <indutny>let me try it first
16:00:41  <isaacs>i think maybe that's why ondata = "don't even put stuff into the Readable thingies"
16:01:17  <indutny>yeah
16:01:22  <indutny>I wonder how it works with net.Socket
16:01:34  <indutny>it should be accumulating a lot of data
16:01:41  <indutny>because no one ever reads it
16:02:00  <indutny>isaacs: just tried, calling .read() fixes that test
16:02:26  <isaacs>indutny: in net.js:
16:02:26  <isaacs> var ret = true;
16:02:26  <isaacs> if (self.ondata) self.ondata(buffer, offset, end);
16:02:26  <isaacs> else ret = self.push(buffer.slice(offset, end));
16:02:30  <indutny>oooh
16:02:32  <indutny>aaaah
16:02:35  <indutny>oh god
16:02:40  <indutny>ok
16:02:49  <indutny>sorry for my next message
16:02:54  <indutny>this is so fucking retarded shit
16:03:00  <indutny>I spent 2 nights trying to figure it out
16:03:16  * dapjoined
16:03:25  <isaacs>i'm sorry. it does suck a lot.
16:03:33  <isaacs>also, in http.js: if (self.onend) self.once('end', self.onend);
16:03:42  <isaacs>er, no, in net.js
16:03:46  <indutny>hahaha
16:04:06  <isaacs>which is silly, because it should just be adding 'end' handlers in http.js
16:04:08  <isaacs>oh well
16:04:19  <indutny>hahaha
16:04:37  <indutny>I think I just got mad
16:04:44  <indutny>ok, running tests now
16:04:49  <indutny>hopefully they'll work fine now
16:04:59  <indutny>ah, still other fails
16:05:06  <indutny>s/fails/are failing/
16:05:43  * sgallaghquit (Remote host closed the connection)
16:06:59  * hzquit
16:13:16  <bnoordhuis>isaacs: fs.createReadStream(filename, { bufferSize: 42 }) no longer works, you need to set the low/high watermarks as well
16:13:20  <bnoordhuis>is that a bug/regression or ?
16:13:38  <isaacs>bnoordhuis: "no longer works" how?
16:13:40  <isaacs>what happens?
16:13:52  <bnoordhuis>it just reads everything in one big chunk
16:14:11  <bnoordhuis>unless you set highWaterMark: 42 as well
16:14:17  <isaacs>ohh
16:14:19  <isaacs>hm.
16:14:31  <isaacs>i'm not sure if that's a bug or a feature :)
16:14:40  <isaacs>when do you want to read smaller pieces of a file than you have room for?
16:14:47  <bnoordhuis>unit tests :)
16:15:05  <isaacs>bnoordhuis: it's bufferSize
16:15:07  <isaacs>not chunkSize
16:15:09  <isaacs> this.bufferSize = options.bufferSize || 16 * 1024;
16:15:28  <isaacs>oh, that's what you're using.
16:15:29  <isaacs>nvm
16:15:42  * sgallaghjoined
16:20:37  <isaacs>bnoordhuis: um... i'm seeing it calling fs.read() with the bufferSize arg.
16:20:43  <isaacs>bnoordhuis: what's the problem, exactly?
16:21:03  <bnoordhuis>isaacs: let me see if i can turn it into a standalone test case
16:21:11  <bnoordhuis>(it's from a test suite from one of my modules)
16:21:30  <isaacs>https://gist.github.com/4715523
16:22:05  <isaacs>bnoordhuis: ohh, you mean that the 'data' event gets all of it in one blob?
16:22:10  <bnoordhuis>isaacs: yes
16:22:14  <isaacs>ok.
16:22:15  <isaacs>i see that.
16:22:20  <isaacs>hm.
16:22:28  <bnoordhuis>okay. no need for a test case then?
16:22:53  <isaacs>bnoordhuis: so, the reason is that it will keep calling _read until it has enough to make it to the lowWaterMark
16:23:16  <bnoordhuis>i figured it was something like that :)
16:23:37  <bnoordhuis>for the record, it's not a big issue for me
16:23:41  <bnoordhuis>but i thought i'd mention it
16:24:05  <isaacs>yeah
16:24:19  <isaacs>i think that the "special" default water level marks in fs streams are actually a bad idea.
16:24:22  <isaacs>it hurts performance.
16:24:33  <isaacs>and you can always override them anyway
16:24:48  <isaacs>the lowWaterMark should be 0 by default for all streams.
16:24:52  <isaacs>that would fix your issue
16:25:02  <isaacs>also, it makes a big impact on that fs.WriteStream regression
16:25:16  <isaacs>qv: https://gist.github.com/4715559
16:25:32  <bnoordhuis>yeah, seems reasonable
16:27:29  * `3rdEdenquit (Quit: dinner)
16:27:33  * perezdquit (Quit: perezd)
16:29:36  <isaacs>k
16:29:49  <isaacs>i'll send a PR today or tomorrow for that.
16:30:12  <isaacs>bnoordhuis: first, i'm going to polish up the "move makeCallback back to C++" patches.
16:30:39  <bnoordhuis>is that the main culprit?
16:30:55  <bnoordhuis>re perf regressions, i mean
16:32:34  <tjfontaine>it's about half the regression in the raw benchs, from what I saw last night
16:34:45  <isaacs>bnoordhuis: yeah, it's the culprit that isn't lib/net.js
16:34:50  <isaacs>bnoordhuis: er, lib/*.js really
16:35:09  <bnoordhuis>ah, good
16:35:32  <isaacs>bnoordhuis: but we can't possibly get the require('net') benches as fast as v0.8 while the process.binding('tcp_wrap') benches are so much slower.
16:35:42  <isaacs>bnoordhuis: the fact that they're even close is actually kind of impressive.
16:36:26  <bnoordhuis>wait, so process.binding('tcp_wrap') has become slower independent of makeCallback?
16:36:41  <isaacs>bnoordhuis: no, process.binding('tcp_wrap') has become slower *because of* makeCallback
16:36:53  <bnoordhuis>ah right, that makes more sense
16:36:54  <isaacs>bnoordhuis: src/tcp_wrap.cc calls node::MakeCallback
16:36:59  <isaacs>which calls process._makeCallback
16:37:43  <isaacs>another approach, which also made a dent, was to go through and change all the things that call makecallback, so that they all have a domain member 100% of the time, and are more consistent in their signatures, etc.
16:37:51  <isaacs>that lets v8 optimize the function better.
16:37:57  <isaacs>but it's so easy to deopt it, it's really not worth it.
16:38:01  <isaacs>and it's a gigantic change.
16:39:08  <isaacs>bnoordhuis: another issue with moving it back to C++ is that we now call process._tickCallback on every call to MakeCallback
16:39:14  <isaacs>bnoordhuis: and that function has some... issues.
16:39:21  <isaacs>bnoordhuis: but one thing at a time.
16:39:57  <isaacs>bnoordhuis: we can probably rework things a bit so that we call that function less often, and so that it's more optimizable.
16:40:11  <isaacs>bnoordhuis: or, fuckit, port that to C++ as well. ALL THE JAVASCRIPTS GET C++'ED!
16:40:21  <isaacs>(probably not actually winful)
16:44:21  * sgallaghquit (Remote host closed the connection)
16:46:59  <indutny>isaacs: another question
16:47:05  <indutny>isaacs: should I emit 'close' after 'error'?
16:47:21  <isaacs>indutny: 'error' = "All bets are off."
16:47:29  <isaacs>indutny: so, i mean, if you feel like it, sure.
16:47:35  <isaacs>indutny: emit whatever you want after 'error'
16:47:44  * sgallaghjoined
16:48:09  <indutny>it seems that it isn't playing nicely with http
16:48:18  <indutny>it emits another error :)
16:48:24  <indutny>socket hangup
16:49:31  <indutny>wait
16:49:35  <indutny>there's more than this
16:49:39  <indutny>I think its my fault
16:49:59  <indutny>nvm
16:52:19  * trevnorrisjoined
16:55:39  * loladirojoined
16:56:24  * loladiroquit (Client Quit)
16:57:35  <isaacs>indutny: i've gotta go to ride trains to meetings. i'll bbiab
16:57:38  * isaacs&
16:57:54  <indutny>see ya
16:58:16  <isaacs>oh, also, 0.9.9 today. no tls fixes, no perf fixes, just the stuff that's landed right now
16:58:27  <isaacs>k, /me away fer reals
16:58:36  <indutny>k
16:59:31  <trevnorris>bnoordhuis: can't build 94d2ad0. posted inline.
16:59:48  <bnoordhuis>trevnorris: ?
16:59:52  * bradleymeckquit (Ping timeout: 244 seconds)
17:00:04  <trevnorris>bnoordhuis: GH-4714
17:00:25  <bnoordhuis>trevnorris: oh? why is that?
17:00:39  <trevnorris>bnoordhuis: "../src/v8_typed_array.cc:322:49: error: 'WeakCallback' is a private member of '::ArrayBuffer'"
17:00:55  <bnoordhuis>hah
17:00:59  <bnoordhuis>what version of gcc is that?
17:01:05  <trevnorris>bnoordhuis: clang 3.2
17:01:17  <bnoordhuis>do you have gcc on your machine?
17:01:20  <trevnorris>yeah
17:01:28  <bnoordhuis>ah well, nvm
17:01:45  <bnoordhuis>it should compile with clang
17:02:40  <trevnorris>bnoordhuis: hm... what's up with clang? compiles fine with gcc 4.7.
17:04:04  <bnoordhuis>it's arguably a gcc bug because WeakCallback is indeed private and TypedArray is not a friend class
17:04:14  <trevnorris>heh, ok
17:04:17  * brsonjoined
17:05:16  * piscisaureus_quit (Ping timeout: 272 seconds)
17:05:37  <bnoordhuis>trevnorris: https://github.com/bnoordhuis/node/commit/1ea342f <- does that work?
17:05:40  <tjfontaine>bnoordhuis: also I was wondering, do you think it would be worth anyones time to make it copy on write?
17:06:11  <bnoordhuis>you can't really enforce that in v8
17:06:25  <bnoordhuis>you'd have to drop down to os specific hackery
17:06:44  <tjfontaine>hm ok
17:06:56  <bnoordhuis>and you'd have to keep the old object alive until the new one is gc'd
17:07:00  <tjfontaine>nod
17:07:07  <tjfontaine>more work than desirable
17:07:10  <bnoordhuis>yep
17:07:25  <trevnorris>bnoordhuis: works
17:07:32  <bnoordhuis>cool
17:07:32  <tjfontaine>memset and memcpy, I'm not sure we ever want Buffer to be backed by TypedArray :)
17:07:37  <bnoordhuis>now to decide if it's something we want :)
17:07:44  <bnoordhuis>tjfontaine: i was thinking the same thing :)
17:08:11  <bnoordhuis>not to mention the zeroing of new arrays
17:08:29  * bradleymeckjoined
17:08:34  <ryah>if node was invented today it would only be typedarrys
17:08:42  <ryah>that's for sure
17:09:02  <bradleymeck>would be nice...
17:09:17  <bradleymeck>isaacs: is there some sort of race that was fixed from npm 1.2.2 to 1.2.4?
17:09:30  <trevnorris>well, you have to remember that a spec for handling binary data is still in the works.
17:09:42  <trevnorris>so if node was created 2 years from now, it probably would use that instead.
17:09:55  <ryah>what spec is that?
17:10:18  <bradleymeck>ryah: either way, blobs are very painful to use for now and they are how you actually deal with stuff usually
17:10:36  <trevnorris>ryah: http://wiki.ecmascript.org/doku.php?id=harmony:binary_data
17:12:23  <bnoordhuis>ryah: you're in NYC now aren't you?
17:12:34  <bnoordhuis>bert and i are in SF next week
17:12:38  <Benvie>it's planned for binary data to build on top of typed arrays, which will recently fully specified in the ES6 spec
17:12:46  <Benvie>which was
17:12:51  <ryah>yes
17:12:57  <bnoordhuis>too bad
17:13:03  <ryah>indeed
17:13:58  <indutny>bnoordhuis: for how long are you staying there?
17:14:08  <trevnorris>bnoordhuis: you doing a conference or something? i'll have to drop by.
17:14:08  <indutny>ryah: are you living there now?
17:14:12  <ryah>yes
17:14:22  <bnoordhuis>indutny: just a couple of days
17:14:31  <bnoordhuis>trevnorris: not a conference, a business meeting :/
17:14:42  <indutny>ryah: nice, hopefully, we'll visit NY at Feb 18 or later
17:14:49  <bnoordhuis>but if you're in SF, you're free to drop by
17:15:05  <trevnorris>bnoordhuis: ugh. isn't that like a 10 hour flight?
17:15:09  <indutny>bnoordhuis: If I'll visit US, I'll be in SF :)
17:15:12  <indutny>but that'll be later
17:15:21  <bnoordhuis>trevnorris: yes, longer even
17:15:47  <ryah>indutny: hit me up when youre here - we can grab a beer or something
17:15:59  <trevnorris>ryah: you in sf?
17:15:59  <tjfontaine>it's 5.5 from ohio, so it'd better be longer for bnoordhuis :)
17:16:19  <ryah>trevnorris: ny
17:17:00  <trevnorris>... yeah. *facepalm*
17:17:29  * trevnorrisfinds it difficult to read irc and code.
17:19:24  * bnoordhuisis off to dinner
17:19:47  <indutny>haha
17:19:49  <indutny>ryah: sure
17:20:00  <indutny>though, I'm still not sure if we'll go there
17:20:08  <indutny>depends on many things
17:20:19  <indutny>I should be able to tell this more definitely tomorrow
17:25:56  <ryah>so when is 0.10 being released?
17:26:14  <tjfontaine>probably a couple weeks
17:26:21  <tjfontaine>another .9 this week
17:26:24  <indutny>there're a lot of things to do before this
17:26:31  <ryah>too long between stables
17:26:34  <indutny>yes
17:26:37  <indutny>but streams2 you know
17:26:46  <indutny>and performance regressions
17:26:47  <ryah>yeah...
17:28:07  <bradleymeck>indutny: whats the regression at right now, still over 10%?
17:28:17  * bradleymeckhas not checked in a while
17:28:28  <indutny>bradleymeck: I'm not fully aware of it, you'd better ask isaacs and trevnorris
17:28:36  <indutny>it seems that we've some things to fix
17:28:47  <indutny>to return it back to 0.8
17:28:59  <indutny>but I don't know any numbers
17:29:36  <ryah>people ever bench against 0.6?
17:29:46  <ryah>or 0.4 ?
17:29:55  <ryah>0.4 was fast
17:29:57  * mikealquit (Quit: Leaving.)
17:30:28  * mikealjoined
17:30:36  <indutny>ryah: well, we benched it when we were releasing 0.6 and 0.8 respectively
17:30:46  <indutny>so I don't think there's any real need to do this
17:30:51  <indutny>just make it as fast as 0.8 is
17:30:53  <indutny>or even faster :)
17:30:55  <indutny>that's the goal
17:31:36  * bradleymeckquit (Quit: bradleymeck)
17:31:46  <trevnorris>ryah: been working on a more comprehensive benchmark suite: https://github.com/joyent/node/pull/4656
17:32:03  <CoverSlide>well didn't v0.8 have a tls regression?
17:32:22  <indutny>CoverSlide: in speed?
17:32:28  <indutny>I guess no
17:32:31  <indutny>we've improved a lot of things
17:32:32  <indutny>also
17:32:32  <CoverSlide>i think mjr complained about that
17:32:38  <indutny>tell me about that
17:32:41  <indutny>I'm working with him :)
17:32:55  <CoverSlide>well yeah
17:33:11  <trevnorris>right now on linux i'm seeing a 10% regression using tcp_wrap directly, and ~15% regression using the js api.
17:33:32  <tjfontaine>with the makecallback change it's closer to 5% for tcp_wrap correct?
17:33:34  <indutny>CoverSlide: hopefully I'll get this fixed either in patch versions of 0.10
17:33:36  <indutny>or in 0.12
17:34:10  <trevnorris>tjfontaine: yeah. though imho that's still not the best way.
17:34:41  <trevnorris>tjfontaine: calling back and forth from cc to js is bad mojo.
17:34:52  <tjfontaine>no arguments there
17:37:03  <trevnorris>tjfontaine: something is going on in v8 though. mem usage when using streams is about double of what it was.
17:39:09  <trevnorris>well. tried to benchmark against v0.6.20, but process.hrtime() doesn't exist.
17:39:45  <tjfontaine>ya you'll have to monkeypatch something in for that
17:59:52  * indexzerojoined
18:02:26  <ryah>trevnorris: sweet
18:03:13  * bradleymeckjoined
18:05:31  <trevnorris>whoot! finally created a prime number generator in js that's only 50% slower than optimized c. https://gist.github.com/trevnorris/3955671
18:06:15  <trevnorris>ryah: thanks. it's making it easy to script recording times over n commits.
18:06:17  <tjfontaine>heh
18:06:23  * mikealquit (Quit: Leaving.)
18:06:30  * brsonquit (Ping timeout: 240 seconds)
18:07:06  * loladirojoined
18:07:22  <trevnorris>tjfontaine: that took me months to perfect. the one that brought it together was the `(x - (x % 8)) / 8`
18:07:27  * brsonjoined
18:07:29  <trevnorris>that allowed the generator to be inlined.
18:07:46  <trevnorris>(instead of just using ~~(x/8))
18:08:32  <tjfontaine>hm I wonder if you should tell them about that
18:09:39  <trevnorris>tjfontaine: might help. when they detect a conversion from non-Smi they run a series of tests to check the type of value it is.
18:09:42  <trevnorris>e.g. isFinite
18:09:51  <tjfontaine>nod
18:13:50  * qmxchanged nick to qmx|away
18:14:57  <ryah>trevnorris: have you seen chrome's benchmark stuff?
18:15:08  <ryah>trevnorris: they're buildbot graphs and what not
18:15:17  <ryah>*their
18:15:32  <trevnorris>ryah: don't think I have.
18:15:52  <CoverSlide>I like Mozilla's arewefastyet thing
18:16:23  <CoverSlide>but it's got fewer details
18:16:24  <trevnorris>CoverSlide: yeah. they have that on a big screen in the office.
18:16:32  <ryah>http://build.chromium.org/f/chromium/perf/dashboard/overview.html
18:16:49  * `3rdEdenjoined
18:17:25  <trevnorris>ryah: thanks for sharing. never seen those before.
18:17:47  <CoverSlide>why did node stop using buildbot?
18:18:02  <indutny>good question
18:18:06  <indutny>but I don't have answer
18:19:23  <indutny>github is down
18:19:24  <indutny>nice
18:30:10  * TooTallNatejoined
18:35:48  <isaacs>indutny: node's buildbot was running on no.de, and no one was keeping it updated.
18:35:54  <isaacs>indutny: it had broken quite a long time ago
18:36:06  <indutny>ok, so the answer is - no one cares enough :P
18:36:11  <isaacs>indutny: basically.
18:36:20  <isaacs>indutny: no one cares enough to stop doing other things instead of that.
18:36:21  * perezdjoined
18:36:34  <indutny>well yeah
18:36:37  <isaacs>indutny: we all care. but i think time and attention is the limiting factor.
18:36:43  <indutny>we weren't thinking about performance that much before
18:41:39  * mikealjoined
18:42:14  <trevnorris>bnoordhuis: think you're the best to ask. doesn't it add overhead to always check .IsEmpty() every time MakeCallback is called?
18:42:34  <indutny>nope
18:42:39  <indutny>its almost no-op
18:42:51  <indutny> V8_INLINE(bool IsEmpty() const) { return val_ == 0; }
18:43:00  <indutny>trevnorris: ^
18:43:16  <trevnorris>ok. so if there's a hit it's probably unmeasurable.
18:43:38  <indutny>there's no hit :)
18:43:44  <indutny>Array::New() and
18:43:51  <indutny>Call() occupy most of the time
18:44:07  <indutny>checking one pointer's value is really no-op here
18:45:10  <trevnorris>ah... yeah. i've never had good performance experience using ->Set
18:45:11  * TooTallNatequit (Quit: Computer has gone to sleep.)
18:45:26  <indutny>its pretty slow
18:45:32  <indutny>slower than doing the same thing in js
18:45:42  <indutny>I mean
18:45:45  <indutny>its much slower
18:46:05  * TooTallNatejoined
18:46:05  <indutny>though, I think if you really want to be confident in it - you'll need to benchmark it
18:46:11  <indutny>since its just my assumption
18:46:23  <trevnorris>does TryCatch add much like it does in js?
18:46:24  <indutny>obj.prop = val in js should use some inline caching
18:46:44  <indutny>trevnorris: not that much
18:47:01  <indutny>but keep in mind that you don't have any inline optimizations
18:47:07  <indutny>and JIT code has
18:47:21  <indutny>so doing a lot of stuff with js objects in C++ is pretty slow anyway
18:47:27  <indutny>slower than in js
18:48:56  <isaacs>yeah, it's rare that C++ js code is faster than JS js code
18:49:04  <isaacs>(why this makecallback stuff is so surprising, actually)
18:49:31  <indutny>isaacs: I think the problem is not in js code
18:49:38  <indutny>isaacs: the problem is that we've added a lot of code to it
18:49:39  <indutny>:)
18:49:49  <indutny>like building arrays
18:49:58  <trevnorris>so, why is a call made to MakeCallback just to make a call to _makeCallback?
18:50:01  <indutny>and doing weird stuff with arguments
18:50:12  <isaacs>indutny: well, V8 is flopping around optimizing and then de-optimizing process._makeCallback
18:50:21  <indutny>that's the problem, yes
18:50:28  <trevnorris>isaacs: that's because of the domain stuff.
18:50:39  <isaacs>indutny: the change to move from fn.apply(obj, args) to obj[fn](...) helped, but not THAT much
18:50:44  <isaacs>trevnorris: not only that.
18:50:52  <isaacs>trevnorris: moving it to C++ helps the tcp_raw a lot.
18:51:04  <isaacs>the other change is the nextTick changes.
18:51:10  <trevnorris>i just meant as far as _makeCallback deoptimizing.
18:51:12  <indutny>isaacs: http://blog.indutny.com/f/fs-0.9.7.svg
18:51:37  <isaacs>trevnorris: it had more to do with the fact that it was always a different kind of object coming in.
18:51:46  <indutny>I did it 2-3 weeks ago
18:51:47  <isaacs>trevnorris: and the args array was always a different length.
18:51:50  <isaacs>ok
18:51:54  <isaacs>indutny: what's this?
18:52:06  <indutny>flamegraph of fs benchmark
18:52:13  <indutny>interesting thing
18:52:18  <indutny>look at the bars right above node`_ZN4nodeL5AfterEP7uv_fs_s
18:52:28  <indutny>Array::New(), Object:Get...
18:52:32  <indutny>they're all called from MakeCallback
18:53:07  <isaacs>indutny: i mean, what's the benchmark?
18:53:18  <indutny>fs-readfile.js
18:54:08  <isaacs>k
18:54:19  <isaacs>indutny: add it to the list :)
18:55:34  <trevnorris>isaacs: won't all functions run that are passed to _makeCallback be in cc land?
18:57:08  <isaacs>trevnorris: yes.
18:57:25  <isaacs>wel, actually, no, not the function.
18:57:37  <isaacs>it's getting passed FROM cc land, but the function is generally always *defined* in JS
18:57:53  <isaacs>except a few rare cases in crypto, iirc
18:58:32  <trevnorris>ok, so generally cc is passing a js fn back to js to run then return the results back to cc?
18:58:50  <mmalecki>isaacs: hey. any chance we could get https://github.com/isaacs/npm/issues/3133 pushed to node?
18:58:51  <isaacs>no
18:59:10  <isaacs>mmalecki: yes.
18:59:14  <isaacs>(the no was for trevnorris :)
18:59:25  <isaacs>trevnorris: we don't return the results to cc, afaik
18:59:41  <isaacs>oh, i guess we do return scope.Close(ret)
18:59:51  <isaacs>but whatever, that doesn't matter.
18:59:57  <isaacs>nothing ever actually looks at the return value.
19:00:07  * mikealquit (Quit: Leaving.)
19:00:48  <trevnorris>isaacs: https://github.com/joyent/node/blob/master/src/node.cc#L1115
19:01:15  <isaacs>ok. so, there's one case where it does matter :)
19:02:00  <trevnorris>isaacs: guess what i'm thinking is there's a lot of back and forth for one of the hottest code paths.
19:02:30  <isaacs>trevnorris: so, i'm going to try to be data-driven about this.
19:03:12  <isaacs>trevnorris: tcp_raw gets better moving it to c++. http_simple et al dotn' get worse. for each change from here on, it needs a benchmark that outputs <label>:<number> where the <number> gets bigger with the result, and no other numbers get smaller.
19:03:27  <isaacs>(or at least, not much smaller, and not without understanding why)
19:03:35  <isaacs>we don't have to be slaves to local maxima
19:03:46  <isaacs>indutny: yes, that can improve a lot.
19:04:05  * perezdquit (Quit: perezd)
19:04:10  <trevnorris>isaacs: and that makes sense because you're removing a hop to and from js-land.
19:04:15  <isaacs>trevnorris: yep.
19:04:20  <isaacs>well, there's still a hop
19:04:24  <isaacs>it just happens later.
19:04:30  * sgallaghquit (Ping timeout: 240 seconds)
19:04:32  <isaacs>so, V8 is not opting and deopting process._makeCallback a lot
19:05:10  * mikealjoined
19:05:27  * mikealquit (Client Quit)
19:05:36  * mikealjoined
19:06:03  * piscisaureus_joined
19:06:08  <trevnorris>ok. it's coming together.
19:07:39  <trevnorris>isaacs: well, i'm feeling pretty done with PR-4656, and am going to wait for your patch to land before doing much with readable streams.
19:08:40  <trevnorris>i'm going to try working .splice() out of _tickCallback. it's just annoying me.
19:08:53  * lohkeyjoined
19:09:32  * bradleymeckquit (Ping timeout: 255 seconds)
19:09:53  <trevnorris>isaacs: oh, and you were right about calling a function w/ wrong number of args. was something crazy like 131x's slower.
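The arity point trevnorris confirms can be illustrated directly: calling a function with fewer arguments than its declaration makes V8 build an arguments-adaptor frame on every call, which is where the slowdown comes from. The 131x figure is his measurement, not reproduced here:

```javascript
function add3(a, b, c) {
  // |0 coerces a missing (undefined) argument to 0, so the
  // mismatched call below still returns a number
  return (a | 0) + (b | 0) + (c | 0);
}

const matched = add3(1, 2, 3);  // arity matches the declaration: fast call path
const mismatched = add3(1, 2);  // arity mismatch: adaptor frame every call
```

Same function, same body — only the call-site arity differs, which is why it was such an easy regression to introduce from C++ callers.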
19:09:59  * mikealquit (Ping timeout: 255 seconds)
19:11:24  <mmalecki>isaacs: would a pull request make it happen faster :) ?
19:13:46  * bradleymeckjoined
19:19:07  * Ralt_joined
19:26:42  * loladiroquit (Quit: loladiro)
19:28:46  * mikealjoined
19:31:31  * Ralt_quit (Remote host closed the connection)
19:31:53  * `3rdEdenquit (Remote host closed the connection)
19:31:57  * TheJHquit (Ping timeout: 252 seconds)
19:36:02  <trevnorris>_tickCallback is right out of quantum mechanics. it works when I try to log what's going on, but fails when I remove the log.
19:37:24  <CoverSlide>Schroedinger's callback
19:40:16  <trevnorris>dude, seriously. anytime I console.log from nextTick, or _tickCallback it works. but when they're removed it fails....
19:40:50  <CoverSlide>maybe try using process.stdout.write
19:42:06  <trevnorris>ah. good one.
19:44:45  <trevnorris>so StartTickSpinner has a comment referencing ev_prepare, but I thought bnoordhuis told me ev was removed or some such?
19:45:51  * stagasquit (Ping timeout: 245 seconds)
19:48:07  <tjfontaine>it was, but not all comments may be up to date
19:48:23  <trevnorris>ok, good enough.
19:54:05  * TheJHjoined
19:54:06  <indutny>isaacs: I've a nice idea
19:54:11  <indutny>about how to make reviewing more challenging
19:54:31  <indutny>for every LGTM comment we should add +1 to member's score
19:54:56  <indutny>but if merged commit has introduced error later we should -1.5 from member's score
19:55:09  <tjfontaine>so what would your score be? :)
19:55:26  <indutny>I hope it'll be positive :)
19:56:00  <indutny>but it'll be motivating anyway
19:56:41  <tjfontaine>I think things like gerrit and rietveld have similar concepts
19:57:21  * sblomjoined
19:58:00  <indutny>really?
19:58:10  <indutny>haven't used it from maintainer's side
20:00:22  <sblom>isaacs: I've tried to trick the new installer into installing x64 and x86 side by side like you have and it has resisted completely.
20:00:52  <trevnorris>in Tick, _tickCallback is fetched every time using `process->Get(...)`, but since it shouldn't change, can't it just be fetched once?
20:01:35  <sblom>isaacs: It's not the end of the world if others can do what you've done, obviously. :)
20:01:53  <sblom>isaacs: But it really bothers me that I can't figure out how it's even possible for you to have done it.
20:02:37  <sblom>piscisaureus_: do you have any feedback on http://github.com/joyent/node/issues/4694
20:02:42  * `3rdEdenjoined
20:02:43  <sblom>(installer patch)
20:06:46  * sgallaghjoined
20:08:02  * stagasjoined
20:11:28  * `3rdEdenquit (Ping timeout: 272 seconds)
20:17:17  * EhevuTovjoined
20:20:18  * stagas_joined
20:21:30  * stagasquit (Ping timeout: 276 seconds)
20:21:43  * stagas_changed nick to stagas
20:24:31  <isaacs>mmalecki: i always pull in the new npm before a release.
20:24:42  <isaacs>mmalecki: so don't sweat it. if it's in npm, it'llbe in the next node release.
20:27:31  * `3rdEdenjoined
20:27:33  <indutny>isaacs: yay
20:27:36  <indutny>isaacs: tests are passing
20:27:37  <indutny>:)
20:27:41  <indutny>let me run it once again for sure
20:27:48  <indutny>but it seems that I'm done with cryptostreams
20:28:51  * indutnyis crossing fingers
20:29:52  * `3rdEdenquit (Remote host closed the connection)
20:30:15  <indutny>YESSS
20:30:40  <indutny>only pummel tests left
20:31:28  * hzjoined
20:34:36  <indutny>interesting
20:34:40  * `3rdEdenjoined
20:34:43  <indutny>bnoordhuis: is test-https-ci-reneg-attack.js timing sensitive?
20:34:53  <bnoordhuis>indutny: i don't think so. why?
20:34:57  <indutny>well
20:35:08  <indutny>I get 18 renegotiations instead of expected 17
20:35:29  <indutny>and reneg limit relies on timing
20:35:33  <indutny>as far as I can see in tls.js
20:36:01  <bnoordhuis>that's right. i thought you meant timing sensitive as in milli- or microseconds
20:36:17  <indutny>ok
20:36:24  <indutny>I think I can safely ignore this
20:36:31  <indutny>considering that I haven't touched that code anyway :)
20:36:41  <trevnorris>bnoordhuis: have a moment for a ? about uv_idle_start?
20:36:49  <bnoordhuis>trevnorris: sure. what's up?
20:37:35  * paddybyersquit (Read error: Connection reset by peer)
20:38:09  <trevnorris>bnoordhuis: StartTickSpinner is called and runs uv_idle_start, but that's only reached after a function has been passed to process.nextTick.
20:38:41  <trevnorris>so is uv_idle_start meant to begin idling node?
20:38:49  <bnoordhuis>well
20:38:58  <bnoordhuis>it's something of a misnomer, really
20:39:10  <bnoordhuis>an idle handle prevents libuv from blocking in the poll syscall
20:40:32  <bnoordhuis>trevnorris: has anyone explained to you how the event loop works in libuv?
20:40:50  <trevnorris>bnoordhuis: nope
20:41:00  <bnoordhuis>okay, a short primer
20:41:09  * paddybyersjoined
20:41:21  <bnoordhuis>libuv has handles. most deal with i/o of some kind, e.g. tcp/pipe/udp handles
20:41:26  <bnoordhuis>then there's timers
20:41:43  <bnoordhuis>and then there are assorted leftovers like idle/prepare/check handles
20:42:09  <bnoordhuis>when you call uv_run(), it calls the system's poll function, e.g. epoll_wait() or kevent() or whatever
20:42:25  <bnoordhuis>usually, it blocks indefinitely or until the next timer expires
20:42:45  <bnoordhuis>but if you have an idle handle, it will do a poll with a zero timeout
20:42:53  <bnoordhuis>iow, peek (for new events) but don't block
20:43:14  <trevnorris>so why would uv_idle_start be called directly after a fn was passed to nextTick?
20:43:36  <trevnorris>seems it wouldn't need to enter an idle state if there was something to run.
20:43:43  <bnoordhuis>to give node a chance to fetch new events from the system through epoll_wait/kevent/port_getn
20:43:58  <bnoordhuis>but without blocking when it does so
20:44:07  <bnoordhuis>you could accomplish the same thing with a zero timeout timer
20:44:19  <trevnorris>aaahhhh... the world is coming together.
20:44:39  <bnoordhuis>we copied the idle handle concept from libev for reasons that have been forgotten in the fog of time
20:45:16  <trevnorris>so uv_idle_start is async. so it could still be pending while two fn's were passed to nextTick, right?
20:45:30  <trevnorris>meaning, it wouldn't have needed to be run the second time.
20:46:06  <bnoordhuis>you mean node calls uv_idle_start twice, once for each function that's registered with process.nextTick?
20:46:20  <trevnorris>it's called every time nextTick is called
20:46:26  <bnoordhuis>right
20:46:38  <bnoordhuis>possibly inefficient but not really harmful
20:46:58  * mikealquit (Quit: Leaving.)
20:47:02  <bnoordhuis>libuv has a `if (handle->active) return` guard in uv_idle_start so no harm done
20:47:12  <trevnorris>so, most efficiently it should only need to run once a tickQueue is finished?
20:48:05  <bnoordhuis>not sure if we're talking about the same thing here
20:48:17  <bnoordhuis>node only needs to call uv_idle_start once per tick of the event loop
20:48:23  <bnoordhuis>i.e. on the first call to process.nextTick
20:48:39  <bnoordhuis>but again, calling uv_idle_start repeatedly doesn't really hurt
20:48:47  <trevnorris>it's being called every time process.nextTick is called.
20:49:18  <trevnorris>so if nextTick was called twice in the same function then it would call uv_idle_start a second time before it's had a chance to respond from the first time.
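The optimization trevnorris is circling — call into C++ at most once per loop tick rather than on every `process.nextTick` — can be sketched in plain JS with a guard flag. `startSpinner` and the call counter here are hypothetical stand-ins, not node internals; in the real code the crossing ends in `uv_idle_start`, which is itself idempotent thanks to its `if (handle->active) return` guard:

```javascript
// guard the JS -> C++ crossing so it happens once per queue drain,
// not once per nextTick call
const tickQueue = [];
let spinnerStarted = false;
let startCalls = 0; // instrumentation for the sketch only

function startSpinner() {
  // stand-in for the process binding that calls uv_idle_start
  startCalls++;
  spinnerStarted = true;
}

function nextTickSketch(fn) {
  tickQueue.push(fn);
  if (!spinnerStarted) startSpinner(); // only the first call crosses over
}

nextTickSketch(() => {});
nextTickSketch(() => {}); // second call: flag already set, no crossing
```

Since the C++ call is cheap but not free, keeping the flag on the JS side avoids the binding-layer hop entirely on the hot path.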
20:52:01  * qmx|awaychanged nick to qmx
20:52:05  * qmxquit (Excess Flood)
20:53:05  * stagas_joined
20:53:21  * stagasquit (Ping timeout: 245 seconds)
20:53:33  * stagas_changed nick to stagas
20:53:50  * piscisaureus_quit (Quit: ~ Trillian Astra - www.trillian.im ~)
20:53:55  * qmxjoined
20:54:07  <bnoordhuis>trevnorris: uv_idle_start just activates the handle
20:54:33  <bnoordhuis>you can call it just once or a billion times, it doesn't really matter
20:54:53  <trevnorris>bnoordhuis: ok, so calling it multiple times doesn't hurt.
20:55:04  <bnoordhuis>that's what i'm trying to convey, yes :)
20:55:27  <trevnorris>guess what i'm trying to figure out is where the optimal place to call it is, since calling cc from js does hurt.
20:56:19  <trevnorris>btw, thanks for explaining all that.
20:56:23  <bnoordhuis>np
20:56:33  <bnoordhuis>btw, if you're on linux, profile with perf
20:56:41  <bnoordhuis>that gives you a nice rundown of where cpu time is spent
20:56:58  <bnoordhuis>at least for c++ code (i'm working on the js bit)
20:57:21  <bnoordhuis>uv_idle_start and friends never show up for me so it's probably negligible
20:57:50  <trevnorris>dude, i've had to create a pad just to list all the ways to profile you keep telling me about.
20:57:56  <bnoordhuis>haha
20:58:10  <bnoordhuis>different tools, different uses
20:58:19  <bnoordhuis>perf is really great for cpu profiling
20:58:37  <bnoordhuis>it even tracks L1/L2 cache stalls/hits/misses for you
20:59:00  <trevnorris>valgrind for memory, perf for cpu, strace for system calls... any I missed?
20:59:00  <bnoordhuis>provided your machine has the hardware performance counters for that, of course
20:59:16  <bnoordhuis>no, that's pretty much the holy trinity
20:59:20  <bnoordhuis>gdb for debugging, of course
21:00:00  <trevnorris>yeah.... need to learn how to use that. you'd mock me endlessly if you saw how I debug.
21:00:10  <tjfontaine>printfs
21:00:47  <tjfontaine>just about anything is better than trying to debug xbox1 with the tri-color led and blinking
21:01:05  <bnoordhuis>haha
21:03:39  <isaacs>trevnorris: yes, that should be a persistent
21:03:53  <isaacs>trevnorris: ie, process._tickCallback should be persistent
21:04:42  <isaacs>trevnorris: Re the benchmark-refactor pr..
21:05:11  <isaacs>trevnorris: as you know, i like the idea of getting our benchmarks in order
21:05:18  <isaacs>trevnorris: but i think this bench_timer thing is too complicated.
21:05:29  <isaacs>trevnorris: it should be more like test/common.js
21:05:48  <isaacs>trevnorris: i'd like for "adding a benchmark" to be on par with "adding a test" in terms of difficulty and learning curve.
21:06:14  <isaacs>trevnorris: and then `make bench` should run all the benchmarks, and just spit out the results of each one.
21:06:22  <trevnorris>isaacs: i understand, but dude it's not that easy. way more factors to take into account.
21:06:28  <isaacs>trevnorris: sure.
21:06:38  <isaacs>trevnorris: some of them can use a shared timer implementation, i mean, that's fine.
21:06:51  <isaacs>especially for the buffer stuff, and other tests where you want to do high-frequency testing.
21:07:17  <isaacs>but usually, you just want to grab process.hrtime() at the start, and then again at the end, and tell it how many ops there were, and spit out a number.
21:07:31  <isaacs>some common functionality clearly is needed.
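A minimal sketch of the pattern isaacs describes: grab process.hrtime() at the start, again at the end, and spit out one `label: number` line. The bench() helper and the label are illustrative, not the real benchmark common code.

```javascript
// Hypothetical micro-benchmark helper: time n calls of fn and report
// ops/sec in "<label>: number" form.
function bench(label, n, fn) {
  const start = process.hrtime();
  for (let i = 0; i < n; i++) fn();
  const diff = process.hrtime(start);       // [seconds, nanoseconds]
  const elapsed = diff[0] + diff[1] / 1e9;  // seconds, as a float
  const rate = n / elapsed;
  console.log('%s: %s', label, rate.toFixed(4));
  return rate;
}

const noopRate = bench('noop_ops_per_sec', 1e6, function () {});
```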
21:08:09  <trevnorris>a lot of the current tests are just as complicated, except you need to do things like set env vars or munge urls
21:08:23  <isaacs>right
21:08:33  <isaacs>i'm not excusing our current benchmark suite :) it's beyond terrible.
21:08:39  <isaacs>and it's way overdue for a massive overhaul.
21:09:08  * loladirojoined
21:09:33  <trevnorris>isaacs: so what is it you'd like to see changed?
21:09:51  <trevnorris>so far every feature exists because of a request by someone in here.
21:09:55  <isaacs>right
21:10:09  <isaacs>and it's been a very informative exploration :)
21:10:13  <isaacs>and i'd like to keep you involved.
21:10:35  <isaacs>but i'm reading over it, and imagining myself as a new contributor trying to add a patch...
21:10:38  <isaacs>adding a test is soooo easy.
21:10:48  <isaacs>and adding a benchmark requires a lot of understanding
21:10:54  <isaacs>it's necessarily somewhat harder, of course.
21:11:20  <isaacs>but like, we should not have it parsing command line args, or doing stuff based on environment vars, etc.
21:11:27  <isaacs>it should just run a bunch of stuff, and spit out results.
21:12:00  <isaacs>if we want to check the simple http server with multiple different response sizes, great: benchmark/http.js can just run it with 1,10,1024,102400
21:12:03  <isaacs>you know?
21:12:13  <isaacs>and output multiple different lines of results.
21:12:25  * qmxchanged nick to qmx|away
21:13:00  <trevnorris>the cli options were for fine grain testing, but all of them are optional.
21:13:10  * Ralt_joined
21:13:21  <isaacs>right
21:13:21  <trevnorris>for example, multiple tests will live in a single file. but as bnoordhuis pointed out, if he wants to run just one of them under valgrind he can't
21:13:47  <isaacs>we can maybe have shared files in benchmark/lib or something, and multiple different tests that load them in with different settings.
21:14:11  * stagasquit (Ping timeout: 255 seconds)
21:14:12  <trevnorris>yeah. I built that in. you can setup default parameters.
21:14:15  <isaacs>so benchmark/http-simple-1024.js would do something like require('./lib/http-simple.js')(1024), whatever.
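A sketch of the shared-lib layout isaacs suggests: a parameterized runner lives in benchmark/lib/, and each benchmark file is one call with its setting. The factory name and shape below are assumptions for illustration, simulated in a single file.

```javascript
// In the real layout this factory body would live in something like
// benchmark/lib/http-simple.js and start a server answering with
// `responseSize` bytes; here it just returns the parameterized pieces.
function makeHttpSimpleBench(responseSize) {
  return {
    label: 'http_simple_' + responseSize,
    body: Buffer.alloc(responseSize),
  };
}

// benchmark/http-simple-1024.js then reduces to a single call:
const bench1024 = makeHttpSimpleBench(1024);
```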
21:14:19  <trevnorris>checkout benchmark/_net_callbacks.js
21:14:44  <trevnorris>that shows outputs specific for iterative testing and running them through make bench
21:15:35  <trevnorris>so all tests run in `make bench` mode by default, but then can be customized by cli options.
21:16:16  <trevnorris>have you tried running `make bench` with it?
21:16:42  <isaacs>yeah
21:16:53  <trevnorris>is that the type of output you're talking about?
21:17:10  * perezdjoined
21:17:25  <isaacs>sort of...
21:17:25  * sgallaghquit (Remote host closed the connection)
21:17:29  <isaacs>but it'd be better to have only one thing per line
21:17:51  <trevnorris>talking specifically about the net benchmarks?
21:17:59  <isaacs>tcp_raw_c2s_max: 5.8783\ntcp_raw_c2s_avg: 4.9025
21:17:59  <isaacs>etc
21:18:01  <isaacs>yeah
21:18:08  <trevnorris>ok. give me 1 min
21:19:16  <isaacs>i'm finding it a bit weird to go about adding a benchmark with this, because there's a lot of stuff to know
21:19:28  <isaacs>like, for example, there's a regression in fs.WriteStream
21:19:35  <isaacs>i can write up a demonstrative benchmark pretty easily
21:19:47  * hzquit
21:19:57  <isaacs>but i'm not sure how to use bench_timer to do it... or what the benefit would even be of that
21:20:33  <trevnorris>can you gist an example of what you'd use?
21:22:24  <isaacs>sure.
21:22:26  <indutny>this two-way ssl thing is melting my brain
21:22:34  <indutny>I've almost finished it :)
21:22:45  <trevnorris>isaacs: is \n not showing up as a new line on your machine?
21:22:55  <isaacs>trevnorris: nono, i was using that as an example :)
21:23:03  <trevnorris>ah, ok.
21:23:06  <isaacs>trevnorris: since you can't put a literal \n in an irc message :)
21:23:32  <isaacs>oh! also, this: var elapsed = hrtime[0] * 1e9 + hrtime[1];
21:23:46  <isaacs>it's better to do elapsed = hrtime[0] + hrtime[1]/1e9;
21:23:56  <isaacs>you can lose precision
21:24:11  <isaacs>better to lose precision on the nanoseconds than on the seconds :)
21:24:23  <isaacs>(though it's unlikely, since presumably this is a diff hrtime)
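Demonstrating isaacs's precision point: summing in nanoseconds exhausts the 2^53 integer range of a double once the seconds part is large, while summing in seconds only rounds the nanosecond fraction. The tuple below is a hypothetical hrtime-style value, not a real measurement.

```javascript
// ~104 days of uptime expressed as [seconds, nanoseconds]:
const t = [9007199, 254740993];

const ns = t[0] * 1e9 + t[1];     // 2^53 + 1 ns: not representable
const sec = t[0] + t[1] / 1e9;    // seconds part stays exact

// ns has already lost its low-order nanosecond: adding 1 changes nothing.
const lossy = (ns + 1 === ns);    // true
```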
21:24:55  <trevnorris>ah, got it.
21:26:16  <indutny>time to sleep
21:26:16  <indutny>ttyl
21:26:32  * indutny&
21:26:52  * stagasjoined
21:28:37  <isaacs>trevnorris: (sorry for the delay, i'm forgetting what branch this was on...)
21:28:57  <trevnorris>eh, no worries. a lot going on.
21:29:01  <MI6>joyent/node: Ben Noordhuis v0.8 * 6b99fd2 : zlib: pass object size hint to V8 Inform V8 that the zlib context object (+1 more commits) - http://git.io/N7L44Q
21:29:22  <isaacs>crap. i lost it.
21:29:33  <isaacs>i hadn't checked the test in, and clobbered it with a git checkout -f
21:29:35  <isaacs>oh well.
21:29:53  * Ralt_quit (Remote host closed the connection)
21:32:04  * stagasquit (Ping timeout: 248 seconds)
21:34:53  <indutny>isaacs: a question :)
21:35:04  <indutny>isaacs: tls.CleartextStream.end() - half close or not?
21:35:41  <MI6>joyent/node: Ben Noordhuis v0.8 * a86ebbe : blog: remove dangling symlink Fixes #4716. - http://git.io/YAD82Q
21:36:26  <bnoordhuis>TooTallNate: ping
21:36:33  <TooTallNate>bnoordhuis: pong
21:36:51  <bnoordhuis>TooTallNate: node-gyp question. what's the expected output when g++ isn't installed?
21:37:05  <bnoordhuis>i frequently get bug reports from people who can't install my modules
21:37:21  <bnoordhuis>and it usually turns out they're missing the compiler toolchain or make or something
21:37:21  <TooTallNate>bnoordhuis: it would probably try invoking it anyways, and then say g++ not found
21:37:30  <tjfontaine>or cc not found
21:37:38  <bnoordhuis>https://github.com/bnoordhuis/node-heapdump/issues/6 <- example
21:37:41  <trevnorris>isaacs: updated. with those two changes.
21:37:46  <TooTallNate>bnoordhuis: the problem is that people don't know how to sift through npm/node-gyp's output
21:37:49  <TooTallNate>since it's so verbose
21:37:52  <TooTallNate>at least that's my theory
21:37:58  <bnoordhuis>according to the guy who reported it it says "gyp: binding.gyp not found (cwd: /home/ubuntu/$) while trying to load binding.gyp"
21:38:08  <bnoordhuis>and i didn't see it in the npm-debug.log either
21:38:44  <TooTallNate>bnoordhuis: that's pretty strange, i don't think i've seen that one
21:38:52  <tjfontaine>nor have I
21:39:03  <tjfontaine>cwd $ is interesting though
21:39:14  <TooTallNate> /home/ubuntu/$
21:39:18  <TooTallNate>ya, strange
21:39:19  <bnoordhuis>yeah. that's asking for trouble, that is :)
21:39:21  <tjfontaine>I know, the $ is what interests me
21:39:32  <TooTallNate>i think his invocation was weird perhaps
21:40:12  <isaacs>trevnorris: here you go: https://gist.github.com/isaacs/4717966
21:40:39  <isaacs>trevnorris: what's nice about this benchmark is that you can drop that file into v0.8 or master, and see changes.
21:40:56  <TooTallNate>bnoordhuis: incidentally, gyp gets run (and that error being returned) before g++ is even invoked
21:40:57  <isaacs>(printing the node version in the result is kind of silly, of course)
21:41:06  <TooTallNate>bnoordhuis: so it's strange that he's implying they're related
21:41:15  <bnoordhuis>TooTallNate: good point
21:41:26  <tjfontaine>lame, $ isn't the problem (at least with .8 and darwin) :P
21:41:54  <TooTallNate>i mean i think people sometimes clone repos, run node-gyp rebuild, but don't cd into the dir first
21:42:02  <TooTallNate>thus, binding.gyp not being there...
21:42:18  <tjfontaine>quite possible
21:42:41  <tjfontaine>s/possible/probable/
21:43:02  <bnoordhuis>you think so?
21:43:07  <TooTallNate>isaacs: refactored my "stream parser" thingy into a standalone mixin module finally: https://github.com/TooTallNate/node-stream-parser
21:43:12  <bnoordhuis>i don't want people that clueless touching my beautiful code
21:43:15  <tjfontaine>I can certainly replicate it by `rm binding.gyp` :) Exception: binding.gyp not found (cwd: /Users/tjfontaine/$) while trying to load binding.gyp
21:43:59  <tjfontaine>bnoordhuis: it's either that or they have some weird invocation as TooTallNate suggested, like they're trying to do $HOME or something
21:44:00  <TooTallNate>tjfontaine: well ya :p
21:44:04  <tjfontaine>TooTallNate: :P
21:44:08  <isaacs>indutny: hm... it should match the net.Socket API on that.
21:44:16  <indutny>so, no half-close by default
21:44:17  <isaacs>indutny: ie, if you set allowHalfOpen, then it's true, otherwise, no
21:44:20  <indutny>that's how it was working
21:44:22  <isaacs>indutny: not for http, no
21:44:27  <TooTallNate>bnoordhuis: hahaha… never underestimate the user
21:44:28  <isaacs>indutny: for arbitrary tls.connect, whatever.
21:44:31  <TooTallNate>or overestimate in this case
21:44:34  <indutny>hm...
21:44:37  <indutny>ok, I'll think about it
21:44:40  <isaacs>indutny: oh, i see, no, not by default
21:44:44  <isaacs>by default we don't allow halfopen
21:44:49  <indutny>right now I'm just trying to make it end streams gracefully
21:44:53  <isaacs>kewl
21:45:01  <isaacs>getting halfOpen right is really tricky
21:45:03  <tjfontaine>bnoordhuis: I mean, the username is ubuntu, and the path is $, the bar is pretty low already
21:45:14  <bnoordhuis>true
21:46:49  <bnoordhuis>streams question: is it legal to emit one more data event from your stream's end() function?
21:47:21  <bnoordhuis>i need to emit the leftovers (so to speak) before shutting down
21:47:27  <TooTallNate>bnoordhuis: this is a duplex stream?
21:47:37  <bnoordhuis>well, a stream.Stream
21:47:38  <TooTallNate>bnoordhuis: Transform invokes a _flush callback function if defined
21:47:56  <bnoordhuis>ah, i should've mentioned that it should also work with v0.8
21:48:01  <bnoordhuis>i.e. the old streams model
21:48:07  <TooTallNate>bnoordhuis: i've been using izs/readable-stream to keep backwards compat
21:48:19  <TooTallNate>adding it as a dep
21:48:32  <TooTallNate>although he needs to update it again soon
21:48:34  <TooTallNate> /cc isaacs
21:48:36  <bnoordhuis>i guess it's conceptually a transform stream
21:48:50  <bnoordhuis>you write data at one end and transformed (iconv-ified) data comes out at the other end
21:49:03  <isaacs>trevnorris: so, for example, a benchmark/net.js could run all the differnet modes easily enough
21:50:54  <trevnorris>isaacs: you mean benchmark/net.js could run all the tests in benchmark/net/ ?
21:51:00  <isaacs>trevnorris: sure.
21:51:20  <isaacs>trevnorris: or we could have multiple ones, like we do for test/simple/test-http-* etc.
21:51:29  <isaacs>it can be somewhat flexible
21:53:35  <isaacs>trevnorris: so, we can still have benchmarks that take cli args, like we do with tests. but running by default should do a "spread"
21:53:41  <isaacs>and each one should just be spitting out a few numbers.
21:54:35  <trevnorris>isaacs: then i'll need to know the default values you'll want for the net tests.
21:54:36  * stagasjoined
21:55:07  * stagasquit (Read error: Connection reset by peer)
21:56:46  <trevnorris>isaacs: what i'm wondering is if every test also has a spread like the fs throughput test, the output is going to be 1000 lines long.
21:56:53  <trevnorris>output of make bench, that is.
21:56:58  <isaacs>trevnorris: that's fine :)
21:57:10  <trevnorris>ok, cool.
21:57:34  <isaacs>if we had 1000 lines of consistently parseable and comparable benchmark outputs, then we'll have traded a very hard problem for a very easy one :)
21:57:50  * wolfeidauquit (Remote host closed the connection)
21:58:00  <isaacs>then we can throw them in a db, break each one down with a graph, etc.
21:58:08  * bradleymeckquit (Quit: bradleymeck)
21:58:13  <tjfontaine>amen.
21:58:19  <trevnorris>isaacs: now, I have a patch sitting local that runs all the net tests in two processes, but a single one failing will cause a massive failure in automated testing.
21:58:38  <trevnorris>so I haven't updated the current PR with it
21:59:03  <trevnorris>and it hasn't been uncommon enough that I feel like it's super reliable.
21:59:07  <isaacs>TooTallNate: yes, updating readable-stream is on my todo list
21:59:35  <isaacs>trevnorris: yeah, so, you can run the client and server both as children, and have the parent kill them both if anything goes awry
21:59:41  * perezdquit (Quit: perezd)
21:59:56  <trevnorris>the parent dying is where i'm having problems.
22:00:04  <isaacs>trevnorris: oh, well... that shouldn't happen :)
22:00:08  <isaacs>the parent should be very small.
22:00:09  <trevnorris>lol ok
22:00:18  <isaacs>like, one function
22:00:24  <isaacs>< 50 lines long
22:00:42  <trevnorris>is there no perf hit sharing stdio with a child process?
22:01:04  <isaacs>meh. not relevantly so, and if so, it's consistent anyway.
22:01:10  <trevnorris>ok
22:01:16  <isaacs>we're only going for comparable benchmarks, not "is node faster than python" benchmarks
22:01:33  <isaacs>ie, "grownup" benchmarks, the kind used for debugging, not childish bragging benchmarks :)
22:01:46  <trevnorris>well. i was thinking more of "is node as fast as c" but it can wait. ;-)
22:01:48  <isaacs>as long as it's apples-to-apples, some consistent losses are fine.
22:02:01  * loladiroquit (Quit: loladiro)
22:03:57  <trevnorris>isaacs: so can we solidify exactly what the output should be for the different types of tests? (e.g. multi-iteration like net, single async run like fs throughput or single sync run like buffers)
22:04:32  <isaacs>trevnorris: let's just for now say, every line to stdout should be <label>: number
22:04:40  <trevnorris>ok
22:04:47  <isaacs>trevnorris: we can have the benchmark runner throw if the child output doesn't comply
22:04:52  <isaacs>ie, whatever `make bench` does
22:04:56  <trevnorris>so remove the min/max/med and just print mean?
22:05:02  * perezdjoined
22:05:04  <isaacs>well, you can print each of those on separate lines
22:05:21  <trevnorris>each with the name before it?
22:05:23  <isaacs>blah_blah_min: 1
22:05:26  <trevnorris>ok
22:05:27  <trevnorris>will do
22:05:27  <isaacs>blah_blah_mean: 2
22:05:31  <isaacs>blah_blah_max: 3
22:05:32  <isaacs>etc.
22:06:23  <trevnorris>isaacs: so the buffer tests are pretty much there.
22:06:29  <trevnorris>it's the net tests that need some love.
22:06:40  <isaacs>yeah
22:07:05  * mikealjoined
22:07:20  * stagasjoined
22:08:26  <trevnorris>isaacs: ok, just updated the pr for that output.
22:09:08  * loladirojoined
22:10:37  <trevnorris>isaacs: as for the fs throughput, two main things bench-timer can do is easily allow for multiple test types (e.g. --sizes 1,10,1024) and
22:11:27  <trevnorris>can pass --iter n to run the same test n times for timing comparisons. running only once is not reliable enough to report on.
22:13:09  <trevnorris>also can easily add options for, say, where the output dir/file should be.
22:14:46  * wolfeidaujoined
22:17:34  * rendarquit
22:23:01  * wolfeidauquit (Read error: Connection reset by peer)
22:23:12  * wolfeidaujoined
22:27:02  * perezdquit (Quit: perezd)
22:27:24  * stagasquit (Ping timeout: 252 seconds)
22:28:08  <TooTallNate>isaacs: i think i found an API change in streams2
22:28:26  <isaacs>TooTallNate: oh?
22:28:40  <TooTallNate>ya one sec, verifying
22:28:50  <TooTallNate>i think fs.end() used to be allowed to take a callback function
22:29:07  <TooTallNate>but right now, if you do .end(fn), it calls _write(fn, fn)
22:29:21  <indutny>isaacs: yt?
22:29:29  * perezdjoined
22:29:52  <isaacs>sorta
22:29:55  <isaacs>what's up?
22:30:03  <isaacs>TooTallNate: yeah, bug.
22:30:06  <isaacs>TooTallNate: patch welcome :)
22:30:12  <indutny>isaacs: wanna review awesome tls stuff? :)
22:30:15  <TooTallNate>isaacs: ya totes https://github.com/joyent/node/blob/v0.8.18/lib/fs.js#L1575-L1579
22:30:17  <indutny>I think its ready for prime-time
22:30:17  <TooTallNate>ok i'll try
22:30:32  <isaacs>indutny: i do want to review that.
22:30:37  <isaacs>indutny: sorry, putting out fires atm
22:30:53  <indutny>I hope not literally
22:31:36  <mmalecki>I just imagined isaacs running with a fire extinguisher in a datacenter
22:31:55  <indutny>isaacs: https://github.com/joyent/node/pull/4718
22:31:56  <mmalecki>it was... hilarious, so to say
22:32:06  <indutny>mmalecki: btw, you may want to look at this too https://github.com/joyent/node/pull/4718
22:32:20  <indutny>bnoordhuis: yt?
22:32:27  <bnoordhuis>indutny: ih
22:32:31  <indutny>bnoordhuis: hi again
22:32:36  <mmalecki>indutny: I indeed do!
22:32:41  <indutny>bnoordhuis: can you please take a look at it too, once you have some free time?
22:32:45  <indutny>bnoordhuis: https://github.com/joyent/node/pull/4718 <- just in case
22:32:51  <CoverSlide>"Please review very carefully, I've put a couple of obscure bugs inside this pull request to annoy you guys later on."
22:32:58  <bnoordhuis>indutny: what's it about?
22:32:59  <CoverSlide>hahaha
22:33:09  <indutny>bnoordhuis: about porting tls to streams2
22:33:17  <mmalecki>indutny: do you need _ended?
22:33:27  <mmalecki>I think it should be handled by readable stream
22:33:52  <indutny>mmalecki: well, _ended is interesting thing
22:33:54  <mmalecki>so this._readableState.ended should do it, I think...
22:33:57  <bnoordhuis>indutny: um, maybe later
22:34:05  <indutny>mmalecki: I'm trying to limit access to internal methods
22:34:08  <indutny>bnoordhuis: you lazy man :)
22:34:16  <indutny>bnoordhuis: I spent almost one week on this
22:34:25  <indutny>ok, np
22:34:32  <mmalecki>indutny: hey, a little respect here
22:34:38  <mmalecki>Ben spent almost one week being lazy
22:34:54  <indutny>bnoordhuis: one week only?
22:34:55  <bnoordhuis>and it's wearing me out, i tell you
22:35:10  <indutny>nothing really interesting about this
22:35:17  <indutny>I've been lazy for last couple of years
22:35:26  <indutny>the trick is that I was still doing some things
22:36:07  <mmalecki>yeah, I can confirm that's the trick
22:36:19  <bnoordhuis>principle of least energy expenditure
22:36:24  <mmalecki>my boss still doesn't know I haven't actually done anything in a month
22:36:53  <indutny>indexzero: ^^^
22:36:56  <mmalecki>ups, is indexzero still here? I think he is.
22:37:22  <mmalecki>brb, I need to go back to pretending that I work
22:37:49  <indutny>mmalecki: like doing random commits with whitespace changes and vague description? :)
22:37:57  <mmalecki>indutny: yeah
22:38:04  <indutny>like this https://github.com/flatiron/cli-config/commit/81719d4b3a27f7285925b38bd45158cf42e53e09
22:38:32  <mmalecki>indutny: hey, this is the best commit I did the entire week, actually
22:38:46  <indutny>hehe
22:38:47  <indutny>ok ok
22:52:55  * mikealquit (Quit: Leaving.)
22:53:21  * mikealjoined
23:01:45  * `3rdEdenquit (Remote host closed the connection)
23:03:11  * c4miloquit (Remote host closed the connection)
23:04:32  <roxlu_>hi guys, I'm getting an Assertion failed: (stream->write_queue_size == 0), function uv__write, file src/unix/stream.c, line 728.
23:04:42  <roxlu_>when I use uv_run(loop, UV_RUN_NOWAIT)
23:04:53  <roxlu_>is that correct?
23:05:45  * perezdquit (Quit: perezd)
23:09:02  <bnoordhuis>roxlu_: depends on what you do but probably not
23:09:21  <roxlu_>what's the 'stream->write_queue_size' used for?
23:10:15  <roxlu_>got a pretty basic tcp connection: https://gist.github.com/roxlu/9e41686f62b2387a3f2d
23:11:00  <roxlu_>writing is done at line 84
23:11:09  <bnoordhuis>roxlu_: it's a counter that keeps track of the amount of outstanding data
23:11:11  <roxlu_>(I'm not entirely sure if I need to create a new write req)
23:11:16  <roxlu_>ok
23:11:16  <bnoordhuis>i.e. how much data still needs to be written out
23:12:30  <roxlu_>is it executed after a call to uv_write? (and could this be caused when I try to write 0 bytes? )
23:12:38  * perezdjoined
23:19:05  * perezdquit (Quit: perezd)
23:20:09  <bnoordhuis>roxlu_: i suspect you're using memory after it's released
23:20:44  <bnoordhuis>that assert you're hitting means the 'pending writes' queue is empty but stream->write_queue_size != 0
23:24:52  * hzjoined
23:25:48  <isaacs>man, fs.watch() really sucks for stuff that has to work across multiple different platforms
23:26:08  <roxlu_>bnoordhuis: ah yeah it seems that that is the problem
23:26:23  <roxlu_>I thought data was copied
23:26:51  <roxlu_>how do I know if libuv has written the data I gave to uv_write? (in the write_cb ? )
23:32:44  * loladiroquit (Quit: loladiro)
23:37:20  <saghul>yes, in the cb
23:37:34  <indutny>isaacs: tell me about that
23:37:39  <indutny>thankfully it works :)
23:37:41  <indutny>in some way
23:37:48  <isaacs>well... i'm seeing some odd stuff.
23:37:54  <isaacs>like, i have a whole bunch of watchers on the same file
23:37:55  <isaacs>then i delete the file
23:37:58  <indutny>on osx?
23:38:06  <isaacs>and all the watchers get a 'rename' event, BEFORE the file is gone.
23:38:13  <isaacs>linux
23:38:30  <indutny>interesting
23:39:14  <isaacs>i'm going to replace the watcher with a setTimeout(retry)
23:39:23  <isaacs>and just keep retrying until the time runs out
23:39:48  <isaacs>i was doing this cute thing where it would set up a watcher, and a timer for the "stop waiting" time.
23:40:06  * perezdjoined
23:44:53  <roxlu_>somehow my write_cb gets called in a loop whenever I do one call to uv_write
23:44:54  * AvianFluquit (Remote host closed the connection)
23:45:34  <bnoordhuis>roxlu_: make sure you close the handle properly
23:45:50  <roxlu_>what do you mean bnoordhuis ?
23:45:50  <bnoordhuis>don't free the handle's memory until your close cb is called
23:46:12  <roxlu_>I'm not free'ing anything actually
23:46:22  <bnoordhuis>or reuse the memory
23:46:24  <roxlu_>I'm allocating on the heap .. and not free'ing as a test
23:46:32  <bnoordhuis>ah okay
23:51:06  <tjfontaine>bnoordhuis: probably want to reopen 3872 as well
23:51:18  <tjfontaine>I'm sure he'd be happy to refresh the code then
23:51:44  <bnoordhuis>reopened :)
23:51:52  <tjfontaine>:)
23:58:48  * hzquit
23:59:16  * TheJHquit (Ping timeout: 248 seconds)