00:00:01  * ircretary quit (Remote host closed the connection)
00:00:10  * ircretary joined
00:23:38  * roxlu quit (Ping timeout: 256 seconds)
00:23:42  * roxlu_ joined
00:47:44  <isaacs>ok, i'm gonna try ripping out all this high/low watermark shit.
00:47:55  * c4milo quit (Remote host closed the connection)
00:47:57  <isaacs>if there's a write pending, we return false. if there isn't, we return true.
00:48:01  <isaacs>period.
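The rule isaacs states can be sketched in a few lines. This is an illustrative model, not node's actual Writable code; `WritableSketch`, `_doWrite`, and `_queue` are made-up names. Backpressure is reported purely from whether a write is in flight, with no watermark bookkeeping:

```javascript
// Illustrative sketch of the simplified rule: write() returns false iff a
// write is already pending. Chunks arriving while pending are still queued,
// but the caller is told to back off.
function WritableSketch(doWrite) {
  this._doWrite = doWrite;  // underlying sink; invokes its callback when done
  this._pending = false;    // is a write currently in flight?
  this._queue = [];
}

WritableSketch.prototype.write = function(chunk) {
  if (this._pending) {
    this._queue.push(chunk);  // still accept the chunk...
    return false;             // ...but signal backpressure. period.
  }
  this._dispatch(chunk);
  return true;                // nothing was in flight: keep writing
};

WritableSketch.prototype._dispatch = function(chunk) {
  var self = this;
  this._pending = true;
  this._doWrite(chunk, function() {
    self._pending = false;
    if (self._queue.length) self._dispatch(self._queue.shift());
  });
};
```

A caller that gets `false` would wait for its callback (or a drain-style event) before writing more.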
00:52:10  * loladiro joined
01:16:21  * isaacs &
01:30:33  * bnoordhuis quit (Ping timeout: 245 seconds)
01:34:07  * mikeal quit (Quit: Leaving.)
01:36:51  * abraxas joined
01:39:09  * mikeal joined
01:49:58  * jmar777_ quit (Remote host closed the connection)
01:50:36  * jmar777 joined
01:54:43  * jmar777 quit (Ping timeout: 248 seconds)
02:15:08  * bradleymeck quit (Ping timeout: 245 seconds)
02:21:54  * bradleymeck joined
03:27:40  * bradleymeck quit (Quit: bradleymeck)
03:37:18  * jmar777 joined
03:38:29  * brson quit (Ping timeout: 255 seconds)
03:53:52  * jmar777 quit (Remote host closed the connection)
03:54:29  * jmar777 joined
03:54:59  * c4milo joined
03:58:59  * jmar777 quit (Ping timeout: 248 seconds)
04:13:26  * loladiro quit (Quit: loladiro)
04:14:46  * c4milo quit (Remote host closed the connection)
04:15:57  * stagas quit (Read error: Connection reset by peer)
04:20:27  * loladiro joined
04:31:24  * trevnorris joined
04:33:16  * AvianFlu quit (Remote host closed the connection)
04:51:55  <LOUDBOT>http://twitter.com/LOUDBOT/status/298292777102495744 (simcop2387/##turtles)
05:04:55  * TheJH joined
05:13:24  * TheJH quit (Ping timeout: 248 seconds)
05:18:37  * indexzero joined
05:30:34  * indexzero quit (Quit: indexzero)
05:40:42  * mikeal quit (Quit: Leaving.)
05:43:35  * mikeal joined
05:51:11  * loladiro part
05:51:33  <isaacs>hmm... we sure do call gettimeofday a lot.
05:53:37  * wolfeidau quit (Remote host closed the connection)
05:55:07  <isaacs>at least, that's what --prof is saying
05:55:27  <isaacs>i'm a bit skeptical of it, though..
06:18:22  * perezd joined
06:22:24  <trevnorris>isaacs: the graph is interesting. streams2 was merged between 0.9.3 and 0.9.4. but still looks like there has been a significant regression between .4 and .6.
06:27:09  * perezd quit (Quit: perezd)
06:29:01  * paddybyers joined
06:32:11  <trevnorris>isaacs: been thinking. don't think those tests are appropriately representative. running the entire thing in the same script means that if one consumes additional cpu then it's forced to be slower.
06:33:04  <trevnorris>going to rewrite them to pipe data between two processes.
06:45:07  * mikeal quit (Quit: Leaving.)
06:51:51  * mikeal joined
06:56:25  <trevnorris>isaacs: ok. so good news. when I spawn it into two processes both v0.8 and master jump up to ~8 Gb/sec.
06:56:39  <trevnorris>so that just means that master uses about twice the memory and 20% more cpu.
07:21:06  <trevnorris>isaacs: actually, scratch my comment on memory usage. at first glance looks like that might not be true.
07:24:51  * EhevuTov joined
07:25:14  * felixge joined
07:30:12  * felixge quit (Ping timeout: 276 seconds)
07:31:49  * rendar joined
07:34:52  * `3rdEden joined
07:39:06  <isaacs>trevnorris: hm.
07:39:13  <isaacs>trevnorris: that is indeed interesting.
07:39:33  <isaacs>trevnorris: i'm deep in the thick of v8 inline frenzy right now.
07:39:50  <trevnorris>isaacs: yeah. just ran tcp_net_c2s and got 15.881 - v0.8; 15.888 - master
07:40:59  <isaacs>trevnorris: that's pretty awesome, actually
07:41:28  <isaacs>trevnorris: so... why does it get so much slower right when we land streams2?
07:41:52  <isaacs>trevnorris: i mean, saying "it's just using more memory and more cpu" is basically the same as saying "it's just slower" ;)
07:42:23  <isaacs>trevnorris: on both my smartos and darwin boxes, it falls from ~3.5 to ~3.0 Gb/sec
07:42:25  <trevnorris>isaacs: yeah. i'm still formulating an idea of why this is happening.
07:42:29  <trevnorris>and that is on the net tests.
07:42:30  <isaacs>kewl.
07:42:39  <trevnorris>next i'll do the raw and see what it looks like
07:42:43  <isaacs>k
07:42:58  <isaacs>i still think this Writable rewrite is probably worthwhile.
07:43:06  <isaacs>it means giving up the high/low water marks for the writable side.
07:43:09  <isaacs>but.. oh well.
07:43:30  <isaacs>we don't have batched writev support anyway, so saving up a bunch of bytes to write all at once doesn't make much sense anyhow.
07:44:19  <isaacs>also, apparently, there are a bunch of things you can do with arrays that V8's inlining optimizer treats as an early return.
07:44:23  <isaacs>which is super strange.
07:44:42  <isaacs>buffer.push([chunk,enc,cb]) is not inlineable, but buffer.push({c:chunk,e:enc,cb:cb}) is
07:44:42  <trevnorris>heh, that is strange.
07:46:30  <trevnorris>hm. yeah, no good explanation for that one.
07:46:49  <isaacs>trevnorris: i think i need to have a test that will just pipe one stream into another.
07:47:12  <isaacs>the trouble there is that there's really no good comparison to v0.8, since we didn't have the base classes then
07:48:11  <isaacs>of course... all the inlining stuff i'm doing, it's still only amounting to about a 1% diff in the speed.
07:48:22  <isaacs>and i'm cutting deep.
07:50:25  <trevnorris>i think it's good. honestly I'd like to figure out how to call domain in _makeCallback w/o switching contexts.
07:50:56  <trevnorris>(but don't have an idea about perf improvements there)
07:51:03  <trevnorris>so what did you mean about pipe to pipe?
07:53:59  * piscisaureus_ joined
07:55:40  * Ralt joined
07:56:10  <trevnorris>isaacs: hm, interesting. realized an error in the _net_c2s test. after fixing it v0.8 jumped from 15.8 to 18.4
07:56:19  <isaacs>trevnorris: there ya go :)
07:56:33  <trevnorris>pipe is still correct though.
07:56:38  <isaacs>kewl
07:56:57  <isaacs>trevnorris: actually, i guess so far this has yielded a 4% diff on smartos.
07:57:04  <isaacs>so that's worthwhile, i suppose.
07:57:29  <isaacs>piscisaureus_: goedemorgen (good morning)
07:57:43  <trevnorris>imho, it definitely is. and with those out of the way and good benchmarks we'll be able to track it more carefully through future development.
07:58:20  <piscisaureus_>isaacs: moge (morning)
07:59:15  <piscisaureus_>goedenavond in jouw geval (good evening, in your case)
07:59:30  <isaacs>the thing that sucks about this "early return" issue in inlining is that it means you have to write your code like super old-school C.
07:59:46  <isaacs>with var ret; and then giant if-blocks, and then at the bottom, a single "return ret"
08:00:04  <isaacs>V8 should just figure out how to inline early returns.
08:00:34  <trevnorris>isaacs: imho, think it'd be helpful to figure out how to break things down into smaller functions.
08:00:44  <trevnorris>v8 can inline sets of functions easily.
08:00:49  <isaacs>piscisaureus_: bijna morgen (almost morning)
08:01:13  <isaacs>trevnorris: well, that's another thing, function text has to be rather small to be inlined.
08:01:14  <trevnorris>then if there's a deop'd part it won't affect the rest of the quick code.
08:01:20  <isaacs>trevnorris: like, even comments in the code will fuck it up
08:01:32  <trevnorris>yup. know that stupid one.
08:02:01  <isaacs>so a lot of this refactor is just moving comments outside of the function body, and breaking up into small functions, and removing any tidy early return values.
08:02:07  <isaacs>other stuff to generally uglify the code.
08:02:09  <isaacs>it makes me sad.
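The single-exit rewrite isaacs describes looks roughly like this. Both functions below are hypothetical illustrations (not node code); the point is only the shape change, from early returns to one `var ret` with nested if-blocks and a single `return` at the bottom:

```javascript
// Early-return style (reportedly blocked inlining in the V8 of that era):
function needsDrainEarly(state) {
  if (state.ended) return false;
  if (state.length === 0) return false;
  return state.length >= state.highWaterMark;
}

// "Old-school C" single-exit rewrite of the same logic:
function needsDrainSingleExit(state) {
  var ret = false;
  if (!state.ended) {
    if (state.length !== 0) {
      ret = state.length >= state.highWaterMark;
    }
  }
  return ret;
}
```

The two are behaviorally identical; only the second shape was acceptable to the inliner being discussed.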
08:03:06  <trevnorris>yeah. i think it'll hit a good balance. just a few growing pains.
08:06:09  <trevnorris>isaacs: also with some better profiling we should be able to determine what code is absolutely necessary to uglify. and other code that just can't be.
08:07:03  <trevnorris>(e.g. knowing that some operations must be used that will force deopt. so no point in over optimizing)
08:09:41  <isaacs>yeah, deopts are super magical to me right now.
08:09:53  <isaacs>i know that you'll get deopted if your data types change.
08:10:04  <isaacs>but like.. i've got cases where i'm pretty sure that's not happening, and it's still getting deopted
08:10:32  * felixge joined
08:10:33  * felixge quit (Changing host)
08:10:33  * felixge joined
08:11:09  <isaacs>especially in cases where it's optimizing a function, then immediately deoptimizing it
08:12:18  <trevnorris>it does suck you in, doesn't it? ;-)
08:12:44  <trevnorris>is it printing you out the assembly stack of the deopt?
08:13:58  <isaacs>yeah
08:14:10  <isaacs>which i guess, mraleph would say, tells me everything i need to know, obviously.
08:14:19  <isaacs>but i'm a bit confused by it.
08:14:23  <trevnorris>can you gist one for me?
08:14:53  <isaacs>https://gist.github.com/4705572
08:16:04  <isaacs>[optimizing: onwriteDrain / 2f775a95c7d9 - took 0.026, 0.056, 0.051 ms]
08:16:04  <isaacs>**** DEOPT: onwriteDrain at bailout #2, address 0x0, frame size 0
08:16:15  <isaacs>it's optimizing onwriteDrain, and then immediately deopting it repeatedly
08:16:44  <isaacs>trevnorris: same iwth Socket._write, socketWrite, etc.
08:16:57  <trevnorris>isaacs: it will help if you give a name to all the anonymous functions.
08:17:10  <isaacs>yeah, i know
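trevnorris's tip, illustrated: V8's opt/deopt traces identify functions by name, so anonymous function expressions show up blank. Naming the expression makes every trace line attributable (note: modern V8 also infers a name from the variable, but engines of that era did not):

```javascript
var handlers = {
  // hard to attribute in a 2013-era deopt log (no intrinsic name):
  onError: function() {},
  // shows up as "onwriteDrain" in --trace-opt / --trace-deopt output:
  onDrain: function onwriteDrain() {}
};
```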
08:17:43  <isaacs>ok, i need to get some sleep.
08:17:51  <trevnorris>cool. see ya tomorrow.
08:18:11  <isaacs>piscisaureus_: do you have any idea why the v8 tick processor would be showing way more time spent in gettimeofday in master than in v0.8?
08:18:22  <isaacs>piscisaureus_: this makes me curious:
08:18:45  <piscisaureus_>isaacs: I have no time, sorry.
08:18:51  <piscisaureus_>isaacs: later today :-)
08:18:55  <isaacs> ticks parent name
08:18:57  <isaacs> 3107 36.8% ___gettimeofday
08:19:20  <isaacs>piscisaureus_: you're just saying that to be clever, since the question is about gettimeofday, aren't you?
08:19:34  <piscisaureus_>isaacs: :D
08:19:46  <isaacs>piscisaureus_: np, i have no time tonight. if you have any flashes of insight, you can use internets to send messages.
08:19:47  <piscisaureus_>isaacs: there's a reason I got up at 5:30 today ...
08:19:52  <isaacs>i'll bet :)
08:19:58  <piscisaureus_>isaacs: indeed. async ftw
08:20:02  <isaacs>whatever it is, i hope it's worth seeing the sunrise
08:20:17  <isaacs>or, does the sun rise this time of year on the north pole?
08:20:21  <isaacs>;P
08:21:04  <isaacs>ok, the jokes are getting bad, even for me. it's time to sleep. g'nite.
08:21:06  * isaacs &
08:21:14  * isaacs part
08:21:18  * isaacs joined
08:24:52  * saghul joined
08:25:16  <indutny>morning
08:33:41  <piscisaureus_>isaacs: slaap lekker (sleep well)
08:38:19  * piscisaureus_ quit (Read error: Operation timed out)
08:44:41  * piscisaureus_ joined
08:59:49  <trevnorris>isaacs: just broke up raw_c2s: v0.8 - 19.627; master - 18.369
09:00:06  <trevnorris>that's a pretty large non-js related gap that needs to be filled
09:04:31  * mraleph1 joined
09:04:55  * mraleph quit (Read error: Connection reset by peer)
09:18:38  * paddybyers_ joined
09:21:11  * paddybyers quit (Read error: Operation timed out)
09:21:11  * paddybyers_ changed nick to paddybyers
09:43:06  * EhevuTov quit (Quit: This computer has gone to sleep)
10:14:55  * trevnorris quit (Quit: Leaving)
10:17:08  * wolfeidau joined
11:25:41  * hz joined
11:50:11  * stagas joined
12:27:50  * MI6 quit (Ping timeout: 252 seconds)
12:27:50  * stephank quit (Ping timeout: 252 seconds)
12:27:50  * tellnes quit (Remote host closed the connection)
12:28:54  * stephank joined
12:29:13  * tellnes joined
12:40:41  * stagas_ joined
12:43:21  * stagas quit (Ping timeout: 245 seconds)
12:43:21  * stagas_ changed nick to stagas
12:53:06  <indutny>piscisaureus_: hoya
12:53:10  <indutny>I like russian developers
12:53:13  <indutny>they just told me
12:53:36  <indutny>"There's a problem with Request/Response classes in node.js... we can't extend them and add our own methods"
12:53:39  <indutny>meh
12:53:54  <indutny>Why people are looking at the problem from only one side
12:54:20  <stagas>sure they can
12:54:40  <indutny>well, there's a problem with this solution
12:54:50  <indutny>and it'll happen pretty soon
12:54:59  <stagas>what is that?
12:55:05  <indutny>because extending prototypes always leads to problems
12:55:14  <indutny>like method name clashes
12:55:27  <indutny>and also all other modules will observe side-effects
12:56:02  <stagas>I suppose you need to be careful and it is what you actually need to do
12:56:10  <indutny>well
12:56:15  <indutny>not a promotion
12:56:21  <indutny>but I think flatiron handles this in the best way
12:56:35  <indutny>you just extend your app
12:56:44  <indutny>and methods of your app always have access to this.req and this.res
12:58:38  <stagas>yeah that's one way, I usually add stuff in middleware in express so it's instance methods, a bit of an overhead but meh
12:58:54  <indutny>well
12:59:02  <indutny>this is another way
12:59:11  * Ralt quit (Read error: Operation timed out)
12:59:31  <stagas>I suppose that kills the hidden classes in v8
12:59:38  <stagas>but js allows me to do it so I do it
12:59:40  <stagas>:P
12:59:43  * Ralt joined
13:00:14  <indutny>yes
13:00:16  <indutny>it kills them
13:00:19  <indutny>pretty hard
13:01:33  <stagas>it's the compiler's problem, it should find out that I'm adding these constant stuff all the time and optimize it
13:01:57  <indutny>well, it works :)
13:02:11  <indutny>my visions of this thing
13:02:17  <indutny>there's C and dynamic languages
13:02:44  <indutny>if your code is close to C (i.e. everything is static) - it's the compiler's job to make it as fast as possible (nearly C)
13:02:51  <indutny>if it's very dynamic - do not blame the compiler
13:03:06  <indutny>you've traded code-style for speed
13:05:08  <stagas>yes but repeatedly adding the same x,y,z properties to object A on every instance of it is something that can be addressed, it's not random stuff
13:05:42  <stagas>and it may be doing that already I don't know
13:05:50  <stagas>I notice things getting faster in the browser after a bit
13:05:53  <stagas>chrome
13:06:00  <indutny>stagas: its ok and it works
13:06:09  <indutny>if node wouldn't use req and res before your middleware
13:06:24  <indutny>but actually it still might work pretty fast
13:06:31  <indutny>but it'll use Polymorphic caches
13:06:40  <indutny>which are a bit slower if contains multiple hidden classes
13:07:00  <stagas>I see so it's not ideal but not entirely crap
13:07:11  <indutny>yes
13:07:13  <indutny>exactly
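The pattern under discussion can be sketched as below. The hidden-class effect itself isn't observable from JavaScript, so this only shows the object shapes involved; `makeReq`, `sessionMiddleware`, and `makeReqFlat` are made-up names, not express or node APIs:

```javascript
// Middleware-style decoration: the same properties are tacked onto every
// req after creation. Each property add transitions the hidden class, so
// call sites that see both "bare" and "decorated" requests go polymorphic.
function makeReq(url) {
  return { url: url };            // shape A: { url }
}

function sessionMiddleware(req) {
  req.session = {};               // shape A -> B
  req.user = null;                // shape B -> C; every req repeats this walk
  return req;
}

// A monomorphic alternative: declare all fields up front, so every req is
// born with its final shape and inline caches stay monomorphic.
function makeReqFlat(url) {
  return { url: url, session: null, user: null };
}
```

As indutny says, the decorated version still works and is cached via polymorphic ICs; it's just a bit slower than the flat shape.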
13:10:51  * bnoordhuis joined
13:10:56  <stagas>I recently saw a jonathan blow talk and he was advocating to never optimize anything before it's an issue, and 99.9% of the time it's going to be ok. it just kills your work flow, makes the code harder to maintain, you introduce bugs. better to just write simple stuff
13:11:44  <indutny>yes
13:11:46  <indutny>exactly
13:17:47  * jokester quit (Remote host closed the connection)
13:18:55  <stagas>indutny: this is the talk it's very interesting he makes some valid points not just for games http://the-witness.net/news/2011/06/how-to-program-independent-games/
13:23:00  * hz quit (Ping timeout: 264 seconds)
13:23:50  <indutny>added to bookmarks
13:23:52  <indutny>will watch later
13:23:53  <indutny>hopefully
13:31:21  * hz joined
13:35:30  * felixge quit (Quit: felixge)
13:45:23  * sgallagh joined
13:55:54  * stagas quit (Ping timeout: 264 seconds)
14:00:59  <bnoordhuis>next_permutation in js is only 6x slower than the optimized c++ version
14:01:02  <bnoordhuis>not bad, v8
14:01:54  <indutny>:)
14:03:55  * stagas joined
14:05:31  * loladiro joined
14:07:21  * jmar777 joined
14:33:22  * stagas quit (Quit: ChatZilla 0.9.89-rdmsoft [XULRunner])
14:47:58  * loladiro quit (Quit: loladiro)
14:49:48  * loladiro joined
14:54:05  <indutny>whoa
14:54:14  <indutny>cryptostreams2 are almost working
14:55:56  * c4milo joined
15:02:21  <roxlu_>indutny: what are cryptostreams?
15:04:44  * `3rdEden changed nick to `3E|BRB
15:07:59  * c4milo_ joined
15:24:18  * c4milo_ quit (Remote host closed the connection)
15:24:19  * c4milo quit (Remote host closed the connection)
15:30:57  * `3E|BRB changed nick to `3rdEden
15:34:39  * felixge joined
15:43:39  * AvianFlu joined
15:46:19  * MI6 joined
15:48:01  * indexzero joined
15:48:44  * stagas joined
15:52:56  * loladiro quit (Quit: loladiro)
15:57:14  * loladiro joined
16:10:20  <indutny>roxlu_: open tls.js :)
16:10:25  <indutny>and search for CryptoStream
16:11:56  * loladiro quit (Quit: loladiro)
16:13:02  <roxlu_>indutny: does this mean libuv gets ssl?
16:13:08  <indutny>no, node.js
16:16:59  * c4milo joined
16:17:14  * c4milo quit (Remote host closed the connection)
16:17:35  * c4milo joined
16:38:23  * perezd joined
16:43:25  * stagas_ joined
16:44:18  * stagas quit (Ping timeout: 252 seconds)
16:44:19  * stagas_ changed nick to stagas
16:53:17  * `3rdEden changed nick to `3E|FOODING
16:55:55  * LOUDBOT quit (Remote host closed the connection)
16:56:03  * LOUDBOT joined
16:56:52  * LOUDBOT quit (Remote host closed the connection)
16:57:00  * LOUDBOT joined
16:57:06  <isaacs>indutny: tell your russian developer friends that this is exactly how Express works.
16:57:13  <isaacs>indutny: then give them a less on hidden classes
16:57:13  <indutny>haha
16:57:15  * LOUDBOT quit (Remote host closed the connection)
16:57:18  <isaacs>indutny: really, it is
16:57:22  * LOUDBOT joined
16:57:23  <indutny>well, they think express is shit
16:57:25  <isaacs>Express extends the req and res objects all to hell
16:57:33  <isaacs>then it'll make your point very effectively, no doubt :)
16:57:37  <indutny>:)
16:57:40  <isaacs>s/less/lesson/
16:57:42  <indutny>I already told them to use flatiron
16:57:59  <indutny>though they're pretty scary PHP people
16:57:59  * dap joined
16:58:06  <indutny>who have just ported Symfony to node.js
16:58:11  <indutny>and that's awful
16:58:16  <indutny>because I don't even know what Symfony is
16:58:30  <isaacs>gah
16:58:47  <isaacs>symfony is pretty ok, as php frameworks go
16:58:53  <isaacs>but... php frameworks don't go very ok :)
16:59:06  <indutny>:)
16:59:20  * indexzero quit (Quit: indexzero)
16:59:22  <indutny>yeah, so I've found the source of the problem in CryptoStream
16:59:29  <indutny>sometimes I was emitting 'close' event
16:59:32  <indutny>without emitting 'end' event
16:59:43  <isaacs>ahh
16:59:50  <isaacs>no, 'end' is a must-have.
17:00:02  <indutny>yes, I know
17:00:09  <isaacs>if you don't override read() (and only use _read and push) then 'end' will definitely happen
17:00:09  <indutny>its just someone that calls .destroy()
17:00:16  <isaacs>ohhhh..
17:00:17  <isaacs>that's odd
17:00:35  <indutny>and yeah, I don't support 'end' on ._read level
17:00:41  <indutny>and it wasn't really supported before
17:00:54  <ryah>https://plus.google.com/u/0/101038813433650812235/posts/NAjqBW9rtSe
17:00:55  <indutny>we ain't checking if other side has shutdown connection
17:00:56  <isaacs>indutny: you can just push(null) to trigger the end
17:01:06  <isaacs>indutny: it's not your responsibility to emit('end'), it's Readable's
17:01:06  <indutny>isaacs: ah
17:01:10  <indutny>ok
17:01:20  <indutny>will it trigger 'end' immediately?
17:01:21  <isaacs>indutny: you can also call the _read cb with null
17:01:22  <indutny>I mean
17:01:23  <indutny>can I do this
17:01:32  <indutny>this.push(null);this.emit('close')
17:01:32  <isaacs>indutny: well, if the user isn't reading, it'll trigger 'end' once all the bytes are read out
17:01:37  <isaacs>indutny: yes, that's fine
17:01:51  <isaacs>we should probably do that in Socket._destroy, too
17:02:32  <indutny>yes, I guess so
17:02:48  * loladiro joined
17:03:31  <isaacs>though, actually, i think it's not an issue there, because if you _read() after destroy, you'll get an ECONNRESET or an EOF anyway, right?
17:04:47  <indutny>well
17:04:54  <indutny>its an issue if socket is piped
17:04:59  <indutny>into another stream
17:05:10  <indutny>ryah: interesting reading
17:05:24  * loladiro quit (Client Quit)
17:05:39  * Ralt quit (Ping timeout: 248 seconds)
17:06:26  <indutny>isaacs: or am I wrong?
17:06:33  <indutny>as far as I can see stream will be unpiped if close emitted
17:06:40  <indutny>without ending its target
17:06:54  <isaacs>indutny: if the writer is closed, it'll be unpiped
17:06:59  <indutny>yes
17:07:02  <isaacs>indutny: since writing after close is usually pointless
17:07:03  <indutny>but unpipe doesn't end target stream
17:07:15  <isaacs>indutny: in that case the writer IS the "target stream"
17:07:26  <indutny>oh
17:07:32  <indutny>erm
17:07:48  <isaacs>indutny: why bother calling .end() if it's already been closed?
17:07:52  <indutny>right
17:07:55  <isaacs>whatever it was doing, it's done now :)
17:07:59  <isaacs>that's what close means
17:08:05  <indutny>this stuff is so complicated :)
17:08:08  * TheJH joined
17:09:21  <indutny>isaacs: another thing
17:09:33  <indutny>suppose ._write() always calls cb() synchronously
17:09:46  <indutny>I guess .write() will always return true in such case
17:09:53  <indutny>and no 'drain' will happen
17:11:53  <indutny>nvm
17:11:56  <indutny>I've one idea
17:18:31  <isaacs>indutny: that's correct
17:20:05  <indutny>so my idea is to queue all writes if internal openssl buffer is bigger than high watermark
17:20:13  <indutny>does that seem correct to you?
17:21:54  <indutny>no, thats too low
17:22:16  <tjfontaine>I think he's looking to remove the hwm/lwm stuff because it overcomplicates and doesn't provide as much benefit as he hoped
17:22:56  <isaacs>indutny: yeah, i'm thinking that the water mark stuff on writable side is maybe a bad idea.
17:23:05  <isaacs>on the readable side, it's sort of necessary.
17:23:13  <isaacs>but we almost always want lwm to be zero.
17:23:18  <indutny>yep
17:23:23  <indutny>well, I need this stuff
17:23:24  <indutny>in tls
17:23:26  <isaacs>and we don't actually keep reading to hwm anyway.
17:23:28  <indutny>not exactly in this way
17:23:30  <isaacs>right
17:23:34  <indutny>but I need to limit possible amount of data
17:23:38  <indutny>that's pulled into openssl
17:23:45  <isaacs>so, i'd say, ignore hwm/lwm, and just do what makes sense for openssl.
17:23:56  <isaacs>you should take your cue from _read or _write
17:24:02  <indutny>k
17:24:06  <isaacs>if _read is called, the user wants more data. if _write is called, the user is putting more data in.
17:24:26  <indutny>yes, that's clear to me
17:24:43  <isaacs>call the write cb when it's done writing, or the _read cb when you are done reading, and stream.push(chunk) if stuff comes up out-of-band
17:25:24  <isaacs>if you find that you have to touch or look at ._readableState or ._writableState, please talk to me about it. either it means that the API is lacking, or you're doing something wrong.
17:25:40  <isaacs>either way, one of us has to fix something :)
17:25:51  <indutny>:)
17:26:03  <indutny>the thing is that all reads/writes are synchronous
17:26:07  <indutny>and its pretty interesting
17:28:36  <isaacs>yeah
17:28:48  <isaacs>it *should* just work fine
17:28:55  <isaacs>i mean, it's designed with that use case in mind
17:29:52  * mmalecki changed nick to mmalecki[out]
17:30:11  * loladiro joined
17:31:40  <tjfontaine>ugh, Uint8Array(Uint8Array([1,2,3,4])) is supposed to copy?
17:34:43  * txdv quit (Read error: Connection reset by peer)
17:34:59  * txdv joined
17:36:16  * TheJH quit (Ping timeout: 246 seconds)
17:38:58  * mikeal quit (Quit: Leaving.)
17:42:56  <isaacs>anyone have any idea what this means?
17:42:56  <isaacs>**** DEOPT: doWrite at bailout #3, address 0x0, frame size 0 ;;; @63: gap.
17:46:42  * brson joined
17:48:46  * piscisaureus_ quit (Ping timeout: 245 seconds)
17:51:18  <tjfontaine>isaacs: gist the full blurb and the relevant block of code?
17:52:27  <isaacs>tjfontaine: https://gist.github.com/4705572
17:53:34  <isaacs>hm.. looks like i managed to somehow get rid of one of the gap deopts...
17:53:37  <isaacs>MAGIC!
17:53:41  <isaacs>hooray!
17:53:49  <tjfontaine>heh
17:54:45  <isaacs>tjfontaine: posted afterWrite as a comment.
17:54:48  <isaacs>that's another one that's gapping
17:55:02  * wolfeida_ joined
17:55:51  * brson quit (Ping timeout: 245 seconds)
17:57:17  * brson joined
17:57:31  * wolfeidau quit (Ping timeout: 260 seconds)
18:00:22  <tjfontaine>oh this led to me finding an article I hadn't read yet http://wingolog.org/archives/2011/09/05/from-ssa-to-native-code-v8s-lithium-language
18:00:51  * piscisaureus_ joined
18:01:38  * jmar777 quit (Remote host closed the connection)
18:02:16  * jmar777 joined
18:02:21  * indexzero joined
18:03:18  <isaacs>yeah
18:03:22  <isaacs>i found that as well.
18:03:39  <isaacs>basically... what i could gather from it is that to know what "gap" means, i need to probably go to school some more.
18:03:54  * jmar777 quit (Read error: Connection reset by peer)
18:04:04  * jmar777 joined
18:05:09  <tjfontaine>is "@199: gap." an offset or register
18:05:46  * loladiro quit (Quit: loladiro)
18:09:52  * mikeal joined
18:11:21  <isaacs>tjfontaine: no idea.
18:11:58  <isaacs>well... removing a bunch of deopts here actually ended up making the benchmark slower.
18:12:03  <tjfontaine>well it's interesting that afterWrite not only gets deopt'd but re-opt'd
18:12:11  <isaacs>so..... i'm clearly doing the wrong thing.
18:12:22  * TooTallNate joined
18:13:01  <isaacs>https://gist.github.com/4708444
18:13:01  * TooTallNate quit (Remote host closed the connection)
18:13:24  * TooTallNate joined
18:13:38  <isaacs>the floating patch that i'm stashing prevents a bunch of bailouts, but i think it is probably overall not good.
18:14:42  * loladiro joined
18:15:07  * piscisaureus_ quit (Quit: ~ Trillian Astra - www.trillian.im ~)
18:18:21  * mikeal quit (Ping timeout: 245 seconds)
18:18:25  <isaacs>this still is a bit weird to me: 1494 35.3% ___gettimeofday
18:18:35  * `3E|FOODING changed nick to `3rdEden
18:19:07  <tjfontaine>if you're spending more time in uv that's not something unreasonable to see
18:22:47  * trevnorris joined
18:24:55  <isaacs>it's tricky to get a side-by-side, because the tick processor doesn't work on smartos
18:25:45  <isaacs>but on the dtrace flamegraphs on smartos, i'm not seeing all THAT much time spent in gettimeofday
18:26:38  <bnoordhuis>isaacs: i'm trying to rpm-ify node. rpm tells you what the system's libdir is but there's a 'hard-coded' path in lib/module.js
18:26:50  <bnoordhuis>var paths = [path.resolve(process.execPath, '..', '..', 'lib', 'node')]; <- that one
18:27:09  <bnoordhuis>is that for any particular reason and can it be fixed?
18:27:17  <isaacs>hm.
18:27:21  <isaacs>lemme look at it. one sec.
18:27:42  <isaacs>ah. these are the global require() paths
18:27:47  <bnoordhuis>yes
18:27:52  <isaacs>those are deprecated, and only there for historical reasons.
18:27:54  <trevnorris>isaacs: added a `--noop` parameter last night to the benchmarks. basically just runs the test w/o any counters or output.
18:28:00  <isaacs>trevnorris: kewl.
18:28:17  <isaacs>trevnorris: i was seeing one of them deopting MakeCallback whenever the timer hit.
18:28:25  <bnoordhuis>isaacs: so if i install npm to /usr/lib64/node_modules, it should Just Work(TM)
18:28:28  <isaacs>trevnorris: but, i don't think that's a major issue.
18:28:30  <bnoordhuis>+ question mark
18:28:46  <isaacs>bnoordhuis: npm *intentionally* does not make globally installed modules require-able.
18:28:51  <trevnorris>isaacs: yeah. what happened was that .inc was incrementing so quickly that it overflowed the Smi to a double.
18:28:56  <isaacs>bnoordhuis: but, you do need to set the npm prefix conf.
18:28:58  <trevnorris>which is why I did noop
18:29:10  <bnoordhuis>isaacs: how do you do that?
18:29:30  <isaacs>bnoordhuis: drop a file called 'npmrc' in /usr/lib64/node_modules/npm/
18:29:40  <bnoordhuis>ah okay
18:29:42  <isaacs>bnoordhuis: it should contain this line: prefix = /usr/lib64
18:29:53  <isaacs>bnoordhuis: it can also have other configs, but please use judiciously.
18:30:01  <bnoordhuis>understood
18:30:04  <bnoordhuis>thanks
18:30:18  <isaacs>bnoordhuis: this is how we point stuff at %APPDATA% folder for windows
18:30:19  <isaacs>np
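The packaging step isaacs describes can be sketched as a shell fragment. `DESTDIR`/`LIBDIR` stand in for rpm's buildroot and `%{_libdir}`; the demo defaults keep it runnable without root:

```shell
#!/bin/sh
# Install npm under the system libdir and drop a builtin `npmrc` inside the
# npm tree so globally installed packages use that prefix.
set -e
DESTDIR="${DESTDIR:-/tmp/npmrc-demo}"   # rpm buildroot stand-in
LIBDIR="${LIBDIR:-/usr/lib64}"          # rpm %{_libdir} stand-in

mkdir -p "$DESTDIR$LIBDIR/node_modules/npm"
# ... the npm tree itself would be copied here ...

# The builtin config npm reads from its own install directory:
printf 'prefix = %s\n' "$LIBDIR" > "$DESTDIR$LIBDIR/node_modules/npm/npmrc"
```

Per isaacs, that file can hold other configs too, but should be used judiciously.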
18:30:48  <isaacs>bnoordhuis: is it normal to be spending 35% of our ticks in ___gettimeofday?
18:31:00  <bnoordhuis>isaacs: depends on what you do
18:31:06  <bnoordhuis>can you see where the calls are coming from?
18:31:08  <isaacs>bnoordhuis: in net-pipe.js
18:31:11  <isaacs>nope
18:31:50  <trevnorris>isaacs: having a problem with spawning the script to test flow across two processes. can't reliably make sure the other process dies.
18:31:51  <bnoordhuis>it's not abnormal, that's probably loop->time updates
18:32:09  <bnoordhuis>is that on os x or smartos?
18:32:14  <isaacs>os x
18:32:14  <trevnorris>isaacs: until that can be hammered out, don't want it in since it will break automated testing.
18:32:24  <isaacs>on smartos i don't see much time being spent in that. i mean, it's a chunk, but not THAT much
18:32:28  <bnoordhuis>right, on smartos it should be clock_monotonic
18:32:28  <isaacs>nowhere near 35%
18:32:51  <bnoordhuis>err, clock_gettime(CLOCK_MONOTONIC)
18:32:52  <isaacs>bnoordhuis: ok
18:32:59  <isaacs>bnoordhuis: even that, is not showing up in the graph at all.
18:33:06  <isaacs>bnoordhuis: the __gettimeofday is showing up from timer.active(self)
18:33:13  <isaacs>expected, since that calls Date.now()
18:33:37  <bnoordhuis>oh, maybe libuv should use the xnu syscall (i forgot what it's called)
18:33:47  <bnoordhuis>that's probably faster than gettimeofday
18:34:13  * sblom joined
18:34:35  <bnoordhuis>mach_absolute_time, that's the one
18:36:27  <bnoordhuis>hmm, i guess it should already be using that
18:37:22  <isaacs>seems that way
18:37:41  <bnoordhuis>isaacs: can you put a breakpoint on __gettimeofday in gdb
18:37:44  <bnoordhuis>then try this:
18:37:46  <isaacs>bnoordhuis: we call it in int uv_cond_timedwait(uv_cond_t* cond, uv_mutex_t* mutex, uint64_t timeout) {
18:37:53  <bnoordhuis>oh right
18:38:12  <bnoordhuis>because os x doesn't have monotonic clocks :/
18:38:18  <isaacs>and v8 calls it a bunch of places, so does openssl, cares
18:38:23  <isaacs>hrm.
18:38:24  <bnoordhuis>yes, it could be v8
18:38:27  <bnoordhuis>but let's check
18:38:35  <bnoordhuis>so in gdb `break __gettimeofday`
18:38:41  <bnoordhuis>then: commands
18:38:43  <bnoordhuis>silent
18:38:47  <bnoordhuis>bt 5
18:38:47  <bnoordhuis>c
18:38:48  <bnoordhuis>end
18:38:49  <isaacs>do i need node_g?
18:38:50  <bnoordhuis>run
18:39:01  <bnoordhuis>that'd be best but the release binary should work as well
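bnoordhuis's line-by-line gdb recipe, collected into one script: break on `__gettimeofday`, silently print a 5-frame backtrace at every hit, and continue, so the callers show up without stopping the benchmark. The node binary and benchmark paths are illustrative:

```shell
#!/bin/sh
# Write the gdb command file, then run node under it.
cat > /tmp/gtod.gdb <<'EOF'
break __gettimeofday
commands
silent
bt 5
c
end
run
EOF
# Illustrative invocation (a node_g debug build gives better symbols, but
# the release binary works too):
#   gdb -quiet -x /tmp/gtod.gdb --args ./node benchmark/net-pipe.js
```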
18:41:24  <isaacs>https://gist.github.com/4708631
18:41:49  <bnoordhuis>it's always the same stack trace?
18:42:44  <isaacs>sometimes it's this one: https://gist.github.com/4708639
18:42:55  <bnoordhuis>right
18:42:57  <bnoordhuis>so it's v8
18:43:09  <isaacs>yep
18:43:09  <bnoordhuis>seems it doesn't use mach_absolute_time :/
18:43:29  <isaacs>indeed. only instance of that function is in deps/uv
18:43:31  <trevnorris>isaacs: which stream-* branch of yours has the latest improvements?
18:43:36  <bnoordhuis>pretty easy to fix though
18:44:03  <isaacs>trevnorris: um... kinda all over the map. not sure any of this is "improvements" at this point.
18:44:12  <isaacs>trevnorris: the patient is cut up into pieces on the operating table.
18:44:18  <trevnorris>lol ok
18:44:34  <isaacs>trevnorris: i'm experimenting with removing drastic amounts of functionality from stream.Writable.
18:44:44  <trevnorris>got it.
18:44:51  <isaacs>but i'm unsure if it's actually an improvement. of course, it breaks like a zillion tests, which i haven't updated yet.
18:44:57  * piscisaureus_ joined
18:45:02  <isaacs>and i'm not seeing much speed improvement... so... it's unclear if it's a win.
18:45:30  * mikeal joined
18:45:32  <isaacs>but my suspicion is that maybe if i update Readable accordingly, then they might play nicer together? i'm not sure, though.
18:45:46  <isaacs>if it turns out not to be worthwhile, i'm going to shitcan the whole branch, and try a different thing.
18:46:19  <trevnorris>understand.
18:46:57  * c4milo quit (Remote host closed the connection)
18:47:57  <isaacs>trevnorris: but, my latest stuff is on stream-writable-rewrite
18:48:05  <isaacs>it's marginally faster.
18:48:15  * indexzeroquit (Quit: indexzero)
18:48:20  <isaacs>averaging about 3.0Gb/sec on my machine instead of 2.8-2.9
18:48:25  <isaacs>(on net-pipe.js)
18:48:25  <trevnorris>isaacs: when the tests are broken into multiple processes, the margin of regression is larger.
18:48:34  <isaacs>trevnorris: that's great.
18:48:44  <isaacs>i'll care when net-pipe is back up to 3.5 on my machine :)
18:48:51  <isaacs>then we can zoom in more.
18:49:00  <trevnorris>isaacs: for example: tcp_raw_c2s: master - 17.752; v0.8.17 - 19.612
18:49:25  <trevnorris>imho we need to figure out why _raw_ tests have such a big gap.
18:50:17  * Raltjoined
18:51:58  <isaacs>trevnorris: so, one thing i noticed was that my tcp-raw stuff had a lot of cutesy closure bs in it
18:52:09  <isaacs>req.oncomplete = afterWrite(fn)
18:52:10  <isaacs>etc.
18:52:18  <isaacs>that should probably be unrolled.
18:53:02  <isaacs>if only to rule out V8 re-optimizing closures as a culprit
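A sketch of the closure-per-request vs. unrolled contrast isaacs is describing (the names `makeOncomplete`, `oncompleteShared`, and `double` are illustrative, not node internals):

```javascript
// Closure-per-request style: every write allocates a fresh
// oncomplete function, which V8 has to optimize separately.
function makeOncomplete(fn) {
  return function (status) { return fn(status); };
}

// Unrolled style: one shared top-level function; per-request
// state lives on the req object instead of a closure environment.
function oncompleteShared(status) {
  return this.fn(status);
}

function double(x) { return x * 2; }

var reqA = { oncomplete: makeOncomplete(double) };
var reqB = { fn: double, oncomplete: oncompleteShared };

console.log(reqA.oncomplete(21));             // 42
console.log(reqB.oncomplete.call(reqB, 21));  // 42
```

Both styles compute the same result; the difference is only in how many function objects V8 sees and re-optimizes.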
18:54:34  <trevnorris>isaacs: here's the output of _raw_c2s with the new noop: https://gist.github.com/d6104d0e906adb870723
18:55:02  <trevnorris>sec. let me update that w/ v0.8 as well
18:55:55  <trevnorris>isaacs: ok, I think those two comparisons will be helpful.
18:56:24  <trevnorris>heh, the only thing that's optimized in the v0.8 version is noop.
18:57:44  <trevnorris>isaacs: ok. just removed all the benchmark noise. v0.8 does almost nothing.
19:01:22  * indexzerojoined
19:01:49  <isaacs>nice
19:08:31  <trevnorris>isaacs: so it looks like v8 is complaining about itself: "Did not inline InstantiateFunction"
19:09:01  * felixgequit (Quit: felixge)
19:09:03  <trevnorris>indutny: know anything about ^
19:09:49  * felixgejoined
19:09:50  * felixgequit (Changing host)
19:09:50  * felixgejoined
19:12:48  * piscisaureus_quit (Ping timeout: 252 seconds)
19:14:37  * brsonquit (Ping timeout: 246 seconds)
19:15:43  <isaacs>hrm. so, ripping out all these features from Writable, and using smaller more inlineable functions, net-pipe got 2% faster, but http_simple got 1.2% slower.
19:15:44  * brsonjoined
19:16:15  <indutny>trevnorris: no
19:16:16  <indutny>not really
19:16:30  <indutny>bert is everywhere http://www.youtube.com/watch?v=tgTniHwGPuM
19:16:56  <trevnorris>ok, worth a shot. just don't like that v8 is complaining about itself.
19:17:39  <trevnorris>isaacs: hm. this is a conundrum.
19:18:13  <indutny>trevnorris: file a bug
19:18:58  <trevnorris>indutny: yeah, good point.
19:19:39  <trevnorris>bnoordhuis: what was it you said the other day about something getting fixed in v8 3.16? (i know that's so descriptive and all)
19:20:15  <bnoordhuis>trevnorris: too lazy gc?
19:20:51  <trevnorris>bnoordhuis: ah yeah. that's right. that ram usage was getting almost double.
19:21:23  <trevnorris>i did notice that v0.8 was garbage collecting almost 2x's as often as on master
19:23:43  <trevnorris>isaacs: so is most of the slowness in the _raw_ tests not going to be on the js side?
19:24:21  * `3rdEdenquit (Quit: device swap)
19:26:48  <MI6>joyent/libuv: Ben Noordhuis master * 8311390 : darwin: merge uv_cond_timedwait implementation Merge the OS X specific i - http://git.io/CafB5g
19:27:55  <tjfontaine>bnoordhuis: missing file in that commit?
19:28:17  <bnoordhuis>missing file?
19:28:21  <tjfontaine>oh I see
19:28:26  <tjfontaine>mind me not
19:29:05  * `3rdEdenjoined
19:33:26  * c4milojoined
19:36:46  * mmalecki[out]changed nick to mmalecki
19:39:01  * mikealquit (Quit: Leaving.)
19:41:59  * felixgequit (Quit: felixge)
19:42:23  <isaacs>trevnorris: it could still be "on the js side". but it wouldn't be in node's lib/*.js. it might still be in src/node.js
19:42:44  <isaacs>trevnorris: but that's unlikely, i think. that file changed, but not TOO dramatically, and moving MC back to C++ doesn't change things significantly.
19:43:16  * AvianFluquit (Remote host closed the connection)
19:43:20  <trevnorris>isaacs: ok. know of a way we could test the underlying mechanism (libuv I'd assume)?
19:43:26  <isaacs>trevnorris: we're looking for a 10-40% slowdown. shaving off 0.5% for a deopt of _makeCallback, it turns out, is not relevant.
19:43:50  <isaacs>trevnorris: gotta run. i'll be back in a few hours.
19:43:55  <trevnorris>kk
19:46:26  <mraleph1>was somebody looking for me?
19:47:43  <mraleph1>ah I see, isaacs mentioned me.
19:47:55  <tjfontaine>ya about understanding deopt output
19:48:35  * sblomquit (Ping timeout: 260 seconds)
19:48:37  <bnoordhuis>https://github.com/bnoordhuis/node/compare/9a488a6...e0bffc4 <- thoughts on this? adds --libdir and --mandir switches to configure
19:48:48  <mraleph1>isaacs: here are some tips. First: if you are running x64 build of node then deoptimization comment does not always match the real reason for deopt.
19:49:47  <mraleph1>isaacs: the second thing: tagged-to-i deopts are conversions. it deopts if you get a double that does not fit into an int, or something that is not a number at all.
19:50:10  <mraleph1>isaacs: 'deoptimize' is a soft deopt - it happens when you reach code that was never executed before.
19:50:39  <tjfontaine>bnoordhuis: if target_path.endswith('/'): might be an issue on win32?
19:50:58  <trevnorris>mraleph1: v8 is deoptimizing itself (e.g. Instantiate) so nothing to worry about?
19:50:59  * paddybyersquit (Ping timeout: 248 seconds)
19:51:00  <bnoordhuis>do people use install.py on windows?
19:51:02  * felixgejoined
19:51:07  * felixgequit (Changing host)
19:51:07  * felixgejoined
19:51:08  <mraleph1>gap never deoptimizes. so if you see it — then your deopt point points to a wrong place. you need to do manual correlation between generated code and a deopt
19:51:18  <tjfontaine>bnoordhuis: I don't know, I suppose if they do that means they're in mingw land
19:51:25  <tjfontaine>where it wouldn't matter
19:51:28  <mraleph1>trevnorris: why does Instantiate deopt?
19:51:41  <bnoordhuis>tjfontaine: also, target_path is the path specified by us
19:52:00  <tjfontaine>ah right
19:52:32  <mraleph1>trevnorris: I guess for node.js it would be very cool if Instantiate were implemented a little bit differently, since it is quite a hot path for object creation.
19:52:40  * lohkeyjoined
19:52:40  <trevnorris>mraleph1: whoop. nm. just said couldn't be inlined.
19:52:51  <trevnorris>mraleph1: it's just a message that isn't showing up when the test is run on v0.8
19:53:27  <mraleph1>well, some inlining heuristics might have changed, or the printing. I don't think it was ever inlined anywhere.
19:53:40  <trevnorris>cool
19:57:32  <mraleph1>I am working on a tool that would make reading of deopt output easier, but it is not there yet.
19:57:47  <mraleph1>Hopefully I'll get some raw version out soonish.
19:58:48  <bnoordhuis>mraleph1: v8 uses gettimeofday in a lot of places. what happens if the system clock goes back or forward?
19:59:15  <mraleph1>who knows.
19:59:35  <bnoordhuis>interesting, isn't it?
19:59:48  <mraleph1>I don't think it uses it on any essential code paths.
20:00:29  <bnoordhuis>looks like it, only for logging
20:00:41  <bnoordhuis>i.e. logging would be off
20:09:14  * mikealjoined
20:11:55  * EhevuTovjoined
20:13:43  <trevnorris>well poop. i'm splitting the net tests to run in multiple processes, and the gap is growing:
20:13:45  <trevnorris>tcp_raw_c2s master - 17.82 ; v0.8 - 19.46
20:14:24  <trevnorris>bnoordhuis: have a tip where I could start looking at the non js tcp_wrap side?
20:15:54  * mikeal1joined
20:16:16  * mikealquit (Read error: Connection reset by peer)
20:25:51  * jmar777quit (Read error: Connection reset by peer)
20:26:20  * jmar777joined
20:29:54  * Raltquit (Remote host closed the connection)
20:33:53  * mikeal1quit (Quit: Leaving.)
20:34:35  * Raltjoined
20:35:01  * mikealjoined
20:59:07  * AvianFlujoined
21:00:27  * rje`macquit (Ping timeout: 276 seconds)
21:02:06  * rje`macjoined
21:13:42  * Raltquit (Remote host closed the connection)
21:21:25  * piscisaureus_joined
21:21:29  * Raltjoined
21:21:30  <piscisaureus_>hello
21:21:35  <piscisaureus_>what was I supposed to be working on again?
21:22:50  * sgallaghquit (Remote host closed the connection)
21:25:15  <trevnorris>piscisaureus_: i'm seeing a significant gap in performance using tcp_wrap directly. and I'd think that part isn't affected too much by js problems.
21:25:44  <piscisaureus_>trevnorris: tcp_wrap is slower, or faster?
21:25:45  <trevnorris>and i'm having a difficult time figuring out where to look for issues there.
21:26:06  <piscisaureus_>trevnorris: btw - i can hardly believe that that's what i am supposed to be working on
21:26:09  <piscisaureus_>;)
21:26:18  <trevnorris>piscisaureus_: slower: master - 17.82 Gb/sec ; v0.8 - 19.46 Gb/sec
21:26:31  <CoverSlide>checkout, bench, log, checkout, bench, log
21:26:32  <trevnorris>heh, doubt it.
21:26:45  * paddybyersjoined
21:27:25  <piscisaureus_>trevnorris: have you tried reverting v8 to 3.11.10?
21:27:38  <trevnorris>piscisaureus_: there are api changes that I don't know how to fix.
21:27:54  <piscisaureus_>trevnorris: just revert those changes too :)
21:28:46  <trevnorris>ok, i'll give that a shot. thanks.
21:29:10  <trevnorris>CoverSlide: problem is, so far i haven't found any one commit that's caused a big issue. seems to have been slowly declining.
21:29:31  * perezdquit (Quit: perezd)
21:30:20  * perezdjoined
21:30:47  * wolfeida_quit (Remote host closed the connection)
21:31:37  * jmar777quit (Remote host closed the connection)
21:32:13  * jmar777joined
21:35:31  * TheJHjoined
21:36:44  * jmar777quit (Ping timeout: 256 seconds)
21:39:03  <trevnorris>isaacs: using tcp_wrap, what happens if you call handle.writeBuffer() several times in a row and pass a noop to oncomplete?
21:39:48  <piscisaureus_>trevnorris: isaacsl: is that regression with buffers btw?
21:39:57  <piscisaureus_>or strings?
21:40:01  <piscisaureus_>* isaacs
21:40:46  <trevnorris>piscisaureus_: how would it be w/ buffers? every benchmark I've written has shown buffer allocation and operations are faster.
21:41:11  <piscisaureus_>trevnorris: that's not what I meant
21:41:23  <piscisaureus_>trevnorris: v8 may have killed an optimization we do in write_wrap w/ strings
21:41:40  <piscisaureus_>(namely the one that relies on v8::String::HasOnlyAsciiChars)
21:42:14  <trevnorris>ah, interesting. don't know if this is related, but grabbing byteLength of a utf8 string takes for freakin ever.
21:42:53  * felixgequit (Read error: Connection reset by peer)
21:43:03  * paddybyersquit (Ping timeout: 244 seconds)
21:43:37  * felixgejoined
21:43:37  * felixgequit (Changing host)
21:43:37  * felixgejoined
21:44:13  <trevnorris>piscisaureus_: hm. I don't see HasOnlyAsciiChars in the docs or the includes?
21:45:09  <trevnorris>I see MayContainNonAscii
21:45:30  <piscisaureus_>trevnorris: ah yes, the final API was "inverted" -> https://github.com/joyent/node/blob/master/deps/v8/include/v8.h#L1086
21:46:10  <trevnorris>interesting.
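The `Buffer.byteLength` cost trevnorris mentions is inherent for multi-byte strings, since computing the utf8 byte length requires walking the string; for ascii-only strings the byte length equals the character count, which is exactly the fast path a check like `HasOnlyAsciiChars` / `!MayContainNonAscii` enables. A quick illustration:

```javascript
// ascii fast path: utf8 byte length === character count,
// so an "only ascii" check can skip the per-character scan.
console.log(Buffer.byteLength('hello', 'utf8'));  // 5

// Multi-byte characters force a real scan: the euro sign is
// one JS character but three utf8 bytes.
console.log('€'.length);                          // 1
console.log(Buffer.byteLength('€', 'utf8'));      // 3
```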
21:50:11  * wolfeidaujoined
21:50:21  * felixgequit (Quit: felixge)
21:55:30  * rendarquit
21:59:42  * c4miloquit (Remote host closed the connection)
22:07:50  <piscisaureus_>i feel somewhat bad about the substack joke i made in my dotjs preso
22:07:55  * perezdquit (Quit: perezd)
22:08:21  <TooTallNate>piscisaureus_: what was it?
22:10:14  * perezdjoined
22:10:23  <piscisaureus_>i don't really remember where. I would look it up for you if i could stand watching myself.
22:12:00  * `3rdEdenquit (Remote host closed the connection)
22:13:27  * perezdquit (Client Quit)
22:13:57  <bnoordhuis>piscisaureus_: but what was the joke?
22:14:37  <piscisaureus_>iirc: "So here's a small and totally pointless module. Like, something that substack could write."
22:14:42  * perezdjoined
22:14:59  <TooTallNate>piscisaureus_: woah, cool venue!
22:15:27  <piscisaureus_>yes, that was great
22:15:40  * paddybyersjoined
22:17:45  * Benviequit
22:18:48  * hzquit (Ping timeout: 264 seconds)
22:24:33  * hzjoined
22:25:45  <isaacs>piscisaureus_: the regressions are less bad with strings.
22:26:26  <isaacs>piscisaureus_: i watched half of your talk before yoga class. i'll watch the rest this afternoon. good stuff. it's nice to see someone else talking about domains and streams2 :)
22:27:11  <trevnorris>link? i'm just waiting for a bunch of benchmarks to run.
22:27:17  <piscisaureus_>isaacs: thanks. I actually did it because I didn't know what to tell. Obviously I like to speak about stuff I worked on myself and this was all your work :-)
22:27:25  <piscisaureus_>isaacs: but thanks for giving me something to talk about :-)
22:29:34  <TooTallNate>http://www.youtube.com/watch?feature=player_detailpage&v=tgTniHwGPuM#t=796s
22:29:35  <TooTallNate>^ found it
22:31:14  <piscisaureus_>ouch
22:31:17  <piscisaureus_>yes, that
22:31:29  <piscisaureus_>I will just hope that nobody tells him ;-p
22:32:03  <isaacs>piscisaureus_: he'd probably be the first to agree that he writes little useless modules sometimes :)
22:32:14  <piscisaureus_>iphew
22:32:29  <piscisaureus_>Ok. I'm almost dying and going to sleep.
22:32:37  <piscisaureus_>isaacs: welterusten en tot morgen. (good night and see you tomorrow)
22:32:49  <piscisaureus_>euh
22:33:00  <piscisaureus_>isaacs: ik bedoel: fijne dag nog verder (i mean: have a good rest of your day)
22:33:21  * TheJHquit (Ping timeout: 245 seconds)
22:34:49  <isaacs>piscisaureus_: i don't actually know dutch, you know :)
22:35:13  <piscisaureus_>isaacs: too bad. I actually don't know javascript. It doesn't matter.
22:35:45  <isaacs>piscisaureus_: whatever. javascript is like dutch. if you know any english/c, you probably can muddle through most of it.
22:40:14  <trevnorris>isaacs: you know if there's a way to make sure a child process doesn't get zombified?
22:40:48  <CoverSlide>headshot?
22:40:53  <CoverSlide>oh
22:40:57  <CoverSlide>:p
22:42:29  * `3rdEdenjoined
22:42:40  <trevnorris>CoverSlide: if only it was that easy.
22:45:17  <isaacs>trevnorris: they should not be zombified by default.
22:45:26  <isaacs>trevnorris: unless you spawn with { detached: true }
22:45:59  <trevnorris>isaacs: that's what I figured, but seems to act up sometimes. it's nothing big though
22:47:24  <mmalecki>each uv_write should be given a different uv_write_t, correct?
22:48:05  <piscisaureus_>goodbye
22:48:08  * piscisaureus_quit (Quit: ~ Trillian Astra - www.trillian.im ~)
22:49:22  <isaacs>trevnorris: you can make a last-ditch effort to child.kill('KILL') on process.on('exit')
22:49:37  * Benviejoined
22:50:44  * `3rdEdenquit (Ping timeout: 248 seconds)
22:52:00  <trevnorris>isaacs: my brain is going to explode. so i'm experimenting with changes to the benchmarks by spawning off either the client or the server depending on the test.
22:52:30  <trevnorris>and the numbers are ridiculous. one sec and i'll get them.
22:53:08  <isaacs>trevnorris: well, i'd expect a huge bump if you did that, since now you're not sharing a CPU
22:53:10  * qmx|awaychanged nick to qmx
22:53:30  * qmxquit (Excess Flood)
22:53:30  <isaacs>trevnorris: so you'll have 2 CPUs at 100%, instead of 1
22:53:40  <trevnorris>isaacs: and there is, but not how i'd like.
22:53:50  <isaacs>ok
22:53:55  * qmxjoined
22:54:30  * Raltquit (Remote host closed the connection)
22:55:25  <trevnorris>isaacs: https://gist.github.com/b25221576277c907d1a9
22:55:45  <trevnorris>so comparing the raw tests shows that master is behind v0.8 outside of js land
22:55:58  <trevnorris>and then from raw to net2 shows another large drop
22:56:05  <trevnorris>so there are regressions on two fronts.
22:56:29  <isaacs>yessir.
22:56:35  <isaacs>i'm interested in the lower-level ones first.
22:56:49  <isaacs>since that can exacerbate higher-level regressions, as a general rule of thumb
22:56:57  <trevnorris>me too. i'd like to see the raw tests come closer.
22:57:03  <trevnorris>but i'm not sure where to start looking for that.
22:57:06  <isaacs>so, here's what i'll do: i'm gonna do another "run the tests on every commit" kind of graph
22:57:14  <isaacs>but with raw-c2s
22:57:30  <isaacs>and see where it slips
22:58:15  <trevnorris>already did, twice. all the way back to v0.9.0. couldn't see a serious drop in raw tests.
22:58:28  <trevnorris>i'll run them again to make sure, but couldn't see anything.
22:58:50  <isaacs>k
22:59:04  <isaacs>so you're saying that 0.9.0 is on par with master?
22:59:19  <isaacs>0.9.0 is almost identical to 0.8.0
22:59:21  <trevnorris>as far as the raw tests go, yeah. let me run that again to make sure.
22:59:33  <TooTallNate>this is… good news!
22:59:37  <isaacs>this script is in your benchmark-refactor branch?
22:59:38  * perezdquit (Quit: perezd)
23:00:20  <trevnorris>no. i just created it. updated the last gist with it.
23:00:24  <isaacs>awesome
23:01:18  <isaacs>so, this is going back just using HEAD~40
23:01:34  * perezdjoined
23:02:48  <isaacs>for my tests, i did git log --first-parent to walk back over the master commits only
23:03:00  <isaacs>and then another that just walked back over release commits
23:03:00  <trevnorris>isaacs: no. it's going back 10 commits at a time, 40 times.
23:03:04  <isaacs>oh, right
23:03:06  <isaacs>that's what i meant
23:03:18  <isaacs>how does HEAD~10 handle merge commits?
23:04:19  * indexzeroquit (Quit: indexzero)
23:04:35  <trevnorris>goes along the main branch, so always stays on master
23:04:42  <trevnorris>isaacs: check the gist again.
23:04:53  <trevnorris>just ran the new spawn net tests against master and v0.9.0
23:04:57  * perezdquit (Client Quit)
23:05:02  <trevnorris>you'll see the raw tests are almost exactly the same
23:05:20  <isaacs>trevnorris: neat
23:05:21  <isaacs>ok
23:05:56  <isaacs>well, v0.9.0 is consistently a bit faster.
23:06:04  <isaacs>but just a bit
23:06:15  <isaacs>you're right, that's not enough to explain it
23:06:42  <isaacs>ok, i gotta go to a different cafe, and then i'll throw some cloudy machine power at this :)
23:06:48  <isaacs>back in a bit :)
23:07:22  <trevnorris>kk. going to keep stepping back in time.
23:08:57  * qmxchanged nick to qmx|away
23:16:10  <trevnorris>isaacs: ok, disregard those numbers. the first child process didn't die for some reason. so all other tests were connecting to it.
23:17:20  <TooTallNate>benchmarks are hard :\
23:18:32  <trevnorris>seriously. being able to control the same variables every time can be a challenge.
23:20:19  * hzquit (Disconnected by services)
23:20:23  * hzjoined
23:20:55  * ArmyOfBrucequit (Ping timeout: 260 seconds)
23:22:06  * brucemquit (Quit: ZNC - http://znc.sourceforge.net)
23:23:07  * brucemjoined
23:30:15  <indutny>isaacs: yt?
23:30:28  <indutny>isaacs: what happens when ._read() returns more data than it was requested to return? :)
23:30:45  <TooTallNate>indutny: i believe that's fine
23:30:55  <TooTallNate>indutny: at least, returning *less* than requested I know is fine
23:31:05  <TooTallNate>not positive about the opposite :p
23:31:11  <indutny>yeah
23:31:14  <indutny>seems to be the same
23:31:25  <indutny>ok, I'm still investigating what happens there :)
23:33:04  <mmalecki>imo it should buffer
23:33:21  <TooTallNate>mmalecki: well ya, that's what it would do until you .read() out the bytes
23:35:39  <trevnorris>indutny: one i'll hit frequently is system noise. when I run benchmarks, have to make sure as little is running in the background as possible.
23:35:57  <indutny>trevnorris: huh?
23:36:20  <trevnorris>for example, you can see as much as 10% variance in the tcp tests between two runs.
23:36:58  <indutny>yes, that's fine
23:37:06  <indutny>you need to account for deviation
23:37:20  <indutny>it's a pretty statistical thing anyway
23:37:26  <indutny>either do N runs
23:37:35  <indutny>or run it for N times longer time
23:37:49  <indutny>until it'll be the same between runs
23:37:57  <indutny>or just account for deviation
23:41:17  <trevnorris>indutny: yeah. the more tests you run, the tighter your confidence interval, but usually your stdev will grow.
23:41:46  <trevnorris>but the distribution of the spread can let you know if the test was correctly performed.
23:42:41  <trevnorris>so a quick reference would be a comparison between median and mean.
23:43:11  <trevnorris>if they're far off, then your spread is likely crap. but you also need to take into account a tight spread with a long tail.
23:43:34  <trevnorris>spread == distribution
23:45:08  <trevnorris>but you could also use the quartiles at 25, 50 and 75 to give you a better picture.
23:45:13  <trevnorris>(have some experience with this ;-)
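The mean/median/quartile checks trevnorris describes, as a small sketch (helper names and the sample numbers are illustrative):

```javascript
// Sanity checks for benchmark runs: compare mean vs. median and
// look at quartiles to judge whether the spread is reasonable.
function mean(xs) {
  return xs.reduce(function (a, b) { return a + b; }, 0) / xs.length;
}

// Quantile by linear interpolation on the sorted sample.
function quantile(xs, q) {
  var s = xs.slice().sort(function (a, b) { return a - b; });
  var pos = (s.length - 1) * q;
  var lo = Math.floor(pos);
  var hi = Math.ceil(pos);
  return s[lo] + (s[hi] - s[lo]) * (pos - lo);
}

// Hypothetical Gb/sec results from repeated runs of one benchmark.
var runs = [17.8, 17.9, 18.0, 18.1, 17.7, 17.9, 18.2, 17.8];

// If mean and median disagree badly, the distribution is skewed
// and the run is suspect; quartiles at 25/50/75 give more detail.
console.log(mean(runs).toFixed(3), quantile(runs, 0.5),
            quantile(runs, 0.25), quantile(runs, 0.75));
```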
23:47:30  * AvianFluquit (Remote host closed the connection)
23:49:55  * qmx|awayquit (Ping timeout: 248 seconds)
23:50:59  <isaacs>indutny: it's fine
23:51:14  <isaacs>indutny: the size arg to _read() is advisory
23:51:19  <isaacs>indutny: you can return anything or nothing
23:51:36  <isaacs>indutny: the size argument to read() is a contract.
23:52:41  <indutny>ok
23:53:23  <isaacs>trevnorris: re: system noise, yes, that is a problem. that's why i prefer to run benchmarks on joyent public cloud machines, rather than my own laptop
23:53:44  <isaacs>trevnorris: or, at least, run them in close proximity, without touching anything
23:53:55  <indutny>shit
23:54:00  <indutny>this watermarks shit is so borked
23:54:03  <trevnorris>isaacs: is it your own vm, or is it shared resources?
23:54:09  <indutny>calling .read(0) won't always work because of it
23:54:17  <indutny>and after calling cb(null, '')
23:54:22  * qmx|awayjoined
23:54:24  <indutny>I need it to invoke ._read
23:55:45  <indutny>isaacs: what do you think about adding `n === 0 || ...` to this line https://github.com/joyent/node/blob/master/lib/_stream_readable.js#L201 ?
23:56:09  <indutny>or should I just make highWatermark infinity?
23:56:20  <indutny>in my stream
23:56:47  <trevnorris>isaacs: got it: https://gist.github.com/b25221576277c907d1a9
23:56:59  <trevnorris>those are the key points when raw tests regressed.
23:57:22  <indutny>oh, I can't make high watermark infinite
23:57:23  <indutny>shit
23:57:29  * c4milojoined