00:00:03  <isaacs>usually because of assumptions that you don't even know you have.
00:00:13  <isaacs>you can try as hard as possible to be completely honest and reasonable, and still fuck up.
00:00:22  <isaacs>in fact, it's almost impossible NOT to fuck up in that case.
00:01:04  <einaros>I'll be eagerly awaiting the results all the same, though. regardless of how ws matches up against the others, I'll have good reason to make further improvements.
00:01:22  <isaacs>yes
00:01:32  <isaacs>benchmarks are not for finding out how fast you are, they're for finding out where you're slow
00:01:39  <einaros>exactly
00:01:40  <isaacs>that's why bad benchmarks are so frustrating! they tell you nothing!
00:03:10  <einaros>I've already started rewriting bigger chunks of my websocket code to work against libuv directly .. which should improve memory handling perf
00:03:15  <einaros>but overall it feels like a bad idea
00:03:51  * dshaw_ quit (Quit: Leaving.)
00:04:40  <isaacs>hm
00:04:49  <isaacs>yeah, that sounds kind of suspect to me
00:05:03  <isaacs>i mean, unless you want it to be portable to luvit and julia and whatnot
00:05:18  <isaacs>but if you want it to be specifically a node thing, then i usually find doing it in JS is best.
00:06:56  <einaros>what's killing websocket perf at the moment is 'new Buffer' and the js<->native transition. so I'm just pushing as much as I can over to native to see if it improves anything enough to be worth the extra work.
00:06:59  * isaacs topic: 4 more! https://github.com/joyent/node/issues?milestone=10&state=open
00:07:09  <einaros>I don't think it will be, but doesn't hurt to give it a go
00:08:09  <isaacs>einaros: you're creating these buffers in C or JS?
00:09:01  <einaros>right now - js
00:10:14  * Raynos joined
00:11:43  * mmalecki quit (Ping timeout: 246 seconds)
00:11:58  <isaacs>i see
00:12:04  <isaacs>hm. that shouldn't be terrible, then
00:12:36  <einaros>for a websocket.send(someBuffer) there are a few special cases where I can avoid making a buffer of size someBuffer.length, but in most instances I have to copy that data over to another buffer which is headerLength + someBuffer.length and then mask each byte of the payload by some value
00:13:04  <einaros>the masking I hand off to native code, that improves perf for larger payloads
00:14:09  <einaros>in cases where I do not have to touch the someBuffer data, I merely create a new buffer big enough to hold the header (typically 2-6 bytes long in that case), then do two separate socket writes of the header and payload
00:14:58  <einaros>since the nagle algo is off, that will most often result in two separate packets (with an added tcp header etc as overhead)
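The send path einaros describes above (one frame buffer of headerLength + payload.length, payload copied in after the header, each payload byte XORed against the 4-byte masking key) can be sketched like this. This is an illustrative sketch, not the actual `ws` internals; `maskPayload` and the fixed header length are invented for the example.

```javascript
// Sketch of the masked-frame path: allocate headerLength + payload.length,
// copy the payload in after the header, then XOR every payload byte with
// the 4-byte masking key. maskPayload is a hypothetical name.
function maskPayload(payload, maskKey, headerLength) {
  const frame = Buffer.alloc(headerLength + payload.length);
  payload.copy(frame, headerLength);
  for (let i = 0; i < payload.length; i++) {
    frame[headerLength + i] = payload[i] ^ maskKey[i % 4];
  }
  return frame;
}
```

Since XOR masking is its own inverse, applying the same key to the masked bytes recovers the original payload.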
00:15:09  * ryah joined
00:15:22  <ryah>meh, i really don't want to do this but this benchmark is annoying me
00:15:31  <ryah>has anyone gotten his load generator to run?
00:15:40  <ryah>i'm getting
00:15:42  <ryah>~/src/wsdemo% erl -pa ebin deps/*/ebin -s wsdemo
00:15:42  <ryah>Erlang R14B04 (erts-5.8.5) [source] [64-bit] [smp:4:4] [rq:4] [async-threads:0] [kernel-poll:false]
00:15:45  <ryah>{"init terminating in do_boot",{undef,[{wsdemo,start,[]},{init,start_it,1},{init,start_em,1}]}}
00:15:48  <ryah>init terminating in do_boot ()
00:16:02  <einaros>everything runs fine here, but his websocket client is broken
00:16:02  <ryah>einaros: have you run the benchmark?
00:16:26  <einaros>I haven't bothered fixing it, since it's his thing to deal with
00:16:50  <ryah>sure, but i want to see if 'ws' is actually faster than 'websocket' or it's something else entirely
00:17:03  <ryah>einaros: have you done benchmarks between the two?
00:17:52  <einaros>well at the moment you can't run ws against it, since the benchmark's client code doesn't work with incomplete payloads (https://github.com/Weltschmerz/wsdemo/blob/master/src/websocket_client.erl#L170-190)
00:18:19  <einaros>in terms of ws vs websocket-node, ws is faster for all use cases .. how much, however, varies
00:18:51  <ryah>einaros: by an order of magnitude?
00:19:44  <einaros>added you to a repo at https://github.com/einaros/websocket-benchmark
00:19:56  <einaros>I've yet to do latency benchmarks, which would be more interesting here
00:21:42  * sh1mmer joined
00:21:42  * sh1mmer quit (Client Quit)
00:22:36  <isaacs>einaros: yeah, i recall
00:22:40  <isaacs>haven't had a chance to play with it yet
00:23:33  * ryah quit (Quit: bbl)
01:16:40  * loladiro quit (Ping timeout: 246 seconds)
01:50:58  * sh1mmer joined
01:52:05  * dshaw_ joined
01:57:18  * dshaw_ quit (Quit: Leaving.)
02:01:47  * sh1mmer quit (Quit: sh1mmer)
02:16:09  * sh1mmer joined
02:22:20  * brson quit (Ping timeout: 245 seconds)
02:23:09  * sh1mmer quit (Quit: sh1mmer)
03:03:34  <AvianFlu>man, these centOS failures are batshit weird
03:25:33  * brson joined
03:58:09  * dshaw_ joined
04:05:16  <isaacs>AvianFlu: oh, you're looking at em?
04:05:21  <isaacs>AvianFlu: which ones, the dgrams?
04:23:28  * mmalecki joined
04:24:12  <AvianFlu>isaacs, I got shanghai'd into devops for most of the day, but I've been poking at the dgram ones
04:24:15  <AvianFlu>they don't make any sense to me
04:24:52  <AvianFlu>the parent's send socket closes quickly, but I don't see a cause
04:29:47  <mmalecki>lol, shanghai'd
04:34:32  <AvianFlu>mmalecki: trying to statically link an outdated opencv version counts as shanghai'd.
04:36:03  <mmalecki>AvianFlu: yeah it does
04:36:13  <mmalecki>AvianFlu: also, I recall you telling me about it yesterday
04:36:28  <mmalecki>fuck yeah, I remember something from last night!
05:16:51  * sh1mmer joined
05:29:24  * dshaw_ quit (Quit: Leaving.)
05:36:12  * dshaw_ joined
05:47:31  * rendar joined
06:31:15  * sh1mmer quit (Quit: sh1mmer)
06:53:43  * logicalparadox joined
06:54:16  * logicalparadox quit (Client Quit)
07:05:18  * paddybyers joined
07:21:06  * perezd quit (Quit: perezd)
07:28:22  * dshaw_ quit (Quit: Leaving.)
07:32:11  <indutny>isaacs: what stdio[2] stuff were you guys talking about? :)
07:32:31  <indutny>isaacs: looks like I didn't get that on the call (because I don't seem to get it even now :P )
07:33:46  <indutny>isaacs: btw, forks do not have stdin atm
07:44:39  * TheJH joined
07:50:37  * bnoordhuis_ joined
08:11:35  * bnoordhuis__ joined
08:13:30  * paddybyers quit (Quit: paddybyers)
08:15:30  * bnoordhuis_ quit (Ping timeout: 265 seconds)
08:19:16  * bnoordhuis__ quit (Ping timeout: 246 seconds)
08:20:12  * mraleph joined
08:20:55  * paddybyers joined
08:33:55  * hz joined
08:36:43  * hz quit (Client Quit)
08:37:08  * hz joined
08:40:10  * mmalecki quit (Ping timeout: 244 seconds)
09:06:23  * mmalecki joined
09:09:27  * mraleph quit (Quit: Leaving.)
09:19:47  * brson quit (Remote host closed the connection)
10:19:43  * TheJH quit (Read error: Connection reset by peer)
10:30:44  * loladiro joined
10:50:43  * loladiro_ joined
10:53:04  * loladiro quit (Ping timeout: 265 seconds)
10:53:04  * loladiro_ changed nick to loladiro
11:05:21  <hz>hey guys
11:05:43  <hz>in generated project for visual studio
11:06:02  <hz>in compiler command line i get /TP
11:06:15  <hz>it means the code is compiled as c++
11:06:35  <hz>is this wanted? or not?
11:45:26  * hij1nx joined
11:45:32  * hij1nx quit (Remote host closed the connection)
11:46:05  * hij1nx joined
11:46:27  * hij1nx quit (Client Quit)
12:42:07  <mmalecki>where is Ben when you need him.
12:42:46  <mmalecki>anyway, is running the same event loop twice safe?
12:43:17  * c4milo joined
13:50:06  * hz quit (Disconnected by services)
13:50:10  * hz joined
14:15:35  * c4milo quit (Remote host closed the connection)
14:31:08  * loladiro quit (Quit: loladiro)
14:31:57  <indutny>mmalecki: AFAIK no
14:43:52  <mmalecki>indutny: thanks
14:43:56  <AvianFlu>indutny, wait, you said forks don't have stdin before?
14:44:27  <AvianFlu>like, it's broken?
14:46:33  <indutny>AvianFlu: em... not sure about before
14:46:43  <indutny>AvianFlu: but now they don't AFAIK
14:47:03  <AvianFlu>that's probably not a good thing.
14:47:07  * AvianFlu goes to poke it with a stick
14:47:31  <indutny>AvianFlu: indeed, stdin is an IPC channel atm
14:47:41  <indutny>AvianFlu: but it's like one-line change to fix it
14:48:02  <AvianFlu>the best kind :D
14:55:42  * loladiro joined
15:01:36  <einaros>that websocket benchmark node took part in should be run against 0.7, not 0.6
15:04:04  <AvianFlu>do we even know which node version was used?
15:04:31  <AvianFlu>(I agree with you though, lol)
15:05:15  <einaros>I haven't seen any mention of it
15:05:40  <einaros>but the benchmark is ridiculous: the suite is broken and the data is wrong.
15:18:54  * mmalecki quit (Ping timeout: 240 seconds)
15:23:57  * piscisaureus_ joined
15:30:32  * mmalecki joined
15:36:46  * mmalecki quit (Ping timeout: 246 seconds)
15:41:13  * milani joined
15:46:58  * milani quit (Quit: Ex-Chat)
16:17:37  <isaacs>indutny: yeah, i know, it's fine
16:17:42  <isaacs>indutny: we'll address in 0.9
16:18:24  <isaacs>indutny: the stdio[2] thing was just that for some reason, i think maybe a first wip, you didn't have that, and both piscisaureus_ and i seemed to have missed that you'd eventually added it.
16:18:45  <isaacs>indutny: like how child.stdio[0] === child.stdin, child.stdio[2] == child.stderr, etc
16:19:27  <isaacs>indutny: we'll get fork(..).stdin post-0.8
16:19:37  <piscisaureus_>for the sake of consistency we should stop lying and just admit that stdin is also readable :-)
16:20:41  <isaacs>piscisaureus_: heretic.
16:20:50  <isaacs>(yes, you're right, though)
16:21:15  <piscisaureus_>otherwise we would have to add the option "pipe-wo" and "pipe-ro" :-)
16:21:21  <piscisaureus_>it's a very easy fix
16:21:30  <piscisaureus_>maybe I do it monday
16:21:39  <isaacs>piscisaureus_: post 0.8!
16:21:45  <isaacs>piscisaureus_: you're scaring me :)
16:22:08  <piscisaureus_>isaacs: I am not going to do pipe-wo and pipe-ro, no
16:29:40  <einaros>phew, finally got that wsdemo benchmark to run against node without dropping >50% of the connections
16:33:28  <indutny>isaacs: ah, ok
16:37:14  <einaros>I still think the wsdemo benchmark is silly, but this will hopefully make node.js look less like a fail whale: https://github.com/ericmoritz/wsdemo/pull/23
16:45:24  <piscisaureus_>einaros: hey, so do you know why node performed so poorly?
16:48:33  <einaros>piscisaureus_: I've yet to actually run a full benchmark against clustered node and my websocket module, so I don't know how it matches up against the others now
16:49:15  <einaros>but websocket-node, which was tested, is kinda slow. also it seems that about half of the connections in the bench against node were dropped.
16:49:36  <einaros>running on a single core, I had difficulties getting more than about 4000 clients connected before attempts started timing out
16:50:19  <piscisaureus_>einaros: so how does ws perform?
16:50:25  <piscisaureus_>on a single core?
16:52:08  <einaros>under that bench? still no idea. he had a bug in his websocket code which broke against my library. this was fixed in another branch, in which he's also changed the output generator ... and the new generator doesn't seem to be fast enough to dump results to disk before *that* times out :P
16:52:17  <einaros>the whole thing is a mess
16:52:25  <piscisaureus_>einaros: haha
16:53:02  <piscisaureus_>einaros: btw, so y
16:53:50  <piscisaureus_>?
16:54:40  <piscisaureus_>einaros: btw, I was looking at this bufferutil.cc thing. It seems that the stuff it does should be possible in pure-js with good performance
16:55:38  <einaros>yeah, but for big data blocks (1MB+) the perf difference is very noticeable
16:56:54  <piscisaureus_>we may have better luck with typed arrays
16:56:57  <einaros>https://gist.github.com/9e597c4e8a2b9b4cb453
16:57:58  <einaros>^-- matched against other node libraries. the difference is partly the native masker, and partly that I avoid reallocations
16:58:15  <einaros>I'm sure there's much to optimize even further, though
16:58:23  <piscisaureus_>yes
16:58:33  <piscisaureus_>the native masker looks like it could be optimized :-)
16:59:01  <piscisaureus_>maybe you could mask 4 bytes at a time instead of 1 at a time
16:59:17  <einaros>oh I am masking four at a time
16:59:34  <einaros>it's just the overflow which is one-at-a-time
16:59:51  <einaros>https://github.com/einaros/ws/blob/master/src/bufferutil.cc#L96-103
17:00:00  <piscisaureus_>ah, right
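A pure-JS version of that word-at-a-time loop, with the one-byte-at-a-time tail for the overflow, might look like the sketch below. The real code is the C++ in the bufferutil.cc link above; this is only an assumed equivalent for illustration.

```javascript
// Mask 4 bytes per iteration by XORing whole 32-bit words, then handle
// the trailing 0-3 "overflow" bytes one at a time.
function maskInPlace(buf, maskKey) {
  const key = maskKey.readUInt32LE(0);
  const words = buf.length >>> 2;
  for (let i = 0; i < words; i++) {
    // XOR yields a signed int32 in JS; >>> 0 converts back to unsigned.
    buf.writeUInt32LE((buf.readUInt32LE(i * 4) ^ key) >>> 0, i * 4);
  }
  for (let i = words * 4; i < buf.length; i++) {
    buf[i] = buf[i] ^ maskKey[i % 4];
  }
  return buf;
}
```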
17:03:08  <einaros>one thing that's bothering me right now, is the way I'm dealing with sends
17:03:59  <einaros>given a ws.send(largeBuffer), I avoid creating a new buffer of largeBuffer.length + headerLength bytes, and copying largeBuffer, by doing two distinct socket.send()s
17:04:32  <einaros>for small payloads (which most will be), that's pretty silly, as I get an overhead of at least tcp header length
17:04:57  <piscisaureus_>einaros: that's not true
17:05:02  <piscisaureus_>(unless you turn off nagle)
17:05:10  <einaros>oh, nagle is way off
17:05:19  <einaros>otherwise we'd be even slower
17:05:22  <piscisaureus_>oh, yes, then that's a problem
17:05:39  <piscisaureus_>einaros: btw - would we ?
17:05:48  <einaros>higher latency
17:05:49  <piscisaureus_>it'd just increase latency right?
17:05:53  <piscisaureus_>yea
17:06:16  <piscisaureus_>I think there are very few real life scenarios where you'd want to turn nagle off
17:06:23  <einaros>since websockets are used for anything from webrtc to games, low latency is key
17:06:37  <einaros>for file transfers and such I agree completely
17:06:54  <einaros>but websockets have very little going for them as it is (pure http is better for most use cases)
17:08:34  <piscisaureus_>einaros: actually node could also just support sending a "list of buffers" of some sort
17:08:42  <piscisaureus_>einaros: since libuv fully supports that
17:09:11  <piscisaureus_>not in 0.8 tho
17:10:00  <piscisaureus_>einaros: but for small messages you could just optimize by concat'in into a single buffer
17:13:18  <einaros>piscisaureus_: yeah, I noticed that uv supports it.. I was considering writing a sender which uses libuv directly, but that feels sort of hairy as well
17:14:22  <piscisaureus_>we could just support `socket.write([header_buf, payload_buf, "some string"])`
17:14:33  <piscisaureus_>although I suspect ryah would turn over in his grave
17:14:56  <einaros>I'll have to benchmark this, to have something real to base my decisions on. for anything more than 1400ish bytes long, the current frame split approach can be reasonable (since it'll likely be split on the wire in either case)
17:15:48  * loladiro quit (Ping timeout: 244 seconds)
17:19:31  <indutny>ok, I give up with async ssl stuff
17:19:37  <indutny>it breaks every other thing
17:19:49  <indutny>and I'm still not sure if it'll get us any benefit at all
17:21:55  <einaros>hm. 50k sends of 64 bytes takes 37% longer with noDelay.
17:22:01  <einaros>that's beyond me
17:22:10  <einaros>oh, buffered
17:22:16  <einaros>ok, not beyond me anymore
17:25:33  * loladiro joined
17:29:20  * loladiro_ joined
17:30:06  * loladiro quit (Ping timeout: 260 seconds)
17:30:07  * loladiro_ changed nick to loladiro
17:33:23  * mmalecki joined
17:37:37  <AvianFlu>ircretary: tell isaacs I figured out what's happening with the dgram tests
17:37:38  <ircretary>AvianFlu: I'll be sure to tell isaacs
17:40:15  * isaacs topic: 5 more! https://github.com/joyent/node/issues?milestone=10&state=open
17:40:20  <isaacs>AvianFlu: !?
17:40:29  <isaacs>AvianFlu: what's going on with dgram?
17:40:38  <AvianFlu>it's soooo strange, let me link the code
17:40:41  <AvianFlu>https://github.com/joyent/node/blob/master/test/simple/test-dgram-broadcast-multi-process.js#L159-162
17:40:45  <AvianFlu>so, right above that
17:40:49  <AvianFlu>the messages[i++]
17:40:58  <AvianFlu>for some reason, only on centOS, i comes into that as 4, the first time
17:41:10  * mmalecki quit (Quit: leaving)
17:41:13  <AvianFlu>and it's not just a stupid global leak, I checked
17:41:43  <isaacs>AvianFlu: i == 4 the first time?
17:41:53  <AvianFlu>yes.
17:42:02  <AvianFlu>so the parent send socket just closes
17:42:07  <AvianFlu>and then the test times out, with no errors
17:42:16  <isaacs>so strange.
17:42:25  <AvianFlu>yeah, no fucking clue
17:43:30  <isaacs>AvianFlu: hm, that's not quite what i'm seeing
17:43:36  <isaacs>$ ./node test/simple/test-dgram-broadcast-multi-process.js
17:43:36  <isaacs>i=0
17:43:36  <isaacs>[PARENT] sent 'First message to send' to
17:43:36  <isaacs>i=1
17:43:36  <isaacs>[PARENT] sent 'Second message to send' to
17:43:39  <isaacs>i=2
17:43:41  <isaacs>[PARENT] sent 'Third message to send' to
17:43:44  <isaacs>i=3
17:43:46  <isaacs>[PARENT] sent 'Fourth message to send' to
17:43:49  <isaacs>i=4
17:43:51  <isaacs>[PARENT] sendSocket closed
17:43:54  <isaacs>[PARENT] Responses were not received within 5000 ms.
17:43:56  <isaacs>[PARENT] Fail
17:45:25  <isaacs>AvianFlu: what i'm seeing is that the parent is sending 4 messages, but it looks like none of them are getting to the child
17:45:39  <einaros>piscisaureus_: https://gist.github.com/d53e8234ac692bcaad75 <-- between two laptops on my lan
17:45:43  <AvianFlu>oh, whoops, I read that wrong
17:45:52  <einaros>piscisaureus_: comparing buffer merge vs. frame split, that is
17:46:02  <isaacs>AvianFlu: even if you replace the process.nextTick with a setTimeout(fn, 100), it's the same
17:46:05  <isaacs>they just never get there
17:46:17  <einaros>piscisaureus_: the 12ms spent locking while merging the buffers is the interesting part, though
17:46:28  <piscisaureus_>einaros: locking??
17:46:38  <einaros>piscisaureus_: see the code
17:46:58  <isaacs>AvianFlu: this is rather suspicious: https://github.com/joyent/node/blob/master/test/simple/test-dgram-broadcast-multi-process.js#L207-209
17:47:17  <AvianFlu>yeah, I'd noticed that
17:47:57  <einaros>piscisaureus_: creating a new buffer of inputbuffer.length size + doing 2x copy to merge header and input takes around 11ms
17:48:25  <einaros>piscisaureus_: whereas just smacking two different buffers on to socket.write takes ~0
17:48:33  <piscisaureus_>einaros: that's rather odd
17:48:42  <piscisaureus_>einaros: 11ms is way too much
17:49:09  <einaros>well it's 10 MB on a macbook air, *shrug*
17:49:12  <piscisaureus_>alright, the payload is 10MB
17:49:41  <piscisaureus_>which means that the new buffer won't be chipped off the slab
17:49:52  <piscisaureus_>and the malloc() call will just make a mmap syscall
17:50:10  <piscisaureus_>so yeah, it is definitely not the cheapest operation
17:50:28  <piscisaureus_>einaros: but I think that it might make sense for very small payloads, e.g. when you are sending 50 bytes
17:50:46  <einaros>not a very common use case for websockets, though, so this would all be optimizations for the 0.1%
17:50:53  <einaros>so yeah
17:50:55  <einaros>might as well merge
17:51:20  <piscisaureus_>einaros: can you not just do either
17:51:35  <piscisaureus_>e.g. for small payloads do a merge, for large ones do a split?
17:52:35  <piscisaureus_>einaros: I think you should benchmark this with small payloads
17:52:50  <piscisaureus_>not 10MB ones
17:52:55  <einaros>I did both :)
17:53:06  <einaros>this was an edge case
17:53:27  <AvianFlu>isaacs, the listenSocket in the child, after the call to .bind(), has an .fd of -42
17:53:32  <piscisaureus_>einaros: what were the numbers for a 64 byte payload
17:53:36  <AvianFlu>shouldn't that be… not a negative number?
17:53:37  <isaacs>AvianFlu: wtf?
17:53:40  <isaacs>yeah
17:53:44  <isaacs>that looks weird.
17:53:52  <AvianFlu>I mean, udp is a little out of my element, but still XD
17:53:53  <isaacs>also, "42" is a magic number in a lot of places in node.
17:53:55  <isaacs>because we're nerds.
17:53:58  <AvianFlu>yeah, I've noticed
17:54:23  <isaacs>that's unfortunate, imo, since {} makes a better sigil, but <shrug>
17:54:32  <isaacs>is it always -42?
17:54:42  <AvianFlu>well, all three children came up with it as -42
17:54:48  <AvianFlu>let me run it a few more times
17:54:48  <einaros>piscisaureus_: for 128, which is probably a pretty realistic mean, it would be https://gist.github.com/6d1146bb29d6897a2976
17:55:11  <AvianFlu>yeah, it's consistently -42
17:55:19  <isaacs>super weird
17:55:34  <piscisaureus_>einaros: merging +1 :-)
17:55:44  <einaros>aye
17:55:55  <hz>http://scassato.hopto.org/download/shrug.txt
17:55:56  <hz>lul
17:58:41  <piscisaureus_>isaacs: AvianFlu: UDP sockets always have -42 as their fd. It doesn't mean anything
17:58:48  <AvianFlu>oh.
17:58:52  <isaacs>ok, kewl
17:58:58  <isaacs>see, like i said: magic
17:59:00  <isaacs>;)
17:59:05  <AvianFlu>I should go cook something with all these red herrings XD
17:59:21  <piscisaureus_>it's a leftover from the rushed transition to libuv in the last days before 0.6
17:59:42  <isaacs>piscisaureus_: the sins of our past.
17:59:49  <piscisaureus_>yes
17:59:55  <piscisaureus_>I think we could actually drop it now
18:01:28  * perezd joined
18:01:29  * loladiro quit (Ping timeout: 245 seconds)
18:02:39  <piscisaureus_>http://www.dilbert.com/strips/comic/2012-06-14/
18:09:41  <AvianFlu>isaacs, it's probably just something like this, at this rate: https://www.centos.org/modules/newbb/viewtopic.php?topic_id=34513
18:10:00  * loladiro joined
18:10:40  <AvianFlu>(i.e. we're chasing a centOS config)
18:10:49  <isaacs>AvianFlu: !!
18:11:01  <isaacs>AvianFlu: can you try making those changes? you're a sudoer on centosdrone
18:11:07  <isaacs>that would be so awesome.
18:11:08  <isaacs>:D
18:11:09  <AvianFlu>sure, I'll give it a shot
18:11:27  <isaacs>lmk if you need to restart (or just do it, whatever, we don't actually use those boxes for anything important except this)
18:11:42  * isaacs is chasing down an npm bug atm
18:12:16  <isaacs>AvianFlu: if that does fix it, please do post the exact changes in the issue comments, so that future generations can benefit
18:19:57  <einaros>node 0.7 is being slow again
18:20:07  <einaros>0.7.11: Running 40000 roundtrips of 64 B binary data: 9.8s.. 253.99 kB/s
18:20:08  <AvianFlu>isaacs, it gives me 'permission denied', even with sudo
18:20:20  * c4milo joined
18:20:20  <einaros>0.6.18: Running 40000 roundtrips of 64 B binary data: 5.5s.. 454.05 kB/s
18:21:11  <AvianFlu>isaacs, but the configs in question *do* seem to be the opposite of what the post says they should be
18:21:14  <AvianFlu>so there's that XD
18:21:59  <tjfontaine>if your sudo is on the echo and not the > it won't work
18:22:21  <AvianFlu>aha
18:22:23  <indutny>ok, time to fix the cake
18:22:26  <AvianFlu>nooby me strikes again
18:22:41  <tjfontaine>sudo -s so you can just have a shell to work in
18:22:48  <AvianFlu>tjfontaine, ++
18:22:49  <kohai>tjfontaine has 1 beer
18:22:49  <indutny>AvianFlu: ^ just wanted to said :)
18:22:58  <indutny>s/said/say
18:23:10  <isaacs>AvianFlu: sudo bash
18:23:13  <isaacs>AvianFlu: then do whatevs
18:23:15  <isaacs>then exit
18:23:26  <tjfontaine>evil evil, -s or -i > $SHELL :)
18:23:27  <isaacs>or sudo -s
18:23:31  <indutny>isaacs: bash can be replaced
18:23:39  <isaacs>not without root access
18:23:47  <indutny>isaacs: in your env variables
18:23:47  <isaacs>in which case, sudo can be replaced, as well
18:24:00  <indutny>or will it lookup in new ENV?
18:24:12  <isaacs>indutny: if i can hijack your envs, i can pwn your whole terminal
18:24:17  <tjfontaine>-i means use the new users ENV, -s means inherit some of current env
18:24:43  <isaacs>i usually just login as root
18:24:48  <AvianFlu>likewise!
18:24:53  <isaacs>i mean, if i have to do rooty stuff
18:24:53  <indutny>isaacs: yep, we're all mined
18:25:28  <indutny>uhh, that's disgusting
18:25:30  <tjfontaine>depends on the management style, and the number of admins that you share the duties with, and how much auditing you need
18:27:23  <isaacs>indutny: whatever. seriously.
18:27:32  <isaacs>if that box gets pwned, i'll just shut it down and make a new one
18:27:35  <isaacs>it's not real anyway
18:27:36  <isaacs>disposable
18:27:56  <indutny>isaacs: what about private keys?
18:28:20  <indutny>isaacs: attacker can copy everyone's keys once you typed "sudo bash"
18:28:20  <isaacs>indutny: you put private keys on machines you can't touch!?!?
18:28:25  <indutny>hahaha
18:28:29  <indutny>no
18:28:35  <isaacs>i don't put private keys on any machines i'm not holding in my hands
18:28:37  <isaacs>ssh -A
18:28:42  <isaacs>if i need to hop
18:28:47  <indutny>yes, it works fine for me too
18:29:03  <isaacs>that's the real reason i'm super paranoid about my laptop being stolen
18:29:16  <indutny>isaacs: don't you have a password on your key?
18:29:20  <isaacs>it'd be such a nightmare
18:29:28  <isaacs>indutny: passwords are broken in a matter of minutes
18:29:30  <isaacs>even long ones
18:29:35  <isaacs>(but yes, i do)
18:29:44  <isaacs>i mean, at least that gets me minutes ;)
18:29:50  <indutny>isaacs: in hours which is ok if you'll get to another machine and remove public keys from everywhere
18:29:51  <tjfontaine>ssh keys are a terrible mechanism, I prefer the patchset to ssh that lets you use standard certificates
18:30:03  <isaacs>also, i have 2 different ways to remote wipe this machine as soon as it connects to the internet, if an attacker gets it
18:30:12  <indutny>isaacs: hahaha
18:30:16  <indutny>isaacs: I won't steal your laptop
18:30:17  <indutny>never
18:30:21  <isaacs>:)
18:30:56  <indutny>this reminds me, that I should care more about my private key
18:31:00  * piscisaureus_ quit (Ping timeout: 248 seconds)
18:31:00  <indutny>probably create new one
18:31:09  <indutny>:)
18:31:17  <indutny>no way it was stolen, but anyway
18:31:42  <tjfontaine>with certs you can specify on the server side valid times and delegate control and revoke, there's no way to have that control with standard ssh keys without making your infra to support it
18:32:24  * piscisaureus_ joined
18:33:07  <isaacs>the nice thing about jpc smartos boxes is that you can just remove the key from your account, and it stops working on all your boxes.
18:33:30  <isaacs>but the linuxes use kvm, and it just stashes a /root/.ssh/authorized_keys at the creation time
18:33:36  <isaacs>which sucks also because it's never updated.
18:33:47  <tjfontaine>http://roumenpetrov.info/openssh/#features
18:34:04  <AvianFlu>isaacs, this is gonna need a reboot to take effect
18:34:39  <isaacs>AvianFlu: go for it
18:35:06  <isaacs>maybe send a `wall` just in case someone's on there, but i think everyone who would be is in this room right now :)
18:35:56  * c4milo quit (Remote host closed the connection)
18:37:05  * stephank quit (Quit: *Poof!*)
18:37:55  * stephank joined
18:38:19  <einaros>isaacs: do you remember the websocket slowness I mentioned for 0.7, which was fine for a while with .10? it's back again now
18:38:40  <isaacs>einaros: oh, i didn't realize it was fine for a while with .10
18:38:45  <isaacs>einaros: but that's fascinating
18:38:47  <tjfontaine>seems like bouncing back and forth on v8 has something to do with it?
18:39:01  <isaacs>einaros: that seems to suggest that V8 3.9 was faster for your usecase than 3.11 is
18:39:24  <einaros>isaacs: I'm moving back to 3.9 to check that now
18:39:30  <isaacs>einaros: awesome!
18:39:58  <isaacs>we should probably have a ws benchmark in node.
18:40:15  <isaacs>sort of like http_simple.js, but for web sockets
18:40:26  <einaros>the test case is really simple, though. 64 bytes of test data sent to a server and echoed back. repeat 50k times and measure the time taken.
18:40:48  * mikeal joined
18:40:50  <einaros>for some reason it's around half the speed on 0.7.11 vs 0.6.19
18:42:34  <isaacs>einaros: yeah, all the more reason why we should be tracking it in a benchmark :)
18:42:42  <einaros>yeah.. 427662c6e9 (0.7.10 fixed): 482.44 kB/s. 0.7.11: 252.32 kB/s
18:48:22  <isaacs>einaros: that HAS to be a V8 perf regression.
18:48:58  <isaacs>einaros: can you try doing this in current master? git checkout v0.7.10-fix -- deps/v8
18:49:07  <isaacs>einaros: then build and etc
18:53:39  <einaros>will do in a sec, just started compiling 50464cd4f4, which is where v8 was bumped to 3.11.10
18:59:52  <indutny>isaacs: I have a suspicion that this Diffie-Hellman thing is broken because of utf8
19:00:37  <indutny>isaacs: it's quite strong, because we're passing "binary" string all the way around
19:00:56  <einaros>isaacs: v8 it is
19:01:28  <isaacs>indutny: yep, that sounds likely
19:01:49  <isaacs>indutny: we really need to be using buffers everywhere internally, and only utf8-ify it at the very ends
19:02:12  <isaacs>einaros: great. can you post your findings to the V8 team?
19:02:15  <isaacs>piscisaureus_: ^
19:02:34  * elijah-mbp quit (Remote host closed the connection)
19:02:35  <piscisaureus_>isaacs: I will talk to Erik tomorrow
19:02:39  <isaacs>kewl
19:02:52  <isaacs>i have confidence that they'll fix it. it's not a 0.8 blocker.
19:02:57  <isaacs>but it is very important that it gets fixed soon.
19:03:16  <piscisaureus_>einaros: it would help to have a benchmark that is easy to run, with as little components as possible
19:03:17  <isaacs>they're easily trolled by reproducible benchmarks, though :)
19:03:40  <piscisaureus_>einaros: so we can bisect and figure out which commit is responsible
19:04:19  <einaros>piscisaureus_: https://github.com/einaros/websocket-benchmark
19:04:57  <piscisaureus_>einaros: is it possible to run it on a single computer, with one command?
19:05:04  <einaros>yup
19:05:47  <CIA-108>node: isaacs master * rb0b707c / (141 files in 20 dirs): npm: Upgrade to 1.1.27 - http://git.io/6v3CDw
19:05:55  <isaacs>this ^
19:06:07  <isaacs>npm init is so much more badass now
19:06:33  <isaacs>it reads deps, looks at your github url from the .git folder, pulls a description out of the readme, you name it
19:07:07  <isaacs>AvianFlu: did you reboot the thing and see the fix?
19:07:43  <einaros>is anyone familiar with this bailout? Bailout in HGraphBuilder: @"b": bad value context for arguments value
19:07:45  <isaacs>AvianFlu: or should i restart it
19:08:02  <AvianFlu>isaacs, I did, but it didn't help
19:08:15  <isaacs> /o\
19:08:17  <isaacs>ok
19:08:34  <AvianFlu>I've also seen test-child-process-fork2 fail a few times
19:08:39  <isaacs>yeah
19:08:40  <AvianFlu>but it's like 1 in 20
19:08:45  <isaacs>not at all consistent, that one
19:13:26  <piscisaureus_>einaros: I don't know exactly, but apparently you're doing something with the arguments object that confuses crankshaft
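The usual trigger for that "bad value context for arguments value" bailout in V8 of this era is letting the raw `arguments` object escape the function; copying it into a real array first keeps Crankshaft happy. A minimal sketch of both shapes (function names invented for the example):

```javascript
// Passing the raw arguments object out of the function is the pattern
// that used to deoptimize ("bad value context for arguments value").
function leaky() {
  return helper(arguments); // arguments escapes: bailout
}

// Copying arguments into a plain array first avoids the bailout.
function fixed() {
  const args = new Array(arguments.length);
  for (let i = 0; i < arguments.length; i++) args[i] = arguments[i];
  return helper(args); // a real array escapes: fine
}

function helper(list) {
  let sum = 0;
  for (let i = 0; i < list.length; i++) sum += list[i];
  return sum;
}
```

Both return the same value; only the optimizability differs.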
19:15:56  * loladiro quit (Ping timeout: 246 seconds)
19:20:35  * hz quit
19:21:49  * mikeal quit (Quit: Leaving.)
19:34:00  <piscisaureus_>einaros: with which bench was the difference most striking?
19:39:10  <AvianFlu>isaacs, it appears to have been iptables
19:42:23  * mikeal joined
19:46:53  <AvianFlu>[02:20|% 100|+ 430|- 0]: Done \o/
19:52:59  <indutny>:)
19:55:52  <indutny>oh, very nice
19:56:03  <indutny>isaacs: I think I've found a source of problem
19:56:20  <isaacs>AvianFlu: !! \o/
19:56:27  <isaacs>indutny: which problem?
19:56:53  <indutny>isaacs: Diffie-Hellman
19:56:54  <AvianFlu>there was an iptables rule that looks like it rejects forwarding in general - although I'm not that familiar with iptables
19:57:22  <isaacs>AvianFlu: Can you post the triumph in https://github.com/joyent/node/issues/3450?
19:57:40  <isaacs>indutny: oh? nice!
19:57:57  <isaacs>AvianFlu: you have earned a share of the delicious cake :)
19:58:07  <AvianFlu>yeah I'm writing it up now
19:58:09  <AvianFlu>:D
19:59:03  <isaacs>anyone have objections to https://github.com/reid/node/commit/ceefc6738dd76eee522d05196fb0b734506550b7?
19:59:48  <indutny>isaacs: I don't
20:00:31  <indutny>isaacs: the problem is in openssl
20:00:32  <indutny>:P
20:00:46  <indutny>isaacs: DH_size returns a bigger size than DH_compute_key
20:00:51  <indutny>which is odd to me
20:01:10  <isaacs>indutny: so it's lying about how much data we should read?
20:01:13  <isaacs>er, should use?
20:01:20  <indutny>isaacs: no
20:01:28  <CIA-108>node: Reid Burke master * r71a2a2c / (lib/net.js test/simple/test-net-during-close.js): net: Prevent property access throws during close - http://git.io/fXlIWQ
20:01:28  <indutny>isaacs: it's like we created 64 bytes of data
20:01:32  <indutny>isaacs: but returning 65
20:01:34  <indutny>and the last one is junk
20:01:46  <isaacs>oh, weird.
20:01:53  <isaacs>any way to verify what it tells us?
20:02:09  <indutny>isaacs: well, DH_compute_key returns the correct one
20:02:24  <isaacs>so, can we just use that?
20:02:27  <indutny>isaacs: DH_size (which is intended to tell how much data we should allocate for DH_compute_key)
20:02:30  <indutny>isaacs: I think we can
20:02:36  <isaacs>awesome
20:02:39  <indutny>though, I aim to find why openssl is doing that
20:02:54  <indutny>I should verify if I'm correct
20:02:59  <indutny>s/if/that
20:03:01  * isaacstopic: 4 more! https://github.com/joyent/node/issues?milestone=10&state=open
20:03:50  <indutny>oh
20:03:54  <indutny>interesting
20:04:05  <indutny>DH_compute_key writes fewer bytes than it should!
20:04:14  <indutny>it returns 31, while it should be 32
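[Editor's note: the fix that eventually landed (r ae5b0e1, "crypto: add padding to diffie-hellman key") pads the computed key back out to `DH_size` bytes. The actual fix lives in C++ in src/node_crypto.cc; this is a minimal illustrative sketch of the padding step in plain JavaScript, using today's `Buffer.alloc` API, with a hypothetical `padKey` helper:]

```javascript
// DH_compute_key may write fewer bytes than DH_size reports when the
// shared secret happens to have leading zero bytes. Those leading zeros
// must be restored, or the two sides derive different session keys.
function padKey(key, expectedSize) {
  if (key.length >= expectedSize) return key;
  const padded = Buffer.alloc(expectedSize);    // zero-filled buffer
  key.copy(padded, expectedSize - key.length);  // right-align the key bytes
  return padded;
}

const short = Buffer.from([0xab, 0xcd, 0xef]); // a 3-byte "key"
const fixed = padKey(short, 4);                // pad out to DH_size = 4
console.log(fixed.toString('hex'));            // → 00abcdef
```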
20:04:44  <indutny>pquerna: yt?
20:07:04  <isaacs>i'll be back in a little bit. gonna go get some lunch
20:08:19  <CIA-108>node: Maciej Małecki master * r3db2e03 / lib/events.js : events: cache `domain` module locally - http://git.io/t8mtqA
20:08:51  * hzjoined
20:10:05  <AvianFlu>https://github.com/joyent/node/issues/3450#issuecomment-6384252
20:16:18  * isaacstopic: 3 more! https://github.com/joyent/node/issues?milestone=10&state=open
20:17:07  <indutny>isaacs: are you ready? :)
20:17:35  <indutny>isaacs: https://github.com/indutny/node/commit/6dc998bf7df494f68fb89a0c15cce6ec83574aeb <- review please
20:17:40  <indutny>I really want to taste that cake
20:17:50  <indutny>isaacs: should I add a pummel test?
20:18:47  * mmaleckijoined
20:20:34  <indutny>isaacs: hey man! where are ya?
20:25:41  <indutny>this cake is a lie
20:37:11  <CIA-108>node: Andreas Madsen master * r6d70a4a / (src/node.cc src/node.js): node: change the constructor name of process from EventEmitter to process - http://git.io/bswfKA
20:37:47  * mralephjoined
20:45:50  <isaacs>indutny: sure, add a pummel test that runs that other test 10000 times or something
20:46:04  <isaacs>indutny: and it's not a lie!
20:46:09  <indutny>isaacs: hahaha
20:46:12  <mmalecki>isaacs: you still want process.on('beforeExit')?
20:46:13  <isaacs>indutny: well, it might not be cake. but it'll be delicious!
20:46:20  <isaacs>mmalecki: sure, but not for 0.8
20:46:27  <isaacs>mmalecki: for 0.9, yes, absolutely
20:46:33  <mmalecki>isaacs: k. I got it implemented, it's really silly tho
20:46:45  <mmalecki>it just runs the event loop once again, no changes in libuv
20:46:57  <isaacs>mmalecki: post an issue with a comment that i said to mark it for 0.9
20:47:02  <isaacs>mmalecki: or a pull req, whatevers
20:47:08  <mmalecki>isaacs: k
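[Editor's note: mmalecki's description, "it just runs the event loop once again", is exactly the semantics that later shipped as `process.on('beforeExit')`: the event fires when the loop drains, and scheduling new work from the handler keeps the process alive for another turn. A minimal usage sketch:]

```javascript
// 'beforeExit' fires when the event loop has drained but before the
// process exits. Queuing new work from the handler spins the loop
// again, so the event can fire multiple times.
let ticks = 0;
process.on('beforeExit', () => {
  if (ticks < 3) {
    ticks++;
    setImmediate(() => {});  // queue work so the loop runs once more
  }
});
process.on('exit', () => {
  console.log('extra loop turns: ' + ticks);  // → extra loop turns: 3
});
```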
20:47:59  * perezdquit (Quit: perezd)
20:57:28  <indutny>isaacs: https://github.com/indutny/node/commit/44ac34d53d34308d1a5c244646bf87f66843342f
20:59:30  <isaacs>indutny: lgtm!
20:59:41  <isaacs>nice investigation, and the comment is helpful.
21:00:27  <CIA-108>node: Fedor Indutny master * rae5b0e1 / (src/node_crypto.cc test/pummel/test-dh-regr.js): crypto: add padding to diffie-hellman key - http://git.io/5Vkusw
21:01:21  * indutnytopic: 2 more! https://github.com/joyent/node/issues?milestone=10&state=open
21:03:19  <isaacs>\o/
21:03:31  <indutny>hehe
21:03:35  <indutny>what about this https://github.com/joyent/node/issues/3446 ?
21:03:40  <indutny>is anyone working on that?
21:04:37  <AvianFlu>I'm poking at it a little, but you should definitely take a look too :D
21:04:51  * mikealquit (Quit: Leaving.)
21:05:44  <indutny>AvianFlu: ok
21:07:50  <indutny>isaacs: is that joyent box still working?
21:07:58  * mmaleckiquit (Ping timeout: 240 seconds)
21:08:38  * TheJHjoined
21:08:41  <indutny>looks like yes
21:08:42  <indutny>:)
21:10:31  <isaacs>hm, failure on debug: test-zlib-random-byte-pipes
21:10:42  <isaacs> /home/isaacs/node/test/simple/test-zlib-random-byte-pipes.js:81
21:10:43  <isaacs> this._hash = this._hasher.digest('hex').toLowerCase().trim();
21:10:43  <isaacs> ^
21:10:43  <isaacs>Error: Not initialized
21:10:58  <isaacs>i'll look into that
21:11:02  <isaacs>but first! to tacos!
21:11:26  <indutny>oh no
21:12:48  <isaacs>indutny: the centos box moved too
21:12:54  <isaacs>indutny: because i broke the other one
21:13:07  <isaacs>and fixing is harder than just rebuilding it ;)
21:13:19  <indutny>isaacs: em... and what about smartos?
21:13:25  <indutny>isaacs: umcats
21:13:33  <isaacs>indutny: umcats is there, but it's weak
21:13:36  <isaacs>indutny: use the smartosdrone
21:13:46  <indutny>isaacs: what is it?
21:13:55  <isaacs>indutny:
21:14:12  <indutny>isaacs: can you add my public keys to it
21:14:17  <isaacs>i keep meaning to set these up as subdomains under nodejs.org, but the dns thing we use is kinda lame
21:14:20  <indutny>please
21:14:21  <isaacs>indutny: it should let you in as root already
21:14:29  <isaacs>indutny: is there a pubkey that it should be allowing?
21:14:32  <indutny>ah
21:14:33  <indutny>right
21:14:42  <indutny>root@ worked fine
21:15:35  <indutny>ok, I'll look at that issue tomorrow
21:15:38  <indutny>going to sleep now
21:15:40  <indutny>ttyl guys
21:15:40  <isaacs>indutny: yeah, you have to create your own user account if you wanna log in that way. but same thing, no keys necessary, though if you set them, we can't log in as you either.
21:15:49  <isaacs>g'nite, thanks, indutny! :)
21:23:29  * loladirojoined
21:32:29  * isaacs_mobilejoined
21:42:59  * isaacs_mobilequit (Remote host closed the connection)
21:44:52  * isaacs_mobilejoined
21:50:56  <piscisaureus_>isaacs: einaros: https://github.com/v8/v8/commit/09c97a6df060a7f8de6ce24457da11d6 <-- baaaad!!
21:52:15  <piscisaureus_>cd ..
21:53:59  <einaros>that's the commit?
21:54:35  <piscisaureus_>yes
21:54:41  <piscisaureus_>I can revert it and push to a node branch
21:54:44  <piscisaureus_>so you can try without it
21:54:56  <einaros>alrighty
21:55:18  * isaacs_mobilequit (Remote host closed the connection)
21:57:23  <einaros>piscisaureus_: is it common for v8 to remove optimizations for Buffer?
21:57:32  * philipsquit (Excess Flood)
21:57:44  <CIA-108>node: Bert Belder perf-reg * r65f6352 / (21 files in 5 dirs): Revert "Promoting elements transitions to their own field." - http://git.io/KYuzag
21:57:52  <piscisaureus_>^-- einaros try that
21:58:11  <piscisaureus_>einaros: I don't really understand what you mean, but I think the answer is no
21:58:49  <piscisaureus_>einaros: Buffer is a node-only construct. v8 supports "external byte arrays" and it is not common for them to remove optimizations for that, no.
21:59:33  * philipsjoined
21:59:37  <einaros>piscisaureus_: I was just checking the output from --trace-bailout and --trace-deopt, and noticed a couple I haven't seen before, including "[removing optimized code for: Buffer]" and "Bailout in HGraphBuilder: @"b": bad value context for arguments value"
21:59:48  <piscisaureus_>ah
21:59:52  <piscisaureus_>einaros: that's something else :-)
22:00:12  <piscisaureus_>einaros: that means that there is something in the Buffer constructor that v8 can't optimize
22:00:54  <piscisaureus_>einaros: but I don't know if it was common to bail out in Buffer() before
22:01:25  <piscisaureus_>einaros: it would be great if you could check whether that perf-reg branch fixes your performance problem
22:01:38  <piscisaureus_>einaros: if that's the case, I will contact the v8 team and tell them what's up.
22:01:51  <piscisaureus_>einaros: btw - do you mind if I share the ws benchmark with them?
22:02:00  <einaros>piscisaureus_: I didn't notice either of the bailouts for 0.6 - and since this benchmark is pretty buffer heavy I figured they could be bad news
22:02:12  <piscisaureus_>einaros: that could be the reason, maybe
22:02:24  <einaros>no bailouts or deopts for 0.6, just checked
22:02:28  <einaros>during send/receive, that is
22:02:31  <piscisaureus_>einaros: ok, cool
22:02:43  <einaros>I'll check the perf-reg branch now
22:03:09  <piscisaureus_>einaros: maybe, unless you have a good reason not to, could you open source the websocket-benchmark ?
22:03:18  <piscisaureus_>einaros: that'll give v8 people a chance to take a look
22:03:26  <piscisaureus_>(hopefully)
22:04:06  * hzquit
22:04:14  <einaros>yeah, sure.. it's just been seen as somewhat controversial to be battling the other node websocket implementations :)
22:05:02  <piscisaureus_>einaros: well, just put a notice in the description that this is to compare v8 perf or something
22:05:07  <piscisaureus_>it's all in the framing
22:05:16  * rendarquit
22:05:17  <piscisaureus_>einaros: when it gets fixed you can close it again :-)
22:05:32  <piscisaureus_>einaros: if that's not acceptable, I'll just share a tarball or something
22:06:45  <einaros>it's public now
22:06:52  <piscisaureus_>einaros: cool
22:08:34  <piscisaureus_>einaros: btw - I also see some other funky behaviour when running with --trace-bailout
22:09:20  <einaros>there's a lot of static compared to 3.9
22:09:27  <piscisaureus_>yes
22:09:46  <piscisaureus_>it seems to be opt'ing and de-opting the same functions over and over again
22:10:12  <piscisaureus_>but that could also just be because they changed the verbosity or something
22:10:15  <piscisaureus_>I'll have to ask Erik
22:12:17  <einaros>one of the deopts I saw could be avoided by switching from .apply(foo, arguments) to a .apply(foo, Array.prototype.slice.call(arguments)) (or a for loop) type construct
22:12:30  <einaros>but that seems inefficient as well
22:12:38  <piscisaureus_>einaros: we take patches for this kind of stuff :-)
22:13:14  <piscisaureus_>einaros: in my experience these bailouts and deopts give hints about where you can optimize, but nothing more
22:13:32  <piscisaureus_>most of the time there is no measurable performance increase
22:13:58  <einaros>at least they optimize beyond try/catch blocks now
22:14:14  <einaros>but in either case it'd be nice to hear what Erik has to say
22:16:12  <piscisaureus_>einaros: sure
22:17:33  <piscisaureus_>einaros: btw - the cluster "death" event has been renamed to "exit".
22:17:39  <piscisaureus_>einaros: that's why ws.js doesn't exit
22:17:45  <einaros>ah
22:18:05  <einaros>I was too lazy to check up on that ;)
22:19:14  <mraleph>piscisaureus_: you can ask me.
22:19:18  <mraleph>ask me ask me
22:19:24  <einaros>:D
22:19:28  <piscisaureus_>mraleph: hello there
22:19:38  <piscisaureus_>mraleph: https://github.com/v8/v8/commit/09c97a6df060a7f8de6ce24457da11d6
22:19:41  <mraleph>einaros: don't switch from .apply(obj, arguments) to slicing arguments. THAT IS WORSE.
22:19:45  <isaacs>piscisaureus_ and others: https://github.com/joyent/node/issues/3466
22:19:48  <isaacs>feelings? thoughts?
22:19:58  <piscisaureus_>mraleph: that commit makes v8 slow.
22:20:01  <isaacs>i don't want to have a bazillion os.*Dir() functions
22:20:05  <isaacs>but, it is pretty handy
22:20:13  <mraleph>piscisaureus_: I thought it was reverted because of that.
22:20:58  <einaros>mraleph: I'm definitely not. I was just acting on bailout messages.
22:21:15  <einaros>this is the deopt/bailout log from running against 3.11: https://gist.github.com/a15239b1d16f49032805
22:21:25  <einaros>whereas 3.9 is pretty much dead silent
22:22:55  <piscisaureus_>mraleph: I can't find any reverts of that commit in the v8 repo. Or did you mean the thing I just did?
22:23:08  <mraleph>einaros: foo.apply(obj, arguments) is supported context for arguments object, slicing it is not a supported context
22:23:13  <mraleph>piscisaureus_: yeah, my bad.
22:23:58  <einaros>mraleph: there's no slicing in the original code, so you can disregard that whole comment
22:24:12  <mraleph>piscisaureus_: well if it makes V8 slow (you have a repro and you bisected down to this) report it to bug tracker.
22:24:25  <piscisaureus_>mraleph: we were about to, sort of, do that
22:24:35  <piscisaureus_>mraleph: tbh I was just going to email Erik but reporting a bug also works
22:24:49  <piscisaureus_>although the latency is much higher
22:25:00  <mraleph>einaros: I was reacting at "deopts I saw could be avoided by switching from .apply(foo, arguments) to a .apply(foo, Array.prototype.slice.call(arguments)) (or for) type construct"
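[Editor's note: the distinction mraleph is drawing is that Crankshaft could optimize a function whose only use of `arguments` is passing it straight to `.apply`, while leaking `arguments` into any other context forced V8 to materialize a real arguments object first. An illustrative sketch of the two patterns (the `target`/`fast`/`slow` names are invented for the example):]

```javascript
function target(a, b) { return a + b; }

// Supported context: `arguments` is used ONLY as fn.apply(thisArg, arguments).
// The optimizer can forward the arguments without allocating an object.
function fast() {
  return target.apply(null, arguments);
}

// Unsupported context: slicing (or otherwise leaking) `arguments`
// forces a real arguments object to be materialized on every call.
function slow() {
  var args = Array.prototype.slice.call(arguments);
  return target.apply(null, args);
}

console.log(fast(1, 2));  // → 3
console.log(slow(1, 2));  // → 3 (same result, but a deopt hazard)
```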
22:25:45  <mraleph>piscisaureus_: mailing Erik will probably work as well, but filing bugs leaves visible trail and ensures things do not fall through :-)
22:26:37  <piscisaureus_>mraleph: the other question is related to this --trace-deopt output.
22:26:38  <piscisaureus_>mraleph: in node 0.6 (v8 3.6) I didn't see nearly as much as now
22:26:55  <piscisaureus_>mraleph: it seems to deopt the same function over and over again
22:26:57  <mraleph>probably we did not optimize nearly as much
22:27:19  <mraleph>sampling profiler might have missed the function
22:27:34  <mraleph>counting profiler just optimizes everything that is called frequent enough.
22:27:43  <mraleph>we also relaxed some restrictions on reoptimizations.
22:27:58  <piscisaureus_>ah, ok
22:28:10  <mraleph>if you are seeing constant deopts something might be off either in our assumptions or in your function :-)
22:28:24  <piscisaureus_>haha
22:28:51  <piscisaureus_>mraleph: My stance on this: valid ES is valid ES. I cannot be wrong :-p
22:29:11  <piscisaureus_>mraleph: wrt Erik vs bug tracker. I'll be a multi headed dragon >:-)
22:29:58  <mraleph>piscisaureus_: https://chromiumcodereview.appspot.com/10554011/
22:30:08  <mraleph>here is why I thought it was reverted.
22:30:19  <mraleph>there is something that looks like a pending fix.
22:30:30  <piscisaureus_>ah, right
22:30:41  <piscisaureus_>well, let's try it out
22:31:27  <mraleph>sorry my brain tries to track this stuff in background, but it's now too hard to keep up with a project while working on another one :-)
22:32:43  <piscisaureus_>mraleph: np, I won't blame you
22:33:01  <mraleph>with respect to your stance.
22:33:13  <mraleph>deopt is not "being wrong"
22:33:24  <mraleph>deopt is more about "behaving in surprising funky ways"
22:33:32  <mraleph>not predictably
22:33:46  <piscisaureus_>oh, sure, I was just kidding
22:33:59  <mraleph>anyways it's worth investigating.
22:34:08  <piscisaureus_>but it's all about sweet spot
22:34:08  <mraleph>what's the reason for deopt?
22:34:19  <piscisaureus_>yeah, that's very difficult to tell from the logs
22:34:22  <mraleph>should print LIR instruction if you --trace-deopt with --code-comments
22:34:36  <mraleph>it's not always accurate, but mostly accurate.
22:36:20  <piscisaureus_>mraleph: I shall try that
22:36:41  <piscisaureus_>mraleph: but right now, I am going to get some food (!!) and sleep
22:37:08  <piscisaureus_>mraleph: btw, thanks for pointing me to https://chromiumcodereview.appspot.com/10554011/
22:37:16  <piscisaureus_>mraleph: it sort of fixes the perf issue
22:37:23  <mraleph>sort of?
22:37:52  <piscisaureus_>mraleph: well, it's very difficult to tell, since this benchmark has a very big variance in outputs
22:38:02  <mraleph>hmmmmm
22:38:10  <mraleph>how did you bisect it then?
22:38:21  <piscisaureus_>mraleph: well, < 3s -> good
22:38:25  <mraleph>haha
22:38:28  <mraleph>ok
22:38:29  <piscisaureus_>mraleph: > 10s -> bad
22:38:38  <piscisaureus_>mraleph: because the regression was that big
22:38:47  <mraleph>so with this CL it's < 10s?
22:38:57  <piscisaureus_>mraleph: yes, it's like 4, 5
22:38:59  <piscisaureus_>or so
22:39:11  <mraleph>so it still did not regain full speed.
22:39:12  <mraleph>I see.
22:39:21  <piscisaureus_>mraleph: well, it could also be random variation
22:39:29  <mraleph>ok, as you say :-)
22:39:42  <mraleph>anyways, have a good one, I am also out.
22:39:45  <piscisaureus_>mraleph: but we're definitely wayyyy lower than 10s again
22:39:54  <piscisaureus_>mraleph: you too man, take care
22:39:55  <piscisaureus_>I am out too.
22:40:38  * mralephquit (Quit: Leaving.)
22:44:12  <piscisaureus_>goodbye folks
22:44:16  * piscisaureus_quit (Quit: ~ Trillian Astra - www.trillian.im ~)
22:51:38  * loladiroquit (Ping timeout: 244 seconds)
22:56:06  * bnoordhuisjoined
23:04:19  * loladirojoined
23:26:00  * paddybyersquit (Quit: paddybyers)
23:26:31  * TheJHquit (Ping timeout: 260 seconds)