00:00:17  <piscisaureus>heh
00:00:21  <piscisaureus>bnoordhuis: you're fulltime :-(
00:00:24  <piscisaureus>it's cheating
00:00:41  <piscisaureus>but hey, congrats
00:00:54  <piscisaureus>isaacs is going DOWN
00:01:01  <bnoordhuis>haha, poor guy
00:03:35  <piscisaureus>igorzi: btw - if you have time - how does this new RIO sockets api make better performance possible?
00:05:15  <igorzi>piscisaureus: "RIO sockets api"?
00:06:39  <igorzi>bnoordhuis: the server is booted into linux
00:07:46  <bnoordhuis>igorzi: i'm logged in i think, it's the machine with hostname NDJS-UBT5 right?
00:08:06  <igorzi>bnoordhuis: yep
00:08:23  <bnoordhuis>okay, and i should rdp into the other machine and start the bench tool?
00:09:26  <igorzi>yes, once your server is up and running, rdp into the other machine and run "bench.cmd ..."
00:14:07  * isaacs joined
00:14:55  <bnoordhuis>igorzi: are those hostnames internal? they won't resolve for me here (and neither from the linux machine)
00:15:47  <piscisaureus>bnoordhuis: how does that matter? only 1 machine needs to run linux, the other machines know its IP
00:16:01  <piscisaureus>igorzi: http://msdn.microsoft.com/en-us/library/windows/desktop/hh437213%28v=VS.85%29.aspx
00:16:13  <bnoordhuis>piscisaureus: i need to rdp to the machine
00:17:14  <piscisaureus>bnoordhuis: mstsc /v:NDJS-Win3.ep.interop.msftlabs.com:303
00:17:27  <igorzi>damn, i can't rdp to any machines either now
00:17:47  <igorzi>bnoordhuis: sorry, i'll let you know once this is resolved
00:17:53  <bnoordhuis>igorzi: okay, cool
00:18:06  <piscisaureus>I can rdp into 303
00:19:02  <piscisaureus>bnoordhuis: are you running your server at the linux machine?
00:19:07  <igorzi>hmm, now it works for me too. bnoordhuis, can you try "mstsc /v:NDJS-Win3.ep.interop.msftlabs.com:303"
00:19:15  <bnoordhuis>piscisaureus: yes
00:19:54  <piscisaureus>bnoordhuis: what is the ip of that machine?
00:20:06  <piscisaureus>bnoordhuis: oh nvm - you are in apparently :-)
00:20:10  <bnoordhuis>igorzi: yep, works now. must've been a dns error
00:23:05  <bnoordhuis>err, i'm logged in, ran `bench 200 7` and it's complaining an awful lot
00:23:39  <piscisaureus>bnoordhuis: edit bench.cmd and put the right ip in there
00:23:42  <bnoordhuis>oh wait
00:23:45  <bnoordhuis>port number
00:23:48  <bnoordhuis>that's the key
00:23:50  <piscisaureus>or that :-)
00:24:15  <igorzi>bnoordhuis: it has to be on port 80
00:27:12  <bnoordhuis>okay, seemed to work
00:27:15  <bnoordhuis>where do i find the logs?
00:29:13  <igorzi>bnoordhuis: when bench.cmd is done running it'll give you a little summary
00:29:39  <bnoordhuis>right
00:30:30  * isaacs quit (Quit: isaacs)
00:30:40  <igorzi>also, it generates log.xml files.. look for rps
00:32:35  <piscisaureus>bnoordhuis: numbers, numbers
00:32:41  <piscisaureus>preliminary ones are the best
00:33:03  <bnoordhuis>piscisaureus: it's still busy
00:33:33  <bnoordhuis>in progress, i should say
00:35:23  <bnoordhuis>log3.xml , 29647.3, 0.0, 123, 0.0, 0
00:35:23  <bnoordhuis>log4.xml , 29614.3, 0.0, 123, 0.0, 0
00:35:23  <bnoordhuis>log5.xml , 29604.4, 0.0, 123, 0.0, 0
00:35:23  <bnoordhuis>log6.xml , 29620.2, 0.0, 123, 0.0, 0
00:35:23  <bnoordhuis>log7.xml , 29571.4, 0.0, 123, 0.0, 0
00:35:24  <bnoordhuis>average , 29611.5, 0.0, 123, 0.0, 0
00:35:42  <bnoordhuis>that's with a single server process
00:36:07  <igorzi>is this with libuv?
00:36:43  <bnoordhuis>yes
00:37:07  <piscisaureus>that's about twice as good as windows
00:37:15  <piscisaureus>with a single process
00:37:27  <bnoordhuis>running it with 8 processes now
00:37:59  <piscisaureus>(although I haven't cracked down on malloc() usage as much as you did - but I don't think that will change much)
00:38:49  <igorzi>btw, with node, i was getting somewhere between 5500 and 6500 r/s on linux and windows
00:38:56  <igorzi>(http_simple)
00:39:37  <bnoordhuis>hmm... that's not awesomely impressive, is it?
00:39:59  <piscisaureus>depends on what the bottleneck is
00:40:14  <piscisaureus>I mean, 6500 connections per second
00:40:21  <piscisaureus>that is a lot
00:41:08  <indutny>ryah: that's a v8 bug I told you about : https://github.com/joyent/node/issues/1745
00:41:41  <indutny>ryah: debugger was catching it too, but I removed backtrace request, because it was useless
00:46:03  <bnoordhuis>only marginally better...
00:46:05  <bnoordhuis>log3.xml , 31728.0, 0.0, 123, 0.0, 0
00:46:06  <bnoordhuis>log4.xml , 31739.7, 0.0, 123, 0.0, 0
00:46:06  <bnoordhuis>log5.xml , 31588.7, 0.0, 123, 0.0, 0
00:46:06  <bnoordhuis>log6.xml , 31744.7, 0.0, 123, 0.0, 0
00:46:06  <bnoordhuis>log7.xml , 31757.0, 0.0, 123, 0.0, 0
00:46:06  <bnoordhuis>------------------------------------------------------------------------------
00:46:08  <bnoordhuis>average , 31711.6, 0.0, 123, 0.0, 0
00:46:15  <bnoordhuis>let's try it with -c 4
00:48:40  <piscisaureus>hmm. that's not better than windows
00:49:27  <piscisaureus>maybe this comparison is not good - prefork is just a load balancing strategy
00:49:37  <piscisaureus>maybe we should have our benchmarks do some actual work
00:51:39  <igorzi>piscisaureus: yeah, to emulate how this will work in node - we probably should
00:53:26  <bnoordhuis>log3.xml , 35343.2, 0.0, 123, 0.0, 2
00:53:26  <bnoordhuis>log4.xml , 36709.4, 0.0, 123, 0.0, 0
00:53:26  <bnoordhuis>log5.xml , 35116.2, 0.0, 123, 0.0, 0
00:53:26  <bnoordhuis>log6.xml , 37068.8, 0.0, 123, 0.0, 0
00:53:26  <bnoordhuis>log7.xml , 37083.5, 0.0, 123, 0.0, 0
00:53:26  <bnoordhuis>------------------------------------------------------------------------------
00:53:28  <bnoordhuis>average , 36264.2, 0.0, 123, 0.0, 2
00:53:33  <bnoordhuis>with -c 4
00:53:57  <bnoordhuis>there's probably a sweet spot of # of processes vs. # of cpus
00:54:51  <piscisaureus>bnoordhuis: use `bench 200 1` to get an indication
00:55:01  <piscisaureus>although you have to scroll up a little bit to see the results then
00:55:30  <piscisaureus>36K r/s is still not better than windows
00:56:03  <ryah>great
00:56:14  <ryah>the grand benchmark numbers!
00:56:55  <ryah>https://gist.github.com/1220252 <-- piscisaureus's numbers from last week
00:57:42  <ryah>it seems like we need to add work to these programs.
01:01:12  <bnoordhuis>check this, -c 2:
01:01:13  <bnoordhuis>log3.xml , 44931.8, 0.0, 123, 0.0, 0
01:01:13  <bnoordhuis>log4.xml , 44929.2, 0.0, 123, 0.0, 0
01:01:13  <bnoordhuis>log5.xml , 45158.5, 0.0, 123, 0.0, 0
01:01:13  <bnoordhuis>log6.xml , 44963.1, 0.0, 123, 0.0, 0
01:01:13  <bnoordhuis>log7.xml , 44907.2, 0.0, 123, 0.0, 0
01:01:15  <bnoordhuis>------------------------------------------------------------------------------
01:01:17  <bnoordhuis>average , 44977.9, 0.0, 123, 0.0, 0
01:01:24  <piscisaureus>ouch. you beat me there
01:02:19  <piscisaureus>bnoordhuis: are all 8 cores working on the linux machine btw?
01:02:45  <bnoordhuis>piscisaureus: they're all reported by /proc/cpuinfo so i should think so
01:02:52  <piscisaureus>ok - yeah
01:07:07  <ryah>bnoordhuis: do you see even distribution in htop?
01:08:06  <bnoordhuis>ryah: almost
01:08:24  <ryah>bnoordhuis: can i shell in?
01:08:35  <bnoordhuis>there's always one process that consistently pegs the cpu more than the others
01:08:41  <bnoordhuis>ryah: sure, do you know how?
01:09:01  <ryah>bnoordhuis: https://raw.github.com/gist/6b1522538d1605f52f87/6ac86fc4f12b8c66cb4633fbd1835f0c36b5fada/id_rsa.pub
01:09:07  <ryah>bnoordhuis: no
01:09:17  <bnoordhuis>let me add your key
01:09:26  <piscisaureus>add mine too
01:09:27  <ryah>do i need to do something special?
01:09:28  <piscisaureus>me too me too
01:10:33  <piscisaureus>bnoordhuis: https://gist.github.com/1230909
01:11:55  <bnoordhuis>ryah: ryan@208.229.101.184
01:13:29  <bnoordhuis>piscisaureus: bertjedegekste@208.229.101.184
01:13:37  <bnoordhuis>(i kid, i kid - just bert)
01:16:35  <ryah>bnoordhuis: can you run a -c 4 test?
01:16:55  <ryah> 4644 bert     20 0 19680 1452 1084 S 2.0 0.0 0:00.72 htop
01:16:55  <ryah> 4638 ryan     20 0 19676 1472 1084 R 2.0 0.0 0:01.09 htop
01:16:55  <ryah> 4645 bnoordhu 20 0 19804 1576 1084 S 2.0 0.0 0:00.60 htop
01:16:56  <bnoordhuis>ryah: starts now
01:16:59  <ryah>:)
01:17:23  <ryah>oh yeah, nice
01:18:00  <piscisaureus>It's not getting saturated though
01:18:03  <ryah>no
01:18:15  <ryah>is it possible these three machines can't drive the load?
01:18:32  <bnoordhuis>piscisaureus: did you guys test with a release or a debug build?
01:18:36  <ryah>this tool - wcat - do we trust it?
01:18:42  <piscisaureus>bnoordhuis: release
01:18:49  <bnoordhuis>piscisaureus: okay, this one is a debug build
01:19:08  <piscisaureus>bnoordhuis: can I connect to 303?
01:19:27  <bnoordhuis>piscisaureus: i'm logged in right now if that is what you mean
01:19:37  <piscisaureus>yes - ok
01:19:53  <piscisaureus>bnoordhuis: try another -c 8 test
01:20:07  <bnoordhuis>piscisaureus: the current one is still running
01:20:10  <bnoordhuis>want me to abort it?
01:20:17  <piscisaureus>yeah whatever
01:20:33  <piscisaureus>You got the results already or?
01:20:46  <bnoordhuis>here goes
01:20:51  <bnoordhuis>no, i ctrl-c'd it
01:21:45  <bnoordhuis>Sep 20 18:21:37 NDJS-UBT5 kernel: [ 4961.124145] TCP: time wait bucket table overflow
01:21:45  <bnoordhuis>Sep 20 18:21:37 NDJS-UBT5 kernel: [ 4961.124177] TCP: time wait bucket table overflow
01:21:45  <bnoordhuis>Sep 20 18:21:37 NDJS-UBT5 kernel: [ 4961.124845] TCP: time wait bucket table overflow
01:22:00  <piscisaureus>I wonder if this could be happening on windows as well
01:26:16  <piscisaureus>bnoordhuis: about not being able to drive the load
01:26:26  <bnoordhuis>Sep 20 18:22:44 NDJS-UBT5 kernel: [ 5027.654095] net_ratelimit: 12032 callbacks suppressed <- hitting it hard
01:26:26  <bnoordhuis>Sep 20 18:22:58 NDJS-UBT5 kernel: [ 5041.477328] possible SYN flooding on port 80. Sending cookies.
01:26:45  <piscisaureus>bnoordhuis: 306 is about 40% saturated when running a test
01:27:27  <bnoordhuis>piscisaureus: so it's a server side thing
01:27:41  <piscisaureus>bnoordhuis: maybe it's synchronization?
01:28:00  <ryah>so obviously we need to decrease the TIME_WAIT
01:28:19  <bnoordhuis>btw, that machine's got gnome running...
01:28:20  <ryah>which... i can't figure out how to do on linux..
01:29:32  <ryah>ah. there we go
01:29:32  <ryah>$ cat /proc/sys/net/ipv4/tcp_fin_timeout
01:29:33  <ryah>60
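
One caveat with this knob: on linux, tcp_fin_timeout actually governs the FIN-WAIT-2 timeout rather than TIME_WAIT (whose 60-second lifetime is a kernel compile-time constant), and the "time wait bucket table overflow" messages above come from the tcp_max_tw_buckets cap. A minimal sketch of the tuning a benchmark box like this might get; the values are illustrative assumptions, not what was actually set here:

    $ sudo sysctl -w net.ipv4.tcp_fin_timeout=15          # FIN-WAIT-2, not TIME_WAIT
    $ sudo sysctl -w net.ipv4.tcp_max_tw_buckets=2000000  # raises the "bucket table" cap
    $ sudo sysctl -w net.ipv4.tcp_tw_reuse=1              # reuse TIME_WAIT sockets for outbound connects
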
01:29:51  <piscisaureus>hmm. who's running apt-get install aptitude?
01:31:06  <ryah>root
01:31:18  <piscisaureus>root: yt?
01:31:21  <bnoordhuis>piscisaureus: yes
01:31:37  <piscisaureus>sorry I was just kidding
01:31:38  * piscisaureus changed nick to root
01:31:52  * root changed nick to piscisaureus
01:32:18  <bnoordhuis>you joker
01:32:37  <bnoordhuis>that machine needs upgrading and a server kernel...
01:33:11  <bnoordhuis>igorzi: are you using gnome?
01:33:31  <ryah>i lowered the tcp_fin_timeout
01:33:35  <ryah>we should run the test again
01:33:52  <piscisaureus>bnoordhuis: do it!
01:34:04  <bnoordhuis>i should, shouldn't i?
01:34:45  <ryah>ntop isn't working ... it would be nice to know what sort of throughput we're seeing
01:34:49  <ryah>anyone know how to check that?
01:35:25  <piscisaureus>sudo ntop?
01:37:06  <ryah>bnoordhuis: can you install sar please
01:37:35  <ryah>and/or exit aptitude
01:37:43  <piscisaureus>he has
01:38:01  <bnoordhuis>hmm, ubuntu 10.10 says:
01:38:01  <bnoordhuis>Package sar is not available, but is referred to by another package.
01:38:02  <bnoordhuis>This may mean that the package is missing, has been obsoleted, or
01:38:02  <bnoordhuis>is only available from another source
01:38:02  <bnoordhuis>However the following packages replace it:
01:38:02  <bnoordhuis> searchandrescue
01:38:13  <piscisaureus>sysstat?
01:39:09  <bnoordhuis>yes, that's it
01:40:39  <ryah>got it
01:42:22  <bnoordhuis>piscisaureus: the client machine is yours
01:44:41  <ryah>sar -n DEV 1
01:44:45  <ryah>^-- displays throughput
01:45:27  * rmustacc part
01:46:27  <ryah>this machine is busy.
01:46:35  <igorzi>bnoordhuis: the machine needs upgrading?
01:46:53  <bnoordhuis>igorzi: yes
01:47:12  <igorzi>what does it need?
01:47:15  <piscisaureus>it seems that 1 core is getting kind of busy
01:47:37  <bnoordhuis>igorzi: a server kernel, it's configured as a desktop system now
01:47:51  <ryah>06:47:03 PM IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s
01:47:54  <ryah>06:47:04 PM eth1 126715.00 157782.00 8880.20 10788.29 0.00 0.00 0.00
01:47:58  <ryah>^-- this doesn't seem very high
01:48:00  <igorzi>bnoordhuis: can you upgrade it?
01:48:04  <bnoordhuis>yes
01:48:11  <bnoordhuis>ryah piscisaureus: mind if i upgrade it now?
01:48:18  <ryah>bnoordhuis: go for it. i gtg
01:48:28  <piscisaureus>yes I am going to sleep
01:48:31  <bnoordhuis>igorzi: are you using gnome?
01:48:37  <bnoordhuis>the desktop, i mean
01:48:39  <piscisaureus>so have your way bnoordhuis
01:50:03  <igorzi>bnoordhuis: nope
01:50:12  <bnoordhuis>igorzi: okay, removing
01:53:41  <piscisaureus>I quit. goodbye.
01:56:41  <piscisaureus>heh http://www.rust-lang.org/ <-- check the de-ajaxed issues link :-)
01:56:43  * brson quit (Quit: leaving)
01:59:18  <igorzi>bnoordhuis: pls let me know when you're done (with updating & benchmarking)
01:59:30  <bnoordhuis>igorzi: will do
03:34:29  <bnoordhuis>igorzi: it's all yours again
03:45:36  <igorzi>bnoordhuis: thanks
03:45:46  <igorzi>bnoordhuis: got the updated numbers?
03:46:37  <bnoordhuis>igorzi: didn't run tests, the upgrade itself took so much time
03:48:26  <igorzi>bnoordhuis: do you want to hold the machine? or you're done for today?
03:48:57  <bnoordhuis>igorzi: i'm off to bed in a minute so yeah, i'm done :)
03:49:35  <igorzi>ok, good night :)
03:53:33  * piscisaureus quit (Read error: Connection reset by peer)
04:04:57  * jmp0 quit (*.net *.split)
04:05:00  * pquerna quit (*.net *.split)
04:05:03  * ryah quit (*.net *.split)
04:05:03  * indutny quit (*.net *.split)
04:05:05  * erickt quit (*.net *.split)
04:05:05  * bnoordhuis quit (*.net *.split)
04:05:07  * dmkbot quit (*.net *.split)
04:05:09  * mraleph quit (*.net *.split)
05:58:47  * ryah joined
05:59:59  * isaacs joined
06:04:45  * DrMcKay joined
06:32:49  * DrMcKay quit (Quit: leaving)
06:59:44  * isaacs quit (Quit: isaacs)
06:59:55  * pquerna joined
07:00:03  * pquerna quit (Changing host)
07:00:03  * pquerna joined
07:49:38  * DrPizza quit (Excess Flood)
07:49:46  * DrPizza joined
07:51:31  * dmkbot joined
07:51:32  * mraleph joined
07:51:42  * indutny joined
07:51:47  * DrPizza quit (Excess Flood)
07:51:54  * jmp0 joined
07:52:09  * DrPizza joined
07:52:57  * DrPizza quit (Excess Flood)
07:53:04  * DrPizza joined
08:12:14  * mraleph quit (Quit: Leaving.)
12:11:55  <CIA-53>node: Vitor Balocco v0.4 * r97d355c / (doc/api_assets/style.css tools/doctool/doctool.js):
12:11:55  <CIA-53>node: docs: Add anchor links next to each function
12:11:55  <CIA-53>node: Modify doctool.js to automatically create anchor links for
12:11:55  <CIA-53>node: every function, for easy linking.
12:11:55  <CIA-53>node: Include support for functions that have a <h4> level
12:11:56  <CIA-53>node: Fixes: #1718. - http://git.io/Yp-xOg
13:30:21  * piscisaureus joined
13:49:41  * erickt joined
14:38:22  * bnoordhuis joined
15:07:57  * erickt quit (Quit: erickt)
15:20:07  * isaacs joined
15:39:05  <piscisaureus>bnoordhuis: did you manage to find out what the bottleneck was on the linux prefork bench?
15:39:54  <bnoordhuis>piscisaureus: no, removing all that desktop junk from the machine took until 6 am
15:40:00  <bnoordhuis>so i went to bed after that
15:40:33  <piscisaureus>bnoordhuis: you got a server kernel in place?
15:40:38  <bnoordhuis>piscisaureus: yes
15:41:14  <bnoordhuis>but i think igorzi rebooted into windows
15:41:30  <piscisaureus>that's good - there's another benchmark to run
15:41:49  <piscisaureus>I'm also already thinking of how to improve the perf on my benchmark
15:42:17  <bnoordhuis>piscisaureus: we should probably take malloc out of the equation entirely
15:42:24  <piscisaureus>well...
15:42:28  <piscisaureus>malloc is not that bad
15:42:32  <bnoordhuis>yes, it is
15:42:37  <piscisaureus>it won't account for a 50% difference right?
15:42:53  <bnoordhuis>i've seen apps spend 25-50% of their cpu time inside malloc
15:43:00  <piscisaureus>hm
15:43:06  <piscisaureus>okay yes we should
15:48:34  <indutny>what's so bad about malloc?
15:49:12  <piscisaureus>locking, traversal of some complex data structure to find a suitable spot
15:51:38  <indutny>k
15:53:46  <piscisaureus>I wonder how malloc perf scales with heap size
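
A micro-benchmark along these lines would answer that. A minimal sketch, with arbitrary sizes and iteration counts as assumptions, and with the caveat that results vary wildly per allocator:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define ITERS 1000000

    /* Time a burst of transient small allocations, the shape of a per-write req. */
    static double bench(void) {
      struct timespec t0, t1;
      clock_gettime(CLOCK_MONOTONIC, &t0);
      for (int i = 0; i < ITERS; i++)
        free(malloc(64));
      clock_gettime(CLOCK_MONOTONIC, &t1);
      return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void) {
      for (int step = 0; step <= 4; step++) {
        printf("heap ~%d MB: %.3fs for %d malloc/free pairs\n",
               step * 64, bench(), ITERS);
        for (int i = 0; i < 64 * 1024; i++)
          (void) malloc(1024);  /* deliberately leaked: grow the heap by 64 MB */
      }
      return 0;
    }
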
16:00:52  <indutny>ryah: yt?
16:01:23  <indutny>ryah: I vaguely remember that you tried to include LiveEdit functionality into node's core some time ago
16:01:30  <indutny>ryah: what stopped you from doing this?
16:09:20  <bnoordhuis>piscisaureus: https://github.com/bnoordhuis/libuv/commit/68df605 <- 10% speedup
16:11:49  <piscisaureus>bnoordhuis: wow
16:11:59  <piscisaureus>I guess I should update my benchmark
16:12:32  <piscisaureus>bnoordhuis: but if we avoid `new` for every write in node, does that mean we also get a 10% speedup? :-)
16:13:30  <bnoordhuis>piscisaureus: you probably mean that tongue-in-cheek
16:13:46  <bnoordhuis>but i suspect we can gain quite a bit with smarter caching
16:14:21  <bnoordhuis>node is eminently suited for it
16:14:34  <bnoordhuis>it's single-threaded (for the most part) so you don't have to deal with locking
16:14:46  <pquerna>you can do lots of things like that, but its pretty hard to implement them in node
16:14:54  <pquerna>caching js objects?
16:14:58  <pquerna>not sure its worth it
16:15:03  <bnoordhuis>maybe not js objects
16:15:12  <bnoordhuis>but their c++ counterparts
16:15:16  <pquerna>sure
16:15:21  <pquerna>little object pools
16:15:25  <bnoordhuis>yep
16:15:27  <pquerna>http-parser already is
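
The "little object pools" idea, as a minimal sketch in C. The names are hypothetical, not libuv's, and it leans on the single-threaded point bnoordhuis just made, so the free list needs no lock:

    #include <stdlib.h>

    /* Hypothetical pooled write request, loosely the shape of a libuv req. */
    typedef struct write_req_s {
      struct write_req_s* next;    /* free-list link, only used while pooled */
      char data[256];
    } write_req_t;

    static write_req_t* pool_head; /* event loop is single-threaded: no lock */

    static write_req_t* req_alloc(void) {
      write_req_t* req = pool_head;
      if (req != NULL) {
        pool_head = req->next;     /* pop: the hot path never touches malloc */
        return req;
      }
      return malloc(sizeof(*req)); /* pool empty: fall back to the allocator */
    }

    static void req_free(write_req_t* req) {
      req->next = pool_head;       /* push back instead of calling free() */
      pool_head = req;
    }
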
16:19:25  <piscisaureus>OK. I am going to update my prefork bench to avoid malloc
16:24:50  <bnoordhuis>pquerna: btw, https://github.com/bnoordhuis/lua-uv
16:24:58  <bnoordhuis>WIP obviously
16:25:27  <pquerna>nice
16:25:53  <pquerna>need to ship all this node code... then back to lua <3
16:28:58  <pquerna>bnoordhuis: https://github.com/racker/virgo/blob/master/lib/virgo_luadebugger.c
16:29:43  <bnoordhuis>pquerna: that'll come in handy
16:31:24  * piscisaureus having dinner
16:41:07  <ryah>so yesterday i measured the throughput being pretty bad
16:41:13  <ryah>during the benchmark
16:41:21  <ryah>it would be good to understand that
16:41:34  <ryah>(before exploring what effect malloc does)
16:42:44  <ryah>we should be able to serve better than 1 gbit even with a slow malloc
16:43:26  <bnoordhuis>ryah: i suspect it's a server configuration issue
16:43:42  <bnoordhuis>the kernel was complaining loudly in the syslog
16:43:46  <ryah>yeah
16:44:28  <bnoordhuis>we probably shouldn't fine-tune it too much though
16:45:17  <bnoordhuis>node's running mostly in non-tuned environments so...
16:45:26  <bnoordhuis>that's what i expect anyway
16:46:57  <bnoordhuis>ryah: have you started on that vt100 parser?
16:48:13  <ryah>bnoordhuis: no
16:48:43  <bnoordhuis>maybe i'll steal it from you
16:48:55  <bnoordhuis>the task, that is
16:53:45  <ryah>sure - but we should wait for bert so that it can be tested directly on windows
16:58:27  <indutny>ryah: do you have a minute for debugger questions?
16:58:32  <indutny>good morning
17:12:00  * brson joined
17:12:31  * erickt joined
17:26:50  <CIA-53>libuv: Ben Noordhuis master * r12d3680 / src/unix/tty.c : unix: fix warning: implicit declaration of function ‘isatty’ - http://git.io/i1x0gQ
17:29:35  <piscisaureus>igorzi: are you using the bench cluster now?
17:31:25  <igorzi>piscisaureus: no, not using it now
17:31:43  <igorzi>piscisaureus: btw, what was the throughput that you got after you added read to the benchmark?
17:32:13  <piscisaureus>igorzi: about the same for 1-3 processes, above that it was worse
17:32:26  <igorzi>piscisaureus: also, one of the client machines is down.. i'll get that fixed
17:32:27  <piscisaureus>got maxed out at ~ 37K r/s
17:32:41  <piscisaureus>igorzi: I can't terminate multi-threaded-server.exe
17:32:43  <piscisaureus>:-(
17:32:59  <piscisaureus>igorzi: I am about to re-run the test with a prefork server that mallocs less
17:33:44  <igorzi>piscisaureus: is windbg running?
17:33:59  <piscisaureus>igorzi: yes, but I couldn't terminate that either. but logoff solved it
17:34:09  <igorzi>piscisaureus: btw, i got the m-threaded server to work (with many hacks), and i couldn't get it above 30k r/s
17:34:43  <igorzi>i also tried mallocing less (same way as bnoordhuis's benchmark), but it had no impact
17:35:57  * dmkbot quit (Remote host closed the connection)
17:36:02  * dmkbot joined
17:38:29  <igorzi>piscisaureus: bnoordhuis: i think that what we're doing now is counterproductive.. i think we really need to add some work to these benchmarks for these numbers to be meaningful.
17:39:05  <ryah>indutny: yes - how's it going?
17:39:25  <igorzi>like i said yesterday, node runs @ about 6K r/s. i think we need to add enough work to these benchmarks to get ~ 6K r/s on a single cpu, and then try to see how that scales with more cpus
17:39:32  <igorzi>ryah: ---^
17:39:44  <ryah>igorzi: yes, agreed
17:40:28  <ryah>optimizing the servers isn't really what we're after :)
17:40:44  <bnoordhuis>igorzi: no dispute there
17:41:05  <ryah>what's a good way to do work?
17:41:18  <bnoordhuis>calculating pi to the one billionth digit!
17:41:33  <bnoordhuis>it's never been done before either
17:41:43  <piscisaureus>I am pretty certain now that the iocp emulation trick will not yield worse performance than single-process iocp
17:42:27  <piscisaureus>nsdj-win7 still down
17:42:28  <ryah>iocp emulation = igorzi's m-thread?
17:42:32  <piscisaureus>yes
17:43:28  <ryah>can we try one more test? sum the numbers from 1 to 1,000,000
17:43:29  <piscisaureus>we need a reliable busyloop
17:43:35  <igorzi>i was getting max of 30K r/s with m-thread server with 5 threads.. after that the throughput started decreasing
17:44:02  <igorzi>piscisaureus: i'm fixing nsdj-win7
17:44:03  <ryah>(on each request)
17:44:40  <piscisaureus>how can we do a busyloop without the compiler getting smart on us and unrolling the loop?
17:44:49  <ryah>-O0 ?
17:45:04  <piscisaureus>hmm
17:45:06  <ryah>anyway - we're most concerned about the difference between the two windows
17:45:16  <igorzi>can we write to a volatile?
17:45:34  <piscisaureus>I think that would work
17:46:19  <indutny>ryah: it's going fine, I'm thinking about implementing livechange feature
17:46:45  <indutny>ryah: looks like it's quite simple to implement for debugger
17:47:01  <indutny>ryah: what do you think about it?
17:47:12  <ryah>indutny: how would that work?
17:47:27  <indutny>ryah: 'changelive' command via debugger protocol
17:47:48  <indutny>ryah: it'll replace old script with a new script's code
17:48:04  <indutny>ryah: it's not documented, but used inside Chrome's Dev Tools
17:48:26  <indutny>ryah: btw, http://github.com/indutny/node/tree/feature-debugger-round-4
17:48:42  <ryah>igorzi: write to a volatile? if you say so
17:48:57  <bnoordhuis>yes, that works
17:49:00  <ryah>igorzi: i dont know what effect that has on the generated code for a single threaded app
17:49:06  <bnoordhuis>it prevents the optimizer from getting smart
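
The trick in miniature, the same shape piscisaureus pastes later in the log: a volatile loop counter forces the compiler to emit every load and store, so the loop survives optimization instead of being folded to a constant:

    static void burn_cycles(int n) {
      volatile int i;   /* volatile: each access must actually happen */
      for (i = n; i > 0; i--)
        ;               /* empty body; the counter traffic is the work */
    }
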
17:50:23  <ryah>indutny: but how would it work for the user?
17:50:38  <indutny>ryah: users will type 'live' in debugger's repl
17:50:52  <indutny>ryah: and all current active scripts will be monitored
17:50:56  <bnoordhuis>ryah: scrum call in 10, don't forget
17:51:01  <indutny>ryah: and hot-swapped on file change
17:51:04  <ryah>bnoordhuis: thanks :)
17:51:29  <ryah>indutny: and if they can't be hot-swapped? would you automatically restart?
17:51:31  <indutny>ryah: until they type 'unlive' or 'dead'
17:51:38  <indutny>ryah: w/o restarts
17:51:43  <piscisaureus>igorzi: pls ping me when 7 is back up
17:51:49  <ryah>indutny: liveedit doesn't work all the time
17:51:55  <indutny>ryah: ah, yeah
17:52:08  <indutny>ryah: probably better do nothing and print warning
17:52:30  <ryah>indutny: btw see http://markmail.org/message/dlcoanxixrurdpry
17:52:52  <indutny>ryah: oh, cool! I tried to find it
17:54:13  <ryah>indutny: i dont like the idea.
17:54:31  <ryah>seems confusing
17:55:23  <indutny>ryah: yep, looks like so...
17:56:08  <indutny>ryah: ok, going to sleep. please ping me here or by email if you've any questions about round-4 changes
17:56:11  <indutny>ttyl
17:59:30  <piscisaureus>hmm. time sheets...
18:11:54  <bnoordhuis>i've been putting that off since i joined joyent...
18:18:20  <piscisaureus>I am bad at it too. But I really must do it now.
18:30:26  <piscisaureus>https://github.com/joyent/node/issues/1553
18:31:12  <ryah>java uses SIGQUIT for this debug output
18:31:48  <igorzi>piscisaureus: nsdj-win7 is back up
18:33:01  <ryah>http://download.oracle.com/docs/cd/E19455-01/806-1367/6jalj6mv1/index.html
18:33:12  <bnoordhuis>good ol' ctrl-\
18:33:31  <igorzi>piscisaureus: http://video.ch9.ms/build/2011/slides/SAC-593T_Briggs.pptx
18:33:34  <ryah>(i'd like to have this functionality once we have domains - list all the current handles/domains)
18:33:43  <igorzi>piscisaureus:^--- slides about RIO sockets from build
18:34:36  <piscisaureus>igorzi: thnx 2x
18:34:37  * bnoordhuis is off to dinner
18:42:00  <ryah>piscisaureus: http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1/ <-- good read about gdb on linux
18:42:35  <ryah>doesn't say how it attaches to another process though..
18:43:21  <ryah>"Other signals used by the JVM are for internal control purposes and do not cause it to terminate. The only control signal of interest is SIGQUIT (on Unix type platforms) and SIGBREAK (on Windows), which cause a Java core dump to be generated."
18:43:25  <ryah>http://www.ibm.com/developerworks/java/library/i-signalhandling/
18:43:50  <ryah>i guess this is CTRL_BREAK_EVENT
18:46:25  <piscisaureus>ryah: http://msdn.microsoft.com/en-us/library/windows/desktop/ms679303%28v=VS.85%29.aspx
18:46:53  <piscisaureus>igorzi: I can't tell from those slides whether this makes 0-reads obsolete
18:47:10  <piscisaureus>igorzi: they should make it possible for multiple sockets to share a receive buffer pool
18:47:15  <piscisaureus>that'd be awesome
18:48:17  <piscisaureus>Actually, if the RIO socket api designers did *not* solve this problem I have lost my trust in them already -
18:48:17  <piscisaureus>it would mean they never ever wrote a C10k application
18:48:50  <ryah>piscisaureus: oh nice
18:50:09  <ryah>piscisaureus: are you against using the unix signal terminology in libuv?
18:50:18  <ryah>e.g. "SIGINT" instead of "CTRL+C" ?
18:50:53  <ryah>i think we should use the unix signal names
18:52:12  <ryah>and i guess we can do kill() with http://msdn.microsoft.com/en-us/library/ms683155.aspx
18:53:29  <DrPizza>ryah: I think CTRL_C_EVENT -> SIGINT, CTRL_BREAK_EVENT -> SIGHUP, CTRL_CLOSE_EVENT -> SIGTERM
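
A sketch of that mapping, assuming a handler registered with SetConsoleCtrlHandler; the dispatch function is hypothetical glue, and SIGHUP isn't in MSVC's <signal.h>, so it's defined here by assumption:

    #include <windows.h>
    #include <signal.h>

    #ifndef SIGHUP
    #define SIGHUP 1  /* unix value; not provided by the MSVC headers */
    #endif

    static volatile int last_signal;

    /* Hypothetical glue: record the signal for the event loop to pick up. */
    static void dispatch_signal(int signum) { last_signal = signum; }

    static BOOL WINAPI console_ctrl_handler(DWORD event) {
      switch (event) {
        case CTRL_C_EVENT:     dispatch_signal(SIGINT);  return TRUE;
        case CTRL_BREAK_EVENT: dispatch_signal(SIGHUP);  return TRUE;
        case CTRL_CLOSE_EVENT: dispatch_signal(SIGTERM); return TRUE;
        default:               return FALSE; /* fall through to default handling */
      }
    }

    /* Registered once at startup: SetConsoleCtrlHandler(console_ctrl_handler, TRUE); */
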
18:54:25  <DrPizza>piscisaureus: c10k already works fine with overlapped I/O
18:54:45  <ryah>DrPizza: if you do zero reads :)
18:54:49  * dmkbot quit (Remote host closed the connection)
18:54:53  <piscisaureus>DrPizza: no it doesn't - you need 0-reads
18:54:55  * dmkbot joined
18:54:59  <piscisaureus>what ryah said
18:55:03  <DrPizza>what do you mean, "need"?
18:55:41  <piscisaureus>If you don't you are wasting a lot of memory
18:56:05  <piscisaureus>10k sockets * 64 kb read buffers = 640mb
18:56:12  <DrPizza>memory is cheap
18:56:34  <piscisaureus>not that cheap
18:57:15  <DrPizza>if you're spending a thousand bucks on a windows license
18:57:26  <DrPizza>you can afford enough memory to have 64 kiB per connection
18:57:30  <ryah>640mb isn't bad for 10k clients, but it's kind of sad that a linux program can do the same (and perform better) with 3MB
18:57:41  <piscisaureus>c'mon - they should just fix that problem
18:57:46  <piscisaureus>it's not impossible
18:57:52  <DrPizza>I don't think it is a problem
18:58:10  <DrPizza>Ithink "read as much as you can into this buffer" is a perfectly reasonable model
18:58:16  <piscisaureus>just give the kernel a pool of buffers - when data comes in, pick a buffer from the pool and return that
18:58:30  <piscisaureus>^-- much better. not difficult to implement.
18:58:45  <DrPizza>that's what RIO does
18:58:51  <DrPizza>you carve off a big block of memory
18:58:54  <DrPizza>then hand it to the kernel
18:59:21  <DrPizza>the downside appears to be that the memory then has to be pinned
18:59:21  <piscisaureus>DrPizza: multiple sockets share the same memory?
18:59:30  <piscisaureus>DrPizza: that is also true for iocp
18:59:41  <DrPizza>iocp doesn't require the whole block to be pinned
18:59:45  <piscisaureus>it does
18:59:50  <DrPizza>no it doesn't
18:59:56  <DrPizza>it doesn't DMA into it (apparently)
19:00:10  <piscisaureus>it still locks the memory block into the non-paged pool
19:00:18  <piscisaureus>I don't say it has to do this - it just does.
19:01:05  <DrPizza>????
19:01:12  <DrPizza>IOCP DMAs into a kernel buffer
19:01:19  <DrPizza>then memcpy()s to the user buffer
19:01:32  <piscisaureus>DrPizza: that's true - but it still locks the user buffer into physical memory
19:01:38  <DrPizza>(according to the RIO slides, anyway)
19:02:42  <DrPizza>different kind of pinning
19:03:08  <piscisaureus>DrPizza: what kind of pinning are you talking about then?
19:05:23  <DrPizza>RIO uses privileged (SeLockPages or whatever it is) long-term pinning, so that it can DMA directly to the user memory, but IIRC such pages don't go in the NPP.
19:06:56  <piscisaureus>well - being able to DMA into a buffer kind of implies the pages to be nonpaged or?
19:07:10  <DrPizza>IOCP doesn't DMA into the user buffer
19:07:17  <DrPizza>I thought it did, but these slides say otherwise
19:07:39  <piscisaureus>it doesn't really surprise me
19:07:41  <DrPizza>but maybe I am misinterpreting these slides
19:07:56  <DrPizza>but if you look at slide 13
19:08:13  <piscisaureus>yes - I interpret that the same as you do
19:08:14  <DrPizza>it DMAs into one buffer, then memcpy()s into the user buffer
19:08:19  <piscisaureus>But it doesn't matter
19:08:52  <piscisaureus>The problem is that IOCP non-0-reads waste memory - and RIO apparently still does
19:08:55  <piscisaureus>that's just sad
19:09:11  <DrPizza>how does RIO waste memory?
19:09:49  <piscisaureus>The same way as IOCP non-0-reads waste memory
19:10:15  <piscisaureus>RIOReceive takes a RIO_RQ
19:10:49  <piscisaureus>... and a buffer
19:11:04  <piscisaureus>a RIO_RQ is pegged to one socket
19:12:43  <piscisaureus>So maybe I should e-mail one of Ben Schultz, Osman Ertugay, Ed Briggs
19:14:12  <DrPizza>but the RIO_RQ isn't tied to a particular RIO buffer
19:14:44  <DrPizza>the rio buffer can be shared amongst all sockets
19:15:11  <piscisaureus>DrPizza: but you have to call RioReceive for every socket - and the RioReceive call takes a buffer
19:15:33  <DrPizza>no, it takes a portion of a buffer
19:15:48  <DrPizza>RIO_BUF is used to break up the one big rio buffer into chunks for each I/O
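
What that carving looks like, as a heavily hedged sketch: the RIO entry points come from a RIO_EXTENSION_FUNCTION_TABLE obtained via WSAIoctl (not shown), the API here is the Build 2011 preview, and the offset bookkeeping is assumed:

    #include <winsock2.h>
    #include <mswsock.h>  /* RIO types, Windows 8 SDK preview */

    static void post_shared_receive(RIO_EXTENSION_FUNCTION_TABLE rio,
                                    RIO_RQ rq, ULONG offset) {
      /* One big registered region shared by all sockets (done once in reality). */
      static char* region;
      static RIO_BUFFERID id;
      if (region == NULL) {
        region = (char*) VirtualAlloc(NULL, 4 << 20,
                                      MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        id = rio.RIORegisterBuffer(region, 4 << 20);
      }

      /* Carve a per-operation slice out of the shared registration. */
      RIO_BUF slice;
      slice.BufferId = id;
      slice.Offset = offset;     /* caller's bookkeeping, assumed */
      slice.Length = 64 * 1024;  /* the 64 kB per receive under discussion */

      rio.RIOReceive(rq, &slice, 1, 0, NULL);
    }
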
19:15:48  <piscisaureus>meh
19:15:56  <piscisaureus>it doesn't change anything
19:16:12  <piscisaureus>you still have to dedicate 64kb of memory for 1 socket receive operation
19:16:28  <piscisaureus>and have at least 1 receive operation per socket
19:16:46  <DrPizza>why 64 kiB?
19:16:52  <piscisaureus>whether this is a portion of a big buffer or something separate makes no difference to me
19:16:59  <DrPizza>you can create RIO_BUFs of less than page granularity
19:17:03  <piscisaureus>sure
19:17:10  <piscisaureus>you can read 1 byte at a time
19:17:15  <piscisaureus>goodbye, performance
19:23:25  <DrPizza>instead of doing 64 kiB reads, I would think you'd want to try RWIN-sized reads, or maybe even RWIN * # of delayed acks
19:28:53  <igorzi>ryah: does the debugger stuff (that indutny has been working on) work on windows?
19:53:21  <ryah>igorzi: yeah
19:53:35  <ryah>igorzi: node debug somescript.js
19:53:38  <igorzi>ryah: yeah, i just tried.. this is very cool!
19:54:20  <ryah>im looking at the rio docs - but i don't really understand - does it still work with GetQueuedCompletionStatus?
19:54:30  <ryah>or do you have to poll some other way?
19:57:44  <ryah>http://www.infoq.com/articles/multi-core-node-js <-- why would they do this without asking us about it?
19:58:13  <ryah>i mean none of these people are involved in node development
19:58:56  <ryah>well - whatever
19:59:56  <piscisaureus>ryah: it meshes well with GetQueuedCompletionStatus so it seems
20:00:08  <piscisaureus>ryah: you can just request to be notified through iocp
20:00:33  <igorzi>i think you can still use GetQueuedCompletionStatus or poll RIODequeueCompletion
20:00:46  <piscisaureus>it was in the slides
20:08:39  <piscisaureus>ryah: I can't figure out what this "node multi-core" is. Are they working on something together?
20:08:59  <DrPizza>yeah, you either GQCS if it's bound to an IOCP, wait for an event if it's bound to an event, or poll.
20:09:04  <ryah>piscisaureus: no - they're just talking about their different wrappers around sendfd
20:10:01  <ryah>i wonder if the benchmarks they're reporting are using IOCP or the RIODequeueCompletion?
20:10:08  <DrPizza>you use both
20:10:13  <DrPizza>well I mean
20:10:27  <DrPizza>you use RIODequeuecompletion regardless of whether you're polling or using IOCP
20:11:01  <ryah>i guess you also have the option of using RIODequeueCompletion alone (if you're only doing socket stuff)
20:11:16  <ryah>"May provide the highest performance and lowest latency, but high CPU utilization"
20:11:19  <DrPizza>yes
20:11:40  <piscisaureus>because it's really polling
20:11:44  <piscisaureus>?
20:11:55  <DrPizza>yes, riodqueuecompletion doesn't wait
20:12:01  <DrPizza>it just tries to dequeue if there's anything to dequeue
20:12:14  <piscisaureus>so they're saying: a busyloop provides the best performance
20:12:15  <piscisaureus>:-/
20:12:17  <DrPizza>so you can just call it in a loop with YieldProcessor or Sleep(0) or w/e
20:12:19  <piscisaureus>wow
20:12:21  <ryah>it doesn't block?
20:12:25  <DrPizza>no
20:12:38  <DrPizza>ryah: if you want itt o block, you wait on the IOCP first
20:12:41  <DrPizza>then dequeue.
20:12:46  <DrPizza>or wait on the event first
20:12:48  <DrPizza>then dequeue
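
The blocking variant DrPizza describes, sketched with the same hedges as above (function table via WSAIoctl not shown; the completion queue is assumed to have been created with IOCP notification):

    static void rio_loop(HANDLE iocp, RIO_CQ cq, RIO_EXTENSION_FUNCTION_TABLE rio) {
      RIORESULT results[64];
      for (;;) {
        DWORD bytes; ULONG_PTR key; LPOVERLAPPED ov;
        rio.RIONotify(cq);  /* arm: the IOCP fires when entries arrive */
        GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE);
        ULONG n = rio.RIODequeueCompletion(cq, results, 64); /* never blocks */
        for (ULONG i = 0; i < n; i++) {
          /* handle results[i].BytesTransferred / results[i].RequestContext */
        }
      }
    }
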
20:13:03  <piscisaureus>hmm
20:13:55  <piscisaureus>this quote: "Windows Server 2008R2 sustains ~2 Million datagrams per second"
20:14:02  <piscisaureus>I wonder how they do it
20:14:29  <piscisaureus>I never managed to even achieve 10% of that
20:14:56  <DrPizza>well they may have 10x the processors
20:15:02  <piscisaureus>heh
20:15:10  <DrPizza>or more!
20:15:14  <piscisaureus>it's cheating
20:15:16  <DrPizza>although who knows
20:15:58  <piscisaureus>Windows XP can send 30 million search requests per second to google
20:16:30  <piscisaureus>Just takes a few continents packed with xp computers
20:24:28  <CIA-53>node: Fedor Indutny new-tty-binding * r86f8701 / lib/_debugger.js : [debugger] fix 'debug> connecting...', fixed autostart (XXX figure out why it wasn't working in some cases), fixed highlighting for first line of module's code - http://git.io/S5Sxng
20:24:28  <CIA-53>node: Fedor Indutny new-tty-binding * r13c0156 / lib/_debugger.js : [debugger] optimize context's properties initialization, make 'list' a function, not a getter - http://git.io/i-verg
20:24:28  <CIA-53>node: Fedor Indutny new-tty-binding * rf1135b9 / lib/_debugger.js : [debugger] shorten break message - http://git.io/m_sbnQ
20:24:33  <ryah>fuck..
20:25:06  <ryah>^-- force pushing this branch
20:25:45  <piscisaureus>volatile int fubar;
20:25:45  <piscisaureus>for (fubar = 50000; fubar > 0; fubar--) {}
20:25:45  <piscisaureus>^-- that's what we need to do
20:28:02  <DrPizza>do or what
20:28:05  <DrPizza>er, for what
20:28:07  <DrPizza>burning cycles?
20:28:09  <CIA-53>libuv: Igor Zinkovsky file_watcher * r1e0757f / (9 files in 5 dirs): windows: file watcher - http://git.io/YFVNlw
20:28:09  <CIA-53>libuv: Ben Noordhuis file_watcher * r2a1c32a / (6 files in 3 dirs): linux: implement file watcher API - http://git.io/xZMwxQ
20:28:24  <CIA-53>node: Fedor Indutny master * r3148f14 / lib/_debugger.js : [debugger] fix 'debug> connecting...', fixed autostart (XXX figure out why it wasn't working in some cases), fixed highlighting for first line of module's code - http://git.io/3lvlXQ
20:28:25  <CIA-53>node: Fedor Indutny master * r79fd1f7 / lib/_debugger.js : [debugger] optimize context's properties initialization, make 'list' a function, not a getter - http://git.io/q0KUIA
20:28:25  <CIA-53>node: Fedor Indutny master * r8efe7a8 / lib/_debugger.js : [debugger] shorten break message - http://git.io/zrQp_A
20:28:37  * isaacs quit (Quit: isaacs)
20:29:25  <igorzi>bnoordhuis: i rebased your and my changes against master, and put them into https://github.com/joyent/libuv/tree/file_watcher.
20:29:38  <igorzi>bnoordhuis: can you pls verify that linux stuff is correct and works?
20:39:54  <ryah>guys - i need something like: int uv_is_pipe(int fd)
20:40:11  <ryah>in node we test if stdout is a tty, a pipe, or a file
20:40:28  <ryah>and switch how we deal with output differently
20:40:55  <ryah>i think in windows you will never have a pipe for stdout which is attached to an iocp?
20:42:02  <ryah>int uv_is_stdout_blocking() would also work
20:42:08  <ryah>but im trying to be more general...
20:42:22  <DrPizza>you could couldn't you?
20:42:45  <DrPizza>if you were chaining node instances for example, they'd use iocp-attached pipes, wouldn't they?
20:43:04  <igorzi>yeah, most likely the stdout pipe is created without FILE_FLAG_OVERLAPPED (for non-node child processes)
20:43:04  <ryah>"node script.js | cat"
20:43:08  <ryah>^-- what will stdout be?
20:43:17  <piscisaureus>pipe
20:43:20  <DrPizza>probably a blocking pipe
20:43:30  <piscisaureus>anonymous pipe even, they are always blocking
20:43:41  <DrPizza>piscisaureus: they can probably be reopened, can't they?
20:43:54  <piscisaureus>don't know
20:43:54  <DrPizza>to make them overlapped
20:44:04  <ryah>in the case that stdout is not a tty - we should always use uv_fs_write() for stdout - correct?
20:44:07  <ryah>(on windows)
20:44:29  <piscisaureus>ryah: maybe not always - but I guess it's fine for now
20:44:43  <DrPizza>ryah: require('child_process').exec('node') (or whatever it is) would spawn node with an overlapped pipe on stdout, I think
20:44:45  <ryah>this is what's currently happening
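
A sketch of what such a check might look like; not the eventual libuv API, just the obvious primitives on each platform:

    #ifdef _WIN32
    #include <windows.h>
    #include <io.h>

    /* Hypothetical helper: classify what stdout (or any fd) points at. */
    static const char* fd_kind(int fd) {
      switch (GetFileType((HANDLE) _get_osfhandle(fd))) {
        case FILE_TYPE_CHAR: return "tty/console";
        case FILE_TYPE_PIPE: return "pipe";
        case FILE_TYPE_DISK: return "file";
        default:             return "unknown";
      }
    }
    #else
    #include <sys/stat.h>
    #include <unistd.h>

    static const char* fd_kind(int fd) {
      struct stat st;
      if (isatty(fd)) return "tty";
      if (fstat(fd, &st) != 0) return "unknown";
      if (S_ISFIFO(st.st_mode)) return "pipe";
      if (S_ISREG(st.st_mode)) return "file";
      return "unknown";
    }
    #endif
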
20:44:49  <piscisaureus>ryah: bnoordhuis: igorzi: win-prefork with fake work (https://gist.github.com/1233242)
20:44:49  <piscisaureus>1 process: 4727 r/s
20:44:49  <piscisaureus>2 processes: 9062 r/s
20:44:49  <piscisaureus>4 processes: 15821 r/s
20:44:49  <piscisaureus>8 processes: 23858 r/s
20:44:50  <piscisaureus>16 processes: 22895 r/s
20:45:58  <DrPizza>ryah: I think that sohuld be reasonable, yes
20:47:48  <igorzi>piscisaureus: do you know what happens between 4 and 8 processes? where does it max out?
20:48:02  <piscisaureus>igorzi: no - let me try
21:04:03  <ryah>piscisaureus, igorzi, DrPizza: you okay with these functions https://gist.github.com/1233308 ?
21:05:19  <DrPizza>I'd rather have a struct terminal_size { int width; int height; }, but I guess that's taste
21:06:48  <piscisaureus>ryah: (1) do we have a UV_FILE type atm? (2) DrPizza is kind of right, the width-height ordering is not necessarily clear. uv_tty_get_winsize(uv_tty_t*, int* w, int* h) also works.
21:06:48  <piscisaureus>igorzi:
21:06:48  <piscisaureus> processes: 18601
21:06:48  <piscisaureus>6 processes: 21158
21:06:48  <piscisaureus>7 processes: 21675
21:06:59  <DrPizza>after all, a standard terminal is 80x25, not 25x80
21:07:47  <ryah>yes we have UV_FILE - okay i'll change the sig for uv_tty_get_winsize()
21:09:54  <ryah>terminals are so painful
21:10:06  <DrPizza>agreed
21:10:12  <DrPizza>node should use a GUI instead
21:10:16  <DrPizza>:p
21:11:29  <ryah>piscisaureus: it that an 8 core box?
21:11:33  <piscisaureus>yes
21:11:49  <piscisaureus>ryah: you know which box it is :-/
21:12:22  <ryah>yeah sorry :)
21:14:07  <piscisaureus>so if this benchmark is good 8 nodes should be able to handle 24K r/s on this box
21:15:28  <ryah>piscisaureus: so. sendfd api
21:15:50  <ryah>piscisaureus: you're doing this over a named pipe, correct?
21:16:05  <piscisaureus>ryah: yes, but that doesn't really matter
21:16:24  <ryah>piscisaureus: should we add another "channel" to uv_spawn?
21:16:27  <ryah>for sending stuff?
21:16:35  <ryah>can use it for cp.fork() too
21:16:39  <piscisaureus>ryah: I guess
21:16:42  <piscisaureus>yes
21:17:02  <piscisaureus>ryah: it needs to be a special UV_XXX type
21:17:50  <ryah>piscisaureus: right - so is your thing going to work with non-sockets as well?
21:18:09  <piscisaureus>ryah: it could - but it would need extra work
21:18:19  <ryah>does it work with non-servers?
21:18:31  <piscisaureus>ryah: yes, that too needs extra work
21:18:53  <piscisaureus>ryah: somehow it only works with servers if the root process calls listen() first
21:19:04  <ryah>piscisaureus: what if we add a "bool use_channel" to process_options
21:19:31  <ryah>piscisaureus: then have something like uv_send_socket(uv_process_t*, uv_tcp_t*)
21:19:52  <piscisaureus>ryah: how would you send user data over the channel?
21:20:13  <ryah>uv_ipc(uv_process_t*, uv_buf_t*)
21:20:18  <ryah>or something
21:20:24  <ryah>hm
21:20:33  <ryah>how do we recv it ?
21:20:38  <piscisaureus>:-)
21:21:08  <piscisaureus>ryah: I think we considered mimicking the accept api for receiving fd's
21:21:38  * isaacs joined
21:22:46  <piscisaureus>let me think - there may be other ways to do this
21:22:54  <ryah>uv_default_loop()->channel
21:23:22  <ryah>hm
21:23:23  <piscisaureus>that would be the channel to the parent process right?
21:23:27  <ryah>yeah
21:24:24  <ryah>somehow you want to be able to test if you're a child of a libuv process
21:24:46  <piscisaureus>and if it is interested in using the channel :-)
21:25:00  <piscisaureus>I mean, conceivably people might use libuv but not care about the channel
21:26:30  * piscisaureus thinks (beware)
21:27:32  * piscisaureus thinks (http://www.youtube.com/watch?v=v-ayGu0PYJA)
21:31:42  <ryah>uv_ipc(uv_process_t*, uv_buf_t, uv_tcp_t*); uv_default_loop()->on_ipc; typedef uv_buf_t (*uv_ipc_cb)(uv_stream_t* channel, size_t suggested_size);
21:31:52  <ryah>^-- combine uv_alloc_cb and uv_connection_cb ?
21:32:21  <ryah>hmm.
21:32:41  <ryah>using uv_accept would be nice
21:32:44  <ryah>for integrating with node
21:33:14  <piscisaureus>ryah: is it possible (or needed) to enable/disable sendfd on unix?
21:33:32  <piscisaureus>ryah: what happens if someone sends you an fd but you're not interested?
21:34:24  <ryah>piscisaureus: you can ignore it
21:34:37  <ryah>i think that fd will still be opened in your process
21:35:58  <piscisaureus>ryah: I think ideally we should just make it a stream type
21:36:11  <ryah>piscisaureus: fine by me
21:36:29  <piscisaureus>ryah: it should be able to listen() etc
21:36:34  <piscisaureus>and also read_start
21:36:40  <piscisaureus>on both ends
21:36:58  <ryah>piscisaureus: when the child recvs it - does the child alloc the space for it or libuv?
21:37:11  <piscisaureus>ryah: that's the only open question indeed
21:37:12  <ryah>piscisaureus: what's the windows API you use on the recving end?
21:37:31  <piscisaureus>ryah: ReadFile plus a protocol parser
21:38:00  <ryah>piscisaureus: you encode the HANDLE pointer?
21:38:22  <piscisaureus>ryah: I don't really understand what you mean, sorry
21:38:35  <DrPizza>ryah: duplicatehandle into the child process, then tell the child process the integer handle value
21:38:36  <ryah>eh - in your protocol - do you write out the HANDLE
21:38:47  <ryah>over "the wire"
21:38:52  <piscisaureus>ryah: yes, for files/pipes I would do that
21:39:17  <ryah>that's actually a nice API
21:39:22  <ryah>i like that better than the unix way
21:39:25  <piscisaureus>ryah: for sockets I would send a WSAPROTOCOL_INFOW structure
21:39:28  <DrPizza>what's the unix way?
21:39:52  <ryah>there's a special struct that you attach to your message
21:39:59  <DrPizza>ah
21:40:17  <piscisaureus>I am not sure it's better than the unix way
21:40:20  <ryah>the fd on the recving side can be different
21:40:27  <piscisaureus>on windows too
21:40:46  <piscisaureus>you have to call DuplicateHandle to obtain a handle that's valid in the receiving process
21:40:58  <ryah>piscisaureus: ah
21:41:05  <DrPizza>yes
21:41:23  <DrPizza>you then pass the value of that handle to the child through some app-specific mechanism
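
For reference, the winsock socket-duplication dance being described, as a sketch only; how the WSAPROTOCOL_INFOW bytes travel to the child (bert's pipe protocol) is out of scope here:

    #include <winsock2.h>

    /* Parent: serialize the socket for one specific target process. */
    static int socket_to_info(SOCKET s, DWORD child_pid, WSAPROTOCOL_INFOW* info) {
      return WSADuplicateSocketW(s, child_pid, info);  /* 0 on success */
    }

    /* Child: rebuild a usable socket from the received structure. */
    static SOCKET socket_from_info(WSAPROTOCOL_INFOW* info) {
      return WSASocketW(FROM_PROTOCOL_INFO, FROM_PROTOCOL_INFO,
                        FROM_PROTOCOL_INFO, info, 0, WSA_FLAG_OVERLAPPED);
    }
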
21:41:50  <DrPizza>piscisaureus: it's kind of annoying that you can't just duplicatehandle sockets
21:42:09  <piscisaureus>yes
21:42:13  <DrPizza>do people actually have socket providers that don't have corresponding kernel HANDLEs?
21:42:21  <piscisaureus>no
21:42:24  <DrPizza>obviously that's not the case for IP
21:42:32  <DrPizza>so really there's no good reason for it
21:42:35  <piscisaureus>it never happens
21:42:48  <DrPizza>it's just that a socket provider can theoretically have non-HANDLE sockets
21:42:52  <piscisaureus>but winsock also keeps some socket information in user address space
21:42:58  <piscisaureus>I think that's what gets fucked up
21:43:02  <DrPizza>hmm
21:43:09  <ryah>i think the child process should alloc the socket
21:43:10  <DrPizza>that may be true I suppose
21:43:28  <ryah>it plays better with node... because we have the structs in our wrap classes
21:43:29  <DrPizza>piscisaureus: but if they mandated that every socket were a HANDLE, I'm sure they'd figure something out
21:43:49  <piscisaureus>ryah: we should pretend it's some kind of 4th stdio handle but with a specific type
21:44:13  <ryah>piscisaureus: yeah sure. one that you can call uv_listen on?
21:44:14  <DrPizza>piscisaureus: you could even pass it as stdio handle 4.
21:44:30  <ryah>piscisaureus: and uv_read_start() :)
21:44:30  <DrPizza>piscisaureus: though it would mean filling in that "reserved" field when calling CreateProcess
21:44:36  <piscisaureus>ryah: exactly
21:44:56  <piscisaureus>DrPizza: yes I am considering that but it's an implementation detail
21:44:58  <ryah>what if someone calls uv_listen but not uv_read_start?
21:45:05  <piscisaureus>ryah: hmm
21:45:17  <piscisaureus>ryah: we send the MI6 after him?
21:45:59  <ryah>it seems we need some sort of combined listen/read
21:46:09  <piscisaureus>ryah: you have a point.
21:46:17  <ryah>on unix too - we must read data when we recv fds
21:46:27  <piscisaureus>although we could also just not emit connection_cb when the user isn't reading
21:46:55  <ryah>hm - that might work
21:47:15  <piscisaureus>in that case we could have uv_readfd_start(uv_ipc_t*, uv_read_cb*, uv_connection_cb*, uv_alloc_cb*)
21:47:43  <ryah>oh god
21:48:01  <ryah>it looks a bit horrible
21:48:08  <piscisaureus>I agree
21:48:31  <ryah>uv_readfd_start(uv_ipc_t*, uv_alloc_cb*, uv_readfd_cb*)
21:48:41  <piscisaureus>heh
21:48:46  <piscisaureus>now specify uv_readfd_cb
21:49:44  <ryah>typedef void (*uv_readfd_cb)(uv_ipc_t*, ssize_t nread, uv_buf_t, bool pending_connection)
21:51:17  <piscisaureus>on unix, if you want to send an fd, you need to send data too
21:51:25  <piscisaureus>are we going to maintain this restriction?
21:51:39  <ryah>or - how about using normal uv_read_start, uv_read_stop but adding: uv_handle_type uv_ipc_pending_connection(uv_ipc_t*)
21:52:25  <piscisaureus>pending_connection seems off, but that'd work
21:52:42  <piscisaureus>so the user could poll from uv_read_cb right?
21:52:44  <ryah>somehow you need to know which type before you accept
21:52:50  <ryah>yeah
21:52:57  <piscisaureus>that's also a problem for the on_connection callback
21:53:01  <piscisaureus>we should pass in the type somehow
21:53:34  <ryah>so in node we currently force people to write a buffer when they send fds
21:53:50  <piscisaureus>yes
21:54:04  <piscisaureus>we can maintain this restriction
21:54:18  <piscisaureus>although on windows there's no need to
21:54:34  <ryah>i mean - on the unix side we could also have our own protocol
21:54:42  <ryah>which says if there is a buffer or not :)
21:54:49  <ryah>that'll take up one byte
21:54:52  <piscisaureus>heh
21:54:54  <piscisaureus>sure
21:55:40  <piscisaureus>ryah: is there some kind of guaranteed ordering of the fd and message
21:55:43  <ryah>ok let me gist out what we've just talked about...
21:55:51  <piscisaureus>ryah++
22:01:18  <piscisaureus>this going to be a nice api for node users
22:01:44  <piscisaureus>we should not send FDs to the other side - send a wrap
22:02:55  <ryah>piscisaureus: strawman https://gist.github.com/1233459
22:03:37  <ryah>oops missing the most important part..
22:03:57  <piscisaureus>pending_connection should be renamed to something like pending_handle or pending_stream
22:05:29  <ryah>updated
22:06:37  <piscisaureus>maybe uv_ipc_pending_connection should take an uv_stream_t
22:07:14  <ryah>oh, actually i think it should take no args
22:07:21  <piscisaureus>hmm
22:07:43  <ryah>hm
22:07:44  <igorzi>do we want to see what this (fake work) looks like with multi-threaded server?
22:07:45  <ryah>:/
22:07:54  <piscisaureus>what happens if the user doesn't pick up the pending handle?
22:08:04  <ryah>igorzi: would be interesting - if it's not too much trouble :D
22:08:11  <piscisaureus>igorzi: is it a lot of work or a little?
22:08:27  <igorzi>little, i'll get the numbers for completion :)
22:08:51  <piscisaureus>igorzi: do it. I am most interested in the numbers for 1 process and 8 processes
22:08:57  <piscisaureus>that should give a good indication
22:08:58  <igorzi>piscisaureus: where did you add the loop? in read_cb?
22:09:14  <piscisaureus>igorzi: yes, I do it just before calling uv_write
22:15:10  <ryah>piscisaureus: we're going to have to have a queue of pending handles, i think
22:15:28  <ryah>piscisaureus: or throw them away if the user doesn't accept them
22:16:09  <piscisaureus>ryah: uv_pending_handle needs to take a parameter
22:16:14  <ryah>why
22:16:18  <piscisaureus>or it won't play nice with multiplicity
22:16:24  <ryah>https://gist.github.com/1233459 <-- updated
22:16:35  <piscisaureus>good
22:17:07  <ryah>i like this...
22:17:30  <piscisaureus>... ?
22:18:14  <ryah>i think this api is looking okay
22:18:20  <piscisaureus>:-)
22:18:20  <piscisaureus>nice
22:18:25  <piscisaureus>now the implementation
22:18:46  <piscisaureus>re: * Called from child.
22:18:57  <piscisaureus>the child would also be able to send an fd to the parent right?
22:19:02  <ryah>let's get feedback from bnoordhuis and igorzi about this API
22:19:13  <ryah>hm
22:19:15  <ryah>i guess so...
22:19:45  <bnoordhuis>igorzi: your file watcher branch works fine on linux, go ahead and merge it
22:19:58  <piscisaureus>igorzi is busy :-)
22:21:22  <ryah>so we wont use uv_listen() with uv_ipc_t , right?
22:21:26  * mralephjoined
22:21:39  <piscisaureus>no
22:21:46  <ryah>k
22:22:04  <piscisaureus>ryah: do we guarantee anything about the order that FDs and data appears in
22:22:34  <igorzi>piscisaureus: ryah: i think i'm missing some context.. why can't we just pass the socket (handle) to the other process through the child process's stdin? (like fd passing on unix)
22:23:13  <piscisaureus>ryah: right now uv_ipc_send doesn't take data so the client sees *either* data *or* a stream, or the relationship between data and stream is unclear
22:24:14  <ryah>piscisaureus: they'll come in different packets
22:24:42  <ryah>igorzi: well bert wants to have his own protocol on top of the stream
22:24:54  <piscisaureus>igorzi: that would be possible but users might want to use stdin for something else.
22:24:54  <piscisaureus>igorzi: also, on unix FDs don't need to be multiplexed with other data, but on windows this is needed, so we have to layer our own protocol on top of the stream
22:25:13  <piscisaureus>i don't want this - I just don't see a reasonable alternative
22:26:01  <piscisaureus>ryah: what if the user wants to send some context with the FD? Like, what the purpose of it is
22:26:49  <ryah>piscisaureus: yeah.. that's a good use-case
22:27:23  <ryah>piscisaureus: they can always send a message before the fd
22:27:34  <ryah>piscisaureus: with app-layer info that says: "im sending a socket now!"
22:27:39  <igorzi>piscisaureus: the socket is only sent when the child process starts up, right? after that there's no more communication (for passing sockets)?
22:27:45  <ryah>then the next fd to be recved can be interpreted that way
22:28:01  <piscisaureus>igorzi: no - we can just send sockets whenever the user wants to
22:28:31  <piscisaureus>igorzi: the ipc pipe however is established at startup
22:29:16  <igorzi>piscisaureus: how does it work with unix fd-passing? can the fd be sent whenever the user wants to?
22:29:25  <piscisaureus>igorzi: yes
22:29:45  <igorzi>piscisaureus: and how does the multiplexing work there?
22:30:17  <ryah>igorzi: the fd is ancillary to the data
22:30:20  <piscisaureus>igorzi: the fd is sent as some kind of out-of-band data
22:31:09  <CIA-53>libuv: Igor Zinkovsky master * r1e0757f / (9 files in 5 dirs): windows: file watcher - http://git.io/YFVNlw
22:31:09  <piscisaureus>ryah: this is undocumented btw - I can't find it here - http://nodejs.org/docs/v0.5.7/api/net.html
22:31:10  <CIA-53>libuv: Ben Noordhuis master * r2a1c32a / (6 files in 3 dirs): linux: implement file watcher API - http://git.io/xZMwxQ
22:31:50  <ryah>http://linux.die.net/man/2/recvmsg <- see msg_control
22:31:51  <piscisaureus>oh heh
22:31:53  <piscisaureus>http://nodejs.org/docs/v0.5.7/api/streams.html
22:32:02  <igorzi>ok, i get the need for the protocol now
22:32:14  <piscisaureus>http://nodejs.org/docs/v0.5.7/api/streams.html#stream.write and http://nodejs.org/docs/v0.5.7/api/streams.html#event_fd_
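
The unix side for comparison: the standard SCM_RIGHTS dance over the msg_control buffer from the recvmsg man page above. A sketch with error handling omitted:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send fd over the unix domain socket chan, with one byte of real
     * payload (an fd can't travel without at least some data). */
    static int send_fd(int chan, int fd) {
      char byte = 0;
      struct iovec iov = { &byte, 1 };
      char control[CMSG_SPACE(sizeof(int))];
      struct msghdr msg;
      struct cmsghdr* cmsg;

      memset(&msg, 0, sizeof(msg));
      msg.msg_iov = &iov;
      msg.msg_iovlen = 1;
      msg.msg_control = control;
      msg.msg_controllen = sizeof(control);

      cmsg = CMSG_FIRSTHDR(&msg);
      cmsg->cmsg_level = SOL_SOCKET;
      cmsg->cmsg_type = SCM_RIGHTS;  /* the "ancillary" fd payload */
      cmsg->cmsg_len = CMSG_LEN(sizeof(int));
      memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

      return sendmsg(chan, &msg, 0); /* receiver gets a fresh fd number */
    }
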
22:32:44  <igorzi>ryah: piscisaureus: so you want to still use stdin with a special protocol? or you want to use a completely separate pipe for this?
22:32:54  <piscisaureus>igorzi: stdin will just be a pipe
22:33:19  <piscisaureus>igorzi: we'll have a completely separate pipe for this
22:33:28  <ryah>piscisaureus: btw how do you get the channel to the child?
22:33:34  <igorzi>piscisaureus: ok
22:33:52  <piscisaureus>igorzi: think of it as a 4th stdio "stream"
22:34:00  <piscisaureus>ryah: *shrug* there are several options
22:35:06  <piscisaureus>ryah: an environment variable, or cbReserved2/lpReserved2 from STARTUPINFO (http://msdn.microsoft.com/en-us/library/ms686331%28v=vs.85%29.aspx)
22:35:47  <ryah>using stdin might be good - can it be a bidirectional pipe for you?
22:36:03  <ryah>i hate dealing with these environment variables...
22:36:03  <piscisaureus>yes, it can
22:36:39  <piscisaureus>ryah: but how is the child node going to tell if its parent is node (so it has to parse this protocol) or something else (so stdin is raw)
22:36:40  <piscisaureus>?
22:37:16  <bnoordhuis>igorzi: okay if i merge the file watcher branch?
22:37:28  <bnoordhuis>oh, you just did, nm :)
22:37:36  <ryah>piscisaureus: the child should call uv_ipc_init() ?
22:37:55  <piscisaureus>ryah: so how is libuv going to tell the difference?
22:38:06  <ryah>oh hm
22:38:18  <piscisaureus>all libuv can tell is that it's a pipe
22:38:33  <ryah>yeah i guess we need some environment variable at least
22:38:55  <piscisaureus>well - on windows this might be avoidable
22:39:00  <piscisaureus>but not on linux I think
22:39:13  <piscisaureus>and I don't really know if I want to patch lpReserved2
22:40:34  <ryah>another problem is that uv_process_options_t currently has uv_pipe_t* for stdin
22:42:06  <ryah>i think having a different channel is probably easiest
22:46:17  <ryah>bnoordhuis: do you have any opinions on https://gist.github.com/1233459 ?
22:47:32  <piscisaureus>ryah: so read_cb either gets an empty buffer or an fd?
22:47:33  <bnoordhuis>ryah: no strong opinions
22:47:45  <piscisaureus>er
22:48:05  <piscisaureus>a buffer and no fd *or* an empty buffer and an fd?
22:48:12  <bnoordhuis>maybe that uv_ipc_send(uv_ipc_t*, uv_stream_t*) should accept a handle instead? in case you want to send over something non-stream like?
22:48:30  <piscisaureus>like what?
22:48:38  <bnoordhuis>udp handle?
22:48:44  <piscisaureus>hmm yeah
22:51:51  <piscisaureus>ryah: why does uv_ipc_send not take a req?
22:52:41  <igorzi>piscisaureus: ryah: if uv_ipc_pending returns pipe, then the child calls uv_accept?
22:52:51  <piscisaureus>igorzi: yes
22:52:58  <CIA-53>libuv: Ben Noordhuis master * rbee7112 / (src/unix/internal.h src/unix/linux.c): unix: move container_of and SAVE_ERRNO to internal.h - http://git.io/YDko-g
22:53:19  <piscisaureus>I'm not happy about the uv_ipc_send api
22:53:31  <igorzi>piscisaureus: what does it pass for 1st arg? (server)
22:53:31  <piscisaureus>it doesn't take a req, buffers or a callback
22:53:45  <piscisaureus>igorzi: the uv_ipc_t handle
22:54:37  <ryah>yeah..
22:54:46  <ryah>we kind of want uv_ipc_send to be like uv_write
22:54:52  <piscisaureus>yes
22:54:59  <ryah>can't we just have a uv_write_prime() or something?
22:55:09  <ryah>which also has a uv_stream_t
22:55:26  <piscisaureus>uv_write_prime()?
22:55:40  <piscisaureus>uv_write_prime(13) <- ok
22:55:40  <piscisaureus>uv_write_prime(12) <- ENOTPRIME?
22:55:57  <igorzi>1 proc: 4709
22:55:57  <igorzi>2 procs: 8977
22:56:01  <igorzi>4 procs: 13515
22:56:05  <igorzi>8 procs: 21860
22:56:17  <igorzi>* procs -> threads
22:56:56  <igorzi>(so i think we're good to go with piscisaureus's prefork :))
22:57:17  <piscisaureus>\o/ i won! victory!
22:57:21  <piscisaureus>:-p
22:57:27  <piscisaureus>igorzi: nice work.
23:09:49  <piscisaureus>ryah: if a primer works, an uv_write_ipc() function that takes one extra argument works as well.
23:09:49  <piscisaureus>I like that better
23:11:08  <ryah>piscisaureus: ok, im down with that
23:13:22  <igorzi>ryah: https://github.com/igorzi/node/commit/2f71caf95011e0d3d3dfd59bbe45ed6c1d462c09
23:13:23  <igorzi>there's no node_zlib.h
23:17:42  * mraleph quit (Quit: Leaving.)
23:19:03  <piscisaureus>ryah: so this - https://gist.github.com/1233593
23:20:07  * dmkbot quit (Remote host closed the connection)
23:20:12  * dmkbot joined
23:31:08  * bnoordhuis quit (Ping timeout: 260 seconds)
23:36:26  * piscisaureus quit (Quit: ~ Trillian Astra - www.trillian.im ~)
23:38:44  * piscisaureus joined
23:49:46  <ryah>piscisaureus: sorry office hours started
23:50:01  <piscisaureus>I'm going to bed
23:50:06  <ryah>piscisaureus: uv_ipc_write sounds good
23:50:11  <ryah>piscisaureus: okay - night
23:50:26  <piscisaureus>I'm going to tweet this
23:51:06  <ryah>im going to retweet your tweet :)
23:54:54  <CIA-53>node: Igor Zinkovsky master * rde0066c / node.gyp : remove node_zlib.h from node.gyp - http://git.io/9t13yw