00:01:12  <deoxxa>mmalecki: might that be because of that "lots of tiny writes" optimisation stuff?
00:01:46  <mmalecki>deoxxa: that matches what our load balancer does. is that in the changelog or something?
00:02:06  <deoxxa>i just remember a gist pasted a couple of months back that made me lol
00:02:41  <mmalecki>oh, I think I know what you mean. is that about string optimisations made by pisci?
00:03:06  <deoxxa>pooossibly, i remember it had a catchy little jab at java in the title
00:03:28  <mmalecki>lol, I don't remember it
00:03:39  <deoxxa>heh
00:03:53  <deoxxa>i can't remember much about it except that i lol'd at java (yet again)
00:04:27  <mmalecki>well, everyone lol's at java
00:15:10  * erickt quit (Quit: erickt)
00:15:57  * toothrot quit (Ping timeout: 240 seconds)
00:20:03  * toothr joined
00:20:19  * loladiro_ quit (Read error: Connection reset by peer)
00:20:24  * loladiro joined
00:21:26  * toothr quit (Changing host)
00:21:26  * toothr joined
00:21:30  * toothr changed nick to toothrot
00:30:35  * loladiro quit (Read error: Operation timed out)
00:50:07  <CIA-108>libuv: Ben Noordhuis master * rc5761f7 / src/unix/async.c : unix: speed up uv_async_send() some more still - http://git.io/G6iQBw
00:50:30  <bnoordhuis>^ taking micro-optimization to a whole new level
00:52:06  * felixge joined
00:52:07  * felixge quit (Changing host)
00:52:07  * felixge joined
00:52:07  * travis-ci joined
00:52:07  <travis-ci>[travis-ci] joyent/libuv#488 (master - c5761f7 : Ben Noordhuis): The build is still failing.
00:52:07  <travis-ci>[travis-ci] Change view : https://github.com/joyent/libuv/compare/3d9c1ebfeb3f...c5761f72b32b
00:52:07  <travis-ci>[travis-ci] Build details : http://travis-ci.org/joyent/libuv/builds/1813267
00:52:07  * travis-ci part
00:52:14  <tjfontaine>oh goodness
00:54:56  <bnoordhuis>i've resisted the urge for days
00:55:01  <bnoordhuis>but tonight i caved in
01:08:39  <bnoordhuis>i confess i don't quite understand why xchg is sometimes so much faster than cmpxchg
01:09:13  <bnoordhuis>i mean, it's a simpler operation and it's always at least 10% faster
01:09:27  <bnoordhuis>but sometimes it just speeds away and is like several times faster
01:14:05  <indutny>haha
01:14:06  <indutny>cache
01:14:11  <indutny>bnoordhuis: this is a cache,m na
01:14:15  <indutny>s/m na/man
01:14:38  <bnoordhuis>indutny: it can't be that because both instructions dirty the cache
01:14:49  <indutny>you think so?
01:15:01  <bnoordhuis>indutny: like that one ion said to another: i'm positive
01:16:46  <indutny>sure, you're
01:17:55  <indutny>ok, gtg
01:18:01  <indutny>Haru Sushi is waiting for us
01:18:02  <indutny>:D
01:18:11  <bnoordhuis>enjoy it, fedor
01:20:21  <indutny>bnoordhuis: thanks, ban
01:20:27  <indutny>btw, are you going to be in Portugal on lxjs?
01:20:35  <bnoordhuis>indutny: probably not
01:20:54  <indutny>bnoordhuis: oh, I'm going to visit you then :P
01:21:09  <bnoordhuis>indutny: you're coming to the netherlands?
01:21:18  <indutny>bnoordhuis: probably yes :D
01:21:27  <bnoordhuis>cool
01:21:28  <indutny>I think we'll ride over europe
01:21:35  <indutny>before going to lxjs
01:21:59  <indutny>ok
01:22:00  <indutny>ttyl
01:24:08  <mmalecki>indutny: and Poland?
01:33:54  * abraxas joined
01:44:46  * bnoordhuis quit (Ping timeout: 246 seconds)
01:45:46  * EhevuTov quit (Read error: Connection reset by peer)
01:48:32  * EhevuTov joined
02:04:57  * erickt joined
02:13:54  * mmalecki quit (Ping timeout: 255 seconds)
02:29:10  * EhevuTov quit (Quit: This computer has gone to sleep)
02:29:34  * erickt quit (Quit: erickt)
02:59:42  * brson quit (Ping timeout: 250 seconds)
03:01:03  * dshaw_ quit (Quit: Leaving.)
03:06:59  * dshaw_ joined
03:18:40  <indutny>mmalecki: no Poland in plans, sorry
03:18:41  <indutny>:D
03:18:57  <indutny>piscisaureus: have you been in Poland?
03:19:21  <indutny>I think my grand-grand-grand-grand-ma was from Poland
03:19:33  <indutny>not really sure if I wish to visit this country ever
03:35:02  <indutny>bnoordhuis: interesting
03:35:19  <indutny>using intrinsics
03:35:24  * dshaw_ quit (Quit: Leaving.)
03:35:56  <indutny>inline all the things!
03:36:09  <indutny>so you were sort of inspired by inline assembly
03:36:36  * EhevuTov joined
03:43:37  * chobi_e changed nick to chobi_e_
03:54:24  <CIA-108>node: Nathan Rajlich v0.8 * rd3d83d7 / (src/node.cc test/simple/test-process-hrtime.js): process: throw a TypeError when anything but an Array is passed to hrtime() - http://git.io/9ahepg
03:54:32  * c4milo joined
03:54:53  * chobi_e_ changed nick to chobi_e
04:20:08  * nathansobo joined
04:24:53  <nathansobo>hi… i'm interested in getting Node.js running inside a Chromium renderer process. since the renderer has its own event loop, it seems like the best approach might be to run Node's event loop on another thread, but register and execute callbacks on the renderer thread since V8 isn't thread safe. I found an example in the libev documentation that does this using locks to protect the event loop data, and using a combination of
04:24:53  <nathansobo>async_send and condition variables to coordinate thread interaction. that example uses some libev hooks that don't seem to have analogs in libuv. does anyone have any thoughts on my approach? am i heading off a cliff here?
04:27:15  <indutny>isaacs: hm... ab sometimes gets stuck on latest node
04:27:26  <indutny>isaacs: and server starts timing out
04:27:38  <indutny>have we seen this before?
04:27:49  <indutny>I'm quite sure that ulimit is fine
04:28:03  <indutny>probably node is leaking fds (dunno)
04:37:23  * felixge quit (Quit: felixge)
04:37:57  <indutny>ok, that really looks like v8 fault
04:38:05  <indutny>going to rebuild node with older v8
04:40:28  <indutny>3.10
04:41:27  * brson joined
04:43:08  * felixge joined
04:43:08  * felixge quit (Changing host)
04:43:08  * felixge joined
04:45:46  <indutny>ok, the same thing with 3.10
04:50:04  * c4milo quit (Remote host closed the connection)
04:50:30  * c4milo joined
04:51:26  <indutny>nathansobo: so you can just use uv_async_send
04:51:51  <indutny>and some sort of analogue for chrome's event loop
04:52:11  <nathansobo>indutny: what about managing mutual exclusion around the loop data structure?
04:53:01  <indutny>nathansobo: you just should not change the loop from another thread
04:53:38  <indutny>nathansobo: the only safe way of interaction with loop from another thread is uv_async_send
04:54:43  * c4milo quit (Ping timeout: 245 seconds)
04:55:20  <nathansobo>libev offers ev_set_loop_release_cb, which takes two callbacks to acquire and release the lock
04:55:59  <nathansobo>so you can add a watch while the loop is asleep
04:56:12  <indutny>well, why do you need that?
04:56:13  <nathansobo>and then do an async send to wake it up
04:56:24  <nathansobo>yeah maybe you can help me think of another approach
04:56:32  <nathansobo>i want to run node in the renderer thread
04:56:43  <indutny>it won't work
04:56:43  <nathansobo>basically have javascript be able to access node.js function AND the dom
04:56:46  <indutny>node's blocking
04:56:57  <nathansobo>that's why i want to run its event loop in another thread
04:57:03  <indutny>yes, you should do that
04:57:17  <indutny>and you don't need locks for loop
04:57:18  <nathansobo>okay… glad we're on the same page there
04:57:28  <nathansobo>oh… howcome?
04:57:38  <indutny>just uv_async_send
04:57:47  <indutny>to invoke some action in node's loop
04:57:54  <nathansobo>what if i register a new watch from the renderer thread?
04:58:00  <indutny>what's a watch?
04:58:06  <indutny>is it some sort of chromium thing?
04:58:12  <nathansobo>no, i mean a callback
04:58:19  <indutny>callback for what
04:58:30  <nathansobo>okay… i'll paint a scenario...
04:58:35  <indutny>yeah :)
05:00:53  <nathansobo>so i start up my chromium render thread and grab its v8 context, and load up all the node.js bindings into it. then in javascript let's say i want to watch a file on disk. that javascript is running on the renderer thread, and presumably to watch this file the node code has to register a callback on the event loop data structure. but there's another thread (the one that's doing the blocking) accessing that loop structure concurrently
05:00:54  <nathansobo>(potentially).
05:01:17  <nathansobo>what i'd like to happen is be able to register a callback from the renderer thread (because that's where the javascript vm will be running)
05:01:30  <nathansobo>and also have that callback execute on the renderer thread
05:01:33  <nathansobo>but do the blocking on io
05:01:41  <nathansobo>both libeio and libev seem to offer affordances
05:01:50  <nathansobo>to that effect
05:02:13  <nathansobo>in case you're interested, there's an example in the libev docs: http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod#THREAD_LOCKING_EXAMPLE
05:02:27  <indutny>the thing is
05:02:32  <indutny>that your concept is invalid
05:02:37  <nathansobo>oh, bummer.
05:02:39  <indutny>:)
05:02:40  <indutny>yeah
05:02:47  <nathansobo>where am i going wrong?
05:02:55  <indutny>you can't load node in chromium's v8 context
05:03:10  <indutny>aaah
05:03:29  <nathansobo>is there something different about the v8 context that node initializes for itself in node::Start
05:03:30  <nathansobo>?
05:03:35  <indutny>so you want two loops to run in one thread, each running while the other sleeps
05:03:48  <indutny>well, that may work
05:03:58  <indutny>sorry, it's late night in my place
05:04:10  <nathansobo>oh no worries i *really* appreciate you talking to me about this
05:04:30  <indutny>I think you may want to look at uv_run_once
05:04:55  <indutny>and patch node to run it when chromium's loop isn't busy
05:05:10  <nathansobo>that would be for running two event loops in one thread, rather than trying for two threads
05:05:18  <indutny>two threads won't work
05:05:20  <indutny>really
05:05:43  <indutny>there are a lot of nuances, and proper locking would cost much more
05:05:45  <indutny>(I suppose)
05:05:48  <nathansobo>it's weird because the libev / libeio docs seem to be all about it
05:06:02  <nathansobo>but perhaps trying to do it in one thread is better
05:06:12  <nathansobo>what i was reading in uv.h is that run once actually blocks
05:06:14  <nathansobo>if there are no events
05:06:32  <nathansobo>but i think uv could be hacked a bit to poll with a zero timeout
05:06:37  <indutny>hm... yes
05:06:39  <indutny>you're right
05:06:44  <indutny>well, there's another way
05:07:01  <nathansobo>but that just seems sad because now we're polling the shit out of this function instead of just blocking on the renderer event loop
05:07:02  <indutny>using fibers
05:07:12  <indutny>or, in another words, v8 isolates
05:07:22  <indutny>err...
05:07:37  <nathansobo>maybe that doesn't matter, the polling
05:07:43  <indutny>I'm probably using the wrong words, but you get what I'm saying
05:08:00  <nathansobo>about v8 isolates?
05:08:02  <indutny>the thing is that you can just do locking around parts where you're accessing v8's context
05:08:32  <indutny>hm...
05:08:33  <nathansobo>yeah i saw v8 has a Locker class
05:08:38  <indutny>right!
05:08:49  <indutny>I'm too tired to really consider that
05:08:54  <indutny>let me think about it for some time
05:08:59  <nathansobo>so in that scenario you could just run node's event loop entirely on another thread
05:09:09  <nathansobo>and if it ever needs to execute js or anything
05:09:10  <indutny>yes, that's what I'm talking about
05:09:13  <nathansobo>it just does it in a locker
05:09:16  <indutny>yeah, that won't work
05:09:26  <indutny>if you want to execute some node's C++ code from JS
05:09:38  <indutny>because it'll be executed in another thread
05:09:42  <nathansobo>yeah to subscribe, for example
05:09:45  <indutny>indeed
05:09:49  <nathansobo>that will be on the renderer thread
05:09:57  <indutny>so you'll need to be able to block renderer thread somehow
05:10:17  <indutny>like sending async signal to node's thread and waiting on semaphore
05:10:30  <nathansobo>i'd be really interested to hear what you think about that libev example i pasted a link to, when you get time
05:10:43  <indutny>k
05:10:49  <nathansobo>yeah in their example they take advantage of this feature that libev offers
05:11:00  <nathansobo>basically you can take control of the function that processes pending events
05:11:07  <nathansobo>with ev_set_invoke_pending_cb
05:11:37  <isaacs>indutny: master or v0.8?
05:11:39  <nathansobo>and in their example, you use that callback to signal the main thread (renderer thread in this case) that events are ready to process
05:11:46  <indutny>isaacs: master, and probably 0.8
05:11:55  <isaacs>indutny: what os?
05:12:01  <indutny>yeah, 0.8 too
05:12:01  <indutny>osx
05:12:04  <isaacs>weird.
05:12:08  <indutny>tried with both siege and ab
05:12:22  <isaacs>indutny: can you post an issue with the reproduction steps?
05:12:26  <indutny>https://gist.github.com/550a330bc2dc039a696a
05:12:26  <isaacs>indutny: it seems to work for me
05:12:31  <indutny>that's my ulimit -a
05:12:39  <indutny>probably 2560 open files is too small for it?
05:12:40  <nathansobo>and then on the renderer thread you basically process all the events and then signal a condition variable
05:12:46  <nathansobo>which tells the event loop thread that you're done
05:12:49  <nathansobo>and it goes back to sleep
05:12:56  <isaacs>indutny: yes, most likely
05:13:01  <isaacs>indutny: what's your ab setting?
05:13:11  <indutny>ah
05:13:20  <indutny>no, it still hangs
05:13:26  <indutny>ab -n 12000 -c 100 http://localhost:8080/
05:13:27  <isaacs>indutny: what's the ab command?
05:13:31  <isaacs>-c100 should be fine
05:13:32  <indutny>even with ulimit -n 10000
05:13:35  <indutny>indeed
05:13:37  <isaacs>well within 2560
05:13:50  <isaacs>indutny: does it work with
05:13:54  <isaacs>instead of "localhost"?
05:13:56  <indutny>one sec
05:14:09  <indutny>nope
05:14:18  <indutny>more interesting
05:14:24  <indutny>it hangs on second run of that ab command
05:14:38  <isaacs>indutny: what's the server?
05:14:40  <indutny>and opening page from chrome works atm
05:14:48  <indutny>http.createServer(function(req, res) {
05:14:48  <indutny> res.end('');
05:14:48  <indutny>}).listen(8080, function() {
05:14:48  <indutny> console.log('listening');
05:14:48  <indutny>})
05:15:01  <isaacs>also, review? https://github.com/isaacs/node/commit/e3aafac6c6df992141c1303af0ba11a71e8a946f
05:15:02  <indutny>nothing relevant in server's code
05:15:06  <indutny>one sec
05:15:14  * mikeal quit (Quit: Leaving.)
05:16:10  <isaacs>indutny: works for me just fine
05:16:30  <indutny>ohh
05:16:38  <indutny>may be that's my VPN killing it
05:16:40  <indutny>one sec
05:16:55  <isaacs>lol
05:16:57  <indutny>nope, it isn't
05:17:04  <isaacs>yes, vpn's often have problems with stuff like that :)
05:17:07  <isaacs>oh, ok
05:17:12  <indutny>at least I turned it off
05:17:17  <indutny>and it still does the same thing
05:17:27  <isaacs>well... like i said, works here.
05:17:29  <indutny>most interesting thing is that node isn't actually hanging
05:17:34  <isaacs>i'm on the 0.8.2-release
05:17:41  <indutny>so if I'll add setInterval(...) it'll constantly log data
05:17:47  <isaacs>yeah, probably ab is busted.
05:17:55  <isaacs>it's not a very good program, it' sjust the one everyone uses
05:18:11  <indutny>probably
05:18:13  <indutny>but siege fails too
05:18:21  <indutny>isaacs: btw, on your commit
05:18:48  <indutny>lgtm
05:18:49  <indutny>:D
05:18:55  <isaacs>kewl, thanks
05:19:06  <isaacs>i was looking for this for so long
05:19:14  <isaacs>it's a small memory leak, but a definite one.
05:19:21  <indutny>yeah, I read zlib's readme
05:19:26  <indutny>it deallocates some internal buffer
05:19:29  <isaacs>yep
05:19:50  * indutny is that related to unzip() leak problem?
05:19:52  <indutny>oops
05:20:03  * piscisaureus_ joined
05:20:25  <CIA-108>node: isaacs v0.8 * rd2cd456 / src/node_zlib.cc : zlib: Call inflateEnd for UNZIP. Fixes memory leak. - http://git.io/gCNaFw
05:21:16  <isaacs>indutny: yes, that's the one.
05:21:24  * mikeal joined
05:21:34  <isaacs>gunzip() works fine, but unzip() leaks like 2048b of memory, exactly, every iteration
05:21:46  <isaacs>like clockwork
05:22:19  <isaacs>oh, wait, a fucking typo.
05:22:30  <indutny>k
05:22:38  <indutny>I've created an issue: https://github.com/joyent/node/issues/3670
05:22:48  <indutny>going to sleep now
05:22:49  <indutny>ttyl
05:23:05  <CIA-108>node: isaacs v0.8 * rbf539f9 / src/node_zlib.cc : zlib: Call inflateEnd for UNZIP. Fixes memory leak. - http://git.io/25NvbQ
05:23:05  <isaacs>you saw nothing.
05:23:23  <indutny>hahah
05:23:27  <indutny>FORCE PUSH
05:23:35  <isaacs>10-second rule :)
05:23:50  <indutny>=== is my often C++ error
05:23:55  <indutny>s/error/mistake
05:23:59  <indutny>ok, time for shower
05:23:59  <isaacs>i'd put some debugging code around there, and didn't re-check it after removing it
05:24:04  <indutny>haha
05:24:07  <indutny>yeah, that happens
05:24:11  <isaacs>but yeah, === in C++
05:24:14  <isaacs>lame
05:24:21  <isaacs>we should just define it as an operator in node.h
05:24:27  <isaacs>so === gets converted to == silently for us
05:26:14  <isaacs>i'm off too. gnite
05:26:42  * paddybyers joined
05:47:38  * piscisaureus_ quit (Ping timeout: 246 seconds)
06:03:57  * mikeal quit (Quit: Leaving.)
06:10:54  * mikeal joined
06:18:23  * piscisaureus_ joined
06:37:39  * brson quit (Quit: leaving)
07:11:38  * stephank quit (Quit: *Poof!*)
07:12:49  * rendar joined
18:22:08  <@isaacs>I think it's clear that we need more real-world use case examples, though
18:22:15  <AndreasMadsen>isaacs: perhaps, but if you had two isolated clusters in the same process, it would only mean that if anything breaks, both clusters break and you will have a lot of downtime.
18:22:35  <@isaacs>right
18:22:36  <AndreasMadsen>isaacs: I agree with that, the testcase use case is not important.
18:23:11  <@isaacs>what you usually want is for the master to be basically nothing but a thing that manages workers, and does nothing else.
18:23:27  <@isaacs>and then workers do all the work, and if they crash or blow up, then the master creates a new one
18:23:30  <AndreasMadsen>isaacs: exactly
18:23:44  <@isaacs>if you have two tasks, then just create two clusters.
18:23:47  <@isaacs>two master.
18:23:54  <AndreasMadsen>isaacs: the master should be so simple that you will never need to restart it
18:24:05  <@isaacs>and your SMF or init or whatever can take care of that
18:24:21  <@isaacs>node task-one.js --workers=2 ; node task-two.js --workers=6
18:24:47  <@isaacs>i think i basically agree with you. it's just that the API is a little bit inflexible.
18:25:07  <@isaacs>but if we're going to add a bunch of complexity JUST for some flexibility that might one day be useful, then that's not really the right choice.
18:25:31  <@isaacs>someone has to actually need it today, in a way that can't be handled more simply by doing some other thing.
18:26:06  <@isaacs>AndreasMadsen: have you seen my cluster-master module?
18:26:13  <@isaacs>AndreasMadsen: i'm using it on npm-www, and it's really nice.
18:26:20  <AndreasMadsen>isaacs: yes, i did
18:26:20  <@isaacs>i push new code, then HUP the master, and it reloads all the workers gracefully
18:26:41  <@isaacs>it's still a bit vulnerable to fast crashes
18:27:00  <@isaacs>like, if there's a route that throws, and everyone hits it at once, then it'll bring down the server, cuz it's kinda dumb.
18:27:01  <AndreasMadsen>isaacs: using disconnect? I looked at it, before that feature was introduced.
18:27:39  <@isaacs>AndreasMadsen: well, the worker just calls server.close() when there's an error caught by a domain. then the master sees the disconnect, and spawns a new worker to replace it, and if it's not gone by 2 seconds, it kills it forcibly.
18:28:01  <AndreasMadsen>isaacs: ahh, perfect :)
18:28:26  <@isaacs>also, when you HUP, it spawns a new worker first, waits to make sure it stays up for 2 seconds, and then does a graceful restart.
18:29:10  <AndreasMadsen>isaacs: I thought about extending process.send(Server) so Server doesn't take any connections if Server.listeners('connect') === 0; that way cluster could almost be an external module without using any internal API.
18:29:19  <@isaacs>but the close-on-error thing is not quite resilient. what i need to do is have the worker send a message to the master, saying, "I had an error, can I shut down?" and then the master gives it permission.
18:30:03  <@isaacs>AndreasMadsen: yeah, i'd like for cluster to not rely on any internal APIs.
18:30:17  <@isaacs>AndreasMadsen: if it was userland-able, then we could tell people to just write their own cluster module, using the builtin as an example.
18:30:44  <AndreasMadsen>isaacs: ^ that should be perfectly doable, process.send is sync (wtf)
18:30:56  <AndreasMadsen>> "I had an error, can I shut down?"
18:31:00  <@isaacs>yeah, we need to fix that.
18:31:04  <@isaacs>process.send should not be sync
18:31:23  <@isaacs>AndreasMadsen: well, the error is caught by a domain.
18:31:37  <@isaacs>but if 8 people hit the same error in rapid succession, then all the workers will shut down
18:31:43  <@isaacs>ie, server.close()
18:31:47  <@isaacs>now i'm not serving requests.
18:32:11  <@isaacs>they should wait to call server.close() until the master has spawned their replacement
18:33:04  <@isaacs>worker-A domain.error -> "i got an error, lmk when i can shut down", master spawns fresh worker-B, master tells worker-A "ok, close", worker-A calls server.close()
18:33:14  <@isaacs>2 seconds later, if worker-a is still alive, master kills it
18:33:20  <AndreasMadsen>isaacs: the right solution would be for the OS to pause the new requests until a new server is attached. I never liked the idea about delaying throws. But what you suggest is perfectly doable with process.send.
18:33:43  <@isaacs>well, there should never be a throw that actually crashes the server.
18:33:58  <@isaacs>it'll just hit the domain, and then the domain will know what to do about it
18:34:20  <@isaacs>but keep in mind, there are 7 other workers
18:43:06  <@isaacs>of course, if send is async, you might get one or two reqs in before you get a response, but meh.
18:43:07  <AndreasMadsen>isaacs: as I said, the ideal solution would be for the OS/master to pause the request until a worker is online. Restarting a worker doesn't take long, so the client won't really notice. The important thing is that their request didn't die.
18:43:10  <@isaacs>you might have gotten that anyway
18:43:19  <@isaacs>restarting a worker takes a long time.
18:43:25  <@isaacs>several milliseconds.
18:43:33  <AndreasMadsen>isaacs: can you count that
18:43:39  <AndreasMadsen>in your head
18:43:42  <@isaacs>sometimes several hundred
18:43:53  <@isaacs>i read config files, do sync IO for require(), etc.
18:44:16  <@isaacs>you'll see more reqs hit in that time than in the time it takes to ask for permission to close
18:44:52  <AndreasMadsen>isaacs: right, but the 100 ms it takes to restart a worker is not much in an Internet request.
18:44:53  <@isaacs>and server.close() just takes it out of rotation. it'll still serve requests that it's already accepted
18:46:28  <isaacs>AndreasMadsen: it's slow enough that if i do `svcadm restart npm-www` and then hit refresh, I get a blank screen 3 or 4 times.
18:46:44  * dap quit (Quit: Leaving.)
18:46:44  <isaacs>and that's just before the first worker comes on
18:46:57  <isaacs>if i HUP it, there's no downtime.
18:47:27  <isaacs>why should other requests wait for 200ms or more for a new worker?
18:47:47  <isaacs>what if there are half a dozen people all hitting the same error? then we wait for all the workers to be online?
18:48:10  <AndreasMadsen>isaacs: no, just one
18:49:51  <AndreasMadsen>isaacs: ideally they shouldn't, and it is only in the edge case. But it will prevent downtime (I'm only interested in what will actually affect the client) and dirty workers.
18:50:29  <isaacs>yeah, so, the choice is: a) downtime, b) dirty workers, or c) slower responses.
18:50:52  <AndreasMadsen>isaacs: yep, pick one
18:51:29  <isaacs>i don't think (b) is all that bad (since you can't *fully* prevent it anyway), and it's easy to minimize.
18:51:52  <isaacs>you can't fully prevent it becasue there will probably already be multiple requests being served at the time that the error occurs.
18:52:03  <isaacs>and those others are now being served by a dirty worker.
18:52:04  <AndreasMadsen>isaacs: I agree.
18:52:06  <isaacs>so what? it'll work mostly.
18:52:35  <isaacs>if your server is built properly, in fact, there's probably nothing wrong with just keeping it alive forever, even if there is an error.
18:53:00  <AndreasMadsen>isaacs: perfection is an illusion. You can come seriously close, but you will never hit perfection.
18:53:05  <isaacs>but since errors are necessarily unintended and unexpected, it's not very wise to assume that that's ok
18:53:09  <isaacs>yes
18:54:54  <AndreasMadsen>isaacs: something totally different, would this offend you https://github.com/isaacs/npm/issues/2588
18:54:59  <AndreasMadsen>?
18:55:33  <isaacs>AndreasMadsen: never gonna happen, sorry.
18:55:50  <isaacs>AndreasMadsen: it's assumed in SOO many places that the folder will be named 'node_modules'
18:55:57  <isaacs>AndreasMadsen: just have piccolo use the same folder name.
18:56:07  <tjfontaine>or check for one or the other?
18:56:16  <isaacs>tjfontaine: if only it was that simple...
18:56:27  <tjfontaine>isaacs: I mean for piccolo, not npm
18:56:41  <AndreasMadsen>isaacs: that can happen, sorry. That would result in node modules being sent to the client.
18:56:51  <AndreasMadsen>s/can/can't
18:57:40  <isaacs>$ git grep node_modules | egrep -v 'gitignore|npmignore|doc|man|package.json|README' | wc -l
18:57:43  <isaacs> 96
18:59:20  <isaacs>the shitstorm that arose from even trying that out for a brief moment back in 0.3 caused such huge catastrophes that took me weeks to fix.
18:59:28  <isaacs>everyone wants to name it something different.
18:59:33  <isaacs>then nothing works.
19:00:43  <AndreasMadsen>isaacs: I do understand why it can't happen. It is just so sad that this sort of complexity is what's killing (not exactly) a really good project.
19:00:46  <isaacs>npm install --node_modules=isaacs_awesome_modules ..
19:01:23  <isaacs>npm's not so big. just fork it and perl -pi -e it
19:01:31  <isaacs>how bad can it be!?
19:01:52  <isaacs>i do want to move more of its guts out into the open.
19:02:17  <isaacs>which should make it easier to build sort of what your'e asking for, using the parts of npm that make sense, but without adding complexity to npm itself about that
19:04:26  <AndreasMadsen>isaacs: it can be bad; npm publish is well understood and widely used. If I have to implement that in another subroutine/program, it would require users to do both npm publish && pick (or whatever) publish - since piccolo is hardly known, that would be a real blocker.
19:05:45  <isaacs>AndreasMadsen: or you can build a system that requires less ocean-boiling. just accept that people will publish modules to npm, and figure out how to leverage what's already there.
19:06:09  <isaacs>AndreasMadsen: browserify does this very well, i think
19:06:25  <isaacs>ok, i'm gonna get lunch and head into the office. have a nice evening :)
19:07:29  <AndreasMadsen>interesting, will look into it.
19:25:08  * mjr___ joined
19:25:08  * mjr_ quit (Read error: Connection reset by peer)
19:25:09  * mjr___ changed nick to mjr_
19:31:04  * mikeal quit (Quit: Leaving.)
19:43:47  * AndreasMadsen quit (Remote host closed the connection)
19:44:04  * xaq quit (Read error: Connection reset by peer)
19:44:07  * AndreasMadsen joined
19:44:38  * xaq joined
19:48:52  * mjr_ quit (Read error: Connection reset by peer)
19:49:09  * mjr_ joined
19:52:38  <TooTallNate>bnoordhuis: hey i can get you shell access to an OSX machine if you wanna work on the /dev/tty thing
19:53:44  * mikeal joined
19:57:15  * hz joined
19:57:47  * mikeal quit (Client Quit)
19:59:15  <piscisaureus_>hah
19:59:19  <piscisaureus_><3 automation
19:59:20  <piscisaureus_>https://gist.github.com/3078540
20:09:08  <TooTallNate>piscisaureus_: nice :)
20:09:51  <piscisaureus_>hopefully it won't be too painful to do libuv releases now :-)
20:10:04  <piscisaureus_>Now to upload tarballs to libuv.org
20:10:39  <piscisaureus_>now for bnoordhuis to add an autoconf script
20:10:45  <piscisaureus_>and the distros are ours _p
20:11:40  * mmalecki quit (Quit: leaving)
20:17:08  * loladiro quit (Ping timeout: 264 seconds)
20:18:56  * loladiro joined
20:25:04  * `3rdEden quit (Quit: Leaving...)
20:26:58  <piscisaureus_>http://www.youtube.com/watch?v=nIwrgAnx6Q8&feature=youtube_gdata_player
20:27:06  <piscisaureus_>sleepy time now
20:27:09  * piscisaureus_ quit (Quit: ~ Trillian Astra - www.trillian.im ~)
20:29:50  * bradleymeck joined
20:30:22  <bradleymeck>is the js http parser lying around on some fork/branch or is it not developed yet?
20:30:56  * loladiro_ joined
20:33:14  * loladiro quit (Ping timeout: 244 seconds)
20:33:14  * loladiro_ changed nick to loladiro
20:37:01  * EhevuTov quit (Quit: Leaving)
20:47:21  * `3rdEden joined
20:48:48  * EhevuTov joined
20:52:34  * bradleymeck quit (Quit: bradleymeck)
20:54:53  * `3rdEden quit (Quit: Linkinus - http://linkinus.com)
21:02:45  * arlolra joined
21:03:03  * loladiro quit (Ping timeout: 245 seconds)
21:08:52  * paddybyers_ joined
21:10:27  * paddybyers quit (Ping timeout: 240 seconds)
21:10:27  * paddybyers_ changed nick to paddybyers
21:25:45  * AndreasMadsen quit (Remote host closed the connection)
21:27:30  * loladiro joined
21:45:23  * paddybyers_ joined
21:46:44  <bnoordhuis>back
21:46:46  * EhevuTov quit (Quit: This computer has gone to sleep)
21:47:52  * bradleymeck joined
21:48:53  * paddybyers quit (Ping timeout: 245 seconds)
21:48:53  * paddybyers_ changed nick to paddybyers
21:58:27  * mattstevens joined
22:01:08  * hz quit
22:06:08  * c4milo quit (Remote host closed the connection)
22:06:44  * c4milo joined
22:07:43  * rendar quit
22:08:22  <bnoordhuis>isaacs: https://github.com/joyent/node-in-the-industry/pull/1 <- who manages that?
22:09:00  <isaacs>bnoordhuis: me
22:09:06  <bnoordhuis>okay, cool
22:09:28  <isaacs>bnoordhuis: the legalese is not sanctioned yet, so it might chnage if anyone really tests it.
22:09:53  <isaacs>but the point is just to make clear that we control who gets in there, and you can't pay joyent for time on the nodejs.org site.
22:09:58  <isaacs>ie, it's an ad for US, not for THEM
22:10:50  * EhevuTov joined
22:11:46  * mikeal joined
22:11:47  * c4milo quit (Ping timeout: 265 seconds)
22:13:54  * mikeal quit (Client Quit)
22:22:11  * mikeal joined
22:31:31  * EhevuTov quit (Quit: Leaving)
22:32:20  <bnoordhuis>isaacs: i want to compile with -O2 instead of -O3 from now on
22:32:49  <bnoordhuis>people are still reporting that 'pure virtual method called' build error even with -fno-strict-aliasing...
22:33:15  <isaacs>yeah, sounds fine to me.
22:33:26  <isaacs>what's the difference between o2 and o3 anyway?
22:35:55  <bnoordhuis>isaacs: -fgcse-after-reload -finline-functions -fipa-cp-clone -fpredictive-commoning -ftree-vectorize -funswitch-loops apparently :)
22:36:14  <bnoordhuis>no doubt it's the tree vectorizer that's buggy
22:36:22  <bnoordhuis>let's test that just in case
22:43:49  <bnoordhuis>okay, maybe it's not -ftree-vectorize
22:44:11  <bnoordhuis>-fgcse-after-reload is my second best guess
22:45:44  <tjfontaine>huh I would have guessed tree as well
22:47:39  <isaacs>bnoordhuis: if you wanna mess around with those flags, be my guest.
22:47:56  <bnoordhuis>isaacs: yep, applying the principle of exclusion as we speak
22:48:01  <isaacs>bnoordhuis: but the biggest perf improvement i've seen is usually between -O0 and -O1
22:48:13  <isaacs>the diff between 2 and 3 is usually prety minor in practice
22:48:27  <bnoordhuis>isaacs: actually... i ran some -O2 vs -O3 benchmarks a while ago and -O2 was slightly faster
22:48:41  <isaacs>see? even more argument for not doing O3
22:49:42  <bnoordhuis>that's on my puny core 2 duo with 32k of L1 icache though
22:49:46  <tjfontaine>clearly what we should be testing here is gcc -O3 vs clang -O3
22:49:57  <isaacs>pquerna: you around?
22:50:16  <bnoordhuis>tjfontaine: clang is not affected
22:50:33  <bnoordhuis>haven't benchmarked it though :)
22:50:38  <tjfontaine>bnoordhuis: I meant for the benchmarks, not the gcc weirdness
22:51:22  <bnoordhuis>apparently it's also not -fgcse-after-reload...
22:52:13  * c4milo joined
22:53:11  * EhevuTov joined
23:04:27  * paddybyers quit (Quit: paddybyers)
23:14:36  <bnoordhuis>it's -finline-functions...
23:15:31  <tjfontaine>how much of the inline keyword are you abusing?
23:16:03  <tjfontaine>or rather do you know which method it's barfing on?
23:18:56  <bnoordhuis>tjfontaine: yes. v8::internal::CodeStub::FindCodeInCache(v8::internal::Code**)
23:19:14  <bnoordhuis>tjfontaine: https://github.com/joyent/node/issues/2912#issuecomment-5135600
23:19:43  <tjfontaine>jeepers
23:23:12  * loladiro quit (Quit: loladiro)
23:44:32  <bradleymeck>anyone know who is in charge of the fabled JS HTTP parser i have heard of?
23:55:17  * dap joined