00:02:35  * isaacs joined
00:08:22  * mikeal quit (Quit: Leaving.)
00:12:37  * mikeal joined
00:13:07  * mikeal quit (Client Quit)
00:14:28  * mikeal joined
00:54:08  * mikeal quit (Quit: Leaving.)
01:17:05  <isaacs>so, yeah, that dos attack thing? still here.
01:17:09  <isaacs>still affects numeric keys.
01:29:13  * perezd quit (Quit: perezd)
01:30:03  * brson quit (Quit: leaving)
01:30:25  * dshaw_ joined
01:32:33  <isaacs>mraleph: ping
01:46:26  * isaacs quit (Quit: isaacs)
01:48:40  * mraleph quit (Quit: Leaving.)
01:49:02  * perezd joined
02:20:11  * isaacs joined
02:23:53  * bnoordhuis quit (Read error: Operation timed out)
02:58:31  * perezd quit (Read error: Connection reset by peer)
02:59:05  * perezd joined
03:23:14  * perezd quit (Read error: Connection reset by peer)
03:23:30  * perezd joined
03:38:06  * Raynos quit (Write error: Broken pipe)
03:38:13  * mrb_bk quit (Read error: Connection reset by peer)
04:14:23  * Raynos joined
04:53:25  * isaacs quit (Quit: isaacs)
04:55:23  * Raynos quit (Remote host closed the connection)
05:10:20  * mrb_bk joined
05:21:20  <indutny>heya
05:21:22  <indutny>anyone around?
05:22:40  <rmustacc>What's up/
05:22:41  <rmustacc>*?
05:23:35  <indutny>rmustacc: oh, I've an interesting idea: prioritizing threads in node
05:24:17  <indutny>rmustacc: like setting process._priority, and all worker threads created after the current event loop tick will have that priority
05:24:33  <rmustacc>What problem are you trying to solve here?
05:24:36  <indutny>rmustacc: this way you can increase priority of http request
05:24:46  <indutny>rmustacc: I'm trying to implement spdy streams priority
05:25:13  <rmustacc>But what worker threads are you referring to, the libev ones?
05:25:28  <rmustacc>Err, not libev, but the libeio ones.
05:26:51  <indutny>rmustacc: yes
05:27:19  <indutny>rmustacc: so for example you receive a http request with a specified priority
05:27:24  * Raynos joined
05:27:27  <rmustacc>So, you want to change the order of the events we're dispatching to libeio.
05:27:30  <indutny>rmustacc: all workers created from that tick will have same priority
05:27:34  <indutny>rmustacc: yes
05:27:35  <rmustacc>Not the actual thread priorties of libeio.
05:27:52  <indutny>rmustacc: maybe thread priorities too
05:27:58  <indutny>not sure how that will play with libeio
05:28:27  <rmustacc>Changing the thread priorities doesn't make sense.
05:28:32  <indutny>why not?
05:28:40  <rmustacc>Why do it?
05:28:48  <indutny>you may do .fsReadFileSync for two big files
05:28:48  <rmustacc>And what if the scheduler doesn't care.
05:28:55  <indutny>in two different things
05:28:57  <indutny>ooh
05:29:01  <indutny>s/things/ticks
05:29:10  <indutny>with different priorities
05:29:23  <indutny>and if thread pool is full
05:29:29  <indutny>or something like that
05:29:31  <rmustacc>Huh, you do readfilesync and it won't hit the thread pool.
05:29:39  <rmustacc>You'll do it right then and there.
05:29:39  <indutny>ok, bad example
05:29:48  <indutny>.readFile
05:29:50  <indutny>sorry :D
05:30:07  <indutny>my fingers are typing automatically sometimes
05:30:11  <rmustacc>I would say that all you want to control is the order of dispatching to libeio. Nothing more than that.
05:30:46  <indutny>rmustacc: probably! this is just an idea
05:30:55  <indutny>rmustacc: not sure how it should be implemented yet
05:31:20  <rmustacc>Well, is actually supporting the spdy priorities important?
05:31:47  <indutny>rmustacc: that's not clear now
05:31:55  <indutny>rmustacc: but I see some pros of that
05:31:56  <rmustacc>And if it really is, then you're just going to want to implement the libeio dispatch as a heap.
05:32:24  <rmustacc>The problem is if it's not done correctly, you'll have pathological behavior.
05:32:34  <rmustacc>You're also going to have to somehow associate that with the whole transaction and apply it forward.
05:32:40  <indutny>rmustacc: like something holding in dispatch thread forever?
05:33:20  <rmustacc>For the association, you need to make sure that all the different thread pool requests, etc. all have that priority. So say you do n operations there, all n need to have that priority.
05:33:36  <rmustacc>As for the pathological behavior. You need to make sure that other stuff can actually advance.
05:33:57  <indutny>yep, so I need to implement scheduler
05:34:02  <rmustacc>You don't want it to be the case that a client who constantly gives you higher priority stuff effectively blocks out the lower stuff.
05:34:03  <indutny>:)
05:34:24  <rmustacc>And once you start trying to reinvent something the OS does, you're probably doing it wrong.
05:34:26  <indutny>that's why I want to use system's thread priority somehow
05:34:35  <rmustacc>That just doesn't make sense here.
05:34:48  <rmustacc>Because you want the thread pool always going at a constant rate.
05:34:53  <rmustacc>Just the dispatch order changing.
05:34:58  <indutny>libev is using kqueue for threads
05:35:04  <indutny>aaah
05:35:07  <indutny>yeah, I understand
05:35:45  <rmustacc>The issue here isn't anything touching the backend through libev.
05:35:48  <indutny>so if I created one task of libeio - it won't fire callback for next task until current one will be completed?
05:36:12  <indutny>s/of/for
05:36:25  * isaacs joined
05:36:27  <rmustacc>Right.
05:36:38  <rmustacc>And if there are more tasks than threads it waits in line.
05:36:51  <rmustacc>It's just a queue, iirc.
05:37:08  <indutny>rmustacc: ok, that's clear
05:37:14  <indutny>FIFO
05:37:51  <rmustacc>So you can change that queue to a heap, but you basically also want to make sure that nothing rots at the end of the heap forever.
05:38:04  <indutny>rmustacc: maybe I can have n-queues
05:38:09  <indutny>for all possible priorities
05:38:18  <rmustacc>No.
05:38:21  <rmustacc>Just use a heap.
05:38:31  <rmustacc>That was why this data structure was invented.
05:38:33  <indutny>sorry, that is not clear
05:38:41  <indutny>what do yo mean?
05:38:49  <rmustacc>Are you familiar with the heap data structure?
05:38:58  <indutny>rmustacc: do you mean balanced sorted tree?
05:39:02  <rmustacc>Its whole purpose is to handle the case of priority insertion.
05:39:14  <rmustacc>No, I mean a heap.
05:39:23  <indutny>rmustacc: that heap -> http://en.wikipedia.org/wiki/Heap_(data_structure) ? :D
05:39:31  <rmustacc>Yup
05:39:48  <indutny>rmustacc: ah, I remembered
05:39:48  <indutny>ok
05:39:55  <rmustacc>The gotcha is that it allows you to construct a scenario with livelock.
05:40:29  <indutny>yes
05:40:37  <indutny>and n-queues won't allow this
05:40:49  <indutny>if I allow only 5 different priorities, for example
05:41:10  <rmustacc>And how do you select which queue you use?
05:41:35  <indutny>huh? looks like I need to take a closer look at libeio
05:42:08  <indutny>rmustacc: can't it do parallel actions?
05:42:16  <indutny>like reading two files simultaneously?
05:42:30  <rmustacc>indutny: The only way to think of it is that there's a single queue.
05:42:38  <rmustacc>And as a thread finishes up, it grabs the top of the queue.
05:43:01  <indutny>a thread may know the number of the queue associated with it
05:43:55  <rmustacc>Huh?
05:44:00  <indutny>so probably creating n instances of libeio may help
05:44:01  <rmustacc>You're going to dedicate a queue to a given priority?
05:44:06  <indutny>yes
05:44:24  <rmustacc>Um, this is not going to implement priority.
05:44:42  <indutny>why not? I'll use thread priorities from OS for them
05:44:56  <rmustacc>You'll be making things worse this way.
05:45:05  <indutny>rmustacc: probably
05:45:08  <rmustacc>The vast majority of things won't be prioritized and thus fall into a single queue.
05:45:17  <rmustacc>Which isn't what you want.
05:45:27  <indutny>rmustacc: heh, you're right
05:45:49  <rmustacc>Go with a heap, and then every tick, decrement the priority of something.
05:46:03  <indutny>like whatever's older has the most priority anyway
05:46:11  <rmustacc>That way eventually even something that starts out with a low priority, will eventually get better.
05:46:19  <indutny>that sounds good
05:47:02  <indutny>I'll try it and will come back here later with any results
05:47:16  <indutny>rmustacc: thank you!
05:47:17  <rmustacc>You're going to have to do a lot of tuning on that.
05:47:47  <indutny>rmustacc: I'm pretty sure :)
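The scheme sketched in this exchange (a heap keyed on priority, plus a per-tick decay so nothing rots at the bottom) might look roughly like the following. This is purely an illustration of the idea, not node or libeio code; `PriorityQueue` and its methods are invented names, and a lower number here means higher priority.

```javascript
// Binary min-heap dispatch queue with aging, as a sketch of rmustacc's
// suggestion: lower pri number = served sooner, and age() decrements every
// waiting entry once per dispatch tick so low-priority work eventually rises.
class PriorityQueue {
  constructor() { this.items = []; }
  push(pri, task) {
    this.items.push({ pri, task });
    let i = this.items.length - 1;            // sift the new entry up
    while (i > 0) {
      const parent = (i - 1) >> 1;
      if (this.items[parent].pri <= this.items[i].pri) break;
      [this.items[parent], this.items[i]] = [this.items[i], this.items[parent]];
      i = parent;
    }
  }
  pop() {                                     // remove the best-priority task
    if (this.items.length === 0) return undefined;
    const top = this.items[0];
    const last = this.items.pop();
    if (this.items.length > 0) {
      this.items[0] = last;
      let i = 0;                              // sift the moved entry down
      for (;;) {
        const l = 2 * i + 1, r = 2 * i + 2;
        let min = i;
        if (l < this.items.length && this.items[l].pri < this.items[min].pri) min = l;
        if (r < this.items.length && this.items[r].pri < this.items[min].pri) min = r;
        if (min === i) break;
        [this.items[min], this.items[i]] = [this.items[i], this.items[min]];
        i = min;
      }
    }
    return top.task;
  }
  age() {
    // Decrementing every entry by the same amount preserves heap order,
    // so no re-heapify is needed.
    for (const e of this.items) e.pri -= 1;
  }
}
```

Calling `age()` once per tick is the "decay" part: an entry that starts at a poor priority eventually beats freshly inserted work, which is exactly the anti-starvation property discussed above. The tuning rmustacc mentions is in how fast you age relative to how fast new work arrives.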
05:59:19  * mikeal joined
06:01:38  * mikeal quit (Client Quit)
06:35:23  * sh1mmer joined
06:49:07  * slaskis joined
07:00:51  <indutny>rmustacc: looking at eio source, looks like they have some priority suport...
07:00:54  <indutny>s/suport/support
07:01:20  <indutny>rmustacc: see EIO_PRI_MIN and EIO_PRI_MAX
07:01:31  * mikeal joined
07:01:35  <indutny>rmustacc: yt?
07:03:02  <rmustacc>I'm here.
07:05:36  <rmustacc>I don't see anything in that priority system that's going to prevent priority starvation.
07:06:39  <rmustacc>Well, at least not at a cursory glance.
07:07:34  <indutny>rmustacc: they are using same concept as I proposed
07:07:43  <indutny>n queues
07:07:45  <indutny>let's see
07:07:46  <rmustacc>No, they aren't.
07:07:58  <rmustacc>None of those queues are bound to a specific thread.
07:08:15  <rmustacc>You just search for something to run starting at the highest priority.
07:08:30  <indutny>rmustacc: it'll just run callbacks with higher priority first
07:08:34  <indutny>rmustacc: ok I got it
07:08:48  <indutny>rmustacc: you've n queues and shift them from top priority to bottom
07:08:55  <indutny>rmustacc: and run callbacks in that order
07:09:00  <rmustacc>Which means it's possible to construct a pathological situation where something at the bottom never gets to run.
07:09:06  <indutny>rmustacc: nope
07:09:16  <indutny>rmustacc: orr
07:09:26  <indutny>rmustacc: yes, you're right
07:09:29  <indutny>probably
07:09:51  <rmustacc>Simply insert events at a constant rate at a higher priority.
07:09:58  <indutny>rmustacc: I understand
07:10:43  <rmustacc>And maybe you can convince yourself that that pathological situation won't arise, but I'm skeptical.
07:10:49  <rmustacc>Especially if the client is requesting the priority.
07:10:54  <indutny>rmustacc: hehe, agreed with you
07:11:18  <indutny>rmustacc: ok, at least I've some foundation
07:20:32  <indutny>rmustacc: probably, the situation you describe will never happen
07:20:39  <indutny>rmustacc: because eio invokes the event loop
07:20:54  <rmustacc>Huh.
07:20:56  <indutny>rmustacc: which may be put into events queue
07:21:09  <rmustacc>I'm fairly certain eio doesn't invoke the event loop.
07:21:11  <indutny>rmustacc: and HI_PRIORITY requests won't happen immediately
07:21:19  <indutny>rmustacc: libuv does
07:21:28  <rmustacc>Sure, but that's an entirely separate thread.
07:21:49  <rmustacc>This is a pretty simple case.
07:21:58  <rmustacc>You have four threads in libeio servicing requests.
07:22:33  <rmustacc>Your event loop drops events into it faster than it can service.
07:23:04  <rmustacc>As long as you always have at least four high priority events in the queue, nothing lower will ever run.
07:23:33  <indutny>oh, you're right again :D
07:25:09  <rmustacc>That's why schedulers have a decay effect of sorts to guarantee other stuff will eventually run.
07:25:14  * mikeal quit (Quit: Leaving.)
07:25:21  <rmustacc>Basically every tick you aren't scheduled, it increases your priority.
07:25:36  <rmustacc>That's every scheduling quanta, so probably ~10s
07:25:45  <rmustacc>At least, with most generic schedulers.
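The starvation scenario rmustacc lays out (a 4-thread pool with high-priority events arriving at least as fast as they are serviced) can be checked with a toy simulation. Everything here is illustrative, not libeio's actual implementation; the 4-thread count and the priority values are assumptions taken from the conversation above.

```javascript
// Toy model: 4 worker threads; every tick the event loop drops in 4 new
// high-priority jobs. Under strict priority dispatch a single low-priority
// job waits forever; with per-tick aging it eventually gets served.
function simulate(ticks, aging) {
  const queue = [{ pri: 0, low: true }];          // one low-priority job
  let lowServed = false;
  for (let t = 0; t < ticks; t++) {
    for (let i = 0; i < 4; i++) queue.push({ pri: 10, low: false });
    queue.sort((a, b) => b.pri - a.pri);          // strict priority order
    for (let i = 0; i < 4; i++) {                 // 4 threads take one job each
      const job = queue.shift();
      if (job && job.low) lowServed = true;
    }
    if (aging) for (const j of queue) j.pri += 1; // waiting jobs gain priority
  }
  return lowServed;
}
```

Without aging, the low-priority job is always sorted behind the four fresh high-priority jobs and never reaches a thread; with aging, it climbs one priority level per tick and overtakes the new arrivals after enough ticks.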
07:35:18  * mikeal joined
07:40:40  * mikeal quit (Quit: Leaving.)
07:41:17  <rmustacc>indutny: This method also only solves a rather small percent of the cases you need priority.
07:41:43  <rmustacc>There basically isn't a concept of priority in the various polling mechanisms.
07:42:06  <rmustacc>If for example, you have your spdy server and you make a db call, now all of a sudden you need the notion of priority for its queue.
07:42:21  <indutny>rmustacc: yep, that's clear
07:42:46  <indutny>rmustacc: but some internal things can be prioritized
07:43:08  * isaacs quit (Read error: Connection reset by peer)
07:43:20  <rmustacc>As a client, why would I ever say everything isn't high priority?
07:43:20  * isaacs joined
07:43:41  <indutny>rmustacc: for example you've a gallery page; when the browser scrolls down you need more images
07:43:47  <indutny>until that all they are lowest priority
07:43:51  <indutny>as you don't really need them
07:43:52  <rmustacc>Sure, but I'm a bad client.
07:44:03  <rmustacc>I have a plugin that rewrites all the js to make sure every req is high pri.
07:44:21  <indutny>rmustacc: so you won't get anything from it :D
07:44:31  <indutny>rmustacc: everything will have same priority
07:44:33  <rmustacc>But you do in this model.
07:44:42  <indutny>and mod_spdy does too
07:44:53  <indutny>ah, they just using threads priorities
07:44:55  <rmustacc>Because your current design treats all clients' high priority the same.
07:45:02  <indutny>righto
07:45:17  <rmustacc>Why would I ever be a good actor in that model?
07:45:40  <indutny>rmustacc: probably stream priority concept is incompatible with event-loop
07:46:18  <rmustacc>Well, the event loop doesn't have any notion of priority.
07:46:30  <rmustacc>All you could do is throttling before sending it there or after getting a response.
07:46:45  <rmustacc>Or reordering of what's going to be handled next by node, not the kernel.
07:47:25  <rmustacc>But with this model, it's now pretty easy to dos anyone who isn't using high priority.
07:47:41  * indexzero joined
07:48:49  <indutny>rmustacc: I can implement per-connection scheduler
07:48:57  <indutny>that'll catch parallel responses and sort them
07:49:09  <indutny>but it'll add extra-delay
07:49:48  <rmustacc>So now, specifying priority just guarantees a minimum increase in latency.
07:50:53  <rmustacc>Implementing qos correctly is hard.
07:52:00  <indutny>rmustacc: yes
07:52:23  <indutny>rmustacc: even more, implementing it in js is slow
07:53:00  <rmustacc>Well, I'd reserve that judgement until you get a working and implemented algorithm.
07:53:25  <indutny>rmustacc: hahaha :)
07:55:04  <rmustacc>I'd go spend some time digging up how the networking guys solve this problem in switches.
07:55:16  <rmustacc>That's basically the model you want.
07:57:57  <rmustacc>Or take a look at some of the either freebsd or openbsd work in this area, don't remember which.
08:16:38  * TkTech part
08:29:13  * indexzero quit (Quit: indexzero)
08:45:06  * ErikCorry2 joined
08:48:44  * mraleph joined
08:48:45  * mikeal joined
08:49:19  * mikeal quit (Client Quit)
08:50:20  * paddybyers joined
08:51:14  * mikeal joined
08:51:33  * mikeal quit (Client Quit)
09:00:29  * paddybyers quit (Read error: No route to host)
09:08:23  * mikeal joined
09:28:21  * `3rdEden joined
09:31:09  * indexzero joined
09:32:21  * indexzero quit (Client Quit)
09:38:00  * perezd quit (Quit: perezd)
09:42:20  * AndreasMadsen joined
09:44:26  * AndreasM_ joined
09:44:26  * AndreasMadsen quit (Read error: Connection reset by peer)
09:47:14  * AndreasM_ quit (Read error: Connection reset by peer)
09:47:39  * AndreasMadsen joined
10:10:30  * isaacs quit (Quit: isaacs)
10:21:13  * AndreasMadsen quit (Remote host closed the connection)
10:21:53  * mrb_bk quit (Read error: Connection reset by peer)
10:21:54  * Raynos quit (Remote host closed the connection)
10:22:53  * AndreasMadsen joined
10:30:31  <CIA-111>node: Maciej Małecki master * r4b4d059 / lib/tls.js : (log message trimmed)
10:30:32  <CIA-111>node: tls: make `tls.connect` accept port and host in `options`
10:30:32  <CIA-111>node: Previous API used form:
10:30:32  <CIA-111>node: tls.connect(443, "google.com", options, ...)
10:30:32  <CIA-111>node: now it's replaced with:
10:30:32  <CIA-111>node: tls.connect({port: 443, host: "google.com", ...}, ...)
10:30:32  <CIA-111>node: It simplifies argument parsing in `tls.connect` and makes the API
10:30:33  <CIA-111>node: Maciej Małecki master * rdf0edf5 / lib/https.js :
10:30:34  <CIA-111>node: https: make `https` use new `tls.connect` API
10:30:34  <CIA-111>node: Refs #1983. - http://git.io/hHXARQ
10:30:35  <CIA-111>node: Maciej Małecki master * r39484f4 / (14 files):
10:30:35  <CIA-111>node: test tls: make tests use new `tls.connect` API
10:30:36  <CIA-111>node: Refs #1983. - http://git.io/EA6rpA
10:30:36  <CIA-111>node: Maciej Małecki master * r0321adb / doc/api/tls.markdown :
10:30:37  <CIA-111>node: tls doc: update docs to reflect API change
10:30:37  <CIA-111>node: Refs #1983. - http://git.io/ePttqA
10:31:05  * mrb_bk joined
10:37:20  * Raynos joined
10:39:06  * AndreasMadsen quit (Ping timeout: 240 seconds)
10:42:10  * travis-ci joined
10:42:10  <travis-ci>[travis-ci] joyent/node#201 (master - 0321adb : Maciej Małecki): The build is still failing.
10:42:10  <travis-ci>[travis-ci] Change view : https://github.com/joyent/node/compare/8e5674f...0321adb
10:42:10  <travis-ci>[travis-ci] Build details : http://travis-ci.org/joyent/node/builds/492856
10:42:10  * travis-ci part
11:07:59  * AndreasMadsen joined
12:10:24  * slaskis quit (Quit: slaskis)
12:10:28  * TkTech joined
12:17:07  * TkTech quit (Ping timeout: 240 seconds)
12:17:15  * TkTech joined
12:17:24  * TkTech part
12:26:19  * paddybyers_ joined
12:26:54  * paddybyers_ quit (Client Quit)
12:27:25  * paddybyers joined
12:41:15  * mraleph quit (Quit: Leaving.)
12:47:05  * AndreasMadsen quit (Remote host closed the connection)
13:13:23  * AndreasMadsen joined
13:24:33  * AndreasMadsen quit (Remote host closed the connection)
14:23:41  <indutny>oh crap
14:23:54  <indutny>Pita published collision generator https://github.com/Pita/V8-Hash-Collision-Generator/blob/master/generateCollisions.c
14:27:57  * slaskis joined
14:40:52  <`3rdEden> indutny it's already gone
14:44:39  <mmalecki>hey, what's up with `errno` variable?
14:45:48  <mmalecki>oh, my patch got merged, cool
14:50:17  * AndreasMadsen joined
14:56:56  * paddybyers quit (Quit: paddybyers)
15:07:57  * bnoordhuis joined
15:09:19  <indutny>what the heck is EPIPE?
15:09:20  <indutny>bnoordhuis: %
15:09:24  <indutny>bnoordhuis: ^
15:09:40  <indutny>Error: write EPIPE at errnoException (net.js:640:11) at Object.afterWrite [as oncomplete] (net.js:478:18)
15:09:59  <indutny>on 0.7.0-pre
15:11:17  <indutny>syncing my 0.7.0-pre with master
15:11:23  <indutny>hope that'll fix problem
15:14:20  <indutny>bnoordhuis: nope, still fails
15:17:32  <indutny>bnoordhuis: ok, I lifted it up from spdy to express level
15:17:40  <indutny>bnoordhuis: and express is kinda handling it fine :D
15:25:31  * AndreasMadsen quit (Remote host closed the connection)
15:31:14  <mmalecki>indutny: I know we've had this problem in node-http-proxy
15:31:25  <mmalecki>doesn't node throw this when stream you're piping to gets closed?
15:31:26  <indutny>mmalecki: great, and how did we solve it?
15:31:36  <mmalecki>indutny: we didn't - we're still using on('data')
15:31:37  <indutny>mmalecki: aah, makes sense
15:31:46  <mmalecki>and haibu had something like that, let me see
15:45:42  * AndreasMadsen joined
15:49:20  <indutny>bnoordhuis: is EPIPE related to stream piping?
16:16:21  <txdv>electronic pipe
16:17:40  * paddybyers joined
16:47:48  * ErikCorry2 quit (Ping timeout: 258 seconds)
17:05:53  * dshaw_ quit (Quit: Leaving.)
17:34:37  * isaacs joined
17:44:40  <rmustacc>indutny: you create a pipe no?
17:44:59  <rmustacc>All it means is that the other end of the pipe disappeared.
17:45:23  <rmustacc>You get the errno instead of a signal because we sigignore SIGPIPE
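Because SIGPIPE is ignored, the broken pipe shows up as an `'error'` event on the stream with `err.code === 'EPIPE'`, and an unhandled `'error'` event throws (as in the stack trace above). A minimal sketch of tolerating it; `guard` and `isBrokenPipe` are made-up helper names, not node APIs:

```javascript
// Hedged sketch: treat a peer that vanished mid-write (EPIPE) as routine
// cleanup instead of letting the unhandled 'error' event crash the process.
function isBrokenPipe(err) {
  return err != null && err.code === 'EPIPE';
}

function guard(socket, onGone) {
  socket.on('error', (err) => {
    if (isBrokenPipe(err)) onGone(err);  // other end of the pipe went away
    else throw err;                      // anything else is still fatal
  });
}
```

The handler only swallows EPIPE; rethrowing everything else keeps genuinely unexpected errors loud.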
17:49:22  * perezd joined
17:50:21  <mmalecki>yay, I finally understand it, thanks rmustacc :)
17:55:43  * mikeal quit (Quit: Leaving.)
18:00:01  * mikeal joined
18:00:19  * slaskis quit (Quit: slaskis)
18:03:59  * mikeal quit (Ping timeout: 240 seconds)
18:25:23  * mikeal joined
18:25:55  * dshaw_ joined
18:34:44  * mikeal quit (Quit: Leaving.)
18:35:45  * mikeal joined
18:35:55  <indutny>:)
18:35:59  <indutny>sorry, I was afk
18:36:04  <indutny>rmustacc: thanks
18:41:08  * AndreasMadsen quit (Read error: Connection reset by peer)
18:41:15  * AndreasMadsen joined
18:43:58  * AndreasMadsen quit (Remote host closed the connection)
18:49:35  * AndreasMadsen joined
19:22:15  * TkTech joined
19:24:05  * TkTech part
20:28:09  * isaacs quit (Quit: isaacs)
20:34:13  <CIA-111>node: Ben Noordhuis v0.6 * r472a72d / (Makefile configure):
20:34:13  <CIA-111>node: build: honour the PYTHON environment variable
20:34:13  <CIA-111>node: Overrides the path to the python binary. Defaults to `python`. - http://git.io/ICQHYw
20:41:17  * travis-ci joined
20:41:17  <travis-ci>[travis-ci] joyent/node#202 (v0.6 - 472a72d : Ben Noordhuis): The build passed.
20:41:17  <travis-ci>[travis-ci] Change view : https://github.com/joyent/node/compare/9ef3c62...472a72d
20:41:17  <travis-ci>[travis-ci] Build details : http://travis-ci.org/joyent/node/builds/494476
20:41:17  * travis-ci part
21:12:32  * perezd quit (Quit: perezd)
21:12:48  * AndreasMadsen quit (Remote host closed the connection)
21:29:58  * mikeal quit (Quit: Leaving.)
21:30:26  * mikeal joined
21:41:27  * mjr_ quit (Quit: mjr_)
22:25:15  <bnoordhuis>indutny: https://github.com/joyent/node/pull/2450 <- is that PR ready for merging? if so, can you squash the commits?
22:55:47  * dshaw_ quit (Quit: Leaving.)
23:00:48  * brson joined
23:20:00  * mikeal quit (Quit: Leaving.)
23:24:41  * dshaw_ joined
23:46:22  * `3rdEden quit (Quit: ZZZZZzzz)
23:57:52  * mikeal joined
23:58:09  * travis-ci joined
23:58:10  <travis-ci>[travis-ci] joyent/node#203 (v0.6 - 2808141 : Ben Noordhuis): The build passed.
23:58:10  <travis-ci>[travis-ci] Change view : https://github.com/joyent/node/compare/472a72d...2808141
23:58:10  <travis-ci>[travis-ci] Build details : http://travis-ci.org/joyent/node/builds/495185
23:58:10  * travis-ci part
23:59:08  * travis-ci joined
23:59:08  <travis-ci>[travis-ci] joyent/node#204 (v0.6 - 9a79bb6 : Ben Noordhuis): The build passed.
23:59:08  <travis-ci>[travis-ci] Change view : https://github.com/joyent/node/compare/2808141...9a79bb6
23:59:08  <travis-ci>[travis-ci] Build details : http://travis-ci.org/joyent/node/builds/495196
23:59:08  * travis-ci part