09:33:04  <thhp> hello, can anyone tell me whether the uv thread pool APIs (specifically uv_queue_work) are thread-safe?
09:50:10  <saghul> thhp: nope, it's not thread-safe
09:50:23  <saghul> you need to call uv_queue_work from the loop thread
09:50:42  <saghul> the only thread-safe API function involving the loop or any handle is uv_async_send
09:51:20  <thhp> thanks saghul
10:33:27  <txdv> a question that is asked all the time
10:49:00  <thhp> txdv: I think what misled me was the docs, which state that there's one global thread pool shared by all the uv main loops
10:49:29  <thhp> which made me mistakenly assume that the threadpool API would be thread-safe
10:49:52  <thhp> maybe there is an FAQ entry I missed :-|
11:16:33  <txdv> the thread pool is per loop
11:16:48  <txdv> where exactly is it written that there is one global thread pool?
11:17:01  <txdv> can you point me to the file and line?
11:17:10  <txdv> or to the documentation site? wherever you read it
11:26:08  <thhp> txdv: I was looking in doc/ in my libuv git tree, but it's the same thing here: http://docs.libuv.org/en/v1.x/threadpool.html
11:26:34  <thhp> paragraph 3 in the top section titled "Thread pool work scheduling"
11:27:17  <thhp> "The threadpool is global and shared across all event loops"
11:33:06  <txdv> hm
11:33:13  <txdv> let me check
11:34:21  <txdv> that documentation is wrong
11:34:42  <txdv> why would uv_queue_work need a loop argument if the thread pool were global
11:34:46  <txdv> although
11:36:38  <txdv> saghul: is there a thread pool per loop, or is there a global thread pool?
11:39:55  <txdv> thhp: I just checked the code
11:39:58  <txdv> there is one global thread pool
11:40:27  <thhp> ack
11:40:34  <txdv> but that doesn't make uv_queue_work thread-safe
11:40:44  <txdv> as in, if you call it from one loop's thread on another loop, it won't work
11:40:52  <thhp> no, I understand that
11:41:18  <thhp> my mention of the global thread pool earlier was really just to explain where my confusion arose
11:42:12  <thhp> but I think it's clear now, thanks :-)
11:44:19  <txdv> mentioning that it uses a global thread pool made you think it's thread-safe?
11:45:50  <thhp> it's more the mention of multiple uv loops: my naive thinking was "OK, I can call this from multiple event loops, hence the API should be thread-safe"
11:46:12  <thhp> on the basis that you'd need a thread per executing call to uv_run
11:47:16  <thhp> and based on a skim of the code (at least in my git tree, which I think is a little old now) it looks like the thread pool per se is probably thread-safe, but the loop modifications that uv_queue_work carries out are not
11:49:24  <txdv> https://github.com/libuv/libuv/pull/227
11:49:29  <txdv> yes, exactly
11:50:14  <thhp> cool, I think that makes things clear in the docs :-)
11:51:08  <thhp> I'm not sure if others asking about uv_queue_work were labouring under the same misapprehension, but hopefully this will help
20:05:52  <MI6> joyent/node: Robert Kowalski v0.12 * 5d821fe : doc: add explanations for querystring (+1 more commits) - http://git.io/ANy1
20:08:14  <Domenic> piscisaureus: byte streams question. It's never truly zero-copy, right, because you need to copy from kernel space to user space? How does that manifest in terms of syscalls? Is that automatically done by fread(3)?
20:10:00  <MI6> joyent/node: Julien Gilli refs/tags/jenkins-accept-pull-request-temp * 4fc03c6 : src: fix builtin modules failing with --use-strict - http://git.io/AN9M
20:10:40  <MI6> joyent/node: Julien Gilli refs/tags/jenkins-accept-commit-temp * 4eeb9e6 : src: fix builtin modules failing with --use-strict - http://git.io/AN9x
20:11:08  <piscisaureus> Domenic: well, in theory it would be possible for the kernel to read straight into a user buffer
20:11:18  <Domenic> interesting
20:11:27  <piscisaureus> Domenic: at least IOCP allows a model like that (although I don't think it happens in practice)
20:12:35  <piscisaureus> Domenic: in the same vein, the kernel can send data straight from user memory (but there too some limitations apply; it requires setting the kernel send buffer to 0 on Windows)
20:13:12  <piscisaureus> in reality it's hard to achieve true zero-copy because the kernel needs to build IP packets, it doesn't send raw data
20:13:49  <piscisaureus> Domenic: the unix poll() model always requires a copy between user and kernel space
20:13:58  <piscisaureus> except when splice() or sendfile() is used
20:15:24  <Domenic> piscisaureus: new question. when you fread into a buffer in a threadpool, the calling thread could in theory observe the buffer filling up, right? or is it atomic somehow?
20:16:03  <piscisaureus> Domenic: you're right, it's observable in theory
20:16:21  <Domenic> ok. this could make async read problematic.
20:16:41  <Domenic> (but there are workarounds)
20:16:42  <piscisaureus> Domenic: why?
20:17:05  <piscisaureus> Domenic: it would only be problematic if the user buffer is exposed to javascript before the operation starts (e.g. readInto)
20:17:10  <Domenic> because in browsers at least it's anathema to let JS observe that the universe is multithreaded. E.g. you shouldn't be able to see the values of variables changing from line to line
20:17:13  <Domenic> Right
20:17:14  <Domenic> Exactly
20:17:32  <piscisaureus> Domenic: also, in node this behavior is actually observable (with fs.read)
20:17:38  <Domenic> interesting, I was wondering about that
20:17:45  <Domenic> node has so many buffer copies, though, I would've thought that could be avoided :P
20:17:54  <piscisaureus> It doesn't, actually
20:18:09  <piscisaureus> there are no buffer copies except when a string is converted to a buffer or vice versa
20:18:21  <Domenic> hmm ok, well trevnorris was complaining about them, but maybe that was for http
20:18:30  <Domenic> what is https://github.com/libuv/libuv/blob/v1.x/src/unix/fs.c#L1030 doing then?
20:18:30  <piscisaureus> Domenic: or maybe streams2 made it worse. In node 0.6 there were no copies :)
20:18:40  <Domenic> yeah, I think streams2 did make things worse
20:18:48  <piscisaureus> Domenic: that line only copies the iovec
20:19:00  <piscisaureus> (e.g. it captures the pointers and lengths of buffers, but not the buffers themselves)
20:19:01  <Domenic> ah, I see
20:19:06  <piscisaureus> and only if you writev() with more than 4 buffers
20:19:26  <Domenic> where is the actual call to fread(3) hiding in libuv, btw?
20:19:40  <piscisaureus> Domenic: the call is read(), btw
20:19:49  <piscisaureus> Domenic: fread is for "high-level I/O" in C
20:19:50  <Domenic> oh, there it is
20:19:53  <Domenic> huh
20:20:23  <piscisaureus> fread() takes a FILE* as the first argument, whereas read() takes a file descriptor
20:20:44  <Domenic> makes sense
20:20:48  <piscisaureus> high-level I/O in C does userspace buffering and, on Windows, newline conversion
20:20:58  <piscisaureus> but libuv/node doesn't use that
20:21:05  <Domenic> how does fs.read implement its offset parameter?
20:21:11  <piscisaureus> with pread()
20:21:22  <Domenic> sorry, its position parameter
20:21:23  <Domenic> I see
20:22:52  <Domenic> so if you accept the constraint that JS should not observe buffers changing, that rules out naive async readInto, I am pretty sure. I have a workaround in mind, but I am curious what your thoughts are on where to go from there.
20:23:45  <piscisaureus> I am not sure that I really agree that observation is so problematic
20:24:11  <Domenic> it's a dealbreaker for browsers, I am pretty sure :-/
20:24:19  <Domenic> there was a huge fight over it regarding web audio
20:24:21  <piscisaureus> the write path is much more problematic, btw; you could imagine `sock.write(buf); buf[1] = 42`
20:24:45  <piscisaureus> Now it's undefined what gets written to the socket, it may or may not include 42
20:24:49  <Domenic> that's easy. I guess I'll give away my solution to answer it :). neuter the passed ArrayBuffer when you pass it to write
20:25:08  <piscisaureus> makes sense, but bad for performance and typical use cases
20:25:23  <Domenic> Why?
20:25:31  <piscisaureus> Domenic: for example, sock1.write(buf); sock2.write(buf); <-- can no longer do that
20:25:38  <Domenic> hmm
20:25:48  <piscisaureus> Domenic: what if I want to pre-load a file in memory and use the cached contents to serve a static file
20:26:00  <Domenic> good point
20:26:01  <piscisaureus> Domenic: "read-only" buffers would be much more useful in that regard
20:26:06  <Domenic> ugggh
20:26:23  <piscisaureus> Domenic: there's an old issue in node about it, but there was no v8 infrastructure (at least, not at the time)
20:26:34  <piscisaureus> ... to support it
20:27:04  <piscisaureus> Domenic: I think not supporting readInto initially is okay, actually
20:27:38  <piscisaureus> It's more of a "nice to have"; you can leave it out until people start clamoring for it
20:28:17  <Domenic> so you just support read(desiredNumberOfBytes)?
20:28:35  <piscisaureus> yeah, that works, right?
20:29:42  <Domenic> yeah
20:29:54  <Domenic> why did I want readInto again?
20:30:43  <Domenic> oh, I know
20:31:10  <Domenic> I have these three use cases that I am trying to use to test various designs: https://gist.github.com/domenic/e251e37a300e51c5321f
20:31:32  <Domenic> the first is about consolidating an entire file into a single array buffer
20:31:39  <Domenic> which I guess could be done with read(n)
20:31:45  <Domenic> the second two are about re-using ArrayBuffers
20:38:04  <piscisaureus> Domenic: I think node should probably move to documenting its low-level API (although I'd want it to be event-free, which is currently not the case)
20:38:13  <piscisaureus> which is close to POSIX
20:40:05  <piscisaureus> Domenic: indutny will be unhappy if we don't have readInto support, because his recent TLSWrap optimizations rely on it
20:40:20  <Domenic> yeah, it seems important
20:40:43  <Domenic> I think I have a fix, but it's pretty ridiculous
20:40:51  <Domenic> It relies on https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer/transfer
20:41:06  <piscisaureus> Domenic: the biggest issue with observable buffer changes is that it gives VM implementors a hard time
20:41:08  <piscisaureus> e.g.
20:41:16  <piscisaureus> while (buf[0] == 42) { ... }
20:41:21  <Domenic> you basically just keep transferring the ArrayBuffer's backing memory to a new ArrayBuffer all the time.
20:41:24  <Domenic> Oh jeez
20:41:25  <piscisaureus> they can optimize this assuming that buf[0] doesn't change
20:42:18  <piscisaureus> (in C this problem is solved by making the behavior in this case "undefined"; to actually observe it, buf[] would need to be declared volatile)
20:43:17  <piscisaureus> Domenic: yes, changing the backing memory is probably the cleanest
20:43:44  <Domenic> The user-facing API is pretty silly, I am writing up a few samples now.
20:48:42  <indutny> Domenic: why here?
20:49:02  <Domenic> indutny: why am I asking these questions in #libuv, or...?
20:49:05  <indutny> :)
20:49:07  <indutny> yeah
20:49:25  <indutny> I like the concept of readInto
20:49:26  <Domenic> they are kinda libuv-level, I felt.
20:49:53  <indutny> but
20:49:55  <indutny> there is a problem
20:50:03  <indutny> considering that it is async
20:50:09  <indutny> it requires allocation of the buffer ahead of time
20:50:17  <piscisaureus> indutny: what does?
20:50:21  <indutny> readInto
20:50:34  <Domenic> what's the alternative
20:50:49  <indutny> alloc/read :)
20:50:51  <indutny> as libuv does
20:50:59  <indutny> alloc is called right before the read
20:51:00  <piscisaureus> indutny: libuv should really shed this model :(
20:51:07  <indutny> piscisaureus: you have a better idea?
20:51:24  <piscisaureus> indutny: we should just support poll() and then try_read()
20:51:30  <indutny> yeah
20:51:33  <indutny> this would work too
20:51:44  <piscisaureus> indutny: and fall back to pre-allocated buffers only if there's no other option (for files, and on windows)
20:51:54  <Domenic> poll()/try_read() is one underlying source model. but files are the other interesting one.
20:52:46  <piscisaureus> indutny: in libuv the alloc_cb sometimes doesn't get called just before the read callback (but just *after* read_start, instead)
20:52:50  <piscisaureus> and synchronously, even
20:53:06  <piscisaureus> it's a weird api
20:54:02  <indutny> but it covers the differences between platforms
20:54:04  <indutny> and that's good
20:54:10  <piscisaureus> yes, it did the job
20:54:41  <piscisaureus> indutny: we should really move toward the dual async_read and poll/try_read internally, though
20:54:56  <piscisaureus> just because it's consistent and easy to reason about
20:55:04  <indutny> idk
20:55:08  <piscisaureus> and, if I'm not mistaken, equally performant
20:55:11  <indutny> I don't feel like we need it
20:55:21  <indutny> I like the idea of making read a request
20:55:29  <indutny> but separating one into three
20:55:34  <indutny> or two into three
20:55:58  <piscisaureus> well, we already support poll(), it's just a separate api now
20:56:13  <Domenic> so our status in streams right now is we went all in on poll()/try_read(), basically. and piscisaureus convinced me that fails badly for files. so now I need to fix it.
20:56:31  <indutny> haha
20:56:33  <piscisaureus> urr
20:56:34  <indutny> alloc/read
20:56:34  <piscisaureus> :)
20:56:36  <Domenic> I was hopeful we could just end up with async read. Why do you say we should do async read + poll()/try_read()?
20:56:36  <indutny> really )
20:56:47  <piscisaureus> I think async_read is perfect for js-land
20:56:50  <indutny> seriously
20:56:54  <indutny> alloc and read
20:56:57  <piscisaureus> it's just c-land where the issue arises
20:56:59  <indutny> or
20:57:01  <indutny> beforeRead
20:57:02  <indutny> afterRead
20:57:10  <indutny> where beforeRead supplies the data
20:57:16  <Domenic> alloc + poll + try_read is also in the running, yeah
20:57:16  <indutny> and afterRead takes the data
20:57:29  <piscisaureus> Domenic: so I stick to my opinion that all you need in javascript land is async read
20:57:32  <Domenic> but partially just because it's a smaller delta from our existing model, which is not a good bias to be making decisions on :P
20:57:43  <indutny> well
20:57:59  <indutny> gosh, I can't really negotiate :)
20:58:03  <indutny> see ya, going back to reading
20:58:08  * indutny &
20:58:11  <Domenic> nooo, your insight is valuable
20:58:24  <Domenic> ok then, piscisaureus, why is async read problematic in C land but not in JS land?
20:58:44  <piscisaureus> Domenic: because "buffer ownership" is not an issue in javascript.
20:59:11  <piscisaureus> Domenic: in c, async read would be fine too, if libuv were allowed to just malloc() read buffers and make the user responsible for freeing them
20:59:25  <piscisaureus> but this upsets real c hackers
20:59:45  <indutny> hahah
20:59:48  * indutny fg
20:59:52  <indutny> piscisaureus: noooo
20:59:59  <indutny> allocating is slow
21:00:15  <piscisaureus> ah, c'mon :)
21:00:15  <indutny> I'm actually struggling with the absence of .readInto in the streams3 API in io.js
21:00:21  <indutny> piscisaureus: c'mon
21:00:38  <indutny> piscisaureus: we can't compete with C/C++ right now
21:00:40  <indutny> mostly because of it
21:00:50  <indutny> I mean in stream parsing and stuff like that
21:00:59  <indutny> because it allocates a lot
21:01:31  <piscisaureus> Domenic just pointed out that pre-allocating buffers in javascript is no-go territory, because the buffer filling up (in another thread) would be observable in javascript
21:01:43  <indutny> yeah
21:01:47  <Domenic> I am contemplating fixing this by neutering then un-neutering the array buffer
21:01:54  <Domenic> un-neutering doesn't exist right now
21:02:00  <Domenic> so we'd have to fix that first
21:02:09  <piscisaureus> So not only is alloc/readInto or poll/try_read unnecessary, it can't even be done
21:02:20  <Domenic> also, if we really wanted this to work, it'd have to be neutering per subslice of the arraybuffer
21:02:37  <indutny> it would be interesting
21:02:40  <piscisaureus> With an async read api, the runtime can decide when to alloc the buffer, and can do it as late as it wants (and use a fast allocator if you think malloc is too slow)
21:02:46  <indutny> to create an interface
21:02:48  <indutny> for the allocator
21:02:57  <indutny> and supply it during the read request
21:03:03  <indutny> or stream init
21:03:17  * indutny & again
21:04:02  <piscisaureus> Domenic: copy-on-write buffers, maybe?
21:04:14  <Domenic> hmm
21:04:20  <Domenic> how does that help, exactly?
21:04:30  <indutny> it does not help at all
21:04:36  <indutny> same as unconditionally allocating
21:05:44  <piscisaureus> hmm, it doesn't help indeed
21:05:47  <piscisaureus> I was thinking about write
21:05:51  <piscisaureus> for reading it does nothing
21:06:18  <piscisaureus> Domenic: maybe don't support reading into a buffer slice?
21:06:37  <piscisaureus> No, that would be lame
21:07:00  <Domenic> Yeah. Again, I have these three use cases I am experimenting with. (If they are bad or there are better ones, let me know.) https://gist.github.com/domenic/e251e37a300e51c5321f
21:10:43  <piscisaureus> Domenic: the use cases are good. I am not cheering for rbs.ready / pause / resume, but that shouldn't be news.
21:11:57  <piscisaureus> Domenic: do you still live in ny?
21:12:00  <indutny> yay
21:12:12  <Domenic> piscisaureus: yes
21:12:58  <indutny> piscisaureus: going to NY anytime soon?
21:13:18  <Domenic> piscisaureus: rbs.ready + rbs.read/readInto + rbs.setAllocator seems *OK*, although perhaps not great. It might be equally bad to rbs.readInto(sourceBuffer, ...) -> Promise<{ newTransferredBuffer, bytesRead }>. But the moment you introduce pause + resume, you completely lose
21:15:32  <piscisaureus> Domenic: no, I was wondering if you were in the bay area
21:15:53  <piscisaureus> I'll have to invent an excuse to go to ny
21:16:49  <Domenic> ^_^
21:16:59  <Domenic> You're in Amsterdam, right? Or did you move to SF?
21:17:15  <piscisaureus> Domenic: so poll/read, readInto and write etc. can of course work for files if we're willing to put up with an extra data copy
21:17:42  <Domenic> yeah, but I mean, what's even the point then :P
21:17:43  <piscisaureus> So that could be the browser's solution, trading speed for predictability and saner semantics
21:17:52  <piscisaureus> yeah
21:17:54  <piscisaureus> :)
21:18:07  <piscisaureus> Domenic: I am in amsterdam still, but next week I'll move to sf
21:18:14  <Domenic> oh wow, ok!
21:18:22  <Domenic> congrats?
21:18:27  <piscisaureus> ?
21:18:30  <piscisaureus> who knows :)
21:18:41  <piscisaureus> sounds like a change of scenery
21:35:39  <Domenic> what is the shape of the allocator APIs in libuv/node? Roughly? E.g. what params does the allocator take, what does it return
21:36:23  <piscisaureus> Domenic: in libuv the user has to implement `uv_buf_t on_alloc(size_t suggested_size)`
21:36:38  <piscisaureus> Domenic: where uv_buf_t is a { char* pointer, size_t length } tuple
21:36:46  <Domenic> interesting
21:36:53  <Domenic> what is the interplay between suggested_size and length?
21:37:01  <Domenic> length >= suggested_size?
21:37:19  <piscisaureus> Domenic: so in javascript it's probably `function on_alloc(suggested_size); // must return a Buffer`
21:38:05  <piscisaureus> Domenic: suggested_size is purely a suggestion; the size you return is up to you. Realistically speaking, there may be a minimum length (8 or 16 bytes) sometimes.
21:38:16  <piscisaureus> Domenic: libuv may not use the entire buffer if it's too big
21:38:30  <Domenic> what happens if someone requests 1024 bytes and gets back a buffer 256 bytes long
21:38:37  <piscisaureus> the typical suggested_size is 65536 for tcp streams
21:39:01  <piscisaureus> Domenic: then they get back the buffer and libuv indicates it read only 256 bytes
21:39:20  <Domenic> ah, makes sense
21:41:02  <piscisaureus> Domenic: so the idea is that if libuv actually fills up the entire buffer, you probably allocated a too-small buffer
21:41:30  <piscisaureus> And when that happens, it triggers another alloc and another nonblocking read
21:42:49  <Domenic> btw here is the write problem in web specs: https://lists.w3.org/Archives/Public/public-html-media/2014Feb/0019.html
21:54:44  <piscisaureus> Ok, checking out.
21:54:46  * piscisaureus &
22:29:01  <mscdex> question: why does libuv stop the event loop when a handle becomes inactive (but still ref'd)?
22:29:58  <mscdex> I haven't tested on node v0.12/io.js yet, but I'm seeing that behavior on node 0.10
22:30:32  <mscdex> I verified that it's still ref'd by checking the handle's flags property
23:39:51  <jgi> robertkowalski: ping
23:40:12  <jgi> robertkowalski: I want to take a look at https://github.com/joyent/node-documentation-generator/pull/11 asap, but I've been too busy to do that :(
23:40:16  <jgi> robertkowalski: sorry for the delay