00:00:01  <isaacs>mikeal: yeah, fuck that.
00:00:05  <mikeal>yeah, i agree
00:00:15  <isaacs>mikeal: well... actually, maybe, i dunno.
00:00:20  <mikeal>pipe(x); then later: pipe(x) again should reset the reference
00:00:27  <isaacs>mikeal: we'd break the idea of using the built in .pipe() to send multiple things to one thing
00:00:30  <dominictarr>what if you check whether ('function' == typeof source.read) on pipe()?
00:00:40  <isaacs>but you could still create a joiner stream that can do it
00:00:49  <mikeal>there is also some concern with streams getting accidentally dumped and left open too long
00:00:50  <isaacs>dominictarr: yeah, we'd have something like that
00:01:02  <isaacs>but first we should explore how this would work if this was just the way the world works
00:01:09  <mikeal>right now
00:01:12  <dominictarr>multiple readable streams to one writable stream isn't very useful.
00:01:17  <mikeal>i will pipe again to the same stream
00:01:19  <dominictarr>so it's no big deal.
00:01:31  <mikeal>but i'm never emitting from the first one again
00:01:42  <mikeal>basically, i'm resetting the reference
00:02:22  <dominictarr>mikeal, your piping to the same stream twice?
00:02:28  <mikeal>yeah
00:02:31  <mikeal>this is what happens
00:02:40  <dominictarr>what is your usecase?
00:02:45  <mikeal>filed('./some.txt').pipe(request.put(url))
00:02:54  <dominictarr>ok
00:03:03  <mikeal>in the future, after stat, filed actually creates a new readStream and pipes it to the request
00:03:12  <dominictarr>sure
00:03:16  <mikeal>that filed object never emits data
00:03:20  <isaacs>mikeal: well, filed's .pipe() method isn't strictly stream.pipe()
00:03:28  <isaacs>it's "eventually call stream.pipe()"
00:03:36  <mikeal>well….
00:03:45  <mikeal>it emits a pipe event on the destination at the point you pipe it
00:03:51  <mikeal>or else things get horribly broken
00:04:08  <isaacs>oic, so the request gets 2 pipe events?
00:04:14  <mikeal>yup
00:04:32  <dominictarr>hang on, where is the 2nd pipe event?
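A sketch of the pattern being described, to make the two 'pipe' events concrete. It is illustrative only, not filed's actual source; the Filed name and internals here are assumptions.

    var fs = require('fs');
    var util = require('util');
    var Stream = require('stream').Stream;

    function Filed(path) {
      Stream.call(this);
      this.path = path;
    }
    util.inherits(Filed, Stream);

    // "eventually call stream.pipe()": emit 'pipe' on the destination right away,
    // then after stat() pipe a real fs.ReadStream, which emits 'pipe' a second time.
    Filed.prototype.pipe = function (dest) {
      var self = this;
      dest.emit('pipe', self);
      fs.stat(self.path, function (er, st) {
        if (er) return dest.emit('error', er);
        self.emit('stat', st);
        fs.createReadStream(self.path).pipe(dest);
      });
      return dest;
    };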
00:04:55  <isaacs>mikeal: what if filed emitted a 'stat' event that the request knew to listen to or whatever, and then just proxied the fs readable stream's 'readable' events, and .read() method?
00:05:23  <isaacs>this._x.on('readable', this.emit.bind(this, 'readable')); this.read = this._x.read.bind(this._x)

00:05:52  <isaacs>readable streams don't need .pause() and .resume() in this model.
00:06:00  <mikeal>filed already *does* emit a stat event that request listens for :)
00:06:02  <isaacs>or any state other than 'readable'
00:06:08  <isaacs>mikeal: ok :)
00:06:12  <mikeal>the problem is that
00:06:26  <mikeal>request objects don't require end() be called on them like core
00:06:28  <isaacs>(where this === filed thing, and _x === the fs stream)
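Filling in the proxy isaacs is sketching, assuming the inner fs stream already exposes the proposed read()/'readable' interface; FiledProxy and _x are illustrative names, not real code from filed.

    var fs = require('fs');
    var util = require('util');
    var EventEmitter = require('events').EventEmitter;

    function FiledProxy(path) {
      EventEmitter.call(this);
      var self = this;
      fs.stat(path, function (er, st) {
        if (er) return self.emit('error', er);
        self.emit('stat', st);
        self._x = fs.createReadStream(path);
        // forward readability instead of re-emitting 'data' chunks
        self._x.on('readable', self.emit.bind(self, 'readable'));
        self._x.on('end', self.emit.bind(self, 'end'));
        self.read = self._x.read.bind(self._x);
      });
    }
    util.inherits(FiledProxy, EventEmitter);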
00:06:47  <mikeal>so it sits on nextTick and if it's gotten a pipe() event it waits until there is a write() before it kicks off the real request
00:07:04  <isaacs>"sits on nextTick" <-- ?
00:07:07  <isaacs>what's that mean, exactly?
00:07:20  <isaacs>just one nextTick, or looping on it?
00:07:27  <mikeal>self.on('pipe', function (src) { self.src = src })
00:07:35  <isaacs>ok, right
00:07:54  <mikeal>nextTick(function () { if (!self.src) self.start() })
00:08:02  <isaacs>i've seen request/main.js a few times. i think i'm familiar with that :)
00:08:15  <mikeal>and in the write() method there is code that calls start() if it hasn't been started
00:08:20  <isaacs>right
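Roughly the deferred-start shape mikeal is describing; Request, start(), _req, and _started here are illustrative stand-ins, not request's actual source.

    var http = require('http');
    var util = require('util');
    var EventEmitter = require('events').EventEmitter;

    function Request(options) {
      EventEmitter.call(this);
      var self = this;
      self.options = options;
      self.on('pipe', function (src) { self.src = src; });
      // if nothing pipes in within a tick and nothing has been written, just go
      process.nextTick(function () {
        if (!self.src && !self._started) self.start();
      });
    }
    util.inherits(Request, EventEmitter);

    Request.prototype.start = function () {
      this._started = true;
      // auth signing, proxy handling, etc. would happen here, as late as possible
      this._req = http.request(this.options);
      if (!this.src) this._req.end();
    };

    Request.prototype.write = function (chunk) {
      if (!this._started) this.start();   // the first write() also kicks off the request
      return this._req.write(chunk);
    };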
00:08:47  <mikeal>getting two pipe events should be considered an "upgrade"
00:08:53  <mikeal>this is my source now
00:08:57  <dominictarr>what does start() do?
00:09:05  <dominictarr>starts 'data'?
00:09:14  <isaacs>dominictarr: it calls .end() on the underlying http.request() object
00:09:18  <mikeal>no, it creates the underlying http.request() object
00:09:23  <isaacs>oh, right
00:09:25  <mikeal>and calls end
00:09:32  <isaacs>you don't even create it
00:09:45  <dominictarr>it mean's "I'm ready, when ever you are"?
00:09:46  <isaacs>i mean, creating it is just a JS object until you .end() it anyway
00:10:08  <mikeal>there's other shit too in start()
00:10:11  <isaacs>so, multiple piping is still just a kinda weird edge case anyhow
00:10:13  <mikeal>like doing all the auth signing
00:10:20  <isaacs>mikeal: right, and all that proxy mumbo jumbo
00:10:20  <mikeal>which needs to happen as late as possible
00:10:42  <mikeal>yeah
00:10:45  <mikeal>this is all i'm saying
00:10:48  <mikeal>with this read() refactor
00:11:04  <mikeal>if you pipe twice to one object, it's not parallel, it's an "upgrade"
00:11:12  <mikeal>and there is only ever one source stream
00:11:19  <mikeal>unless you write your own crazy shit
00:11:39  <dominictarr>there is nearly never any need to pipe twice to one object.
00:11:43  <isaacs>hm... i think we could make it work as a merge, as well.
00:11:58  <isaacs>but it's a matter of which choice is best, not dictated by this refactor necessarily
00:12:06  <dominictarr>pipeing from one to many streams is useful though
00:12:33  <dominictarr>piping many to one is already broken, I think,
00:12:41  <mikeal>it shouldn't be
00:12:47  <dominictarr>because of removing _pipeCount
00:12:57  <isaacs>however, piping from one to many IS a bit trickier with the .read() method
00:13:06  <mikeal>we're going to just have to break it
00:13:41  <mikeal>the truth is, anyone doing it at scale had to write their own objects
00:13:44  <dominictarr>either way, with read() the stream must implement read() so it's de facto optional.
00:14:11  <mikeal>this does make it a little harder to write
00:14:14  <isaacs>here's the thing: if you create a new sort of userland writable stream, you have to implement .write() and .end(), and that's basically it.
00:14:30  <isaacs>.write() has to return false if you want to take a break, in which case you have to implement "drain" event
00:14:36  <isaacs>but that's all pretty straightforward
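A minimal userland writable stream along those lines, as a hedged sketch; SimpleWriter and its inner dest are assumptions (any stream with write()/end()/'drain' would do).

    var util = require('util');
    var EventEmitter = require('events').EventEmitter;

    function SimpleWriter(dest) {
      EventEmitter.call(this);
      this.dest = dest;
    }
    util.inherits(SimpleWriter, EventEmitter);

    SimpleWriter.prototype.write = function (chunk) {
      var self = this;
      var flushed = self.dest.write(chunk);
      // returning false asks the source to back off until we emit 'drain'
      if (!flushed) self.dest.once('drain', function () { self.emit('drain'); });
      return flushed;
    };

    SimpleWriter.prototype.end = function (chunk) {
      if (chunk) this.write(chunk);
      this.dest.end();
    };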
00:14:40  <dominictarr>you can implement read() in terms of on('data', ...
00:14:48  <isaacs>dominictarr: other way around.
00:14:50  <dominictarr>it could even be in Stream@...
00:15:01  <isaacs>currently, readable streams are much harder.
00:15:05  <mikeal>most people fuck up write()
00:15:19  <isaacs>mikeal: not as bad as they fuck up .pause() and .resume()
00:15:31  <dominictarr>I've been working on base classes that are easy to extend
00:15:46  <mikeal>yeah, stream.Stream needs to be an extendable base class
00:15:53  <mikeal>and we need createFilter()
00:16:03  <isaacs>the extension should be: inherits(MyThing, stream.Readable); MyThing.prototype.read = function (n) { return n bytes, or null }
00:16:06  <isaacs>and that's it
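Expanding that one-liner into a sketch, assuming the stream.Readable base class being proposed here (it is not in core at this point); Counter is an invented example.

    var util = require('util');
    var stream = require('stream');

    function Counter(limit) {
      stream.Readable.call(this);
      this._i = 0;
      this._limit = limit;
    }
    util.inherits(Counter, stream.Readable);

    // return some bytes, or null to mean "nothing right now, wait for 'readable'"
    Counter.prototype.read = function (n) {
      if (this._i >= this._limit) {
        this.emit('end');               // 'end' means no more is ever coming
        return null;
      }
      return new Buffer(String(this._i++));
    };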
00:16:06  <dominictarr>there are multiple types of stream.
00:16:08  <mikeal>if we do that, people will just write this stuff themselves less and we can get it right in one place
00:16:40  <dominictarr>a filter for example, should pass pause state straight through to the writer.
00:17:05  <mikeal>couldn't stream.Stream have a read() method by default, with a listener on "pipe" in the constructor that passes it through to the src?
00:17:23  <dominictarr>but a duplex stream shouldn't do that.
00:17:32  <isaacs>mikeal: i'm not sure about stream.Stream being a pass-through by default.
00:17:45  <mikeal>once we add pause() buffering, it needs to be
00:17:58  <mikeal>writes and reads get way more complicated
00:18:01  <isaacs>mikeal: i'm saying, let's get rid of pause()
00:18:02  <dominictarr>how the readable stream is coupled to the writable side is its own business.
00:18:16  <isaacs>and duplexes are almost never even remotely passthrough
00:18:33  <dominictarr>no, usually they are not.
00:18:37  <isaacs>any more than when i say "hello" to you on the phone, i'd expect to then hear my own voice.
00:18:49  <dominictarr>exactly
00:18:59  <mikeal>ok
00:19:04  <mikeal>so....
00:19:13  <mikeal>i write a socket server
00:19:19  <mikeal>i accept a connection
00:19:21  <dominictarr>there are useful passthrough/filter
00:19:23  <mikeal>i forget to do anything with the data
00:19:24  <dominictarr>streams though
00:19:32  <mikeal>that just fills up available memory?
00:19:42  <isaacs>mikeal: no, it never pulls it out of the tcp buffer
00:19:58  <isaacs>mikeal: so that fills up the tcp buffer in the networking layer, which stops receiving from the wire.
00:20:01  <mikeal>ok
00:20:05  <mikeal>i write a socket server
00:20:14  <mikeal>i go talk to my database after a new connection and then i pipe it
00:20:19  <isaacs>Socket.read() would call an underlying uv_tcp_read()
00:20:29  <mikeal>how does the object i'm piping it to know that the "readable" event already happened
00:20:35  <isaacs>mikeal: it doesn't.
00:20:47  <isaacs>mikeal: it just starts calling .read() and then passing the contents downstream
00:20:49  <mikeal>so, the default is to just always try to read
00:20:56  <mikeal>well
00:20:56  <isaacs>when .read() returns null, it says, "Ok, I'm over" and waits for "readable"
00:21:07  <mikeal>if read() returns null it should wait for a "readable" event
00:21:15  <isaacs>yes
00:21:21  <dominictarr>read() === null means over? or pause?
00:21:27  <isaacs>means pause
00:21:30  <isaacs>(conceptually)
00:21:37  <isaacs>"end" event means "no more coming"
00:21:42  <dominictarr>so what means end?
00:21:44  <mikeal>hrm…..
00:21:49  <isaacs>read() == null -> wait for "end" or "readable"
00:21:55  <isaacs>write() === false -> wait for "drain"
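A sketch of pipe() written against exactly those two rules (this is illustrative, not core's implementation):

    function pipe(src, dest) {
      function flow() {
        var chunk;
        while ((chunk = src.read()) !== null) {
          if (dest.write(chunk) === false) {
            // writer is backed up; resume once it drains
            dest.once('drain', flow);
            return;
          }
        }
        // read() returned null: wait for more data, or for the end
        src.once('readable', flow);
      }
      src.on('end', function () { dest.end(); });
      flow();
      return dest;
    }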
00:21:57  <mikeal>i don't like that the first thing new socket connections do is pause their input
00:22:03  <isaacs>mikeal: nothing is paused.
00:22:04  <dominictarr>what if there is a valid message that is "end"
00:22:07  <dominictarr>?
00:22:09  <isaacs>if you .read() it right away, you'll get the data out.
00:22:11  <mikeal>that means that every new connection accepted will suffer a roundtrip
00:22:19  <mikeal>i'm talking about at the network layer
00:22:20  <isaacs>mikeal: the thing is, we are just doing this for you now.
00:22:31  <isaacs>mikeal: yeah, but that's no different than what we do now
00:22:35  <mikeal>if we aren't putting data in to memory then that means we aren't taking it off the network
00:22:52  <isaacs>if you just call .read() right away, then that's exactly the same as what we have now
00:23:02  <mikeal>no we aren't, or at least we shouldn't be, the first TCP message we send should not be "stop sending me data"
00:23:15  <isaacs>mikeal: no, we never send a "stop sending me data" message
00:23:28  <mikeal>but i don't want to read, i want you to buffer some data while i talk to redis instead of eating a roundtrip :)
00:23:35  <isaacs>mikeal: yeah
00:24:10  <mikeal>so which is it?
00:24:11  <isaacs>today: sock.on("readable", function () { var c; while ((c = this.read()) !== null) { this.emit('data', c) } })
00:24:30  <mikeal>ok
00:24:42  <isaacs>so, we're always pulling all of the data we can out of the networking layer as fast as we can
00:24:48  <mikeal>right
00:24:53  <mikeal>we're moving away from that now
00:24:54  <isaacs>with a .read() method, you don't need to explicitly pause
00:25:01  <isaacs>you just read() what you can once you can
00:25:02  <isaacs>but not before
00:25:18  <isaacs>so, the TCP buffer in your networking layer can keep filling up while you talk to redis
00:25:23  <mikeal>right, but we pull everything off in the same tick
00:25:29  <isaacs>if that takes too long, then yeah, it'll push back on the router etc
00:25:40  <mikeal>wait
00:25:50  <dominictarr>isaacs, I think you should do that the other way
00:25:58  <mikeal>do i get a socket object before the TCP buffer is full?
00:26:09  <dominictarr>on('data', function (d) { buffer.push(d) })
00:26:14  <dominictarr>and then
00:26:15  <mikeal>and readable is then called when it's full
00:26:34  <dominictarr>read = function () { return buffer.shift() }
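dominictarr's fragments put together as one shim: read()/'readable' layered on top of an old-style 'data' stream, with the userland buffer isaacs objects to next. The addRead name is made up.

    function addRead(oldStream) {
      var buffer = [];
      oldStream.on('data', function (d) {
        buffer.push(d);                  // userland buffer holding every chunk
        oldStream.emit('readable');
      });
      oldStream.read = function () {
        return buffer.length ? buffer.shift() : null;
      };
      return oldStream;
    }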
00:26:35  <isaacs>dominictarr: that's an extra copy, though
00:26:47  <isaacs>dominictarr: also, an extra userland buffer
00:26:57  <isaacs>mikeal: you get a socket object as soon as the connection is established, before any bytes are in.
00:27:03  <isaacs>mikeal: in fact, it's probably not readable yet
00:27:03  <mikeal>ok
00:27:05  <dominictarr>yeah, but it's not as slow as network io.
00:27:16  <mikeal>ok
00:27:17  <isaacs>mikeal: "readable" emits whenever there are some bytes to consume
00:27:21  <dominictarr>if you don't want to buffer
00:27:29  <isaacs>you can then consume up to n bytes by doing read(n)
00:27:32  <dominictarr>stream.read = false
00:27:35  <isaacs>or just read() to say "give me what you've got"
00:27:37  <mikeal>this will be slow if we don't implement a passthrough by default that is just a big pointer list back to the origin
00:28:00  <mikeal>because we have the same number of events emitted and listened to
00:28:02  <isaacs>"pointer list back to the origin"?
00:28:04  <mikeal>as we did for "data"
00:28:12  <dominictarr>and then pipe checks if read is a function, else pipes off 'data'
00:28:12  <mikeal>and then we're *also* doing a function call
00:28:30  <mikeal>if it's a function call for every stream in the pipe chain even when they aren't mutating, that's gonna be slow
00:28:47  <isaacs>mikeal: i don't believe that'll be slow.
00:28:55  <isaacs>in fact, i have a feeling it'll be very fast.
00:29:18  <isaacs>and we won't be calling .pause() every time a write isn't flushable, but still getting the benefits of tcp backpressure
00:29:24  <mikeal>so
00:29:34  <mikeal>"readable" gets emitted and nobody calls read()
00:29:45  <mikeal>in a socket, we pause the network input?
00:30:07  <isaacs>mikeal: effectively.
00:30:13  <isaacs>but remember, *this is already how it works under the hood*
00:30:29  <isaacs>.pause() is just our way to tell libuv to stop calling read() every time the thing is readable
00:30:42  <mikeal>yeah, i was just trying to think if there was any performance increase we might see in filling up the window more
00:30:48  <isaacs>then the underlying network system buffer fills up, and stops consuming the input from the hardware
00:30:51  <mikeal>yeah, i get it
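What mikeal's "talk to the database first" scenario looks like under this model; the redis client, the key, and backend() are made-up placeholders.

    var net = require('net');

    net.createServer(function (sock) {
      // don't read anything yet; incoming bytes just sit in the kernel's
      // TCP receive buffer while we go ask redis (hypothetical client/call)
      redis.get('policy:' + sock.remoteAddress, function (er, policy) {
        if (er || policy === 'deny') return sock.destroy();
        sock.pipe(backend());            // now start pulling via read()/'readable'
      });
    }).listen(8000);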
00:31:00  <dominictarr>it actually sends a NAK packet, though right?
00:31:12  <mikeal>fuck, this will break all the codes :)
00:31:17  <mikeal>but i think i'm for it
00:31:19  <isaacs>yes, this will break all the codes :)
00:31:39  <isaacs>it's fairly easy to add some kludges to support "data" events if anyone listens to them, though
00:32:09  <isaacs>Stream.prototype.on = function (ev, fn) { if (ev === "data") { this._emitDataEvents(); } return EventEmitter.prototype.on.call(this, ev, fn) }
00:32:14  <dominictarr>but won't read's buffer get enormous?
00:32:20  <mikeal>i can't think of a way to polyfill for people who just listen to "data"
00:32:23  <mikeal>which is most people
00:32:29  <isaacs>it's a simple polyfill
00:32:37  <isaacs>you just listen to readable, and read() everything, and then emit "data" with the results
00:32:47  <isaacs>and we can just polyfill when a "data" listener is added
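Filling out the polyfill: isaacs named _emitDataEvents above, but its body here is a guess at what it would do, not node's eventual code.

    var Stream = require('stream').Stream;
    var EventEmitter = require('events').EventEmitter;

    Stream.prototype._emitDataEvents = function () {
      if (this._emittingData) return;
      this._emittingData = true;
      var self = this;
      self.on('readable', function () {
        var chunk;
        while ((chunk = self.read()) !== null) {
          self.emit('data', chunk);      // old-style push for legacy listeners
        }
      });
    };

    Stream.prototype.on = function (ev, fn) {
      if (ev === 'data') this._emitDataEvents();
      return EventEmitter.prototype.on.call(this, ev, fn);
    };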
00:33:34  <mikeal>so we're going to call that function that checks for listeners by name on every read
00:33:42  <dominictarr>I have an idea how we could do both apis
00:33:52  <isaacs>mikeal: nono, just polyfill the addListener method
00:34:06  <isaacs>i mean, it's either/or
00:34:06  <mikeal>oh god, that's so gross
00:34:08  <isaacs>yeah
00:34:09  <mikeal>why can't we just break node
00:34:10  <isaacs>it sucks :)
00:34:12  <mikeal>like in the good old days
00:34:15  <isaacs>if only
00:34:15  <mikeal>:)
00:34:23  <dominictarr>ryah would break it.
00:34:27  <mikeal>for 1.0 we're going to kill this polyfill
00:34:38  <mikeal>that's what 1.0 should mean
00:34:39  <isaacs>mikeal: 2.0
00:34:40  <mikeal>:)
00:34:56  <isaacs>1.9 maybe, i dunno
00:35:03  <mikeal>i think this is gonna break a lot
00:35:06  <dominictarr>I mean, 0.9 does not imply 1.0 is next
00:35:15  <mikeal>because we don't get out of pause/resume
00:35:31  <isaacs>dominictarr: no, but what we have in node js today will be in 1.0
00:35:32  <mikeal>because we have to do it in the polyfill
00:35:46  <isaacs>dominictarr: we're just going to get libuv equivalently stable, and unfuck http
00:36:13  <dominictarr>this buffering thing is pretty much a side issue for me, I want a consistent terminal state.
00:36:21  <dominictarr>that is my most important issue.
00:36:42  <dominictarr>you can always pipe to a buffering stream.
00:37:23  <isaacs>i've gotta run
00:38:06  <dominictarr>okay, as long as you realize that I'm gonna keep hassling you about this.
00:38:11  <isaacs>:D
00:38:17  <isaacs>dominictarr: yeah
00:38:21  <isaacs>this is what we're going to do for 0.9
00:38:36  <isaacs>but to support it, we need to get this interface into libuv
00:38:47  <isaacs>i know that piscisaureus wants to do some libuv cleanup stuff.
00:38:59  <dominictarr>https://gist.github.com/3117184
00:39:06  <isaacs>right now, libuv is very similar to our "emit a data event with whatever you have" interface.
00:39:07  <mikeal>i like that we're getting rid of pause
00:39:14  <isaacs>mikeal: yeah, pause is so hard.
00:39:20  <dominictarr>these are my thoughts for streams from a userland perspective
00:39:26  <mikeal>this works more easily
00:39:34  <mikeal>less application code
00:39:42  <mikeal>and node just figures out what you want to do
00:40:14  <isaacs>yep
00:40:20  <isaacs>i like the symmetry, also
00:40:27  <dominictarr>what do you mean, getting rid of pause?
00:40:48  <isaacs>you implement read()/"readable"/"end" for readable streams, and you implement write()/end()/"drain" for writable streams
00:41:13  <isaacs>dominictarr: you don't pause(), you just fail to read() and it's the same effect
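The symmetry, written out as duck-typed stubs (illustrative shapes only, not an API in core):

    // readable side: pull-based
    var readableShape = {
      read: function (n) { /* return up to n bytes, or null to wait for 'readable' */ }
      // emits: 'readable', 'end'
    };

    // writable side: push-based with backpressure
    var writableShape = {
      write: function (chunk) { /* return false to ask the source to wait for 'drain' */ return true; },
      end: function (chunk) { /* no more writes coming */ }
      // emits: 'drain', 'close'
    };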
00:41:29  <dominictarr>hmm, I'll have to think about that.
00:41:42  <isaacs>ok, i'm out. have a nice day :)
00:41:45  * isaacsaway
00:44:41  <dominictarr>mikeal, I didn't mean that unpipe is part of the API,
00:44:53  <dominictarr>I just meant that it's possible to unpipe
00:45:18  <mikeal>we need an unpipe
00:46:16  <dominictarr>yeah, at the moment destroy() functions like an unpipe
00:46:38  <dominictarr>because 'close' is not propagated sourcewards.
00:46:55  <dominictarr>also, 'error'.
00:47:04  <dominictarr>just leaves the pipeline hanging
00:59:24  <dominictarr>mikeal, I think the important thing here is to propagate the termination of the stream.
00:59:50  <dominictarr>'error' should trigger destroy()
00:59:58  <dominictarr>and so should 'close'
01:03:32  <mikeal>right
01:03:50  <mikeal>'close' event should never come before 'end'
01:03:58  <mikeal>so either 'end' or 'error' should always happen
01:04:19  <dominictarr>yeah.
01:04:41  <dominictarr>that is what I've found works best with the current pipe()
01:05:15  <dominictarr>semantically, 'close' just means end() && 'end'
01:05:29  <dominictarr>from the perspective of pipe()
01:06:22  <dominictarr>you could separate 'close' into two events, one for the readable, and one for the writable side...
01:06:57  <dominictarr>but that seems quite complex to me right now, and the use-case seems minimal.
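A userland sketch of the propagation being asked for here (core's pipe() does not do this at the time): 'error' and 'close' on either side tear down both sides so the pipeline never hangs.

    function pipeWithCleanup(src, dest) {
      src.pipe(dest);
      function teardown() {
        if (typeof src.destroy === 'function') src.destroy();
        if (typeof dest.destroy === 'function') dest.destroy();
      }
      // propagate termination both ways: 'error' and 'close' trigger destroy()
      src.on('error', teardown);
      src.on('close', teardown);
      dest.on('error', teardown);
      dest.on('close', teardown);
      return dest;
    }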
01:07:18  <SubStack>it's like all this guy does is spam people about x-tag https://twitter.com/csuwldcat
01:07:56  <SubStack>considering clicking "report for spam"
01:08:21  <dominictarr>SubStack, he likes the seahawks, what do you expect?
01:08:41  <SubStack>what is that even, some kind of sports?
01:08:55  <jesusabdullah>he looks like a wldcat from here
01:09:09  <jesusabdullah>SubStack: Yeah, the Seattle Seahawks
01:09:25  <jesusabdullah>I forget which sports they play honestly, I think it's football
01:09:51  <chapel>not fooseball?
01:10:06  <dominictarr>It sounds like a "dad rock" band
01:10:27  <dominictarr>as my friend distateradio says.
01:10:46  <jesusabdullah>Oh, no, they couldn't even HANDLE real rock n' roll, kid
01:11:19  <jesusabdullah>"Seahawks" sounds closest to a mash-up between Great White and The Scorpions
01:11:30  <jesusabdullah>maybe a dash of Meatloaf
01:11:37  <dominictarr>or it could be a $CITYNAME $SPORTSTEAM
01:11:44  <dominictarr>who knows
01:11:45  <dominictarr>?
01:12:04  <jesusabdullah>It's definitely a $CITYNAME $TEAMNAME
01:12:25  <jesusabdullah>$SPORTSTEAM is the better variable name
01:13:10  <dominictarr>whatever
01:13:27  <dominictarr>I mean "I STAND CORRECTED"
01:13:49  <jesusabdullah>oho
01:15:17  <dominictarr>it would be a great dad rock band name.
01:15:26  <SubStack>HOW ABOUT THAT LOCAL SPORTS TEAM? I AM TRYING TO MAKE SMALLTALK HERE. IT IS VERY DIFFICULT>
01:15:27  <LOUDBOT>DONT PANIC
01:15:36  <dominictarr>only problem is that no one is starting new dad rock bands
01:15:52  <dominictarr>they are all founded in the 70's
01:16:41  <dominictarr>next person to start a band your dad likes === "big hit"
01:41:58  <jesusabdullah>What about Kid Rock
01:42:17  <jesusabdullah>that "ride free" album or whatever's totally a Lynyrd Skynyrd style album
01:45:04  <dominictarr>sure, if your dad can get over the hip hop thing.
02:25:57  <jesusabdullah>dominictarr: Yeah, you just ignore the early albums XD
02:26:38  <AvianFlu>I NEVER MAKE SMALL TALK IN SUCH LARGE LETTERS
02:26:40  <LOUDBOT>YOU DO NOT FLOW, YOU ARE NOT THERE, YOU DON'T EXIST TO THE WORLD
02:32:18  <dominictarr>FUTURE CLASSIC
02:32:19  <LOUDBOT>ACHIEVEMENT UNLOCKED: RUTHLESS REFACTOR
02:49:17  <isaacs>mikeal: https://gist.github.com/3179964
02:50:21  <isaacs>dominictarr: ^
02:59:36  <AvianFlu>LOUDBOT: twitlast
02:59:36  <LOUDBOT>AvianFlu: http://twitter.com/loudbot/status/228323434193620992 (HighBit/##church-of-loudbot)
03:05:13  <isaacs>dominictarr, mikeal: another bonus about this approach: no need for a random readable/writable flag.
03:05:20  <isaacs>DONT ASK PERMISSION, JUST DO IT!
03:05:21  <LOUDBOT>HE IS NOT VERSION 2
03:05:38  <isaacs>if it has a read function, it's readable. if it has a write method, it's writable.
03:07:50  <SubStack>YES
03:08:14  <SubStack>isaacs: I like this idea of dropping readable/writable in favor of feature detecting write/read greatly.
03:08:32  <isaacs>SubStack: also, there's way less state to maintain here.
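The feature-detection isaacs describes, as a couple of helpers (a sketch of the check, not core's code):

    function isReadable(s) { return !!s && typeof s.read === 'function'; }
    function isWritable(s) { return !!s && typeof s.write === 'function'; }

    // e.g. a pipe() could guard with:
    //   if (!isReadable(src)) throw new TypeError('cannot pipe from a non-readable stream');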
03:09:59  <isaacs>https://github.com/isaacs/readable-stream
04:14:09  <Raynos>I finally got round to buying a bike
04:14:14  <Raynos>I wonder whether It will be good
05:30:13  <SubStack>Raynos: excellent!
07:13:40  <dominictarr>isaacs, isn't it gonna be slower to _always_ buffer?
07:13:52  <dominictarr>you only really need that on a subset of streams
10:21:45  <SubStack>http://browserling.com:9005/
10:22:09  <SubStack>https://github.com/substack/graph-stream
11:42:19  <SubStack>dominictarr: http://browserling.com:9005/
11:42:30  <SubStack>https://github.com/substack/graph-stream
12:21:57  <dominictarr>SubStack, OHH AWESOME!
12:23:19  <dominictarr>SubStack, what is the data?
12:27:31  <SubStack>random sampling of some crime data from alameda county
12:27:58  <SubStack>it's random sampling because crunching through the whole dataset finishes too quickly
12:30:16  <SubStack>it's just a running total of some absolute counts for now
12:30:21  <SubStack>fancier stuff can come later
12:40:33  <dominictarr>a moving graph for like, current rates and stuff would be great too.
14:14:12  <tblobaum_>graph-stream, pretty cool
17:05:52  <isaacs>dominictarr: making it buffer doesn't make it slower.
17:06:05  <isaacs>dominictarr: preventing unnecessary buffering isn't about speed, it's about memory utilization.
17:06:24  <isaacs>dominictarr: in general, though, buffering tends to make it *faster*, until you start swapping
17:06:38  <isaacs>dominictarr: but, this isn't buffering any more than the current implementation.
17:07:01  <isaacs>talked to piscisaureus about it, though. it might be tricky to implement at the low-level on windows. we don't want shims all over the place; libuv should work this way.
22:03:02  <isaacs>SubStack: https://new.npmjs.org/keyword/browserify
22:50:42  <SubStack>sweet
22:50:55  <SubStack>for isaacs's thing I mean