00:00:00  * ircretary quit (Remote host closed the connection)
00:00:08  * ircretary joined
00:01:31  <jcrugzz>ls
00:01:44  * mikolalysenko quit (Ping timeout: 255 seconds)
00:02:57  <jcrugzz>oh this isnt a terminal haha. and owen1, feel free to let me know if you have any questions regarding godot :)
00:05:03  <owen1>wolfeidau: can u elaborate about that flow? syslog (had to google for it) is a unix tool ('logger Hello') that sends messages to /var/log/system.log. i would like to understand the chain u wrote, from my node app up to syslog.
00:05:51  <owen1>jcrugzz: just read the blog from nodejitsu about godot. is it being used outside of nodejitsu?
00:06:16  * shuaib quit (Ping timeout: 256 seconds)
00:06:25  <wolfeidau>owen1: I have stuff on a number of services so i send all the "logging" to a central syslog server for archiving and feeding into things like godot or sending to logging services like loggly or papertrail
00:06:27  <owen1>it will be cool to have 1 dashboard for the entire system - uptime/cpu/hard drive etc
00:06:52  * shuaib joined
00:07:11  <wolfeidau>owen1: Which blog post?
00:07:42  <owen1>wolfeidau: http://blog.nodejitsu.com/waiting-for-godot
00:08:38  <jcrugzz>owen1: some. https://github.com/kessler-y/godot-dash for example has popped up recently. I've been trying to make some time to get better docs so its easier to understand
00:09:00  <jcrugzz>finishing up the internal stuff thats based on it :)
00:09:59  * tmcw joined
00:10:29  <wolfeidau>That is an awesome post
00:12:49  <owen1>wolfeidau: so in your node apps, instead of console.log and console.error do you var syslog = require('syslog'); var logger = syslog.createClient(514, 'localhost'); logger.info('Hello') ?
00:13:33  <owen1>jcrugzz: a screenshot would be nice for godot-dash, since it's a UI app
00:13:45  <owen1>it will save people time
00:14:15  <wolfeidau>owen1: Yeah some do, others are hosted on services which take stdout and send it to us via syslog
00:14:39  <wolfeidau>owen1: We typically use winston for logging
00:14:58  <wolfeidau>So just add a syslog plugin
00:18:47  <jcrugzz>owen1: nudge kessler about that cause yes that would be awesome :).
00:20:12  <jcrugzz>ill be around a bit later guys
00:22:57  <owen1>wolfeidau: let's see if i got it. u have a central syslog server, log.foo.com, that multiple 'clients' send their logging to. inside foo you monitor /var/log/system.log and based on some regex you say: 'hi, this line should be sent to godot, this line should be sent to graphite, this one to atlasboard'?
00:24:42  <wolfeidau>owen1: Yes pretty much, atm I am tailing a log file but i would like to move this to using unix sockets in the future
00:25:07  * jcrugzz quit (Ping timeout: 264 seconds)
00:26:39  <owen1>wolfeidau: what app do u use to read this log file? just some simple node script that accepts stdin and based on a regex do stuff?
00:27:56  <wolfeidau>owen1: yeah, i wrote a module to do that but it is pretty rough, as tailing files in node, especially ones that are rotated, is quite difficult
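A minimal sketch of the regex routing being described: classify each syslog line and pick a destination. All names here (`routes`, `routeLine`, the patterns) are invented for illustration; the actual module wolfeidau wrote isn't named in the log.

```javascript
// Hypothetical regex-based log router, per the flow described above.
var routes = [
  { pattern: /metric:/,  target: 'godot'    },  // metric lines go to godot
  { pattern: /\bcpu\b/i, target: 'graphite' },  // resource stats go to graphite
];

function routeLine(line) {
  for (var i = 0; i < routes.length; i++) {
    if (routes[i].pattern.test(line)) return routes[i].target;
  }
  return 'archive'; // default: leave it in the log file
}

// e.g. feed it lines from a tailed file or stdin:
// require('readline').createInterface({ input: process.stdin })
//   .on('line', function (line) { console.log(routeLine(line), line); });
```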
00:28:42  <owen1>also, let's say u want to send heartbeat info and cpu/memory levels to godot. why not send directly from the host instead of sending to the central syslog?
00:29:04  <wolfeidau>I don't want the app to know about godot
00:29:40  <wolfeidau>If i decide to change from godot to something else i have to redeploy my app
00:30:15  <wolfeidau>Also i already have my app sending data to syslog for other things
00:32:02  <owen1>which package do u use? https://github.com/cloudhead/node-syslog
00:32:23  * defunctzombie changed nick to defunctzombie_zz
00:32:33  * shuaib quit (Ping timeout: 248 seconds)
00:32:56  <owen1>also, by log rotate do u mean - every night each 'client' sends its local syslog to the central syslog?
00:33:14  <owen1>sorry about the spam of questions
00:33:22  <wolfeidau>I use rsyslogd and logrotate (linux tool for log rotation on a nightly basis)
00:33:46  <owen1>let me look..
00:33:53  <wolfeidau>I believe we use https://github.com/indexzero/winston-syslog
00:33:54  <wolfeidau>atm
00:40:31  <owen1>does each client save the log locally and using logrotate you send each file to the central syslog server (the one with rsyslog)? if that's the case, u'r not having real-time notifications.
00:45:52  <owen1>wolfeidau: ^
00:47:09  <wolfeidau>owen1: no, all clients send to the central syslog server; logrotate is independent of the system and is just used to ensure the log files produced by the syslog server don't get too big
00:47:57  <owen1>oh, so it's realtime. interesting
00:48:40  <owen1>the nice thing about it is you can have your godot producers in one place.
00:49:54  <wolfeidau>owen1: Yeah it has its challenges, like redundancy, but it is simple
00:50:13  <wolfeidau>ideally I would have the syslog server setup as n+1
00:50:40  <wolfeidau>This would mean i have two syslog servers, one idling while the other is active
00:50:55  <wolfeidau>If the main one fails, the other one takes over
00:51:13  <owen1>is there a clustering built-in to rsyslog?
00:51:24  <owen1>or whatever u call that idea.
00:51:30  <owen1>hot-redundancy?
00:51:37  <owen1>not sure what to name that
00:52:56  <wolfeidau>owen1: No again i would keep that out of the application and use another component to do the switch
00:53:27  <wolfeidau>There are a few options which can do it, the hardest thing is sharing a file store
00:54:35  <owen1>wolfeidau: what application? i thought we are talking about the rsyslog. the central log server? we want to make sure we got more than one of those.
00:55:23  * jcrugzz joined
00:55:43  <wolfeidau>owen1: yeah rsyslog is a pretty simple service so if i was to disable the broken one and start another on another host i wouldn't miss much
00:56:31  <wolfeidau>There are a few linux tools that can do the switch based on a heartbeat
00:56:42  <owen1>maybe instead of rsyslog u can use a db (mongo/cassandra)? it might be easier since they are easy to replicate/auto failover
00:57:26  <wolfeidau>I would prefer to use those for the backend and have some way of making the append idempotent
00:58:31  <wolfeidau>having mongo or cassandra parse syslog entries would not be ideal
00:58:54  <owen1>yeah, it's not json
00:59:15  <wolfeidau>I want lots of simple single role things, not an "oracle" server :)
00:59:29  <owen1>the 'clients' on my system spit json info to a mongodb
01:00:04  <owen1>and every night i upload daily data from mongo to hadoop
01:00:27  <owen1>for archiving and maybe querying later on if needed
01:00:43  <wolfeidau>owen1: Yeah I prefer to stay well clear of mongo :)
01:01:05  <jcrugzz>mongo has always been kind of interesting to me
01:01:14  <jcrugzz>its like half way between sql and nosql
01:01:21  <owen1>try rethinkdb. i really want to try it soon
01:01:32  <owen1>they are a few weeks away from production ready
01:01:47  <wolfeidau>owen1: But yeah i try not to have my "clients" talk directly to my db
01:01:58  * shuaib joined
01:01:59  <jcrugzz>you want to analyze them first
01:02:01  <owen1>your db is rsyslog..
01:02:01  <wolfeidau>I prefer to have something in the middle
01:02:22  <wolfeidau>my db atm is the filesystem, and whatever i get rsyslog to send a copy of the data to
01:02:31  <wolfeidau>rsyslog is like mux demux
01:02:41  <owen1>WOW
01:02:47  <jcrugzz>wolfeidau: shouldnt you at least be using leveldb :p
01:02:51  <owen1>please elaborate
01:03:02  <owen1>about mux demux
01:03:08  <owen1>LOUDBOT?
01:03:23  <wolfeidau>jcrugzz: I do actually, some of the data going into my node service ends up in a leveldb store :)
01:03:58  <wolfeidau>rsyslogd can take log messages in and send them to a file, and or email them and or send them out a unix socket
01:04:27  <owen1>why do u call it rsyslogd, r u sure it's not rsyslog
01:04:44  <jcrugzz>wolfeidau: thats interesting. it is definitely a unique utility
01:04:47  <wolfeidau>as i am using syslog as a transport of errors, info and metrics they are essentially mux'd
01:05:17  <wolfeidau>and rsyslogd (sorry keep missing the d) acts as a demux
01:06:00  * defunctzombie_zz changed nick to defunctzombie
01:06:01  <wolfeidau>jcrugzz: classic unix, lots of regex, files, stuff streaming into other things :)
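A hypothetical rsyslog.conf fragment showing the "demux" role described above: one inbound stream fanned out to several destinations. Facilities, hosts, and paths here are all made up; real deployments vary widely.

```
# Hypothetical rsyslog.conf fragment: demux one stream to many outputs.
local0.*                    /var/log/apps/metrics.log    # archive to disk
local0.*                    @relay.internal:514          # forward via UDP to another box
:msg, contains, "FATAL"     /var/log/fatal.log           # split out by content
```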
01:07:29  * mikolalysenko joined
01:08:06  <jcrugzz>wolfeidau: sounds about right. So is your plan to send stuff from syslog to godot?
01:08:50  <wolfeidau>jcrugzz: yeah i pulled apart jesusabdullah's producer but i still need to understand more about these TTLs
01:09:29  <jcrugzz>wolfeidau: so the TTL is just how long you want the message to live (or how often it is emitted)
01:09:34  <wolfeidau>jcrugzz: I have a batch of metrics come in every 20 seconds, so i sort of need to pull them into a batch as an event, then send them through to an interpreter
01:10:08  <jcrugzz>wolfeidau: are you running calculations on these metrics over a 20 second window then?
01:10:09  <wolfeidau>jcrugzz: Yeah i can understand the emit thing if i am the one polling say CPU usage
01:10:22  <wolfeidau>jcrugzz: Down stream has already done that
01:10:43  <jcrugzz>otherwise you can just write to a client socket
01:10:47  <jcrugzz>if its not based off a TTL
01:10:59  <wolfeidau>So this is the "value" of those metrics for the 20 seconds before that log entry
01:11:06  <wolfeidau>jcrugzz: That is what i wanted!
01:11:16  <wolfeidau>Great idea
01:11:24  <jcrugzz>wolfeidau: yea you dont need to deal with the producer aspect then :)
01:11:29  <jcrugzz>this is what we do on our balancers
01:11:50  <wolfeidau>jcrugzz: aha well that makes it much easier :)
01:12:05  * mikolalysenko quit (Ping timeout: 246 seconds)
01:12:07  <wolfeidau>so i will go from text -> json then send them through the socket
01:12:08  <jcrugzz>yea sry i didnt fully get what you were doing last night, i was partially distracted
01:12:31  <wolfeidau>jcrugzz: np at all, i read a lot of code and picked up some ideas along the way
01:13:25  <wolfeidau>jcrugzz: Initially i will probably just use godot to relay data to graphite and alert based on threshold
01:13:43  <jcrugzz>so you just want to write JSON to the client.
01:13:52  <jcrugzz>and do some TCP framing if its a lot of messages
01:14:04  <wolfeidau>yeah I am cool with that
01:14:11  <wolfeidau>these are all very small messages
01:14:20  <jcrugzz>so you can write either an array or a single data object
01:14:24  <wolfeidau>just one metric, sort of like statsd
01:14:31  <wolfeidau>ok
01:14:38  <wolfeidau>yeah the array would be better
01:14:57  <wolfeidau>I get them in batches of 12 or so values
01:15:27  <jcrugzz>yea we do some detection on what it is you are trying to write to the socket
01:15:29  <jcrugzz>ok
01:15:57  <jcrugzz>https://github.com/nodejitsu/godot/blob/master/lib/godot/net/client.js#L107-L119
01:16:45  * dguttman quit (Quit: dguttman)
01:23:39  * thl0 joined
01:25:02  <owen1>speaking of statsd, how does it compare to wolfeidau's architecture?
01:26:50  <wolfeidau>owen1: I use statsd atm, but it is difficult to work with if you want to manipulate the metrics as they accrue inside the service
01:27:06  <jcrugzz>statsd is simply just a data aggregator from my understanding
01:27:14  <wolfeidau>jcrugzz: that is awesome thanks!
01:27:21  <wolfeidau>jcrugzz: Yes exactly
01:27:23  <jcrugzz>yea doesnt seem ideal for calculations on the data
01:27:42  <jcrugzz>and np :)
01:28:42  <owen1>i understand statsd as a way to send data to a central place from multiple clients. sounds like syslogd.
01:29:45  <owen1>but i think it's used for specific events, not the entire server log
01:30:04  <owen1>counters, cpu, etc
01:32:27  * tmcw quit (Remote host closed the connection)
01:33:23  <wolfeidau>yes it is a cache for metrics
01:33:33  <wolfeidau>which are emitted based on a timer
01:34:02  <wolfeidau>I want something more flexible so hence me looking at godot
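A toy version of the statsd behaviour just described: cache metrics in memory and hand back an aggregate on each flush, which a timer would call. All names here are invented.

```javascript
// Toy metric cache, flushed on a timer like statsd. Invented names.
function MetricCache() {
  this.counters = {};
}

MetricCache.prototype.increment = function (name, value) {
  this.counters[name] = (this.counters[name] || 0) + (value || 1);
};

MetricCache.prototype.flush = function () {
  var snapshot = this.counters;
  this.counters = {}; // reset for the next window
  return snapshot;
};

// emitted on a timer, e.g.:
// setInterval(function () { send(cache.flush()); }, 10000);
```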
01:36:15  <jcrugzz>wolfeidau: also, we will be open sourcing our little process monitor written in libuv thats meant to be an agent for godot
01:36:19  <jcrugzz>so keep an eye out :)
01:37:07  <owen1>right, but u also monitor the logs from all your servers in one place, something that i don't think statsd is even related to.
01:39:08  <wolfeidau>jcrugzz: that sounds damn good :)
01:42:54  <wolfeidau>jcrugzz: I have one suggestion for a pull request already, will shoot that over once i test it thoroughly; big hurdle which i think you have fixed is getting data in :)
01:43:57  <jcrugzz>wolfeidau: well definitely poke me when its ready to be seen :)
01:49:48  * tmcw joined
01:50:23  * dguttman joined
01:58:59  * tmcw quit (Remote host closed the connection)
02:01:06  * tmcw joined
02:14:38  <substack>isaacs: how do I even write a streams2 thing?
02:14:44  <substack>there are no good examples anywhere
02:15:01  <substack>nothing in core and the readable-stream repo only has REALLY LONG examples
02:15:10  <substack>nobody is going to use this stuff if there are no good examples anywhere
02:16:24  <substack>the shortest example shows how to wrap a classic stream into a streams2 stream
02:16:38  <substack>that's a pretty harsh indictment of the api surface area
02:16:40  <jesusabdullah>thing is, I think it's supposed to be really easy >_<
02:16:53  <substack>easy things are short
02:16:57  <jesusabdullah>there's a built-in Transform class
02:16:59  <jesusabdullah>is what I mean
02:17:02  <substack>nothing I can find shows me a terse example
02:17:05  <substack>nobody should ever use that
02:17:07  <jesusabdullah>the examples are probably just not good
02:17:14  <jesusabdullah>use Transform? why?
02:17:17  <jesusabdullah>I use through
02:17:20  <jesusabdullah>I assume it's similar
02:17:22  <substack>people shouldn't use things that don't have good examples
02:17:30  <jesusabdullah>well
02:17:36  <jesusabdullah>someone has to write the examples I guess
02:17:42  <jesusabdullah>but I feels ya
02:18:34  <jesusabdullah>oh man, substack they're making a movie about Steve Jobs starring Ashton Kutcher, but he looks more like YOU than Schteve
02:18:52  <jesusabdullah>cause of his haircut and beard
02:20:51  <jesusabdullah>jcrugzz: eta on process monitor? I needs that
02:22:09  <jcrugzz>jesusabdullah: patience :). it should be real soon though
02:22:15  * defunctzombie changed nick to defunctzombie_zz
02:24:21  * mikeal joined
02:24:44  <mikeal>boom! I benchmarked couchup vs Apache CouchDB in write performance. Spoiler Alert: couchup won :) https://t.co/IKegolbXM5
02:25:09  <mikeal>that was suppose to be a link to https://twitter.com/mikeal/status/348989872175988737
02:25:14  <mikeal>i fail at copy/paste
02:25:53  <substack>mikeal: do you have the http part working yet?
02:26:02  <mikeal>yeah, i needed it for the benchmark
02:26:03  <substack>I was just poking around at using it with dominictarr's shadow-npm
02:26:08  <substack>sweeeet
02:26:11  <mikeal>well
02:26:14  <mikeal>its not done yet
02:26:21  <substack>it doesn't need to be done
02:26:28  <mikeal>like…. if you GET a document that isn't there you get a 500 instead of a 404 :)
02:26:36  <substack>who cares
02:26:38  <mikeal>also, i only have the document store working
02:26:42  <mikeal>views aren't working yet
02:26:42  <substack>it just needs enough to get npm working as a local repo
02:26:57  <mikeal>npm uses all kinds of retarded couchdb features i never want to support
02:26:59  <substack>but anyways this is rad
02:27:08  <mikeal>like list functions
02:27:14  <substack>I can just use node for those parts
02:27:19  <mikeal>but, if someone did implement them in couchup, they would be like 100x faster than couchdb
02:29:57  <wolfeidau>mikeal: I would love to see how it performed with SPDY vs HTTP :)
02:30:19  <mikeal>HTTP isn't the bottleneck
02:30:40  <mikeal>changing the way I did writes caused a 10x improvement in write performance
02:31:34  <wolfeidau>mikeal: are you on SSD?
02:32:30  <mikeal>yeah
02:34:14  <wolfeidau>It would be interesting to find out whether couchdb is just futzing around with the data or something else is slowing it down
02:34:15  <jcrugzz>mikeal: id like to see a test on a server :). But awesome stuff for sure
02:35:03  <wolfeidau>Main thing i would like to know is if they are both using fsync
02:39:37  * tmcw quit (Remote host closed the connection)
02:40:40  <mikeal>jcrugzz: yeah, definitely
02:41:11  <mikeal>wolfeidau: CouchDB has to do a read on every write
02:41:27  <wolfeidau>aha ok well that has to suck lol
02:41:32  <mikeal>couchup has a mutex in the write pipeline with an LRU
02:41:46  <mikeal>so for any write on *hot* documents it's much faster
02:42:01  <mikeal>but still, on new writes, which incur a read on both, couchup is a little faster
02:43:25  <substack>nice
02:43:36  <wolfeidau>yeah with levelup i found reading and writing at the head of the input stream was awesome
02:43:55  * mikolalysenko joined
02:44:04  <wolfeidau>sorry reading at the head of the stream
02:44:21  <mikeal>writing new data is much better than writing over existing data, or doing deletes
02:44:31  <mikeal>i've optimized the writer for that now
02:44:33  <wolfeidau>yeah
02:45:03  <mikeal>if i can find a way to speed up level-peek I can probably get better new write performance as well
02:45:08  <wolfeidau>also the amount of memory used to cache is tunable
02:45:52  <mikeal>yeah, i can also expose an option for the LRU cache, so you could cache more than a thousand revs at once
02:46:15  <mikeal>i'm going to try one more thing
02:46:36  <mikeal>i'm going to buffer all pending writes until a write returns
02:47:04  <mikeal>under load that'll bring me from 10 batch writes to 1
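The write-buffering just described can be sketched like this: while one batch is in flight, queue everything else, then flush the whole queue as a single batch when the write returns. All names are invented; couchup's real writer differs, and `store.batch` stands in for a levelup-style batch API.

```javascript
// Invented write-coalescer, per the idea described above.
function Coalescer(store) {
  this.store = store;
  this.pending = [];
  this.inFlight = false;
}

Coalescer.prototype.write = function (op) {
  this.pending.push(op);
  this._flush();
};

Coalescer.prototype._flush = function () {
  if (this.inFlight || this.pending.length === 0) return;
  var batch = this.pending;
  this.pending = [];
  this.inFlight = true;
  var self = this;
  this.store.batch(batch, function () {
    self.inFlight = false;
    self._flush(); // everything queued meanwhile goes out as one batch
  });
};
```

Under load, ten writes arriving while one batch is pending become one follow-up batch instead of ten.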
02:52:21  <substack>isaacs: another thing: the readStart() example is sooooo incomplete
02:52:30  <substack>it shouldn't even be in the docs if you can't even run it
02:52:48  <substack>mikeal: do you know of any short streams2 examples?
02:52:59  <substack>all I can find is incomplete fragments and really long, 100+ line examples
02:53:00  <mikeal>not offhand
02:53:21  <mikeal>the "simple" case is in the docs isn't it?
02:53:25  <substack>it's not
02:53:39  <substack>https://github.com/substack/node-browserify/pull/426
02:53:42  <substack>ignore that
02:53:46  <substack>http://nodejs.org/docs/latest/api/stream.html
02:54:01  <substack>it starts out with a BROKEN example
02:54:06  <substack>and then it has an INCOMPLETE example
02:54:37  <substack>then it has a 96-line example
02:54:57  <substack>then it has an actually short example, but it only shows how to WRAP an old stream to be a new one
02:55:12  <substack>then it has another stupid long example
02:55:24  <substack>that's it
02:55:46  <substack>the best example is how to keep using classic streams with the .wrap() function
02:55:51  <substack>that's really terrible
02:55:53  <jcrugzz>yea they arent too clear
02:55:57  <jcrugzz>this is true
03:02:44  * dguttman quit (Quit: dguttman)
03:03:49  <mikeal>substack: what about this http://nodejs.org/api/stream.html#stream_readable_push_chunk_encoding
03:03:55  <mikeal>the example there is pretty small
03:05:11  <substack>mikeal: that example isn't complete!
03:05:18  <substack>readStart() isn't implemented
03:05:23  <substack>what does it even do
03:05:29  <substack>readStop() likewise
03:05:36  <mikeal>no idea
03:05:56  <mikeal>like most core docs, they aren't written for humans
03:06:16  <substack>all the other core docs are pretty good
03:06:19  <substack>short, complete examples
03:06:25  <substack>this one is garbage
03:06:28  * timoxley joined
03:06:48  <substack>timoxley: check out the new trumpet http://github.com/substack/node-trumpet
03:07:18  * timoxley checking
03:07:35  <substack>everything is streaming yay!
03:07:42  <substack>and it uses the latest sax and is way simpler
03:08:15  * thl0 quit (Remote host closed the connection)
03:10:42  <timoxley>substack wow, big improvement
03:10:44  <timoxley>nice one
03:15:13  <substack>there's a stream-adventure challenge about it too
03:15:50  <substack>timoxley: will you be at nodeconf?
03:17:28  <timoxley>substack not this time unfortunately, I'm all conferenced out for a few months.
03:26:00  <substack>understandable
03:26:07  <substack>so many confs
03:30:08  * dguttman joined
03:33:37  <dlmanning>substack: perhaps it's not exactly the sort of example you're looking for, but did you see that there are examples in the readable-stream module?
03:34:10  <dlmanning>A few transform stream implementations iirc
03:40:46  * shuaib quit (Ping timeout: 256 seconds)
03:44:58  * dguttman quit (Quit: dguttman)
03:53:39  * dguttman joined
04:01:11  * shuaib joined
04:02:34  * defunctzombie_zz changed nick to defunctzombie
04:05:22  * timoxley quit (Quit: Computer has gone to sleep.)
04:08:47  <Domenic_>guys i am supposed to give a talk on "the state of JS" what should i talk about
04:09:55  <dlmanning>nested callbacks?
04:10:15  <substack>package management
04:10:24  <substack>and the userland ecosystem
04:11:37  <jesusabdullah>es6 obviously
04:11:40  <jesusabdullah>;)
04:14:01  <Domenic_>+1 to all those
04:14:32  <Domenic_>anything crazy on the horizon i should maybe highlight that not too many people know about?
04:14:43  <jesusabdullah>idk
04:14:48  <jesusabdullah>my head's swiss cheese right now
04:19:00  <dlmanning>Did the real modulo operator make it into ES6? Cause that's gonna change everything
04:19:21  <dlmanning>EVERY. THING.
04:19:21  <LOUDBOT>SELECT * FROM WHATEVER THE FUCK ETC ETC
04:20:25  <jesusabdullah>real modulo?
04:20:33  <Domenic_>lol?
04:20:56  <jesusabdullah>Call me crazy but js has a modulo operator?
04:23:41  <jesusabdullah>dlmanning: plz2b clarify
04:23:45  <dlmanning>No, it has a remainder operator
04:23:54  <jesusabdullah>aha
04:23:58  <jesusabdullah>subtle difference?
04:24:40  <jesusabdullah>cause for real non-zero numbers they seem equivalent
04:24:46  <jesusabdullah>and by real I mean natural
04:24:54  <jesusabdullah>natural numbers
04:25:14  <dlmanning>It takes the sign of the dividend instead of the divisor
04:25:21  <jesusabdullah>aha
04:25:50  <dlmanning>http://wiki.ecmascript.org/doku.php?id=strawman:modulo_operator
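The difference in one snippet: JS's % is remainder, taking the sign of the dividend, while a true modulo takes the sign of the divisor. `mod` below is a userland helper, since the strawman operator never shipped:

```javascript
// % is remainder in JS: the result takes the sign of the dividend.
// A true modulo (sign of the divisor) as a userland helper:
function mod(a, n) {
  return ((a % n) + n) % n;
}

console.log(-5 % 3);     // -2 (remainder)
console.log(mod(-5, 3)); //  1 (modulo)
```

For non-negative operands the two agree, which is why the difference is easy to miss.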
04:26:52  <Domenic_>I guess node bots is an important part of the state of JS, should be sure to mention that
04:29:16  * defunctzombie changed nick to defunctzombie_zz
04:36:14  * dguttman quit (Quit: dguttman)
05:03:31  * shuaib quit (Ping timeout: 276 seconds)
05:03:47  <jesusabdullah>substack: https://github.com/nategood/commando listed as being inspired by optimist :D
05:03:55  <jesusabdullah>substack: up to no good over here <_<;
05:04:20  * timoxley joined
05:21:56  * joliss joined
05:30:15  <jesusabdullah>jjjohnny: Thanks for introducing me to Republican Dalek
05:37:07  * mikolalysenko quit (Ping timeout: 264 seconds)
05:52:42  <mikeal>rvagg: hey man
05:52:49  * shama quit (Remote host closed the connection)
05:52:59  <mikeal>for some reason my batch del's aren't working
05:53:23  <dlmanning>mikeal: btw, jaws is awesome
05:54:07  <jesusabdullah>yeah dude I love that movie
05:54:21  <mikeal>dlmanning: thanks :)
05:54:32  <mikeal>we're using the shit out of it :)
05:54:40  <dlmanning>Best node web framework out there
05:54:48  <mikeal>its not a framework :P
05:54:53  <dlmanning>}:->
05:54:58  <mikeal>it's a cache :)
05:55:29  <dlmanning>Seriously, it's nice to work with. I'm planning to use it in production
05:56:16  <jesusabdullah>ITS A FRAMEWORK
05:56:16  <LOUDBOT>ZOFFIX LOOKING FOR SOMEWHERE TO VENT
05:56:28  <jesusabdullah>IT IS NOT A MOUTH-BASED VIDEO GAME
05:56:29  <LOUDBOT>WE JUST NEED A LITTLE TIME, SOME MONEY, AND SOME PIGS
05:59:12  * ralphtheninja joined
06:08:25  * shuaib joined
06:11:44  <mikeal>dlmanning: we are too
06:12:17  <mikeal>rvagg: actually, wow, this is insane
06:12:27  <mikeal>i don't get the left most key in my range
06:12:37  <mikeal>it's the strangest fuckin thing
06:14:45  * mikeal quit (Quit: Leaving.)
06:15:30  * mikeal joined
06:16:53  * owen1 quit (Quit: WeeChat 0.4.0)
06:18:09  * owen1 joined
06:25:12  <substack>https://github.com/substack/decode-prompt
06:27:56  <jesusabdullah>substack every one of these browsers is failing at testing what is this
06:27:59  <jesusabdullah>ಠ_ಠ
06:28:39  <jesusabdullah>substack: srsly tho, looks handy
06:28:43  <jesusabdullah>substack: you using this for anything?
06:28:55  <substack>bashful
06:29:16  <jesusabdullah>word
06:29:19  <substack>it's a browserify bug currently infecting everything
06:29:26  <jesusabdullah>wah wah wahhhhh
06:29:35  <jesusabdullah>you'll get it I'm sure
06:29:37  <jesusabdullah>that reminds me
06:29:49  <jesusabdullah>what happens, I wonder, if you make exterminate use powershell?
06:30:18  <jesusabdullah>cause exterminate is miles and miles and miles ahead of cmd and theoretically cross-platform
06:30:22  <jesusabdullah>except maybe color codes
06:30:23  <substack>no
06:30:37  <jesusabdullah>no?
06:30:37  <substack>I'll just get bashful running
06:30:42  <substack>on exterminate
06:30:44  <substack>real bash
06:30:48  <substack>powershell is a dead end
06:30:52  <jesusabdullah>I mean, for YOU yeah
06:30:57  <substack>for everybody
06:31:02  <jesusabdullah>yeah, no
06:31:14  <jesusabdullah>I just want a cmd replacement not a ps replacement
06:31:17  <jesusabdullah>in this case
06:31:28  <jesusabdullah>single concerns
06:31:33  <jesusabdullah>or rather
06:31:37  <jesusabdullah>completely separable concerns
06:31:38  <substack>you can already do that
06:32:39  <jesusabdullah>yeah, I just don't know what it looks like, theoretically it should be fine
06:32:58  <jesusabdullah>I'm thinking about putting together a windows "dev pack" that installs a bunch of shit to make windows dev vaguely tolerable
06:33:05  <jesusabdullah>for REASONS
06:33:26  <jesusabdullah>mostly so I can work with windows people without freaking out over how to make cross-platform tooling
06:33:28  <substack>just make a bash for windows that doesn't suck
06:33:48  <jesusabdullah>that's an aspect of it, sure
06:33:51  <jesusabdullah>like, that helps
06:33:55  <jesusabdullah>you also need unxutils
06:33:56  <substack>the solution, just like with browsers
06:34:07  * ralphtheninja quit (Ping timeout: 264 seconds)
06:34:07  <jesusabdullah>and a terminal that doesn't make you want to punch babies
06:34:09  <substack>is to take the platform that sucks and to make it more like the platform that doesn't suck
06:34:15  <jesusabdullah>and a text editor that doesn't make you want to punch babies
06:34:15  <substack>don't meet windows half-way, it's not worth it
06:34:31  <substack>yes, a text editor is super important
06:34:54  <jesusabdullah>I just want to take these on as separable pieces
06:35:00  <jesusabdullah>also there are bash ports for windows of multiple kinds
06:35:05  <jesusabdullah>ignoring bashful I mean
06:35:19  <jesusabdullah>but yeah I want a one-time install of ALL that shit
06:35:34  <jesusabdullah>"THERE now we can actually work together"
06:35:42  <substack>YOU'RE WELCOME
06:35:42  <LOUDBOT>BORED LINUX USER ATTEMPTS TO TROLL BSD USER, FAILS MISERABLY. MATHEMATICIANS INVENT NEW CLASS OF NUMBERS TO ACCURATELY COUNT LINUX USER'S FAILURE ATTEMPTS
06:37:20  <jesusabdullah>hah
06:38:01  <jesusabdullah>substack: I may have mentioned this: I did a wordpress site with a friend. The biggest frustration, by far, wasn't even php/wordpress, it was the lack of cross-platform tooling for fucking managing it
06:38:31  <substack>tooling is always the hardest thing
06:38:35  <jesusabdullah>substack: I ended up writing a fucking gruntfile and like, the terminal is so shitty and everything is so missing I can't even in good conscience walk him through being able to use it
06:38:44  <jesusabdullah>substack: and grunt is a POS, I hate it
06:38:55  <substack>implement a makefile parser
06:38:57  <substack>or bash
06:39:01  <jesusabdullah>substack: I don't care what anyone else says, it's dumb, annoying, annoying and dumb
06:39:02  <substack>if windows had bash
06:39:06  <jesusabdullah>substack: this all crossed my mind
06:39:09  <substack>yes I don't like grunt either
06:39:19  <jesusabdullah>substack: and it all exists but as separate projects, I need a "dev pack"
06:39:42  <jesusabdullah>substack: once I have a winbox I'll put this together, it'll basically be a NSIS installer that pulls down and configures ALL the things
06:39:49  <substack>great!
06:40:01  <chilts>jesusabdullah: did you ever see http://www.cygwin.com/ ... it's not great but I think it can help :)
06:40:10  <substack>is there a way to run IE chromelessly?
06:40:12  <jesusabdullah>substack: probably unxutils, winbash, whatever make port I can find, at least node but possibly also ruby and python and maybe perl, idk
06:40:14  <chilts>I don't use Windoze so I can't recommend it from use, but more from what I know other people do
06:40:19  <substack>chilts: cygwin is SO bad
06:40:22  <jesusabdullah>chilts: ohhh yeah I know cygwin and the rest
06:40:27  <jesusabdullah>chilts: yeah, msys is the way to go these days
06:40:34  <substack>don't even bother with cygwin
06:40:35  <chilts>yep, just checking you had heard of it, that's all :)
06:40:52  <jesusabdullah>ideally I'd have a package manager too but that's asking a lot
06:41:03  <chilts>I don't know what msys is - which I guess I consider to be a good thing :D
06:41:04  <substack>just use npm
06:41:12  * jibay joined
06:41:47  <jesusabdullah>yeah, easiest thing is to delegate that to the installed tools (such as node)
06:43:03  * mikolalysenko joined
06:47:38  * mikolalysenko quit (Ping timeout: 255 seconds)
07:25:09  * djcoin joined
07:57:52  <substack>dominictarr: so I think your insert-module-global patches broke browserify :(
07:57:59  <substack>I'm considering just reverting everything
07:59:27  * shuaib quit (Ping timeout: 256 seconds)
08:13:16  * timoxley quit (Quit: Computer has gone to sleep.)
08:20:10  <substack>ok I found it
08:24:51  <substack>squashed it
08:28:21  <substack>mbalho: you were running into that bug too
08:28:22  <substack>it's fixed now
08:28:27  <substack>it wasn't dominictarr's patches either
08:34:44  * ins0mnia joined
08:34:49  * timoxley joined
08:42:00  <timoxley>dominictarr what was that config tool you were keen on I know, rc but I remember something else
08:42:20  <timoxley>*I know rc, but I remember something else
08:43:04  <substack>pow: http://ci.testling.com/substack/decode-prompt
08:45:29  * ins0mnia quit (Remote host closed the connection)
08:49:36  <timoxley>dominictarr nvm found it. https://github.com/dominictarr/config-chain
09:15:23  <jesusabdullah>I wonder how rc behaves on windows
09:15:48  <jesusabdullah>also hacked up https://gist.github.com/jesusabdullah/5848744 after a lengthy twitter convo with Ben Atkin and ELLIOTTCABLE
09:15:59  * ec laughs
09:16:01  <ec>jesusabdullah: ohai
09:16:08  <ec>jesusabdullah: still working on ripping out the relevant parts of npm.
09:16:20  <ec>learning waaaaaay more about the inner workings of isaacs' code than I ever wanted to.
09:17:22  <jesusabdullah>ec: yeah, been there
09:17:42  <jesusabdullah>ec: lesson learned: package management is a Whole Fuckin' Thing
09:17:45  <ec>jesusabdullah: /join #ELLIOTTCABLE
09:17:52  <ec>HAH. “Whole Fuckin' Thing.”
09:17:53  <ec>riiiiight!?
09:20:45  <djcoin>ec: I guess being drunk doesn't help ? been there :s
09:28:49  <jesusabdullah>alright passing out
09:29:42  <djcoin>been there =)
10:36:17  * mcollina joined
10:38:05  * dsfadf joined
10:39:56  * rannmann quit (Ping timeout: 260 seconds)
10:49:19  * timoxley quit (Ping timeout: 276 seconds)
11:02:02  * ralphtheninja joined
11:02:11  * ralphtheninja quit (Client Quit)
11:02:20  * ralphtheninja joined
11:06:46  * st_luke joined
11:13:32  * whit537 joined
11:14:22  * mcollina quit (Remote host closed the connection)
11:15:15  * st_luke quit (Remote host closed the connection)
11:16:45  * shuaib joined
11:20:45  * ins0mnia joined
11:23:40  <ralphtheninja>ins0mnia: yo
11:28:15  * thl0 joined
11:28:29  * Kessler joined
11:32:24  * thl0 quit (Ping timeout: 240 seconds)
11:34:24  * thl0 joined
11:40:35  * shuaib quit (Quit: Textual IRC Client: http://www.textualapp.com/)
11:42:27  <ins0mnia>ralphtheninja: yo
13:43:31  <Kessler>jesusabdullah: ping
15:46:27  <isaacs>substack: which example is broken?
15:46:51  <isaacs>substack: the readStart example is of course incomplete, because it is showing you how to interact with an underlying resource that only has pause/resume
15:47:08  <isaacs>substack: and if you're comparing with streams1 examples, well, there weren't any.
17:51:59  <dominictarr>isaacs: you mean a stream that is piped to two outgoing streams, correct?
17:52:12  <dominictarr>in https://twitter.com/izs/status/349193439805587458
18:05:07  <jjjohnny>jesusabdullah: see also twitter.com/feardept
19:14:21  <jjjohnny>does node implement the IANACHARSET?
19:14:27  <jjjohnny>for encodings?
19:16:08  <jjjohnny>i think not
19:40:14  <isaacs>dominictarr: yes, that's correct
19:40:39  <isaacs>dominictarr: in particular, one that emits 'drain' on the nextTick after write() and the other which takes, say, a second to drain after each write.
19:40:50  <dominictarr>right
19:40:56  <isaacs>dominictarr: note, also, that in 0.8 streams1 streams, it was legal to emit 'drain' basically "whenever"
19:41:16  <isaacs>so, for example, http streams would return true from a write(), and then emit 'drain' basically on nextTick
19:41:20  <dominictarr>I built my streams1 streams to a higher standard than that
19:41:36  <dominictarr>only emit drain once after you write() returned false
19:41:40  <isaacs>(not actually on nextTick, but effectively, since it was the next turn of the io loop)
19:41:46  <jjjohnny>how does one specify base64 in mime type / charsets ?
19:41:57  <isaacs>dominictarr: yes, but it takes a lot of undocumented know-how like that to get streams1 streams to be correct.
19:42:09  <dominictarr>even resume() did not emit drain if a write()===false didn't happen
19:42:15  <dominictarr>that is correct
19:42:18  <isaacs>dominictarr: and, even in your case, did you overwrite prototype.pipe() to do the right thing when piping to two streams, one fast and one slow?
19:42:24  <dominictarr>I did document my approach though
19:42:31  <dominictarr>but, it was non official
19:42:34  <isaacs>right
19:42:48  <isaacs>and, really, since *core* streams *didn't* do this, it didn't matter for most programs.
19:43:07  <dominictarr>isaacs: yeah, I didn't factor in double sink pipes
19:43:12  <isaacs>because if you pipe an http request to an http response and also a file stream, and the http response is slow to download it, you'll leak memory all over.
19:43:18  <isaacs>dominictarr: no one did, until we did :)
19:43:36  <dominictarr>isaacs: by the way, there are over 200 modules that depend on through
19:43:42  <isaacs>right
19:43:53  <isaacs>and streams2 makes through unnecessary
19:44:01  <dominictarr>so all those streams should do backpressure, correctly, in the single dest case
19:44:19  <isaacs>of course, those modules would have to be rewritten to use core stream.Transform, and that's still future stuff for most people.
19:44:34  <isaacs>did you buffer on pause()?
19:44:37  <dominictarr>yes
19:44:39  <isaacs>kewl
19:44:59  <dominictarr>was buffering on pause way before streams2 ;)
19:45:04  <isaacs>yeah
19:45:10  <isaacs>we kinda half-way were in core, also
19:45:12  <isaacs>but not consistently
19:45:22  <isaacs>only for http, and *effectively* this happened with fs, but not by design
19:45:24  <isaacs>it was never a contract
19:46:02  <dominictarr>so, are you planning on removing back compat at some point?
19:46:07  <isaacs>no
19:46:23  <isaacs>i actually would like to change the language.
19:46:27  <isaacs>there's no "old mode" really
19:46:37  <isaacs>there's "flowing mode" and "not flowing mode"
19:46:43  <dominictarr>that is why I call them "classic streams"
19:46:48  <isaacs>haha
19:46:56  <isaacs>like coke classic
19:47:00  <dominictarr>exactly
19:47:05  <isaacs>the doc is confusing, though, clearly
19:47:13  <isaacs>it's an API doc, and it needs to be more of a tutorial
19:47:27  <isaacs>people are consuming streaming interfaces, and thinking that they have to implement _read to do it
19:47:30  <isaacs>etc.
19:47:37  <isaacs>or they're implementing streams, and calling _read sometimes
19:47:41  <isaacs>in their own resume() functions
19:47:52  <isaacs>it needs to be like, "Do this. The rest, don't fucking touch it"
19:47:58  <dominictarr>right.
19:48:11  <isaacs>with a section for consumers, and another for implementors
19:48:16  <isaacs>and less about "This is how it all works"
19:48:21  <dominictarr>when I give stream talks now, I avoid talking about the internals at all
19:48:23  <isaacs>that could be a third section, for anyone who cares.
19:48:25  <dominictarr>just .pipe
19:48:27  <isaacs>right
19:48:35  <jjjohnny>what is this supply side apinomics?
19:48:37  <isaacs>there ARE some cases where pipe is not enough
19:48:45  <dominictarr>on('end') and on('error')
19:48:50  <dominictarr>true
19:48:58  <dominictarr>but I don't want to scare people off
19:49:13  <dominictarr>if you have to go into the nitty gritty of streams1 and streams2...
19:49:20  <dominictarr>that doesn't fit nicely on a slide...
19:49:29  <isaacs>right
19:49:46  <isaacs>here's the difference between streams1 and streams2: Data doesn't flow until you flow it.
19:50:06  <isaacs>that could be by calling read() repeatedly, or doing on('data') or doing resume()
19:50:17  <isaacs>but, you can't have data just fucking fire out at you and land in /dev/null
19:50:37  <isaacs>and yes yes, there's this read() and on('readable') etc.
19:50:43  <isaacs>but that's implementation details
19:50:48  <isaacs>the core thing is: you don't lose data.
19:50:52  <isaacs>which is a trade-off.
19:51:07  <dominictarr>yes
19:51:12  <isaacs>i *would* like to fix it so that you can switch *back* into non-flowing mode from flowing mode.
19:51:24  <isaacs>or if any pause() just basically put it back where you could call read() and get data.
19:51:27  <isaacs>etc.
19:51:37  <dominictarr>sounds complicated
19:51:49  <isaacs>so you could do on('data', function(chunk) { if (someCondition) { this.pause(); this.read(10); this.resume() } })
19:51:57  <isaacs>i think it could actually be simpler.
19:52:02  <isaacs>the modalities are confusing.
19:52:10  <isaacs>and create unnecessary constraints.
19:52:37  <isaacs>since the implementation is still the same: provide a way for me to ask you for data, and call this.push(chunk) when you have some data.
19:53:18  <isaacs>from there, instead of having old-mode and new-mode, you just have flowing-mode and paused-mode.
19:53:23  <isaacs>and it starts out in paused-mode.
19:53:39  <isaacs>and the streams machinery is what manages all that, so you don't have to
19:55:59  <dominictarr>hmm, that sounds good
19:56:18  <jjjohnny>streams and dams
19:56:50  <jjjohnny>stream.damit()
19:57:07  <dominictarr>isaacs: in leveldown, we wanted an un opinionated way to get streams, and given both streams1 and 2 we just used a lower level abstraction, that could be turned into either
19:57:15  <dominictarr>an iterator..
19:57:36  <dominictarr>like this, basically https://npmjs.org/package/async-iterator
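(Editor's note: the lower-level abstraction dominictarr describes is roughly a pull function: ask for the next chunk with a callback, and get either a value or an end signal, with no pipe or backpressure semantics attached. The names below are illustrative; see the linked async-iterator package for the real interface.)

```javascript
// An "iterator" here is just an object with next(cb), where cb is
// called as cb(err, value) and cb(null, undefined) means "done".
function fromArray(items) {
  var i = 0;
  return {
    next: function (cb) {
      if (i >= items.length) return cb(null, undefined);
      cb(null, items[i++]);
    }
  };
}

// Drain an iterator one item at a time; this loop is the core
// that a stream wrapper of either flavor would build on.
function drain(iter, onItem, onEnd) {
  iter.next(function (err, value) {
    if (err) return onEnd(err);
    if (value === undefined) return onEnd(null);
    onItem(value);
    drain(iter, onItem, onEnd);
  });
}
```

A classic stream wrapper would run drain and emit 'data' per item; a streams2 Readable would instead call iter.next from its _read, which is why the same abstraction can be turned into either.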
20:04:00  <jjjohnny>mmckegg: i picked up where you left off with the node-browser fs
20:06:31  <isaacs>dominictarr: that works well when what you have is an iterable list, rather than a stream of data.
20:06:45  <isaacs>dominictarr: which leveldb is :)
20:07:14  <dominictarr>sure, but iterating over the chunks of data is the same
20:07:14  <isaacs>dominictarr: but there's no "ith item" when you're talking about chunks on a TCP stream, or zlib data.
20:07:21  <isaacs>it's not the same.
20:07:25  <dominictarr>you don't have to use that
20:07:49  <dominictarr>anyway, the point is that it has no pipe
20:08:07  <isaacs>i mean, yes, of course, you can express TCP as a "get the next chunk" kind of interface, but not without a bunch of buffering etc.
20:08:11  <dominictarr>but provides a standard interface that can be wrapped into something pipeable
20:08:17  <isaacs>sure.
20:08:23  <isaacs>makes sense for leveldb
20:08:46  <dominictarr>sure, but isn't "a bunch of buffering" what streams2 does?
20:09:43  <isaacs>not when you're flowing
20:10:02  <isaacs>that readStart/readStop example that offended substack earlier is actually how TCP streams work
20:10:20  <isaacs>but TCP has so much more complexity, and is duplex, so it doesn't make a very nice example
20:10:23  <isaacs>qv lib/net.js
20:15:07  <dominictarr>isaacs: to be completely honest, I'm not looking for a tight api that extracts every last drop of perf out
20:15:22  <dominictarr>I want a flexible abstraction that is easy to work with
20:15:28  <dominictarr>to build complex things quickly
20:15:48  <dominictarr>(just so you know where I am coming from)
20:16:08  <jjjohnny>dominictarr: do you have a proper streamy thing for streamReading large buffers appropriately
20:16:28  <dominictarr>large buffers in memory?
20:16:29  <jjjohnny>appropriate for down stream streams
20:16:31  <jjjohnny>yes
20:16:44  <jjjohnny>dominictarr: b/c the browser only gives you the whole file
20:16:45  <dominictarr>you want this for audio, correct?
20:17:03  <isaacs>dominictarr: sure.
20:17:06  <jjjohnny>yes, but I am implementing a stream interface on top of the browsers file system api
20:17:16  <isaacs>dominictarr: it's ok to have lots of different APIs that are compatible, and use different implementations.
20:17:22  <isaacs>dominictarr: not just ok, it's preferable
20:17:28  <dominictarr>isaacs: agree
20:17:52  <dominictarr>my position is that we win if everybody doing node is streaming
20:18:06  <isaacs>and, honestly, i think that a lot of good can be done to polish streams2 further, so that it works better and plays nicer with other things, and is built out of simpler pieces.
20:18:06  <dominictarr>it doesn't matter so much what api they choose to do that
20:18:14  <isaacs>but is that the MOST good that can be done right now? probably not.
20:18:21  <isaacs>there are much worse problems to solve atm
20:18:28  <isaacs>that's some post-1.0 stuff.
20:18:42  <isaacs>"These doorknobs are not shiny enough!" (also, the door doesn't actually open right now...)
20:18:58  <isaacs>you fix the one thing, then circle back to do the smaller improvements, if it's still necessary
20:19:08  <dominictarr>what is the high priority stuff, in your opinion?
20:19:27  <dominictarr>maybe opinion is the wrong word...
20:19:35  <dominictarr>you are the BDFL, after all
20:19:44  <dominictarr>:)
20:20:08  <jjjohnny>node bondage
20:20:41  <Domenic_>what no i missed the streams2 discussion
20:22:25  <dominictarr>Domenic_: we were discussing making streams3 with promises
20:23:09  <jjjohnny>lol
20:24:00  <Domenic_>yeah a generator of promises is gonna be streams3.
20:24:01  <Domenic_>(three-quarters kidding)
20:25:44  <Domenic_>isaacs: dominictarr: through doesn't seem to support high-water mark, 'finish' event, etc. stuff that I see in streams2
20:26:10  <dominictarr>Domenic_: no. it's classic streams not new streams.
20:26:30  <Domenic_>dominictarr: right, so isn't that missing stuff?
20:26:40  <jcrugzz>it just flows man
20:26:53  <dominictarr>jcrugzz: ++
20:28:27  <dominictarr>Domenic_: it has everything you want for classic transform streams.
20:48:27  <isaacs>dominictarr: 'finish' is a common request
20:48:51  <isaacs>dominictarr: bikeshed the api all you want, but just *some* way to know when writing has all been flushed out
20:49:25  <dominictarr>right - well, I'd merge a pull request if someone makes one...
20:54:42  <jesusabdullah>awww dangit, overslept
20:54:52  <jesusabdullah>I needed it but kessler didn't!
20:55:14  <jesusabdullah>poor isaac
20:55:19  <jesusabdullah>he worked so hard on streams2
20:55:29  <jesusabdullah>and nobody likes them
20:56:13  <mikolalysenko>it is a hard problem. streams have to do a lot of stuff, and the interactions are really subtle
20:56:37  <dlmanning>Does anyone know if I'm likely to run into problems piping streams1 streams to streams2 streams?
20:56:47  <mikolalysenko>I have no constructive suggestion, but frankly I am pretty scared of writing them at the moment...
20:56:57  <st_luke>streams classic
20:57:11  <jjjohnny>dlmanning: use dominictarr's libraries and watch the sky
20:57:12  <st_luke>you make new streams
20:57:13  <jesusabdullah>NEW LOOK, SAME GREAT TA---NO WAIT TOTALLY DIFFERENT TASTE
20:57:14  <LOUDBOT>FUCK YOU AND YOUR WHOLE FAMILY, IN JESUS NAME I PRAY
20:57:18  <st_luke>then in a few months you bring back streams classic
20:57:21  <st_luke>and node is more popular than ever
20:57:35  <dlmanning>jjjohnny: I'm not quite sure what you mean by that?
20:57:40  <dominictarr>dlmanning: seems to work
20:57:45  <jesusabdullah>I think getting real examples for making streams2s will go a long way
20:58:00  <jesusabdullah>as we all know, the current examples are fucking terrible
20:58:12  <dlmanning>dominictarr: I've not run into problems so far
20:59:02  <dlmanning>fwiw: I'm pretty much a beginner with node and I was able to pick up the streams2 api without much problem from the existing docs
20:59:37  <dlmanning>Maybe I didn't know enough to be confused
21:00:06  <guybrush>the thing is, in order to use the existing codebase you have to use streams1, or you know what you're doing and cross streams1 with streams2 and mix it with min-stream and promise-generators
21:00:34  <guybrush>though.. i doubt that it's a good idea to cross all the streams
21:01:34  <dlmanning>oooooh
21:01:39  <dlmanning>guybrush: epic
21:02:24  <jesusabdullah>dlmanning: We just need to be shown how they work is all, otherwise we'll just keep writing streams1s
21:02:59  <jcrugzz>dlmanning: piping streams1 to a streams2 should work fine assuming the streams2 is doing something with the data
21:03:06  <jcrugzz>since it starts paused
21:03:21  <jesusabdullah>streams4: Actual Unix Pipes
21:03:34  <jesusabdullah>I did once find an http server cobbled together with netcat
21:04:00  <dlmanning>jcrugzz: thanks
21:04:01  <jesusabdullah>it'll be fastcgi all over again and the circle will be complete
21:05:02  <jcrugzz>jesusabdullah: fastcgi is currently causing me pain
21:05:19  <mikolalysenko>I think the problem with streams2 is that there are so many edge cases to consider. it is easy to make something that looks like it works but has subtle bugs
21:05:25  <jcrugzz>i have disdain for graphite
21:05:47  <mikolalysenko>though this may be more a problem with streams in general rather than an issue with streams2 per se
21:06:07  <dominictarr>streams are hard, because it's async with state
21:06:17  <dominictarr>although there are only like, 5 states
21:06:30  <mikolalysenko>well, more if you are also doing something else that is stateful
21:06:34  <dlmanning>mikolalysenko: Is that a real problem? I haven't heard anyone mention actual bugs in streams2
21:06:35  <mikolalysenko>like parsing or whatever
21:06:55  <mikolalysenko>it isn't in streams2 core, just in implementing them. and it can be subtle stuff like performance bugs
21:07:09  <jesusabdullah>jcrugzz: hahaha, I bet
21:07:13  <mikolalysenko>I think the core stuff is fine and correctly implemented
21:10:26  <mikolalysenko>my current solution to streaming right now (which is probably suboptimal) is to try to write around them.
21:10:38  <jesusabdullah>I, umm
21:10:43  <jesusabdullah>use other peoples' libraries
21:10:44  <mikolalysenko>for example there is this: https://github.com/mikolalysenko/pitch-shift
21:10:57  <mikolalysenko>and if you want to turn it into a stream you can just use through
21:11:28  <dlmanning>I've used stream.Transform a lot
21:11:49  <dlmanning>and I love it
21:11:55  <jjjohnny>dominictarr: if I pause through, it will buffer automatically, and resume at the proper point when I resume?
21:12:08  <dominictarr>jjjohnny: yes
21:16:07  <mikolalysenko>let me ask another question: is there a point in streams2 to exposing a lower level API than the one in through() ?
21:19:38  <jesusabdullah>I think the point was to solve the buffering problem
21:19:55  <jesusabdullah>pause/resume with streams1 is kinda weak at best
21:20:47  <mikolalysenko>yeah, but what do I lose writing all my streams using through() that I would have had if I had just written it by inheriting from the raw streams2 api?
21:20:56  <mikolalysenko>(not comparing through to streams1 at all)
21:22:08  <dlmanning>Doesn't that comparison work both ways?
21:22:43  <mikolalysenko>well, you could compare it to streams1 but that is not what I am asking. I want to know why streams2 is better than through
21:22:47  <dominictarr>mikolalysenko: the other difference is how streams2 concatenates buffers while paused
21:22:59  <dominictarr>through is like streams2 in object mode.
21:23:05  <mikolalysenko>ok
21:23:30  <mikolalysenko>can you explain this a bit? I am not quite sure I get it
21:23:31  <dominictarr>you could still do that manually, of course. but anyway, you should benchmark that stuff
21:24:22  <dominictarr>my understanding is that while the stream is paused, any new chunks that get read will be combined into one Buffer
21:24:34  <dominictarr>so when it is eventually written,
21:24:40  <dominictarr>it can all get written in one go.
21:25:03  <mikolalysenko>ok, so it grows a buffer internally...
21:25:09  <dominictarr>yes
21:25:26  <mikolalysenko>while I guess through just has a queue of objects?
21:29:09  <mikolalysenko>but regardless how through handles this problem, this isn't quite what I was getting at
21:29:42  <mikolalysenko>I want to know what the performance impact of using through() vs directly implementing streams2 is
21:29:55  <mikolalysenko>other than the obvious additional overhead in through caused by wrapping stuff
21:30:16  <jjjohnny>ive got read and write streams working for browser File System API
21:30:50  <dominictarr>mbalho: do you know good place to print stickers fast?
21:32:20  <mmckegg>jjjohnny: nice!
21:34:15  <jesusabdullah>mikolalysenko: if I had to guess I'd say the perf difference is negligible, at least until stored buffers become Very Large
21:34:37  <jesusabdullah>mikolalysenko: in fact I saw a lot of node issues during the 0.9 days about keeping streams2 perf in check
21:35:10  <mikolalysenko>jesusabdullah: but if that is the case, then why not make the streams2 api the same as through()? since many people (myself included) find it simpler to reason about
21:35:32  <mikolalysenko>jesusabdullah: through() has a much smaller surface area for sure
21:35:40  <dlmanning>mikolalysenko: It's not that much different
21:36:12  <dlmanning>I mean... you just implement _transform()
21:36:22  <jesusabdullah>mikolalysenko: "reasons"
21:36:48  <jesusabdullah>mikolalysenko: seriously though, I think the idea was that you should be able to inherit from all these constructors
21:36:58  <mikolalysenko>hmm
21:37:16  <mikolalysenko>so, I guess the issue is really inheritance vs. composition for specifying behaviors
21:37:57  <jesusabdullah>yeah
21:38:02  <mikolalysenko>though I am kind of in favor of the latter
21:38:18  <mikolalysenko>since it makes it easier to use local closures and is more idiomatic with the prototype based style of javascript
21:38:34  <mikolalysenko>(not that you can't do that with inheritance, but it is more cumbersome I think)
21:38:57  <jesusabdullah>sure
21:39:02  <mikolalysenko>also inheritance has more machinery by necessity
21:39:10  <mikolalysenko>you have to do stuff like call util.inherits() and so on
21:39:20  <jesusabdullah>my head hurts :C
21:39:26  <dlmanning>yeah, sort of an... inheritance tax :D
21:39:47  <jesusabdullah>mikolalysenko: I figure you can do it the same way as with EEs, yeah? var str = new Transform(); str._transform = function () {} or whatevs?
21:41:41  <dlmanning>jesusabdullah: I believe that would work
21:45:22  <mikolalysenko>ok, here is a concrete question then. What is the fastest way to turn this into a streams2 module without using through: https://github.com/mikolalysenko/frame-hop
21:45:32  <dominictarr>mikolalysenko: the big difference between through and Transform is _transform(data, enc, cb)
21:45:49  <dominictarr>you have to call callback explicitly for Transform to read again
21:46:21  <dominictarr>but on through if you wanted that, you'd have to call this.pause(); …; this.resume()
21:46:30  <mikolalysenko>hmmm
21:46:51  <mikolalysenko>I am now really confused
21:47:34  <mikolalysenko>man, I get the sinking feeling sometimes that all of this would be so much simpler with generators...
21:48:10  <mikolalysenko>what is the _flush() method in streams2 for?
21:48:16  <mikolalysenko>and what is the analog in through()?
21:48:40  <dominictarr>that is the same as through(write, _flush)
21:48:51  <mikolalysenko>ah
21:48:52  <dominictarr>I think it takes a callback too, though...
21:48:55  <jesusabdullah>derf
21:49:07  <mikolalysenko>ok, so what does the callback do?
21:49:22  <dominictarr>it does the same thing as tr.queue(null)
21:49:38  <dominictarr>ends the readable side of the stream
21:49:56  <mikolalysenko>ah
21:49:58  <jesusabdullah>uuuugh I just had a productive thought and then I *lost* it
21:50:00  <jesusabdullah>fuuuuck
21:50:02  <mikolalysenko>so you call it when you are done writing
21:50:18  <dominictarr>I should be able to make a through/streams2 that had the same api as through, but was a new stream
21:50:26  <mikolalysenko>ok
21:50:46  <dominictarr>or nearly the same...
21:51:15  <mikolalysenko>I am curious to understand if there is a performance tradeoff in using the through api
21:52:16  <mikolalysenko>the other thing with streams2 is that it seems like it is optimized primarily for binary data, which may not be a bad idea
21:52:28  <mikolalysenko>while through handles general objects
21:52:36  <dlmanning>jesusabdullah: okay, I take it back. Seems like transform streams very much want to be a class
21:55:04  <jesusabdullah>dlmanning: wat
21:55:16  <jesusabdullah>dlmanning: why wouldn't that work?
21:55:33  <mikolalysenko>dlmanning: why? at least from my current vantage point making transform streams classes seems like a lot of unnecessary ceremony
21:57:15  <dlmanning>Nope, false alarm
21:57:26  <dlmanning>I was being stupid. It works now
21:57:33  <dlmanning>:D
22:02:35  <dlmanning>So this works: https://gist.github.com/dlmanning/255e10133dbe4c844e38
22:45:09  <Domenic_>dominictarr: +1 for a through v3 or whatevs that is Transform-based under the hood, so we get 'finish' events, high water marks, etc.
23:10:20  <jesusabdullah>hahaha
23:10:23  <jesusabdullah>Did a skype interview
23:10:26  <jesusabdullah>did some technical stuff
23:10:41  <jesusabdullah>"that is the most...'javascript' answer for that I've ever seen"
23:10:59  <jcrugzz>lol
23:11:05  <jcrugzz>the dude said that?
23:11:08  <jesusabdullah>https://gist.github.com/jesusabdullah/b354324c3eae00531cd6
23:11:09  <jesusabdullah>Yeah he did
23:11:17  <jesusabdullah>My guess is a lot of people do for looping or something
23:11:22  <jesusabdullah>also this totally worked first try
23:11:38  <jcrugzz>haha functional aspects ftw
23:11:56  <jcrugzz>but yea for loops are not as nice at all
23:12:33  <jesusabdullah>hah
23:12:38  <jesusabdullah>oh also there's totally a bug in that
23:12:52  <jesusabdullah>it works for that specific problem, but a different array would make it explode
23:13:30  <jesusabdullah>lol fixed (maybe)
23:14:07  <jesusabdullah>now I have to wonder what that would look like in coffeescript
23:14:25  <jcrugzz>oh god
23:14:27  <jcrugzz>do you dare
23:20:48  <jesusabdullah>jcrugzz: they're a coffeescript shop
23:20:58  <jcrugzz>le sigh
23:21:49  <jesusabdullah>XD
23:22:04  <jesusabdullah>thing is, I can almost appreciate coffeescript
23:22:07  <jesusabdullah>it has a few cool features
23:24:04  <jesusabdullah>As a library author though, least common denominator is important
23:31:04  <jesusabdullah>I'm percolating a blog post about this
23:31:16  <jesusabdullah>sort of a counterpoint to my post about "the case for a node.js framework"
23:38:42  <st_luke>I can appreciate fascism
23:51:01  <Raynos>dominictarr: ping