01:49:35  <mikolalysenko>So I am thinking about making a fork of cross filter that lets you define custom move operations
01:49:42  <chrisdickinson>mbalho: i think the issue is that browserify's buffers don't support indexing
01:49:42  <chrisdickinson>ah whoops substack already mentioned that
01:49:52  <mikolalysenko>so you can do sorting on groups of typed arrays
01:50:04  <chrisdickinson>substack: that "fix" is going to cause a lot of problems i think.
01:50:28  <mikolalysenko>are we still talking about buffers?
01:50:29  <chrisdickinson>in that it's going to change the underlying behavior from "don't copy on slice" to "copy on slice"
01:50:38  <mikolalysenko>ah, that
01:51:32  <mikolalysenko>the more I use buffers, the more I think they need to be replaced by typed arrays
01:51:53  <chrisdickinson>yeah :|
01:52:05  <chrisdickinson>well
01:52:08  <mikolalysenko>it is probably too late to deprecate them though...
01:52:24  <chrisdickinson>what i really want is for them to be api-compatible
01:52:31  <mikolalysenko>true
01:52:32  <chrisdickinson>because typedarray.subarray === buffer.slice
01:53:04  <chrisdickinson>and the various "read/write(type)(size)(be|le)" functions are really useful from buffer, as are the various toString options
01:53:21  <chrisdickinson>but copy on slice is going to absolutely murder performance in a lot of ways
01:53:30  <mikolalysenko>yeah, those features could be trivially implemented using views
01:53:35  <mikolalysenko>and toString could just be a function
01:53:54  <chrisdickinson>mikolalysenko: true, but it might make more sense to augment Uint8Array with those features
01:54:00  <mikolalysenko>yeah
01:54:13  <mikolalysenko>at least as a browser polyfill, augmentation seems like the best immediate solution
01:54:25  <chrisdickinson>use browserify-vm to grab a "clean" Uint8Array and then augment its prototype
01:54:40  <mikolalysenko>yeah, I can't really think of how else to do it...
01:54:53  <mikolalysenko>since augmenting array wouldn't work for sure
01:55:20  <chrisdickinson>i've actually written most of my apis to use readUInt8
01:55:25  <chrisdickinson>so they work without the indexing
01:55:38  <mikolalysenko>probably a good step forward
01:55:40  <chrisdickinson>but i make a lot of use of slice which i really can't do anymore
01:55:57  <mikolalysenko>hmm
01:56:08  <mikolalysenko>well, for typed arrays you get subarray() which works like you want
01:56:34  <chrisdickinson>Buffer.concat is actually killing my perf -- which is due to the copying that's happening (plus the added load on GC)
01:56:40  <chrisdickinson>yeah
01:56:48  <chrisdickinson>i'm leaning towards wrapping those up in a module
01:57:19  <chrisdickinson>so it can transition easily from Buffer -> Uint8Array at the thresholds
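A minimal sketch of that wrapper-module idea (hypothetical module and names; prefers zero-copy typed-array operations where they exist, falls back to Buffer):

    // binary.js -- slice/concat that prefer zero-copy typed-array ops
    var hasTypedArrays = typeof Uint8Array !== 'undefined'

    exports.slice = function (buf, start, end) {
      // Uint8Array#subarray is a zero-copy view; Buffer#slice is the fallback
      if (hasTypedArrays && buf.subarray) return buf.subarray(start, end)
      return buf.slice(start, end)
    }

    exports.concat = function (list, totalLength) {
      if (totalLength === undefined) {
        totalLength = 0
        for (var i = 0; i < list.length; i++) totalLength += list[i].length
      }
      // a copy is unavoidable for concat; at least do it once, up front
      var out = hasTypedArrays ? new Uint8Array(totalLength) : new Buffer(totalLength)
      for (var j = 0, offset = 0; j < list.length; j++) {
        if (out.set) out.set(list[j], offset)      // typed arrays
        else list[j].copy(out, offset)             // node Buffer
        offset += list[j].length
      }
      return out
    }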
01:58:02  <mikolalysenko>I think buffers would be ok if they removed the special array indexing semantics from them
01:58:12  <mikolalysenko>that way we wouldn't be tempted to think of them as array-like things
01:58:19  <chrisdickinson>yeah
01:58:26  <chrisdickinson>yeaaaah.
01:58:37  <mikolalysenko>but as they are, it makes it very difficult to polyfill them across browsers
01:58:49  <chrisdickinson>plus all of the dom apis expect typed arrays while all the node apis (or shims) expect buffers
01:58:57  <mikolalysenko>yeah
01:59:16  <chrisdickinson>and typed arrays are the only sane way to transmit or persist binary data
01:59:16  <mikolalysenko>if you just made buffer not use indices, then it would be easy to wrap buffers as a typed array or whatever on a browser
01:59:20  <chrisdickinson>base64 just doesn't cut it
01:59:36  <mikolalysenko>I agree typed arrays are way better
01:59:46  <chrisdickinson>yeah, i think that was the original strategy of buffer-browserify
02:00:14  <chrisdickinson>which makes sense because DataViews are kind of slow to instantiate
02:00:25  <mikolalysenko>you could also do something insane and kill all your performance
02:00:39  <mikolalysenko>like use defineProperty(0, ...) , defineProperty(1, ...) etc.
02:00:46  <chrisdickinson>though i suppose you could make three dataviews at the outset of a buffer and just read at offsets
02:01:06  <chrisdickinson>mikolalysenko: yeah, or rely on harmony proxies, which is also sort of bleh.
02:01:12  <mikolalysenko>yeah...
02:01:22  <mikolalysenko>but multiple views makes some sense
02:01:28  <chrisdickinson>still i bet either of those situations are still better than doing a copy-on-slice of megabytes of data
02:01:37  <mikolalysenko>definitely
02:01:45  <chrisdickinson>oh yeah, i'm saying that for a buffer shim you could just make 3 data views at instantiation
02:01:56  <chrisdickinson>instead of making one every time you read an offset
02:01:57  <mikolalysenko>yep, that seems reasonable to me
02:02:06  <chrisdickinson>and then you could read anything at any offset, theoretically
02:02:17  <mikolalysenko>yeah
02:02:26  <mikolalysenko>though it would die in a fire on older versions of ie...
02:02:46  <chrisdickinson>yeah, in old ie you'd just have to fall back to the array approach
02:02:54  <mikolalysenko>and shit would break
02:03:04  <chrisdickinson>nah, you'd do the copy-on-slice version in ie
02:03:10  <mikolalysenko>well
02:03:12  <chrisdickinson>which really would just make ie even worse to use
02:03:20  <chrisdickinson>slow js engine + way slow version of buffers
02:03:20  <mikolalysenko>unless you are counting on the view really being a view...
02:03:37  <mikolalysenko>since the view will actually write through to the underlying copy, and so it is semantically inconsistent
02:03:45  <mikolalysenko>but as long as you only read from the view it would work
02:04:18  <chrisdickinson>well, i mean, you would abandon data views entirely
02:04:24  <chrisdickinson>we'd aim to polyfill buffer's api still
02:04:24  <mikolalysenko>I think probably the most "ideal" solution here would be to just kill the array indexing in node.js
02:04:31  <chrisdickinson>(which i still think is better than typed array's)
02:04:32  <mikolalysenko>but that would probably cause a lot of painful breakage
02:04:53  <chrisdickinson>so, we'd support indexing in old ie by doing copy on slice etc (i.e., the horrible slow way)
02:05:04  <chrisdickinson>but support it on newer browsers using typed arrays + dataviews
02:06:01  <chrisdickinson>substack: ^^ thoughts on the above?
02:06:19  <chrisdickinson>it makes things more complex but would enable the sorts of things we'd want to be able to do with buffers in the browser, i think
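A sketch of the proposal above (hypothetical BufferShim name; one DataView made at construction, every read served from an offset into it):

    function BufferShim (bytes) {
      this._u8 = bytes instanceof Uint8Array ? bytes : new Uint8Array(bytes)
      // one view up front, instead of one per read
      this._dv = new DataView(this._u8.buffer, this._u8.byteOffset, this._u8.byteLength)
      this.length = this._u8.length
    }
    BufferShim.prototype.readUInt8 = function (offset) { return this._dv.getUint8(offset) }
    BufferShim.prototype.readUInt16BE = function (offset) { return this._dv.getUint16(offset, false) }
    BufferShim.prototype.readUInt32LE = function (offset) { return this._dv.getUint32(offset, true) }
    BufferShim.prototype.slice = function (start, end) {
      // zero-copy, the same semantics as typedarray.subarray
      return new BufferShim(this._u8.subarray(start, end))
    }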
02:07:10  <mikolalysenko>I think typed array polyfills on newer browsers could work
02:07:45  <mikolalysenko>though there is a strawman in the typed array proposal to kill the semantics for buffers/views/etc if you screw with the guts of a typed array
02:07:56  <mikolalysenko>so basically if you do something like:
02:07:57  <chrisdickinson>oh?
02:08:02  <mikolalysenko>var x = new Float32Array(10)
02:08:13  <mikolalysenko>x["splice"] = function() { ... }
02:08:26  <mikolalysenko>then x.buffer, x.byteOffset, ... etc. go bye-bye
02:08:46  <mikolalysenko>the idea is to let interpreters just optimize the typed array directly as a block of memory
02:09:02  <mikolalysenko>so indexing can be reduced to ~1 instruction
02:09:18  <mikolalysenko>but if that gets added, then it would kill this solution
02:09:27  <chrisdickinson>ah man
02:09:34  <chrisdickinson>what about augmenting the prototype instead of the instance?
02:09:42  <chrisdickinson>i.e.
02:09:42  <mikolalysenko>...ew
02:09:50  <chrisdickinson>Uint8Array.prototype.slice = function() { .. }
02:10:03  <mikolalysenko>:(
02:10:06  <chrisdickinson>mikolalysenko: that's why i'm saying hoist Uint8Array out of browserify-vm (which creates a new context)
02:10:15  <chrisdickinson>so it wouldn't change the current frame's representation of uint8array at all
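A sketch of the clean-constructor trick (an iframe stands in here for whatever browserify-vm does to make a fresh context; the point is the page's own Uint8Array is never touched):

    function cleanUint8Array () {
      var iframe = document.createElement('iframe')
      iframe.style.display = 'none'
      document.body.appendChild(iframe)
      // this Uint8Array belongs to the iframe's context, not the page's
      return iframe.contentWindow.Uint8Array
    }

    var U8 = cleanUint8Array()
    U8.prototype.readUInt8 = function (offset) { return this[offset] }
    U8.prototype.slice = function (start, end) { return this.subarray(start, end) }
    // the current frame's Uint8Array.prototype stays untouched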
02:10:29  <mikolalysenko>hmm
02:11:08  <mikolalysenko>I still feel like the best solution would be to remove the index operator from buffer
02:11:31  <mikolalysenko>since if you made that one small change, then everything would work in the browser automatically
02:11:35  <mikolalysenko>even in IE
02:11:44  <chrisdickinson>from buffer?
02:11:46  <mikolalysenko>yea
02:11:49  <chrisdickinson>yeah
02:12:04  <chrisdickinson>i mean, the slowbuffer backing store supports the indexing
02:12:05  <mikolalysenko>just remove it from node's buffer. the read/write operations already replace the need for it anyway
02:12:24  <chrisdickinson>yeah
02:12:55  <mikolalysenko>then on the browser you could implement Buffer using either a typed array (if possible) or else fall back to a native array
02:13:24  <mikolalysenko>though I wonder how many modules it would break...
02:13:38  <chrisdickinson>shades of charAt :|
02:14:13  <mikolalysenko>well, it is the price you pay for working in a language that does not allow operator overloading :P
02:14:42  <mikolalysenko>but the implementation of buffer in node is syntactically magical
02:15:42  <mikolalysenko>I wonder if you could maybe do some static analysis to rewrite buffer indexing using esprima
02:16:16  <mikolalysenko>like detect if a variable is a buffer, and replace any of the x[i] with x.readUInt8(i) or something
02:16:49  <mikolalysenko>but these are crazy ideas
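For what it's worth, a naive sketch of the esprima rewrite (it assumes you already know which identifiers are buffers, which is the hard part; it only handles reads, and nested indexing like buf[buf[0]] is not handled):

    var esprima = require('esprima')

    function rewriteIndexing (src, bufferNames) {
      var ast = esprima.parse(src, { range: true })
      var edits = []
      ;(function walk (node) {
        if (!node || typeof node.type !== 'string') return
        if (node.type === 'MemberExpression' && node.computed &&
            node.object.type === 'Identifier' &&
            bufferNames.indexOf(node.object.name) !== -1) {
          edits.push({
            range: node.range,
            text: node.object.name + '.readUInt8(' +
                  src.slice(node.property.range[0], node.property.range[1]) + ')'
          })
        }
        for (var key in node) {
          if (Array.isArray(node[key])) node[key].forEach(walk)
          else if (node[key] && typeof node[key].type === 'string') walk(node[key])
        }
      })(ast)
      // apply edits right-to-left so earlier ranges stay valid
      edits.sort(function (a, b) { return b.range[0] - a.range[0] })
      edits.forEach(function (e) {
        src = src.slice(0, e.range[0]) + e.text + src.slice(e.range[1])
      })
      return src
    }
    // rewriteIndexing('x[0] + x[i]', ['x']) -> 'x.readUInt8(0) + x.readUInt8(i)'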
02:17:03  <mikolalysenko>I think though for browserify, what you are suggesting is most practical
02:17:44  <mikolalysenko>chrisdickinson: I agree that the minimally invasive solution is to use typed arrays to emulate buffer on new browsers, and then fall back to native arrays on older browsers (though stuff will obviously break)
02:23:58  <mikolalysenko>so, the other thing I was thinking about was how to make an efficient quick sort for structs packed in typed arrays...
02:24:18  <mikolalysenko>or for groups of typed arrays
02:26:04  <mikolalysenko>and I was thinking about making the module using crossfilter's existing sorting routines, but I am wondering if the fact that it is apache licensed would create problems...
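A sketch of what such a module could look like (names hypothetical): a quicksort where compare reads one key column and the swap, i.e. the "move operation", carries every column along so records stay aligned:

    function sortColumns (key, columns, lo, hi) {
      lo = lo || 0
      hi = hi === undefined ? key.length - 1 : hi
      if (lo >= hi) return
      var pivot = key[(lo + hi) >> 1], i = lo, j = hi
      while (i <= j) {
        while (key[i] < pivot) i++
        while (key[j] > pivot) j--
        if (i <= j) { swap(key, columns, i, j); i++; j-- }
      }
      sortColumns(key, columns, lo, j)
      sortColumns(key, columns, i, hi)
    }

    function swap (key, columns, a, b) {
      var t = key[a]; key[a] = key[b]; key[b] = t
      for (var c = 0; c < columns.length; c++) {
        var col = columns[c]
        t = col[a]; col[a] = col[b]; col[b] = t
      }
    }

    // var x = new Float32Array(n), y = new Float32Array(n), id = new Uint32Array(n)
    // sortColumns(id, [x, y])   // sort records by id, moving x and y along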
03:10:01  <mbalho>chrisdickinson: i dont think its possible in js to define a custom object that has [] accessors, right?
03:14:13  <substack>mbalho: well buffers are nice because they have a static size
03:14:51  <substack>so you can use any object and assign the values into numeric keys
03:17:56  <mbalho>substack: that would be way slower than using a typed array i think, mikola has some benchmarks
03:19:33  <substack>probably yes
03:24:54  <chrisdickinson>substack: the problem with the linked fix is that slice becomes a copy operation
03:25:01  <chrisdickinson>and i'm operating on large buffers
03:25:10  <chrisdickinson>and that'll be incredibly slow, like, unusably slow.
03:25:27  <chrisdickinson>whereas if you just use readUInt8, things work as expected
03:25:40  <chrisdickinson>which sucks for interop, but makes it possible to build things on top of Buffer
03:26:11  <chrisdickinson>mbalho: yeah, it's not really possible until es6 i think (you might be able to inherit from Array and get its behavior.)
03:26:22  <chrisdickinson>or using typed arrays.
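For the record, the es6 direction looks roughly like this (Proxy syntax as it later shipped; in 2013 it sits behind harmony flags, and it is slow):

    function indexable (bytes) {
      return new Proxy({ length: bytes.length }, {
        get: function (target, prop) {
          // trap numeric [] reads, pass everything else through
          if (typeof prop === 'string' && /^\d+$/.test(prop)) return bytes[Number(prop)]
          return target[prop]
        }
      })
    }

    // var b = indexable(new Uint8Array([104, 105]))
    // b[0] === 104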
03:28:08  <chrisdickinson>hence my worry. the slowest thing about git-in-browser is that i'm using copy operations for some things (specifically Buffer.concat)
03:28:22  <chrisdickinson>if slice becomes a copy operation it'll slow to a crawl. it already has a hard time with larger repos.
03:40:59  <jesusabdullah>niftylettuce: what is your thing and should I shoot you an email y/n
03:45:13  <niftylettuce>jesusabdullah: ITS AMAZING
03:45:16  <niftylettuce>ITS SOOOOO AMAZING
03:45:16  <LOUDBOT>THERE IS A SINKHOLE.
03:45:22  <niftylettuce>EVERYONE SHOULD GET PRIVATE INVITE WHILE THEY CAN
03:45:22  <LOUDBOT>WHAT I'M TRYING TO SAY HERE IS YOUR SUICIDE WASN'T NOVEL ENOUGH
03:45:26  <niftylettuce>IM ONLY GIVING AWAY 500
03:45:26  <LOUDBOT>LOOK WHAT YOU'VE DONE JEREMY ASHKENAS YOU'VE RUINED EVERYTHING
03:45:38  <niftylettuce>https://twitter.com/niftylettuce/status/321097223150653440
03:45:40  <niftylettuce>THERE U GO
03:45:40  <LOUDBOT>WHEN IS THE PART WHERE WE SHOOT NAZIS
03:51:41  <jesusabdullah>oh man, long night of work for me
03:51:47  <jesusabdullah>I hope I can stay in the zone
03:51:50  <jesusabdullah>I tire too easily
05:29:50  <niftylettuce>HACKITY HACK MOTHER FUCKERS
05:29:51  <LOUDBOT>YOU TEACH YOUR USERS THINGS? DAMN, THAT'S AWESOME. WHERE CAN I GET SOME USERS LIKE THAT?
08:20:04  <substack>oh wow, mbrevoort sent a pull req fixing those crdt/seaport rm issues, NICE
08:42:55  * substack topic: Unofficial browserling/testling mad science channel. For official help /join #browserling
08:45:05  <substack>dominictarr: did you see https://github.com/dominictarr/crdt/pull/21 ?
08:45:19  <substack>was pretty fast
08:46:36  <dominictarr>sweet. I just woke up, will merge it after I've had coffee!
08:46:41  <substack>nice
08:46:58  <substack>mbrevoort is running a gigantic cluster of 140 nodes at pearson
08:47:18  <substack>so this is a really great real-world use case for scuttlebutt/crdt/seaport
10:25:40  <juliangruber>dominictarr: what datastructures/libs would you use for a distributed todo list application?
10:25:55  <juliangruber>I'm using crdt and level-scuttlebutt right now...
10:28:54  <dominictarr>juliangruber: yeah, that is what I'd use.
10:29:19  <dominictarr>substack: wow, 140 nodes!
10:30:37  <substack>dominictarr: http://vimeo.com/59748495
10:31:31  <dominictarr>watching
10:33:26  <dominictarr>it's so weird to discover that someone is using something I've written at a large scale, or at any scale, in production.
10:33:47  <dominictarr>github issues are always about problems, or discussions mostly in abstract
10:33:56  <juliangruber_>through must be used a lot in production
10:34:10  <dominictarr>through has ~180 deps on npm
10:34:17  <substack>one of the most depended-upon modules
10:34:45  <substack>and considering how popular some of the libraries that depend on through are themselves the impact is even greater
10:38:21  <dominictarr>there is a strong pattern, the most simple, generic modules have many dependents, the specialized ones have less
10:44:21  <dominictarr>so… strange idea: async evented systems, streams etc, and parsers … are the same thing
10:44:58  <substack>very few parsers are streaming unfortunately
10:45:09  <substack>it's not something that people usually consider when writing a parser
10:46:23  <dominictarr>I mean just in the way that they change state based on incoming events (data)
10:47:07  <dominictarr>and produce an output that has certain properties, and
10:47:18  <dominictarr>like if it's a JSON parser,
10:48:05  <dominictarr>and there is a '{' and it's not in the 'STRING' state, then the parser must eventually emit a matching object, or a SyntaxError
10:48:59  <dominictarr>but the interesting thing with parsers, is that they are usually constructed with a formal model, and tested with valid and invalid inputs
10:52:29  <dominictarr>we need to do that with async systems
10:53:02  <dominictarr>like, a classic stream should always eventually emit a drain following a pause (returning false)
10:53:12  <juliangruber_>treat them like a state machine?
10:53:23  <dominictarr>PAUSE DRAIN PAUSE DRAIN PAUSE DRAIN is valid
10:53:32  <dominictarr>but PAUSE DRAIN DRAIN is not
10:53:57  <dominictarr>PAUSE PAUSE PAUSE DRAIN is okay though
10:54:33  <dominictarr>juliangruber_: absolutely, because then you can model all possible state transitions
10:55:06  <dominictarr>which you need for quality testing
10:55:45  <dominictarr>and evaluating the quality of the tests
10:55:53  <dominictarr>code coverage isn't good enough
10:56:36  <dominictarr>because you need to test each code _path_, and code coverage tools don't know if you have covered code paths
10:57:31  <juliangruber_>we need a tool where you input your state machine and it gives you useful paths to test
10:57:59  <juliangruber_>like, test PAUSE DRAIN PAUSE DRAIN, but not PAUSE DRAIN PAUSE DRAIN PAUSE DRAIN
11:04:32  <juliangruber_>or a tool that randomly triggers valid and invalid code paths based on a formal state machine
11:04:49  <juliangruber_>I heard a lot of browser bucks are caught by randomly making it do things
11:06:30  <dominictarr>"browser bucks" ?
11:06:38  <juliangruber_>bugs :D
11:06:45  <dominictarr>aha
11:07:07  <dominictarr>so, I tried this a while ago with https://npmjs.org/package/macgyver
11:07:16  <dominictarr>but making it more like TAP would be bettor
11:07:40  <dominictarr>then, you just have your machine output EVENT STATE {DATA}
11:07:42  <dominictarr>etc
11:07:56  <dominictarr>and then have a model that checks that
11:09:28  <dominictarr>which you could specify with temoral logic, kinda like a regexp
11:09:47  <juliangruber_>temporal?
11:09:58  <dominictarr>oops, yes
11:10:02  <juliangruber_>ok
11:10:03  <juliangruber_>so
11:10:09  <juliangruber_>how would the syntax look like?
11:10:34  <dominictarr>like /DATA*(END|ERROR)?CLOSE$/
11:10:40  <juliangruber_>aaaah
11:10:42  <dominictarr>like a regexp
11:10:43  <juliangruber_>nice
11:11:10  <dominictarr>you could put pause state in that too
11:11:35  <juliangruber_>is regexp mighty enough?
11:12:00  <dominictarr>with pause /^(DATA|(PAUSE+DRAIN))*(END|ERROR)?CLOSE$/
11:12:10  <dominictarr>for some things
11:12:20  <dominictarr>most importantly, it's familiar
11:13:18  <dominictarr>you can't do nested state, so you can't do recursive things, like json, or html
11:14:22  <juliangruber_>http://www.catonmat.net/blog/recursive-regular-expressions/
11:16:40  <dominictarr>* reading
11:21:11  <dominictarr>juliangruber_: you could use this model to check all your callbacks too, it's really simple /^(ASYNC)(RETURN)(ERROR|RESULT)$/
11:21:23  <dominictarr>for a callback that is always async
11:21:35  <dominictarr>if it can callback sync, it would look like this:
11:22:50  <dominictarr>sync or async: /^(ASYNC)(((ERROR|CB)RETURN)|(RETURN(ERROR|CB)))$/
11:23:04  <dominictarr>here, I am using () to separate event names
11:23:10  <dominictarr>could also just use single letters
11:23:28  <dominictarr>or spaces, but this way it's a valid regexp
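A sketch of such a checker over classic stream events (hedged, macgyver-ish; uses the () token notation from above):

    function specStream (stream, spec, onViolation) {
      var trace = ''
      ;['data', 'pause', 'drain', 'end', 'error', 'close'].forEach(function (name) {
        stream.on(name, function () {
          trace += '(' + name.toUpperCase() + ')'
          // a stream's life ends at close, so check the whole trace there
          if (name === 'close' && !spec.test(trace)) onViolation(trace)
        })
      })
      return stream
    }
    // the through spec from above:
    // specStream(s, /^\(DATA\)*(\(END\)|\(ERROR\))?\(CLOSE\)$/, console.error)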
11:24:32  <dominictarr>ASYNC is calling the async function, so the first one means that the function must return before the callback
11:24:48  <juliangruber_>aaah
11:25:17  <dominictarr>and if you have an output that calls back twice… then that is an error
11:27:12  <dominictarr>this could all go into a TAP like thing, and then have a checker
11:27:53  <dominictarr>and you could also check what paths aren't tested
11:29:35  <juliangruber_>I get the first one
11:29:36  <juliangruber_>in the second on, eh
11:29:36  <juliangruber_>*one, what does (ERROR|CB) mean?
11:29:36  <juliangruber_>should it throw immediately or call CB when done with async operations?
11:30:02  <dominictarr>like, you'd know whether you have sync/asyncs that always callback ASYNC, and never SYNC, so you'd know you'd need to test the path where it calls back sync
11:30:13  <dominictarr>oh, maybe that should be (ERR|RESULT)
11:30:31  <dominictarr>I mean, it can callback an error or a valid result
11:30:36  <juliangruber_>ok
11:31:31  <dominictarr>of course, there is an important distinction between valid (handleable errors) and invalid errors
11:31:43  <dominictarr>you should test the error paths too.
11:35:06  <juliangruber_>yes
11:35:50  <juliangruber_>what's wrong here https://gist.github.com/juliangruber/5336165
11:38:05  <juliangruber_>oh, the udid thing was wrong
11:47:33  <dominictarr>we just need a streaming regular parser
12:09:51  <dominictarr>juliangruber_: hmm, but if you have a stream that accepts data, but doesn't emit data until it's unpaused, and always emits the same number of data items… that is hard to specify
12:10:02  <dominictarr>that is not a simple regular language anymore
12:10:53  <dominictarr>it's like a^nb^n (any number of a's then same number of b's), which regexp can't express
12:13:19  <juliangruber_>sounds like we need a dsl
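A sketch of the non-regular bit: the "same number of items out as in" property is the a^nb^n case, so the checker needs a counter, i.e. something stronger than a regexp:

    function countingSpec () {
      var written = 0, emitted = 0
      return {
        write: function () { written++ },   // count the a's
        data:  function () { emitted++ },   // count the b's
        end:   function () {
          if (emitted !== written) {
            throw new Error('spec violated: ' + written + ' in, ' + emitted + ' out')
          }
        }
      }
    }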
12:28:08  <dominictarr>yes, a streaming parser generator
12:28:38  <dominictarr>we could still specify a lot with regular expressions
12:28:49  <juliangruber_>there are some modules for this, but none as flexible as necessary
12:28:52  <juliangruber_>yes
12:29:10  <dominictarr>but more advanced parsers would give us tighter specs
12:30:00  <dominictarr>no one would be able to say js isn't a systems language, with that stuff!
12:32:57  <st_luke>they will still say it
13:02:01  <juliangruber_>dominictarr: level-scuttlebutt doesn't load my previously stored data
13:06:32  <dominictarr>juliangruber_: what sort of data?
13:09:00  <juliangruber_>dominictarr: crdt. here's a gist: https://gist.github.com/juliangruber/5336653
13:09:17  <juliangruber_>node t.js write inserts data
13:09:24  <juliangruber_>node t.js read reads everything
13:11:01  <juliangruber_>when I inspect the leveldb with lev everything is duplicated
13:14:48  <dominictarr>I'll have a look
13:15:02  <juliangruber_>thanks :)
13:23:32  <juliangruber_>oh, so level-scuttlebutt adds persistence, but all the data is still stored in memory?
13:23:35  <juliangruber_>*too?
13:25:54  <pkrumins>hey guys, there is this latvian company that just raised money and they're looking for someone to write them a node module for their search engine
13:26:10  <pkrumins>if anyone wants to do this, i can introduce you to them
13:36:24  <juliangruber_>how much work aprox?
13:36:27  <juliangruber_>pkrumins: ^
13:38:52  <pkrumins>so from what they told me, they've got the modules in python and php, but now they also want a node.js library
13:39:04  <pkrumins>so you'd use python and php as examples
13:39:21  <pkrumins>i havent looked at it myself at all so i dont really know how big their libraries are
13:39:32  <pkrumins>if you're interested, i can intro you right now
13:39:36  <pkrumins>and you can discuss this further
13:39:54  <pkrumins>their product is a massive scale search engine
13:40:03  <pkrumins>just add more clusters and it scales automatically
13:40:22  <juliangruber_>when should it start? I'm busy right now but I could do it starting from the 22nd
13:40:43  <pkrumins>that's cool they've been emailing me since last year
13:40:49  <pkrumins>so it doesnt look like there's a hurry
13:41:08  <pkrumins>juliangruber_: ok i'm introing you then
13:41:47  <pkrumins>oh but i dont know your email
13:41:49  <pkrumins>:)
13:41:58  <juliangruber_>julian@juliangruber.com
13:42:04  <pkrumins>thanks
13:48:52  <pkrumins>juliangruber_: introed
13:49:07  <juliangruber_>pkrumins: thanks :)
13:49:16  <pkrumins>juliangruber_: the guy i introed you to is the ceo of the company
13:49:32  <pkrumins>that's about all i know about him or his company. they raised 1m euros recently.
13:49:47  <pkrumins>you're welcome
14:14:25  <dominictarr>juliangruber_: one problem is that it's not setting the id on the scuttlebutts; each time I do a write they have a different id
14:15:29  <dominictarr>juliangruber_: btw it's very helpful to put a package.json in examples/test scripts
14:50:05  <dominictarr>juliangruber_: yeah, that was the first problem - because all my tests used the object as schema approach
14:50:58  <dominictarr>and the schema set the scuttlebutt.id to the udid but that is not the right place for that
14:58:57  <dominictarr>I'm adding a test to level-scuttlebutt for it
15:06:02  <rowbit>/!\ ATTENTION: (default-local) keith@... successfully signed up for developer browserling plan ($20). Cash money! /!\
15:06:02  <rowbit>/!\ ATTENTION: (default-local) paid account successfully upgraded /!\
15:53:57  <mikolalysenko>is there any library in npm that can draw orthographic tilemaps?
15:54:14  <mikolalysenko>for scrolling maps/games/etc
15:56:43  <tmcw>mikolalysenko: https://github.com/substack/tilemap
15:57:09  <mikolalysenko>tmcw: that only works for isometric tiles
15:57:32  <tmcw>in that case no
15:57:36  <mikolalysenko>hmm
15:58:10  <mikolalysenko>looking around, I found this project: https://github.com/zynga/scroller
15:58:20  <mikolalysenko>but it isn't npm-ified
16:09:40  <mikolalysenko>there is also this: https://github.com/simplegeo/polymaps
16:10:07  <mikolalysenko>but looks like it is kind of unsupported/abandoned
16:41:16  <dguttman>Browserify v2 and You (nix) - https://news.ycombinator.com/item?id=5512461
16:50:34  <juliangruber_>dominictarr: so is there a quick fix? doing function (name) { var d = new Doc(); d.id = name; return d } doesn't work
16:51:36  <tmcw>mikolalysenko: oh, that's what you mean by orthographic
16:52:27  <dominictarr>juliangruber_: try 5.0.3
16:52:39  <dominictarr>I'm about to cycle home, will be back on line in half an hour
16:58:56  <guybrush>mikolalysenko: http://blog.tojicode.com/2012/08/more-gpu-tile-map-demos-zelda.html
17:04:25  <mikolalysenko>guybrush: nice demo, but I was thinking about canvas based tilemap libraries
17:04:46  <mikolalysenko>though I would agree that if you are using webgl, then you can obviously do things much more easily/efficiently
17:06:41  * tmcw that's a premature obviously
17:07:12  <tmcw>mikolalysenko: you might just want to look into mapping libraries - openlayers, modestmaps, leaflet, etc have been in this space for a long time
17:08:20  <mikolalysenko>tmcw: Yeah, there should be some good solutions out there
17:08:27  <mikolalysenko>but it is disappointing that we don't have any in npm yet
17:08:38  <tmcw>leaflet and modestmaps are both in npm
17:09:05  <mikolalysenko>interesting
17:09:30  <mikolalysenko>do they allow custom tilemaps for making things like games?
17:09:35  <tmcw>yes
17:10:57  <mikolalysenko>hmm
17:12:24  <tmcw>there's nothing wrong with reinventing the wheel if you want to :)
17:12:34  <tmcw>but there's more to reinvent and write here than you expect
17:12:44  <mikolalysenko>yeah
17:13:06  <mikolalysenko>what I was thinking though was what it would take to make a basic tile map library for games
17:13:20  <mikolalysenko>I've written them before in the past, and it doesn't have to be too complicated
17:13:34  <mikolalysenko>but otoh, if you want stuff like streaming or whatever it can be nasty
17:13:45  <tmcw>probably good to give leaflet/modestmaps a shot
17:13:50  <mikolalysenko>can they do animations?
17:13:55  <tmcw>or to write pull requests for scroller
17:14:01  <tmcw>pull requests > new projects
17:14:12  <mikolalysenko>I think scroller is closest to what I want
17:14:18  <tmcw>yes, http://mapbox.com/easey/
17:14:39  <mikolalysenko>no, not that kind of animations
17:14:42  <mikolalysenko>I mean like animated tiles
17:14:51  <tmcw>that's outside of scope
17:14:57  <mikolalysenko>say you make a videogame like mario or something, you would want to have animated tiles in the background
17:15:13  <mikolalysenko>scroller can do this I think, but modestmaps/leaflet would have issues
17:15:19  <tmcw>what issues?
17:15:44  <mikolalysenko>well, you would have to hack them apart and add some extra features
17:16:00  <tmcw>what features?
17:16:08  <tmcw>and what hacking? not sure you've looked into this.
17:16:44  <mikolalysenko>ok, look at any super nintendo era video game
17:16:58  <mikolalysenko>most of them have tile based backgrounds that scroll
17:17:20  <tmcw>yes?
17:17:22  <mikolalysenko>a commonly used special effect in these games is to have animated tiles in the background
17:17:26  <tmcw>yep
17:17:29  <tmcw>and?
17:17:34  <mikolalysenko>so, how would you do that in modest maps?
17:17:52  <tmcw>a custom tile layer, that adds div tiles and does sprites.
17:17:56  <tmcw>there's an example of this in examples
17:18:02  <tmcw>and a bunch of real-world examples.
17:18:24  <tmcw>http://sealevel.climatecentral.org/surgingseas/place/cities/NY/New_York#show=cities&center=10/40.6979/-73.9797&surge=1
17:18:27  <tmcw>drag the slider.
17:18:44  <mikolalysenko>what if you want to do it in a canvas so you can draw polygons or other objects?
17:19:02  <tmcw>then write canvas layer.
17:19:19  <tmcw>just implement what you want to, man :)
17:22:07  <mikolalysenko>modest maps looks fine for mapping, but it is probably not a great solution for games.
17:22:20  <mikolalysenko>I think for a game you would maybe want something like scroller, or perhaps impactjs: http://impactjs.com/documentation/class-reference/backgroundmap
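A minimal sketch of the kind of canvas tile layer being described (all names hypothetical; animated tile ids cycle through their frames by wall-clock time, snes style):

    function drawMap (ctx, tileset, map, opts, now) {
      var ts = opts.tileSize
      for (var y = 0; y < map.height; y++) {
        for (var x = 0; x < map.width; x++) {
          var id = map.tiles[y * map.width + x]
          var anim = opts.animations[id]   // e.g. { frames: [8, 9, 10], ms: 120 }
          if (anim) id = anim.frames[Math.floor(now / anim.ms) % anim.frames.length]
          var sx = (id % tileset.columns) * ts
          var sy = Math.floor(id / tileset.columns) * ts
          ctx.drawImage(tileset.image, sx, sy, ts, ts,
                        x * ts - opts.scrollX, y * ts - opts.scrollY, ts, ts)
        }
      }
    }
    // call from requestAnimationFrame with now = Date.now() to animate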
18:03:36  <chrisdickinson>dominictarr: does npm.im/pull-stream interop with classic streams?
18:04:16  <dominictarr>yes, there is a wrapper
18:04:27  <dominictarr>pull-stream-to-stream and stream-to-pull-stream
18:04:45  <dominictarr>stream-to-pull-stream can interface with both classic and new streams
18:07:17  <chrisdickinson>ah, interesting
18:07:33  <chrisdickinson>i'm thinking of implementing inflate as a pure js stream, and i think it'd be a lot easier as a pull-stream
18:08:24  <chrisdickinson>maybe it would be a duplex stream on the outside for the classic-stream-ness, and then internally it'd be a pull stream
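For context, the pull-stream shape in question (per dominictarr's pull-stream docs; a source is a function (abort, cb), and the sink decides when to stop asking, which is exactly what makes "where does the compressed data end" explicit):

    function values (array) {            // source
      var i = 0
      return function (abort, cb) {
        if (abort || i >= array.length) return cb(abort || true)
        cb(null, array[i++])
      }
    }

    function drain (each, done) {        // sink
      return function (read) {
        read(null, function next (end, data) {
          if (end) return done(end === true ? null : end)
          each(data)
          read(null, next)               // pull the next item
        })
      }
    }

    drain(console.log, function () {})(values([1, 2, 3]))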
18:18:33  <dominictarr>chrisdickinson: yeah, pull-stream is way easier
18:19:06  <chrisdickinson>i'm mostly doing this so i can actually tell where the compressed data ends and the uncompressed git data begins again in a stream
18:19:13  <dominictarr>also, I have a new plan, for testing streams -- that you'll find quite interesting, as a parser author
18:19:20  <chrisdickinson>oh?
18:19:46  <dominictarr>so, I realized that parsers, and async evented machines/systems are the same things
18:20:23  <dominictarr>so, I could specify the legal outputs for a simple through stream with something like regexp
18:20:37  <dominictarr>through: /DATA*(END|ERROR)CLOSE/
18:20:59  <dominictarr>means zero or more data events, then an end or an error, then close.
18:21:15  <chrisdickinson>oh interesting
18:21:27  <chrisdickinson>events as tokens
18:21:27  <dominictarr>so, that is a simple one
18:21:45  <dominictarr>what about a strictly pausing through stream?
18:22:17  <dominictarr>strict-through: /(DATA(PAUSE+DRAIN)?)*(END|ERROR)CLOSE/
18:22:26  <chrisdickinson>nice
18:22:43  <chrisdickinson>yeah, it's weird that classic streams don't have pause as an event
18:22:55  <dominictarr>they did originally
18:23:06  <dominictarr>I think taking it out was a mistake
18:23:09  <chrisdickinson>i was gonna say "(DATA PAUSE)*(END|ERROR)CLOSE"
18:23:14  <chrisdickinson>err, DATA RESUME
18:23:19  <chrisdickinson>… or drain, even
18:23:26  <chrisdickinson>man, my brain isn't working yet, haha
18:23:46  <dominictarr>so, that spec was only for the output of a stream
18:23:51  <chrisdickinson>though i suppose that's technically incorrect as a stream *could* error before draining
18:23:54  <dominictarr>I didn't take into account the input
18:24:00  <dominictarr>trup
18:24:03  <dominictarr>true
18:24:47  <dominictarr>for this sort of stuff you need a more sophisticated model
18:25:12  <dominictarr>and actually, there are many streams that should never emit errors
18:26:22  <dominictarr>but, when you get to the more sophisticated model, there are formalisms for that in the parsing world
18:26:39  <chrisdickinson>it's definitely an interesting approach. i like the idea of treating discrete events as tokens, and streams as the grammar
18:27:03  <dominictarr>you can specify accurately how difficult/complex a given stream/async-machine is by its location in the Chomsky hierarchy
18:28:20  <dominictarr>another part of this idea is to augment each stream, etc, with logs for the event to stdout or stderr, and then it's checked by an external program, like tap
18:28:49  <dominictarr>the really cool thing here, is that you could use it to check any language that can write to stdout!
18:29:09  <dominictarr>even like, VBSCRIPT and shit!
18:29:50  <chrisdickinson>haha
18:29:52  <chrisdickinson>awesome
18:30:18  <dominictarr>ALSO: everyone uses regexp, so it's highly accessible
18:31:52  <dominictarr>Oh, yeah, also, since the grammar is a formal model, you can use it to check test coverage -
18:32:28  <dominictarr>but not just test coverage of lines of code, but coverage of particular state transitions
18:32:57  <dominictarr>in cases where you can randomly generate, and check outputs, you could automatically generate tests
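A sketch of that split (hedged): the instrumented side prints one token per event, and a separate checker validates the whole trace from stdin, which is why any language that can write to stdout qualifies:

    // instrumented side:
    //   stream.on('data',  function () { console.log('DATA') })
    //   stream.on('end',   function () { console.log('END') })
    //   stream.on('close', function () { console.log('CLOSE') })

    // checker side, reading the trace from stdin:
    var trace = ''
    process.stdin.on('data', function (chunk) { trace += chunk })
    process.stdin.on('end', function () {
      var tokens = trace.trim().split('\n').join(' ')
      var spec = /^(DATA )*(END |ERROR )?CLOSE$/
      console.log(spec.test(tokens) ? 'ok' : 'not ok: ' + tokens)
    })
    process.stdin.resume()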
18:54:12  <juliangruber_>dominictarr: what's the simplest module to test this approach with / develop it for?
18:54:46  <dominictarr>testing streams, definately
18:55:51  <dominictarr>juliangruber_: ah, I found the error
18:59:20  <juliangruber_>streams2? pull-stream? through?
18:59:27  <dominictarr>related
18:59:27  <juliangruber_>dominictarr: awsum! what's the error?
18:59:34  <dominictarr>it's embarassing
18:59:50  <dominictarr>https://github.com/dominictarr/level-live-stream/commit/55efdbb55f76a2efb6e07580c4bbed1fe8d36743
19:00:38  <dominictarr>I changed that because of <%reason%>, and then published it, and didn't have good tests for level-live-stream (because it's hard to write tests for streams…)
19:01:52  <dominictarr>ah, because I was working on level master, and you know, things get complicated, and you like symlink into node_modules with console.log EVERYWHERE
19:02:24  <dominictarr>and then I realized level-live-stream had other problems, and wrote pull-level (which has good tests!)
19:02:44  <dominictarr>level-scuttlebutt is way too big anyway
19:03:41  <dominictarr>it's scary: 958 lines!
19:03:54  <dominictarr>I even use a lib/ folder, which I don't normally do
19:05:48  <juliangruber_>haha yeah everyone has been there I guess
19:06:18  <juliangruber_>so level-scuttlebutt is fixed now with a new version of level-live-stream?
19:07:14  <dominictarr>yup
19:07:20  <juliangruber_>sweet
19:07:30  <juliangruber_>what does level-scuttlebutt even do?
19:07:35  <juliangruber_>have you seen level-stay?
19:08:54  <dominictarr>yeah, saw it just before
19:09:18  <dominictarr>it does like too many things
19:09:56  <dominictarr>it saves scuttlebutts, but, it can also replicate the entire db, and it provides a remote client, and map-reduce
19:10:26  <dominictarr>and the remote stuff has support for bringing scuttlebutts in and out of memory, and transparently reconnecting
19:10:49  <juliangruber_>is a scuttlebutt that is only in leveldb and not in memory at all also possible?
19:10:55  <dominictarr>but I wrote it before I figured out level-sublevel
19:10:58  <dominictarr>juliangruber_: yes
19:11:12  <juliangruber_>that can be used?
19:11:25  <juliangruber_>like, can I use 100 scuttlebutts at the same time without consuming much ram?
19:11:38  <dominictarr>scuttlebutt.dispose() when you are finished using it
19:11:52  <juliangruber_>but it has to be loaded into memory initially
19:12:17  <dominictarr>if there is a client connected to it, then it has to be in memory
19:13:34  <dominictarr>but when they disconnect, it will dispose of that scuttlebutt after a timeout (1 sec I think)
19:14:09  <dominictarr>and the full db replication doesn't need to bring the scuttlebutts into memory at all
19:14:39  <dominictarr>so you can replicate 1000 scuttlebutts to another db, and it's just a text stream,
19:14:59  <dominictarr>the objective here is to be able to have many scuttlebutt servers, and load balance between them
19:16:40  <juliangruber_>ok I see
19:18:12  <anvaka>Hey guys. So I made this little visualization of NPM dependencies: http://www.yasiv.com/npm#view/browserify not sure if it's going to be useful for anyone, but hope it could give a better picture of packages structure :)
19:34:26  <juliangruber_>anvaka: doesn't seem to work with npm.im/through
19:35:56  <anvaka>http://www.yasiv.com/npm#view/through works - it has no dependencies
19:36:14  <juliangruber_>aah, I thought it showed dependents
19:36:54  <anvaka>hm... that could be interesting graph too :)
19:37:12  <juliangruber_>with potentially far more nodes
19:37:15  <juliangruber_>I like it
19:38:24  <juliangruber_>anvaka: is yasiv on npm?
19:39:47  <anvaka>juliangruber_: the npm visualization is on github: https://github.com/anvaka/npmgraph
19:40:20  <juliangruber_>you should put it on npm
19:40:26  <juliangruber_>despite not having any dependencies
19:41:29  <anvaka>i'm kind of new to node side of js development :). Could you explain why this is a good idea?
19:42:50  <juliangruber_>so others can easily install it
19:44:31  * juliangruber_ sleeping
19:47:19  <dominictarr>anvaka: everything should be on npm
19:47:58  <dominictarr>anvaka: also, it would be really cool to visualize _dependants_, as well as dependencies!
19:48:14  <guybrush>anvaka: looks awesome! very nice
19:48:47  <anvaka>thank you!
19:50:57  <anvaka>I'll add an option to show reverse graph, as well as dev dependencies
20:28:26  <Domenic__>anvaka: I think circular dependencies break that. I tried to use it to diagnose a circular dependency problem and it showed a bunch of floating squares.
20:29:23  <anvaka>Domenic__ can you give a link?
20:29:45  <Domenic__>anvaka: well i republished the package without circular dependencies, so not anymore :P
20:29:54  <anvaka>:)
20:30:04  <Domenic__>actually no i didn't
20:30:20  <Domenic__>hmm either i was seeing things or you used to do dev dependencies
20:30:54  <anvaka>how good is your ISP? Could it be a network glitch?
20:33:28  * ITprojoined
20:35:02  <Domenic__>pretty good usually. meh let's not worry about it; i'll let you know if i can repro it.
21:49:03  <Raynos>isaacs, Domenic__, dominictarr: Do you have any recommended reading on parallel errors?
21:49:16  <Raynos>I've realized why I find async errors hard. It's because two errors can happen concurrently
21:49:19  <dominictarr>parallel errors?
21:49:27  <Raynos>with synchronous code everything blocks and if an error occurs it throws terminating all other errors
21:49:40  <Raynos>i.e. there's no such thing as "two errors happened concurrently"
21:49:54  <dominictarr>so, you have two cbs and they both error
21:49:57  <Raynos>with async code where you're opening two file descriptors in parallel both can error concurrently
21:50:19  <dominictarr>technically, they aren't concurrent, because they are in the same event loop
21:50:24  <dominictarr>in the same process
21:50:43  <Raynos>with normal sync code you have a simple contract between the "source of an error" and the consumer.
21:50:51  <dominictarr>if(err) DIDERROR=err
21:50:52  <Raynos>the consumer calls a function. The functions may throw and you can catch it
21:51:00  <Raynos>with async code
21:51:05  <Raynos>you call a function and it has a cb
21:51:08  <Raynos>that may contain an error
21:51:11  <dominictarr>and if(DIDERR && err) TWOERRS = true
21:51:17  <Raynos>UNLESS that function talks to TWO sources in parallel
21:51:22  <Raynos>in which case it may contain two errors
21:51:44  <dominictarr>okay, I'm just saying that is not the definition of "concurrent"
21:51:57  <Raynos>by concurrent I mean
21:52:00  <dominictarr>well… not in the distributed systems sense
21:52:10  <Raynos>that if you have the first error there is no way to stop the second error from happening
21:52:15  <Raynos>in a serial non-concurrent fashion
21:52:21  <dominictarr>oh, right -
21:52:22  <Raynos>there is always a way to stop the second error from happening
21:52:25  <Raynos>by simply halting the program
21:52:41  <Raynos>you cant just halt the program once you have the first error because the second one may have already happened
21:52:45  <dominictarr>so, this discussion is waaay too non-specific
21:53:16  <Raynos>The specific problem I have is implementing merge or parallel or list
21:53:32  <Raynos>any kind of function that takes many asynchronous streams / futures / continuables / cbs / whatever and turns it into one
21:53:34  <dominictarr>for streams?
21:54:02  <Raynos>something feels fundamentally wrong about "first error wins, rest are ignored"
21:54:04  <Domenic__>Raynos: two patterns. 1) any error = fail => deal with the first error you see and bail. this is often enough, e.g. in situations where you expect success and any errors mean you're going to have to stop doing stuff for a while. 2) handle all errors, e.g. turn them into a composite error
21:54:19  <dominictarr>Raynos: problem solved: promises
21:54:25  <Raynos>by "bail" do you mean terminate process
21:55:04  <Domenic__>no i mean stop caring about the result because you're fucked anyway, so fall back to the nearest error handling code (as distinct from error passing-up code)
21:55:12  <dominictarr>usually, you can handle the first error, and stop
21:55:13  <Raynos>Domenic__: my problem with a composite error is that I have this idea that waiting for "all errors" is a bad thing.
21:55:25  <Domenic__>Raynos: in that case yeah then just bail on first error
21:55:27  <dominictarr>but that isn't the general case
21:55:30  <Domenic__>it depends on scenario right
21:55:53  <Domenic__>in a http server drawing data from two sources and mashing them up into a response, if either fails, your HTTP request is fucked, so you should just bail the moment you see any error and give the appropriate 500
21:55:56  <Raynos>I guess it depends on what the error is and how you recover from it
21:56:15  <Domenic__>in something that's updating multiple databases, you don't want to bail if one of them is unavailable, so you wait for all operations to complete---success or failure
21:56:35  <Domenic__>then you have to decide what to do with your potential list of failures, e.g. turn it into a composite error and halt further processing, or just log it, or ....
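A sketch of pattern 2 above (hypothetical helper): wait for every operation to settle, then report either success or one composite error:

    function settleAll (tasks, cb) {
      var pending = tasks.length, errors = []
      if (!pending) return cb(null)
      tasks.forEach(function (task) {
        task(function (err) {
          if (err) errors.push(err)
          if (--pending === 0) {
            if (!errors.length) return cb(null)
            var composite = new Error(errors.length + ' of ' + tasks.length + ' failed')
            composite.errors = errors    // keep the individual failures
            cb(composite)
          }
        })
      })
    }
    // settleAll([updateDbA, updateDbB], handle)  // hypothetical tasks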
21:56:39  <dominictarr>if it's errors like lost my job, wife left me, son/daughter is gay/straight/vegan/republican/whatever those are errors I need to handle individually
21:56:52  <Domenic__>dominictarr: lollll
21:57:04  <Raynos>I just never really do anything with errors other then cleanup / bail / log
21:57:12  <Raynos>maybe this isn't that big of a problem
21:57:23  <Domenic__>I guess what I'm saying is it's not as much about the error as it is about hte operation that spawned the error
21:57:38  <Raynos>true
21:57:56  <dominictarr>yeah, it's the _meaning_ of the error
21:58:04  <dominictarr>is the error handleable?
21:58:13  <dominictarr>or are we just fucked
21:58:41  <Raynos>I'm thinking about having a stream protocol that's "chunk* error* end_of_stream" which would allow me to implement merge on streams
21:58:58  <dominictarr>like, if I stat a bunch of files to see if they are directories, then I want to know which ones are not existing, probably
21:59:35  <Domenic__>Raynos: I would *guess* that for merging streams usually you want to just error on first error. Assuming that this is like normal node streams and an error means no more data for you.
22:00:03  <Raynos>i just feel bad for swallowing errors at a library level
22:00:05  <dominictarr>you need to think about specific use cases here
22:00:24  <dominictarr>"error" has too many possible meanings
22:00:33  <Domenic__>Raynos: hah, yes, i agree with that
22:00:47  <dominictarr>just write a library that handles one situation, and the write another module later
22:01:11  <Raynos>i just want a way to forward many errors :P
22:01:37  <Raynos>so that when I need to I can write a consumer that knows how to handle one or more errors
22:01:38  <Domenic__>but then you have to wait for them
22:01:50  <Raynos>Domenic__: not if you emit them one at a time
22:01:59  <dominictarr>if it's a pull stream… you can send errors as data
22:01:59  <dominictarr>but then you need type checking
22:02:07  <Raynos>then the consumer can decide. I've seen one error in your stream. I'm going to abort you
22:02:18  <dominictarr>alternative: pass in a function to deal with the errors as they come
22:02:25  <Domenic__>Raynos: might work, i guess. might break lots of assumptions if there is data available after 'error' though.
22:02:28  <Raynos>or just (err, data) on pull cb
22:02:50  <dominictarr>cb(!!err) means end the stream
22:03:12  <Raynos>Domenic__: there should never be data available after error
22:03:58  <Raynos>I've made the assumption that a stream is not recoverable and cannot contain more data after an error
22:04:22  <dominictarr>Raynos: what are 3 actual usecases? and what types of errors can occur? how can you handle them?
22:04:48  <Raynos>i simply dont know enough about errors
22:04:58  <Raynos>i cant tell the difference between EPIPE and ECONNREFUSED
22:05:13  <dominictarr>to learn about errors: write a test framework then
22:05:41  <Raynos>i wrote 5 of those
22:06:02  <Domenic__>Raynos: what if you merge (a, b, err) with (x, y, z, w)
22:06:07  <dominictarr>do any of those 5 handle `throw false`
22:06:23  <Raynos>depends
22:06:25  <Raynos>maybe
22:06:26  <Domenic__>Or worse, (a, b, err) with (x, y, z, w, err)
22:06:43  <Raynos>Domenic__: a stream is a sequence of ordered chunks by time
22:07:30  <Raynos>so once you get an error, the chunk before it is the last chunk
22:07:42  <Raynos>after that the rest is zero or more errors followed by fini
22:07:56  <Raynos>this is a problem
22:08:07  <Raynos>if we assume there is a causation between the chunk before the error and the error
22:08:10  <Domenic__>so it's... (a, x, b, y, err)
22:08:19  <Domenic__>and the second err (let's rename it err2) gets lost
22:08:22  <Raynos>no.
22:08:43  <Raynos>a correct merge would take the first error. Send abort to the rest of the streams, the rest of the streams return 0 or more errors followed by fini
22:08:50  <Raynos>it forwards all errors and once it has all fini's it forwards a fini
22:09:11  <Domenic__>ah so all streams respect abort?
22:09:15  <Raynos>yes
22:09:19  <Domenic__>that helps
22:09:28  <Raynos>and by respect abort they no longer return chunks and return 0 or more errors followed by fini
22:09:40  <Raynos>https://gist.github.com/Raynos/6412cfd196cbf0379d76
22:09:48  <Raynos>I wrote a little gist for how I think it should work
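Roughly the semantics just described, sketched against a hypothetical chunk/error/fini/abort interface (not the gist itself; assumes sources respect abort):

    function merge (sources, out) {
      var alive = sources.length, aborting = false
      sources.forEach(function (src) {
        src.on('chunk', function (c) { if (!aborting) out.emit('chunk', c) })
        src.on('error', function (err) {
          out.emit('error', err)               // forward every error, one by one
          if (!aborting) {
            aborting = true
            sources.forEach(function (other) {
              if (other !== src) other.abort() // the others wind down
            })
          }
        })
        src.on('fini', function () {
          if (--alive === 0) out.emit('fini')  // fini only after all finis
        })
      })
    }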
22:10:10  <dominictarr>so, you when you are aborting all the streams, you could collect the errors into one
22:10:27  <Raynos>then you have to buffer the first error until you have the rest
22:10:31  <Domenic__>gist seems reasonable
22:10:33  <dominictarr>if the streams haven't produced their own error by then, they are just like "okay, bye"
22:10:40  <Raynos>i dont know whether streaming errors one by one or as one aggregate is better
22:11:08  <dominictarr>you need a few use-cases to judge that
22:11:11  <Raynos>dominictarr: they are just like "wait a sec. closing file descriptor. ill let you know if this errors. <time passes> nah im good. okay bye"
22:11:56  <dominictarr>how does one handle the error of a not closing file descriptor?
22:12:32  <dominictarr>"thanks, that is good to know…. getting on with my life now'
22:13:40  <dominictarr>what is that error like? you put something in the recycling, but it was ment to go in the trash?
22:13:58  <Raynos>dominictarr: I don't know is the answer
22:14:13  <Raynos>but all I know is that when a stream aborts
22:14:18  <Raynos>it does an asynchronous action
22:14:21  <Raynos>which may involve an error
22:14:22  <dominictarr>you were about to quit your job, but then the company went out of business?
22:14:59  <Raynos>lets say you try to abort a transaction
22:15:10  <Raynos>but it comes back saying "error too late. transaction completed"
22:15:10  <dominictarr>a bank transaction?
22:15:20  <Raynos>you would then need to implement rollback or rewind and apply that
22:15:38  <Raynos>but thats not a source :/
22:15:43  <dominictarr>EPLEASEIMPLEMENTROLLBACK
22:15:53  <Raynos>a better example is aborting a HTTP POST request
22:15:57  <Raynos>because you changed your mind
22:16:12  <Raynos>"PLEASE IMPLEMENT TROLL BACK" :D
22:16:12  <LOUDBOT>EVEN THOUGH WE'RE SUPER-STRANGE YOU GUYS
22:16:12  <dominictarr>you can do that?
22:16:23  <Raynos>you can in a browser
22:16:31  <Raynos>I think you may be able to call `req.close()` on outgoing requests
22:16:49  <dominictarr>hmm, must assume a lot about the endpoint
22:16:57  <dominictarr>I guess if it's still queued, though
22:17:15  <Raynos>https://github.com/joyent/node/blob/master/lib/http.js#L1395
22:17:21  <dominictarr>sounds like a protocol issue
22:18:05  <Raynos>the main issue is that I'm trying to write higher order functions on sources without thinking about sink functions
22:18:26  <Raynos>i somehow want to future proof my "duplex functions" to work with different type of sinks I may write in the future.
22:19:23  <dominictarr>I have an easier method
22:20:13  <dominictarr>just rewrite all your duplex functions when you realize you made the wrong assumption.
22:20:33  <Raynos>i dont like that, i do that too much
22:21:00  <Raynos>it implies i suffer too much from flavor of the month. "Oh ill just try this slightly different approach, its nicer. <rewrite half my modules>"
22:21:39  <dominictarr>well, don't rewrite stuff for a _slightly different_ approach
22:21:47  <dominictarr>how much is too much?
22:21:56  <dominictarr>most people don't do that enough
22:24:07  <Raynos>yeah I agree. What I really wanted to do was read prior art
22:24:14  <Raynos>about other systems where multiple errors can happen
22:24:28  <dominictarr>for handling async errors?
22:24:36  <dominictarr>npm.im/async
22:25:32  <Raynos>I meant other platforms
22:25:36  <Raynos>that are not node
22:25:52  <dominictarr>with threads?
22:26:20  <dominictarr>probably: just do one thing at a time and stop on the first error
22:28:19  <Raynos>blargh
22:28:29  <Raynos>I want to read a good article about stop on first error vs aggregate all
22:28:48  <Raynos>specifically for things where you can't halt on the first error and not have the other errors happen
22:28:54  <Raynos>so it has to be parallel / concurrent
22:47:17  <jesusabdullah>domains?
22:47:24  <jesusabdullah>I've not used domains (yet)
23:07:15  <isaacs>Raynos: I always abort and stop caring after the first error.
23:07:42  <isaacs>if (errState) return; else if (er) return cb(errState = er); else if (--n === 0) cb()
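isaacs' pattern, unrolled into a helper for readability (a sketch, not his code): first error wins, later errors are ignored, cb fires exactly once:

    function parallel (tasks, cb) {
      var n = tasks.length, errState = null
      if (n === 0) return cb()
      tasks.forEach(function (task) {
        task(function (er) {
          if (errState) return           // already failed; ignore the rest
          if (er) return cb(errState = er)
          if (--n === 0) cb()            // all done, no errors
        })
      })
    }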