00:00:01  * ircretary quit (Remote host closed the connection)
00:00:08  * ircretary joined
00:00:24  <isaacs>defunctzombie: so, does bar() happen before foo() is assigned to x, or after?
00:00:55  <defunctzombie>after
00:01:01  <defunctzombie>doFoo won't return until foo is done
00:01:11  <defunctzombie>your calls act blocking
00:01:18  <defunctzombie>here is the typical example I give
00:01:25  <defunctzombie>var valid = validate_email(email);
00:01:31  <defunctzombie>today validate email uses a regex
00:01:41  <defunctzombie>tomorrow, I want it to use some http api or some other shit
00:01:52  <defunctzombie>my code that uses that validate_email does not care how email is validated
00:02:01  <defunctzombie>only that I get true | false
00:02:14  <defunctzombie>I don't need to re-architect my app when I decide to change that
00:02:27  <defunctzombie>and.. importantly, my error handling gets way simpler
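The call shape defunctzombie is defending can be sketched like this (hypothetical `validate_email`; the regex is deliberately loose):

```javascript
// Hypothetical validate_email: today a regex, tomorrow perhaps an HTTP API.
// The call site only sees true | false, so swapping the implementation
// would not force callers to restructure -- provided the call stays blocking.
function validate_email(email) {
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email); // deliberately loose check
}

var valid = validate_email('user@example.com');
// statements after this line run only after validation has finished
if (!valid) throw new Error('invalid email');
```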
00:03:36  <defunctzombie>isaacs: I agree with your post when thinking in terms of callbacks for everything, but don't when you have calls that *look* sync, since statements after the line will always execute after the line
00:04:13  <isaacs>defunctzombie: how the code looks is not important. what's important is knowing the order of execution in *your program*
00:04:29  <defunctzombie>isaacs: and the order of execution will be foo();
00:04:35  <defunctzombie>then the lines after that assignment
00:04:42  <isaacs>defunctzombie: so, you're just saying that if you follow the rules, then the rules are all followed.
00:04:45  <isaacs>great.
00:04:47  <defunctzombie>...
00:04:56  <defunctzombie>no.. I am saying that if you have x = foo();
00:04:57  <isaacs>but, i'm telling you, there are cases where sync-looking APIs actually break these rules.
00:05:15  <defunctzombie>and whether foo returns immediately or not, it doesn't matter: statements after foo() execute only after foo returns
00:05:31  <isaacs>in fact, you asked specifically to be able to enter javascript multiple times concurrently
00:05:33  <defunctzombie>isaacs: show me real case and not just "telling me"
00:05:36  <isaacs>which could cause these rules to be broken.
00:06:00  <defunctzombie>isaacs: I actually found node-fibers which does what I want to play with
00:06:31  <isaacs>i don't get the aversion to knowing how your program runs.
00:06:31  <defunctzombie>isaacs: statements execute in order as you read them
00:06:38  <defunctzombie>isaacs: it is not an aversion
00:06:48  <defunctzombie>you know how it runs
00:06:55  <isaacs>defunctzombie: how do you do two async things in parallel, then?
00:06:55  <defunctzombie>I am simply saying that it doesn't matter at the lowest level
00:07:05  <defunctzombie>parallel(thing1, thing2);
00:07:15  * gwenbell quit (Quit: Lost terminal)
00:07:16  <isaacs>defunctzombie: ie, read a file, and also send a request, and then get a notice when they're both done?
00:07:21  <defunctzombie>and parallel returns when both done.. or the hundred other ways to do it
00:07:40  <isaacs>defunctzombie: what's `parallel` in your "looks sync" land?
00:07:48  <isaacs>defunctzombie: ie, if i change my email validator to a web service
00:07:59  <isaacs>defunctzombie: and i *also* change my url-validator to a web service.
00:08:07  <isaacs>defunctzombie: how can i check *both* the email *and* the url at the same time?
00:08:25  <isaacs>defunctzombie: without (a) waiting for one to finish before the next runs, or (b) releasing zalgo
00:09:27  <isaacs>nb: i don't need to know which will finish first, necessarily. just need to know that both are async, and let them do their async bits in parallel.
00:09:36  <isaacs>in other words, sometimes you actually don't *want* sleep(), you *want* setTimeout
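A minimal callback-style `parallel` of the kind under discussion might look like this (a sketch, not any particular library's API):

```javascript
// parallel(tasks, cb): start every task immediately, collect results in
// task order, and call cb exactly once when all have finished (or on the
// first error). Minimal sketch -- no concurrency limit, no cancellation.
function parallel(tasks, cb) {
  var results = new Array(tasks.length);
  var pending = tasks.length;
  tasks.forEach(function (task, i) {
    task(function (err, value) {
      if (err) { cb(err); cb = function () {}; return; }
      results[i] = value;
      if (--pending === 0) cb(null, results);
    });
  });
}

// both validators start at once and overlap in time
parallel([
  function (cb) { setTimeout(function () { cb(null, 'email ok'); }, 10); },
  function (cb) { setTimeout(function () { cb(null, 'url ok'); }, 10); }
], function (err, results) {
  // results arrive in task order: ['email ok', 'url ok']
});
```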
00:09:48  <defunctzombie>isaacs: https://gist.github.com/shtylman/6325157
00:09:52  <isaacs>(to take the most basic "looks sync" example)
00:10:10  <isaacs>defunctzombie: ok, paste the code for parallel
00:10:19  <isaacs>defunctzombie: or is that a [native code] thing?
00:10:47  * coderzach quit (Quit: coderzach)
00:11:04  <isaacs>defunctzombie: if parallel(..) is a native thing, then ok, that's your keyword
00:11:57  <isaacs>defunctzombie: also, if it truly IS running the *javascript* in parallel, then things get super complicated fast.
00:12:26  <defunctzombie>isaacs: parallel would just setup a "container" that allows both calls to run
00:12:33  <defunctzombie>it does not run anything in threads
00:12:37  <defunctzombie>it is the same async model as node
00:12:46  <defunctzombie>when some IO is waiting, some other IO can dispatch
00:12:57  <defunctzombie>nothing is any more dangerous here than currently
00:15:22  <isaacs>https://gist.github.com/isaacs/6325173
00:16:04  <dominictarr>isaacs: I appreciate your position on this "always be async" thing
00:16:27  <dominictarr>but personally, I try to write code that accepts sync or async
00:16:31  <dominictarr>cbs
00:16:34  <isaacs>dominictarr: my position is "either always be sync or always be async"
00:16:39  <dominictarr>yes
00:16:44  <dominictarr>I appreciate that.
00:16:53  <isaacs>dominictarr: but if something is *usually* in the cache, then your api should probably be sync
00:17:01  <isaacs>dominictarr: and fail crisply on a cache miss
00:17:33  <dominictarr>problems like this:
00:18:31  <isaacs>defunctzombie: updated gist to be clearer: https://gist.github.com/isaacs/6325173
00:18:41  <dominictarr>stream.write = function (data) { if(this.paused) { this.buffer(data); return false } else { this.emit('data', data); return true } }
00:18:49  <mbalho>mikolalysenko: level version in auth-socket is like 1 or 2 months old at this point, i think there were 2 minor versions last month of level
00:18:57  <dominictarr>^ there is a "sync race condition here"
00:19:10  <dominictarr>^ "sync race condition" i meant.
00:20:01  <isaacs>dominictarr: right, because it can emit either now or later.
00:20:12  <defunctzombie>isaacs:
00:20:18  <defunctzombie>isaacs: your gist, I don't follow it
00:20:24  <defunctzombie>isaacs: parallel does not mean threaded
00:20:27  <isaacs>defunctzombie: ok
00:20:30  <mbalho>mikolalysenko: to get the profile i think this callback would have to get changed to pass profile also, though 5 arguments seems kinda long https://github.com/maxogden/auth-socket/blob/master/index.js#L24
00:20:33  <isaacs>defunctzombie: what if hm() does some IO?
00:20:33  <dominictarr>isaacs: assuming it's a node EventEmitter
00:20:44  <isaacs>dominictarr: yeah
00:20:51  <dominictarr>it's sync
00:21:05  <defunctzombie>isaacs: then that first loop will wait until hm() returns
00:21:21  <isaacs>defunctzombie: one sec..
00:21:22  <defunctzombie>but that will not block the second loop's hm from doing IO once it gets there
00:21:28  <dominictarr>but I fixed this bug with once for someone who found it with my stream-spec module -
00:21:31  <defunctzombie>so execution will be:
00:21:36  <defunctzombie>a() will start running
00:21:39  * st_luke quit (Remote host closed the connection)
00:21:54  <defunctzombie>when a gets to a point that it would "block on IO" aka the hm function
00:22:04  <defunctzombie>then it will give b a chance to start running
00:22:12  <defunctzombie>b will do the same
00:22:16  <isaacs>https://gist.github.com/isaacs/6325173
00:22:18  <isaacs>updated
00:22:40  <mikolalysenko>mbalho: hmm, but when would you not want the profile?
00:22:56  <Raynos>dominictarr: FFFF sync race condition
00:22:56  <mikolalysenko>mbalho: it seems a bit inefficient since you have to hit leveldb/doorknob multiple times
00:22:56  <dominictarr>isaacs: bug was: emit('data' ,data) can trigger dest to pause
00:23:06  <dominictarr>then return true is a like
00:23:10  <defunctzombie>isaacs: aren't those infinite loops?
00:23:11  <mikolalysenko>mbalho: also it breaks the abstraction in auth-socket to pull that data out...
00:23:17  <dominictarr>s/like/lie/
00:23:24  <isaacs>defunctzombie: no, they're 1e9 loops
00:23:30  <defunctzombie>ok.. so
00:23:39  <dominictarr>and the fix is return !this.paused
00:23:39  <defunctzombie>b never sets x to 100
00:23:49  <defunctzombie>so that assert in b doesn't make sense to me
00:23:56  <isaacs>defunctzombie: oh, copypasta error, my bad
00:23:58  <isaacs>defunctzombie: refresh
00:24:06  <isaacs>defunctzombie: if hm() yields, it's a one-pass loop, then a throw
00:24:15  <isaacs>defunctzombie: if hm() doesn't yield, then it runs forever.
00:24:19  <isaacs>er, for 1e9 times
00:24:54  <isaacs>defunctzombie: but my point is the same: it should be EITHER a function that yields, OR a function that does not yield.
00:24:59  <defunctzombie>no
00:25:07  <defunctzombie>I don't get that from this example at all
00:25:07  <isaacs>defunctzombie: and, "parallel" is your keyword here.
00:25:23  <isaacs>defunctzombie: don't get what?
00:25:28  <isaacs>defunctzombie: that it matters what hm() does?
00:25:57  * evbogue part
00:26:03  <defunctzombie>that you wrote this code using a shared global var
00:26:20  <defunctzombie>and what I am saying is that unless the var is within the scope you should assume it can change
00:26:35  <isaacs>defunctzombie: why would it change? i'm only writing code that's synchronous
00:26:47  <isaacs>defunctzombie: it shouldn't be able to change out from under me, that's not how javascript works.
00:26:57  <defunctzombie>because you are not the owner of x
00:27:00  <isaacs>defunctzombie: also, if hm() doesn't yield, it CANT change.
00:27:02  <defunctzombie>?
00:27:09  <defunctzombie>you can write this same failure in node with callbacks now
00:27:19  <isaacs>defunctzombie: not like this you can't :)
00:27:28  <isaacs>defunctzombie: it's pretty obvious that you've got parallelizing code.
00:27:48  <mikolalysenko>hmm... so I just read that post and it is pretty clear to me this isn't a yield vs. callbacks issues
00:27:55  * tmcw joined
00:27:56  <isaacs>defunctzombie: and, my claim is, with node style callbacks, if it's *sometimes* sync, and sometimes not, then zalgo
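The sometimes-sync hazard isaacs is pointing at can be sketched with a toy cache (hypothetical names):

```javascript
// A sometimes-sync API: it calls back synchronously on a cache hit and
// asynchronously on a miss, so callers cannot know whether the code after
// the call runs before or after the callback -- "zalgo".
var cache = {};
function readCached(key, fetch, cb) {
  if (key in cache) return cb(null, cache[key]); // sync path: zalgo
  fetch(key, function (err, value) {             // async path
    if (!err) cache[key] = value;
    cb(err, value);
  });
}
```

The usual fix is to pick one behavior: either defer the hit path (process.nextTick) so the callback always fires in a later turn, or make the whole API synchronous.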
00:28:17  <mikolalysenko>it also seems a bit obvious to me. I mean the main point is that (a; b) is not the same as (a; yield; b)
00:28:18  <isaacs>mikolalysenko: yeah, you could translate the entire post to yield, and the same principles all apply
00:28:45  <isaacs>mikolalysenko: either your api function yields, or it doesn't.
00:28:54  <isaacs>mikolalysenko: but it shouldn't *sometimes* yield, and *sometimes* not
00:28:59  <mikolalysenko>well, or it yields in some specific condition and then life gets complicated
00:29:05  <isaacs>right
00:29:08  <mikolalysenko>since you need to understand those conditions to get it to do what you want
00:29:35  <isaacs>it's actually *easier* to make that mistake with coroutines or fibers
00:29:43  <mikolalysenko>I agree though with the general point that yield is a semantic difference in the program, and your interface has to be clear about what conditions it yields under
00:29:51  <isaacs>generators have a bit more syntax that makes it less likely
00:30:02  <defunctzombie>isaacs: I would simply say a, and b are not parallel safe since they use a global
00:30:09  <mikolalysenko>well, depends
00:30:17  <isaacs>defunctzombie: clearly, because if hm() yields, they throw :)
00:30:33  <isaacs>defunctzombie: it's a bit surprising to me that this bit of javascript could throw
00:30:43  <defunctzombie>isaacs: why?
00:30:44  * st_luke joined
00:30:51  <isaacs>because javascript function calls should block the execution of the program until they return
00:31:07  <defunctzombie>why?
00:31:18  <isaacs>defunctzombie: ask brendan eich
00:31:28  <isaacs>defunctzombie: this is how javascript works.
00:32:11  <defunctzombie>is that stated somewhere?
00:32:27  <isaacs>defunctzombie: besides section 10 of the ecmascript spec?
00:32:31  <defunctzombie>isaacs: as an aside.. I am speaking more generally than js (since js by default doesn't have coroutines)
00:32:58  <defunctzombie>if you introduce coroutines, that statement does not have to hold
00:33:01  <isaacs>with something like fibers, even though you don't have actual concurrent execution of lines of javascript, you still have the exact same hazards of reentry and coopting
00:33:13  <isaacs>and preemption
00:33:13  <mikolalysenko>isaacs: it isn't exactly as bad
00:33:22  <isaacs>and other non-diphthong double-vowel words
00:33:30  <defunctzombie>?
00:33:32  <mikolalysenko>you can have atomic blocks with coroutines
00:33:36  <mikolalysenko>you don't really get that with threads
00:33:38  <defunctzombie>you only give up execution when you want to
00:33:51  <defunctzombie>and nothing is modifying x at the same time
00:34:11  <isaacs>mikolalysenko: a) you CAN have atomic blocks with threads, that's what mutexes are for, and b) you can only have that with coros if you know you aren't calling anything that might potentially yield.
00:34:12  <mikolalysenko>for example, things like producer/consumer queues are trivial in coroutines and take a lot of work to get right using threads
00:34:29  <mikolalysenko>mutex doesn't guarantee atomicity. it only guarantees mutual exclusion
00:34:41  <defunctzombie>isaacs: you don't need a mutex here for x
00:34:44  <mikolalysenko>stm is a way to get atomicity
00:34:48  <isaacs>mikolalysenko: ok, true
00:34:51  <defunctzombie>no two things are actually running at once
00:34:52  <mikolalysenko>and no one uses it because it is slow and sucks
00:34:53  <kriskowal>defunctzombie: clarification: generators are shallow coroutines and do have run-to-completion semantics, broken explicitly on yield expressions. fibers are deep coroutines and can be interrupted implicitly at function call boundaries.
00:35:17  <defunctzombie>kriskowal: coroutines are interrupted when they yield in the coro
00:35:25  <defunctzombie>the coroutine has to allow itself to be interrupted
00:35:31  <defunctzombie>this is a difference
00:35:37  <kriskowal>begging a shallow/deep distinction.
00:35:40  <defunctzombie>but my point around all of this is not to argue a semantic
00:35:55  <isaacs>defunctzombie: semantics are exactly what i'm trying to argue. stop making this about syntax.
00:35:57  <defunctzombie>it is to say that writing syntax this way is way easier in many many cases
00:36:12  <defunctzombie>isaacs: I personally think syntax and structure matter
00:36:29  <defunctzombie>a lot
00:36:42  <isaacs>defunctzombie: my point is that, regardless of the syntax, if you are designing api's, and you care about your users' sanity and your own, you will define crisp semantics about which functions yield, and which do not.
00:36:43  <kriskowal>the syntax and semantics in this case are both important. the point of deep coroutines is that you can yield at a boundary that does not look like a boundary
00:37:07  <isaacs>quite a lot of work went into generators to make them far less likely to release zalgo.
00:37:26  <defunctzombie>isaacs: generators are boring imho
00:37:29  <mbalho>not crampusproof yet tho
00:37:48  <kriskowal>ah, that would be the cusp of it.
00:37:56  <isaacs>crampusproof?
00:38:00  * jcrugzz quit (Ping timeout: 276 seconds)
00:38:03  <isaacs>defunctzombie: i like boring code.
00:38:07  <kriskowal>if you think fibers are exciting, you'll get a real thrill out of threads.
00:38:09  <defunctzombie>isaacs: and for your example of hm(); ... I think that program is very easy to reason about
00:38:09  <isaacs>defunctzombie: boring code is easy to deal with.
00:38:12  <mikolalysenko>there are a few things coroutines/fibers are nice for, like video games
00:38:16  <defunctzombie>isaacs: x is global, shit can happen to it haha
00:38:29  * tmcw quit (Remote host closed the connection)
00:38:44  <isaacs>mikolalysenko: i have nothing against coroutines/fibers. i have a lot against ambiguously synchronous APIs
00:38:46  <mikolalysenko>basically any place you have complicated internal states, like parsers or data structure traversals
00:38:58  <mikolalysenko>isaacs: ok, then we are in agreement here
00:39:47  <mikolalysenko>the main reason I like coroutines is that you can apply structured programming to concurrency
00:39:54  <isaacs>i think that coros make it easier to release zalgo (vs generators, where yields are shallow)
00:39:59  <mikolalysenko>so instead of having things like switch statements you just use loops and so on
00:40:17  <mbalho>isaacs: krampus/crampus
00:40:25  <kriskowal>i on the other hand do have a problem with deep coroutines / fibers. but, isaacs is right that even in that case, you can avoid trouble if every function clearly documents, for itself and any function it calls (transitively), whether they may or may not yield
00:40:39  <isaacs>mbalho: the yule devil?
00:40:54  <mbalho>that is an anachronistic way to put it but yes
00:41:03  <mikolalysenko>isaacs: well, but sometimes you want recursion. for example, if you are say scripting a video game character it makes sense to have high level tasks that might yield
00:41:04  <defunctzombie>no way a function could verify everything it calls
00:41:08  <mikolalysenko>where each yield skips a tick
00:41:26  <mikolalysenko>or in a parser you might want to yield after parsing something, and that parsing requires some recursive function call
00:41:38  * AvianFlu quit (Remote host closed the connection)
00:41:42  <mikolalysenko>like walking a tree for example
00:42:07  <mikolalysenko>in fact, without recursive yielding the whole concept of yield is a little useless/limited...
00:42:40  <defunctzombie>I would be way happier with yield if you could yield inside of functions, and not just at the top-level function
00:42:49  <mikolalysenko>now of course you can make your own stack and basically push/pop and switch states etc...
00:42:53  <kriskowal>you can still do that with shallow coroutines. it isn't a limitation on recursion but on boundaries.
00:42:56  <defunctzombie>which is just more useless keyword nonsense
00:43:00  <mikolalysenko>and this works and is performant, but it is really horrible to write
00:43:19  <kriskowal>e.g., yield* nested()
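kriskowal's point, sketched: shallow generators can still yield out of nested calls, but only through explicit `yield*` delegation, so the boundary stays visible at every level.

```javascript
// yield* delegation: outer() surfaces inner()'s values in order, and every
// function on the path has to be a generator with a visible yield boundary.
function* inner() {
  yield 1;
  yield 2;
}
function* outer() {
  yield 0;
  yield* inner(); // delegates: values 1 and 2 surface from here
  yield 3;
}

var out = [];
var it = outer();
for (var step = it.next(); !step.done; step = it.next()) {
  out.push(step.value);
}
// out is [0, 1, 2, 3]
```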
00:44:16  * dominictarr quit (Quit: dominictarr)
00:45:17  <mikolalysenko>in the end, I'd be happy with a decent coroutine solution, but I've also programmed enough in languages without this feature that it won't bother me much if it never happens
00:45:20  <defunctzombie>why not pretend every function call could yield?
00:45:30  <mikolalysenko>defunctzombie: because that would be super inefficient
00:45:34  <defunctzombie>then you don't need to have yield in front of everything
00:45:39  <defunctzombie>mikolalysenko: [citation needed]
00:45:58  <mikolalysenko>do you know what it would take to implement a yield for each function?
00:46:03  <defunctzombie>I said pretend, it won't actually yield
00:46:39  <mikolalysenko>sometimes you might want things to be atomic though
00:46:57  <defunctzombie>but it will solve the issue isaacs creates by writing non-parallel-safe code with globals :)
00:47:06  <kriskowal>you want things to be atomic by default or you will have no way to ensure consistency from line to line
00:47:17  <mikolalysenko>basically assuming that everything can yield makes the size of your atomic blocks smaller
00:47:23  <mikolalysenko>and if you push that to the limit, you get threads
00:47:38  * frankblizzard quit (Read error: No route to host)
00:47:47  <mikolalysenko>large atomic blocks = fewer execution paths = easier to reason about
00:48:04  * frankblizzard joined
00:48:28  <mikolalysenko>if you assume everything yields, you end up in race condition city with lots of different possible ways the code can execute
00:48:37  <mikolalysenko>you can do it and make it work, but it is a lot more effort
00:48:54  <defunctzombie>or you don't
00:48:59  <mikolalysenko>larger atomic blocks give you fewer moving pieces and make reasoning about the code easier
00:49:00  <defunctzombie>just depends how you are storing state
00:49:12  <mikolalysenko>well, you said you wanted to pretend that every function yielded
00:49:28  <mikolalysenko>if you do that though, then your code is going to split into a lot of little pieces
00:49:46  <mikolalysenko>and it is going to be hard to reason through all the different cases where things can slip in between them
00:50:12  <isaacs>kriskowal++
00:51:01  <isaacs>really, callbacks are sort of like the "nuclear option" wrt atomicity
00:51:02  * whit537 quit (Ping timeout: 240 seconds)
00:51:11  <isaacs>*everything you can see* is atomic
00:51:39  <mikolalysenko>yeah, and yield "splits the atom" to strain the metaphor :)
00:53:28  <defunctzombie>problem boils down to this for me: https://gist.github.com/shtylman/6325335
00:53:45  <kriskowal>this whole conversation is predicated on the notion that the responsibility to ensure that a callback is called in a future turn is on the service provider
00:54:20  <defunctzombie>I think you can't assume that personally cause people write shit that breaks
00:54:22  * st_luke quit (Remote host closed the connection)
00:54:40  <defunctzombie>and this whole.. use callbacks appropriately thing is flawed in my mind since this leaves it up to people to do the right thing
00:54:44  <defunctzombie>with no system support
00:54:48  <defunctzombie>and that is just silly
00:55:03  <mikolalysenko>yield doesn't necessarily make it any better.
00:55:10  <defunctzombie>the language/runtime/etc should help do the right thing.. or otherwise there is no right thing
00:55:13  <mikolalysenko>I mean it might if you had say a type system or something to back it up...
00:55:35  <kriskowal>the language and runtime do help by providing coarse atomicity. that does not seem to be the direction you're going.
00:55:41  <mikolalysenko>but absent tools to check that a module does what it says it does, you just have to read the code or trust the author
00:55:42  <kriskowal>the solution is not in that direction
00:55:57  <kriskowal>but rather, a callMeLater(callback) wrapper so you can program defensively
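A sketch of such a defensive wrapper, in the spirit of the `callMeLater` idea (the name comes from the chat; this is not kriskowal's actual gist):

```javascript
// callMeLater(cb): however the service provider behaves, the wrapped
// callback fires in a future turn of the event loop, never synchronously.
function callMeLater(cb) {
  return function () {
    var self = this;
    var args = arguments;
    process.nextTick(function () {
      cb.apply(self, args);
    });
  };
}
```

e.g. `FS.readFile("foo.txt", callMeLater(callback))` guarantees the callback never runs before the current turn finishes, at the cost of an extra tick.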
00:56:21  <defunctzombie>I think that is silly since now my code is littered with that
00:56:27  <defunctzombie>and I view that as a failure of the language/runtime
00:57:33  * tilgovi quit (Remote host closed the connection)
00:57:38  <kriskowal>not sure how you're proposing that the language fix this failure
00:57:43  <defunctzombie>callbacks don't stop the above from failing
00:57:50  <defunctzombie>https://gist.github.com/shtylman/6325335#file-issue-js-L11-L13
00:58:19  * whit537 joined
00:58:57  <kriskowal>ah, well. long story short, promises do provide a basis for programming defensively.
00:59:25  <defunctzombie>but callbacks do make it explicit and do allow you to wrap what happens
01:00:51  <mikolalysenko>ludum dare just started!
01:00:53  <defunctzombie>isaacs: I concede that async-that-looks-sync hides that failure too easily... need to think about it more and why it matters
01:05:43  <kriskowal>FS.readFile("foo.txt", laterMaybe(callback)) https://gist.github.com/kriskowal/6325381
01:06:22  <kriskowal>of course the ability to optimize comes at the cost of defensiveness :P you can probably trust FS.readFile to call back in a future turn
01:10:09  <defunctzombie>heh
01:10:19  <defunctzombie>I would still prefer fs.readfile('foo.txt'):
01:10:23  <defunctzombie>and that to block
01:10:30  <defunctzombie>followup execution
01:10:38  <defunctzombie>but need to think more about these seeming implications
01:10:40  <kriskowal>and hate that yield too i presume
01:10:54  <defunctzombie>;)
01:10:56  <kriskowal>can't have your cake and eat it in this case
01:11:40  * jxson_ joined
01:11:51  * jxson_ quit (Remote host closed the connection)
01:12:40  <defunctzombie>kriskowal: I will have my fucking cake and I will enjoy it!
01:13:17  <defunctzombie>http://31.media.tumblr.com/tumblr_lhjraxSqCa1qc3yd1o1_400.png
01:13:18  <kriskowal>that's fine as long as you're okay with not having it anymore afterward.
01:14:34  * kriskowal part
01:15:42  * jxson quit (Ping timeout: 264 seconds)
01:17:36  <defunctzombie>man.. that was harsh
01:17:38  * mikolalysenko quit (Ping timeout: 264 seconds)
01:17:41  <defunctzombie>so much negativity
01:17:51  <jesusabdullah>idk who that kid is but man that's awesome
01:18:06  * mikolalysenko joined
01:18:17  <jesusabdullah>okay what did I miss?
01:18:24  <jesusabdullah>Let's chill the harsh
01:18:29  <defunctzombie>thlorenz: dedupe?
01:18:36  <jesusabdullah>oh, promises vs. callbacks?
01:18:41  <defunctzombie>jesusabdullah: that is from the movie matilda
01:18:53  <defunctzombie>thlorenz: what does it dedupe on?
01:18:58  <defunctzombie>thlorenz: same content?
01:19:18  <jesusabdullah>why don't I remember this kid?
01:19:24  <jesusabdullah>I watched that movie
01:19:44  <jesusabdullah>dangit you guys why did you sass away kriskowal :(
01:25:39  * kumavis_ quit (Quit: kumavis_)
01:27:25  * frankblizzard quit (Remote host closed the connection)
01:28:49  * frankblizzard joined
01:29:43  * airportyh joined
01:30:17  * i_m_ca joined
01:30:46  <airportyh>jesusabdullah: ping
01:34:43  <jesusabdullah>airportyh: hey hey
01:34:59  <jesusabdullah>airportyh: one sec
01:35:35  <airportyh>jesusabdullah: ok
01:35:59  <jesusabdullah>airportyh: so yeah, remind me again what you have in mind from our conversation on github?
01:37:04  <jesusabdullah>airportyh: the idea was to expose build information on modules, right?
01:38:07  <airportyh>jesusabdullah: yeah, whether browserify succeeded or failed
01:38:16  <airportyh>jesusabdullah: for which packages
01:39:05  <airportyh>jesusabdullah: so that we can make a much better package search index for browserifiable packages
01:39:14  <jesusabdullah>airportyh: because I was thinking about exposing an api for getting metadata about each singular build
01:39:51  <jesusabdullah>airportyh: like, which versions had been built and which hadn't, which ones were available on npm, basically an api that reflects what's on the cache with a dash of what's on npm
01:41:00  <airportyh>jesusabdullah: that would be cool too, because it would be nice to know which packages people tried to use as well
01:41:13  <jesusabdullah>airportyh: yeah
01:41:28  <jesusabdullah>airportyh: there's an issue or two about this, let's see if I can find them
01:41:49  <airportyh>jesusabdullah: my thought was we use this info not necessarily to whitelist/blacklist, but more as a scoring system
01:41:54  <airportyh>jesusabdullah: for search ranking
01:42:10  <jesusabdullah>https://github.com/jesusabdullah/browserify-cdn/issues/24
01:42:47  * jibay quit (Read error: Connection reset by peer)
01:43:15  <jesusabdullah>airportyh: I think that + npm information on some api, like, /metadata/{{bundle}} or some such
01:43:19  <jesusabdullah>airportyh: thoughts?
01:43:57  <airportyh>jesusabdullah: would that be for all versions or just ones in cache?
01:44:25  <jesusabdullah>airportyh: I think Max was thinking all versions, so like if you were already checking the cache you'd just also hit a route on npm and do some merging
01:44:51  <jesusabdullah>airportyh: if you implement one the other is easy to add on later, so like, yeah no worries
01:45:07  <jesusabdullah>airportyh: It would probably just be on the same endpoint, was all I was thinking
01:45:22  <airportyh>jesusabdullah: gotcha
01:45:53  <jesusabdullah>airportyh: if you'd like I can try to show you around the code near the caches
01:46:29  <airportyh>jesusabdullah: please
01:46:44  <jesusabdullah>airportyh: https://github.com/jesusabdullah/browserify-cdn/blob/master/bundler/cache.js So this is the code that generates the cache objects
01:47:21  <airportyh>jesusabdullah: the cache is all stored in leveldb?
01:47:26  <jesusabdullah>airportyh: it wraps a leveldb with ttls and sublevels
01:47:28  <jesusabdullah>airportyh: yeah
01:47:50  <jesusabdullah>airportyh: so the function you end up using starts on line 18 once you generate these
01:49:00  <jesusabdullah>airportyh: "body" is the key, "generate" is a function that calls back with what the value should be, and that gets stored into the leveldb before the outer callback is called
01:49:49  <jesusabdullah>airportyh: https://github.com/jesusabdullah/browserify-cdn/blob/master/bundler/cache.js#L64-L99 Here is where I actually create the caches, you can see how I configured them there
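A hedged sketch of the cache-through pattern jesusabdullah describes (not the actual browserify-cdn code, which wraps leveldb with ttls and sublevels; names hypothetical):

```javascript
// makeCache(store): look up `body` in a store; on a miss, call generate()
// and persist its value before the outer callback is called. The hit path
// is deferred with nextTick so the callback is never sometimes-sync.
function makeCache(store) {
  return function get(body, generate, cb) {
    if (body in store) {
      return process.nextTick(function () { cb(null, store[body]); });
    }
    generate(function (err, value) {
      if (err) return cb(err);
      store[body] = value; // stored before the outer callback fires
      cb(null, value);
    });
  };
}
```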
01:50:33  <airportyh>right
01:50:49  <jesusabdullah>airportyh: so the caches expose the db directly, you could just wrap the cache object in something that grabs cache.db and pulls out the proper information when you call to that route
01:51:03  <jesusabdullah>airportyh: I think the hardest part is the queries
01:52:14  <jesusabdullah>airportyh: if you want to take this on I think that would be very cool. I'd PR that in a heartbeat
01:52:18  <airportyh>jesusabdullah: what kind of queries would we send?
01:52:56  <airportyh>jesusabdullah: the one I am most interested in is - send a bunch of package@vers, and get back good/bad
01:53:12  <jesusabdullah>airportyh: yeah, basically that's it
01:53:27  <jesusabdullah>airportyh: you can get which semvers have been called for that module
01:53:39  <jesusabdullah>airportyh: you can also get which versions have been successfully built
01:53:49  <airportyh>jesusabdullah: sorry brb
01:54:23  <jesusabdullah>airportyh: there's no store on which ones have failed, that could be really useful for debugging as an aside
01:54:40  <jesusabdullah>airportyh: cause some of those are gonna be browserify-cdn's fault ;)
01:57:45  <airportyh>jesusabdullah: gotcha, okay I'll give this a try
01:57:51  <jesusabdullah>airportyh: awesome!
01:57:56  <jesusabdullah>airportyh: I'm sure it will be great!
01:58:27  <airportyh>jesusabdullah: I like this project
01:58:46  <airportyh>jesusabdullah: my goal is to make browserify/npm more viable for frontend dev
01:59:05  <jesusabdullah>airportyh: me too XD
01:59:19  <airportyh>jesusabdullah: gotta go now, baby crying :) ttyl
01:59:22  <jesusabdullah>airportyh: I like that I can get browser builds for free
01:59:28  <thlorenz>defunctzombie: will explain tomorrow - along with the PR I'm about to make
02:00:01  <thlorenz>Domenic_: dedupe is not perfect at this point, but this is all I could do w/ minimal browserify changes
02:00:26  <thlorenz>so whatever version it sees first will be used for all future packs that are considered compatible
02:01:27  * frankblizzard quit (Remote host closed the connection)
02:04:17  * jcrugzz joined
02:07:51  * gwenbell joined
02:08:17  * thlorenz quit (Remote host closed the connection)
02:15:06  * airportyh quit (Ping timeout: 264 seconds)
02:21:04  * soldair quit (Quit: Page closed)
02:23:28  * airportyh joined
02:23:44  * airportyh quit (Client Quit)
02:34:39  * thlorenz joined
02:35:56  * thlorenz quit (Remote host closed the connection)
02:41:30  * ednapiranha joined
02:46:08  * gwenbell quit (Quit: Lost terminal)
02:46:42  * mikolalysenko quit (Ping timeout: 276 seconds)
02:47:29  * ednapiranha quit (Remote host closed the connection)
02:52:19  * mikolalysenko joined
03:11:16  <chapel>anyone know the nitty gritty of how browsers handle redirects and passing the referrer?
03:12:04  <chapel>I ran into an issue where our site is redirecting mobile users to the mobile site, the referrer is being lost, and I don't know if it is due to the 301 redirect or because it's going from www.domain.com to mobile.domain.com
03:13:18  * whit537 quit (Quit: whit537)
03:20:09  <mbalho>say i wanted to make a command line utility called `pizza` such that i could "cat foo.json | pizza" and have it do stuff with stdin but if i just type pizza i should get the usage/help
03:20:20  * kumavis_ joined
03:20:20  <mbalho>anyone have a good example of doing this?
03:20:41  <jesusabdullah>mbalho: https://github.com/jesusabdullah/exercise-bike
03:20:57  <jesusabdullah>mbalho: that was the best I could figure out
03:21:28  <mbalho>you have to say ':stdin:' ?
03:21:39  <mbalho>ive seen some modules use a hyphen
03:21:47  <mbalho>like 'pizza -' would tell it to listen on stdin
03:24:21  <isaacs>mbalho: process.stdin.isatty()
03:24:45  <isaacs>er, process.stdin.isTTY
03:25:11  <mbalho>so if you do "something | node" it sets isTTY to true?
03:25:17  <mbalho>what other situations cause isTTY to be true
03:25:47  <isaacs>mbalho:
03:25:48  <isaacs>$ echo '' | node -p '!!process.stdin.isTTY'
03:25:48  <isaacs>false
03:25:48  <isaacs>$ node -p '!!process.stdin.isTTY'
03:25:50  <isaacs>true
03:26:01  <isaacs>mbalho: process.stdin.isTTY is true if stdin is a tty
03:26:16  <isaacs>mbalho: if you do something | node, then isTTY is false
03:26:22  <isaacs>mbalho: or if you do node <file.js
03:26:27  <isaacs>mbalho: or node <(process)
03:26:44  <mbalho>gotcha
03:26:44  <isaacs>mbalho: that's how node knows to start the repl or not
03:26:52  <substack>or spawn(process.execScript, ['file.js'])
03:27:07  <substack>*execPath
03:27:12  <isaacs>mbalho: if (process.stdin.isTTY) { usage() } else { doStuff() }
03:27:35  <substack>&& argv.file !== '-'
03:27:52  <isaacs>substack: well, you can use stdin by default if it's not a TTY
03:27:58  <isaacs>substack: like node does.
03:28:09  <isaacs>$ echo 'console.log("pizza")' | node
03:28:09  <isaacs>pizza
03:28:18  <mbalho>pizza\n
03:28:28  <isaacs>$ node <(echo 'console.log("pizza")')
03:28:28  <isaacs>pizza
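The isTTY pattern isaacs sketches above, written out as a hypothetical `pizza` CLI (the wiring to `process.stdin` is left in comments so the sketch itself has no side effects):

```javascript
// stdin.isTTY is true when run interactively from a terminal, and
// false/undefined when data is piped in (`cat foo.json | pizza`)
// or redirected (`pizza < foo.json`).
function chooseMode(stdin) {
  return stdin.isTTY ? 'usage' : 'stdin';
}

// Wiring:
//   if (chooseMode(process.stdin) === 'usage') {
//     console.error('usage: cat foo.json | pizza');
//     process.exit(1);
//   } else {
//     /* ...do stuff with process.stdin... */
//   }
```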
03:29:09  <isaacs>$ echo '1+2' | node -p
03:29:10  <isaacs>3
03:29:10  <mbalho>i hope the next version of node aliases ☃ to process.stdout.write
03:29:26  <isaacs>$ echo '1+2' | node -i
03:29:26  <isaacs>> 1+2
03:29:26  <isaacs>3
03:29:26  <isaacs>>
03:29:55  <mbalho>isaacs: what does -e do?
03:30:12  <isaacs>mbalho: -e executes the argument. -p executes the argument and then prints the result
03:30:17  <mbalho>ohhh
03:30:28  <mbalho>i thought you needed -pe, didnt know -p did both
03:30:38  <isaacs>mbalho: before 0.10, i think, you did
03:30:43  <isaacs>maybe before 0.8
03:30:49  <isaacs>i forget those olden-times versions
03:30:58  <mbalho>☃('pizza')
03:30:59  <mbalho>pizza
03:31:01  <mbalho>console.log('pizza')
03:31:03  <mbalho>pizza\n
03:32:12  <mbalho>♞({"foo": "bar"})
03:32:14  <mbalho>{"foo":"bar"}
03:32:23  <mbalho>console.log({"foo": "bar"})
03:32:23  <mbalho>{ foo: 'bar' }
03:32:30  <mbalho>or { foo: 'bar' }\n rather
03:32:49  <mbalho>isaacs: i will pull request snowman and horse aliases to node core ne day
03:32:51  <mbalho>one*
03:32:57  <mbalho>isaacs: horse being JSON.stringify
03:33:24  <isaacs>mbalho: userland
03:33:38  <mbalho>but.... the lulz....
03:33:47  <isaacs>lulz belong in userland.
03:34:13  <mbalho>npm doesnt let me publish obscure unicode names does it?
03:34:48  <mbalho>npm ERR! Error: Invalid name: "♞"
03:37:36  <mbalho>isaacs: http://i.imgur.com/34b6rZp.png works
03:38:44  <isaacs>mbalho: yeah, npm doesn't let you publish anything that isn't url-safe.
03:38:58  <isaacs>mbalho: anything where encodeURIComponent(name) !== name, fails.
03:39:28  <mbalho>o rite
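isaacs' check is small enough to state directly (npm's full name validation has more rules than this; the predicate name here is made up):

```javascript
// A name passes this rule only if URL-encoding leaves it unchanged.
function isUrlSafeName(name) {
  return encodeURIComponent(name) === name;
}

console.log(isUrlSafeName('browserify')); // true
console.log(isUrlSafeName('♞'));          // false: encodes to %E2%99%9E
```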
03:40:38  <mbalho>isaacs: i get an error when i do module.exports.♞ = 1
03:41:07  <mbalho>oh i cant use it as a literal
03:41:13  <mbalho>time to bug brendan brb
03:41:29  <isaacs>mbalho: yeah, that horse is ILLEGAL
03:43:18  <isaacs>NOBODY expects the token: ILLEGAL!
03:43:25  <chapel>http://jsperf.com/closure-vs-property/7
03:43:29  * isaacsneeds a unicode symbol for "spanish inquisition"
03:55:03  <mbalho>bah sucks so bad that we dont have unicode literals, pretty sure ruby has that
03:56:47  <jcrugzz>mbalho: lol you really want that horse module dont you
03:57:21  <mbalho>damn right
03:58:54  * i_m_caquit (Ping timeout: 256 seconds)
04:02:24  <jesusabdullah>who wouldn't?
04:07:58  * kriskowaljoined
04:13:14  * mk30_quit (Quit: Page closed)
04:46:02  * jergasonjoined
04:46:43  * defunctzombiechanged nick to defunctzombie_zz
04:47:46  * calvinfoquit (Quit: Leaving.)
04:55:29  * jergasonquit (Remote host closed the connection)
04:58:13  <juliangruber>substack: am I right that watchify doesn't play nicely with brfs?
04:58:25  <juliangruber>substack: as updated html doesn't trigger a browserify update?
05:07:14  * kumavis_quit (Quit: kumavis_)
05:09:26  * defunctzombie_zzchanged nick to defunctzombie
05:17:11  <jcrugzz>if anyone ever has the need to do consistent hashing with multiple redises, there is now a module for that https://github.com/jcrugzz/multi-redis
05:18:10  * tilgovijoined
05:18:32  * calvinfojoined
05:19:37  * kumavis_joined
05:22:09  <jesusabdullah>jcrugzz: that's pretty sweet
05:23:44  <jcrugzz>jesusabdullah: thanks! it is the same strategy used for the new logging system i implemented. until i convert it into something based on multilevel :)
05:28:49  <jesusabdullah>gg
05:32:30  * kumavis_quit (Quit: kumavis_)
05:36:02  * kumavis_joined
05:44:43  <chrisdickinson>chapel: is that jsperf link re trevnorris' post?
05:45:37  <chapel>chrisdickinson: no mrelph
05:45:50  <chrisdickinson>ah
05:47:35  <jcrugzz>chrisdickinson: seems similar to the tests you had setup
05:47:56  <chrisdickinson>yeah
05:51:45  <chapel>chrisdickinson: where is trevnorris' post?
05:51:53  <chrisdickinson>http://blog.trevnorris.com/2013/08/long-live-callbacks.html
05:52:23  <chrisdickinson>re: the slowdown of the first example: https://gist.github.com/chrisdickinson/7d344ca7454adfd11a15
05:52:50  <chrisdickinson>(spoiler: it's because v8 can't inline a function into a loop when that function isn't in the same context as the loop)
05:53:31  <jcrugzz>i still await the day that bind will be better optimized
05:53:55  <jcrugzz>cause the workaround does not feel as nice xD
05:53:57  <chapel>jcrugzz: you're telling me
05:54:00  <chapel>bind is super slow
05:54:45  <chrisdickinson>how would you optimize it?
05:55:15  <jcrugzz>chrisdickinson: i have not dove into v8 land so i couldnt tell you, im assuming its difficult
05:55:36  <chrisdickinson>(no matter what, it's turning one function call into two, and changing the arity of the resulting function)
05:55:52  <mikolalysenko>chapel: what is slow about bind?
05:55:56  <chrisdickinson>I think it's probably more likely that fat arrow functions'll end up getting optimized
05:56:10  <mikolalysenko>actually, I've found that it isn't much worse than using a closure to do the same thing
05:56:31  <chapel>mikolalysenko: bind does a lot of things that most people wouldn't do if just using a closure (even emulating bind)
05:56:35  <mikolalysenko>I also once tried this thing, but it was crazy and bind ended up being faster anyway: https://github.com/mikolalysenko/specialize
05:56:37  <chapel>since bind has a curry feature
05:56:58  <mikolalysenko>in practice I'm not sure it makes much difference...
05:57:13  <chapel>in most cases yes, but if it is a hot function, bind is bad
05:57:23  <mikolalysenko>if you call bind() in a loop it is bad
05:57:28  <mikolalysenko>but calling a bound function is ok
05:57:29  <chapel>e.g. one that is being iterated or called thousands or more at a time
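mikolalysenko's distinction in a tiny example: the cost is in *creating* bound functions, so hoist the `.bind()` call out of the hot loop (toy functions, same result either way):

```javascript
function add(a, b) { return a + b; }

// bad: allocates a fresh bound function on every iteration
function sumRebinding(n) {
  let total = 0;
  for (let i = 0; i < n; i++) total = add.bind(null, total)(i);
  return total;
}

// ok: bind once up front, then just call it in the loop
function sumBoundOnce(n) {
  const step = add.bind(null);
  let total = 0;
  for (let i = 0; i < n; i++) total = step(total, i);
  return total;
}
```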
05:57:33  <chrisdickinson>oh! i bet that's part of it, too! if you're benching bind, the returned function will almost certainly be in a different context than the loop
05:57:45  <chrisdickinson>so .bind-generated functions shouldn't be able to be inlined
05:58:02  <mikolalysenko>here was the experiment I did: https://github.com/mikolalysenko/specialize/blob/master/benchmark/ca.js
05:58:19  <mikolalysenko>my conclusion was that bind ended up being the fastest way to do that...
05:58:39  <jcrugzz>why is v8 able to better optimize https://github.com/spion/async-compare/blob/master/examples/flattened-class.js#L3-L7 ?
05:58:40  <mikolalysenko>about on par with manually inlining the arguments
05:58:40  <chapel>isaacs on a podcast was complaining about bind in fact
05:58:50  <jcrugzz>just because it's separated out to perform the same functionality?
05:59:33  <mikolalysenko>well, here is how I am using it: https://github.com/mikolalysenko/specialize/blob/master/benchmark/ca.js#L59
05:59:59  <chrisdickinson>jcrugzz: because the function being called is https://github.com/spion/async-compare/blob/master/examples/flattened-class.js#L4, with context from the enclosing scope
06:00:14  <chrisdickinson>any hot loop calling that function will not be able to inline it
06:01:12  <jcrugzz>chrisdickinson: so how exactly does this differ from using the built in .bind()
06:01:19  <chrisdickinson>it doesn't
06:01:36  <jcrugzz>so its just a misconception that doing this is faster?
06:01:36  <chrisdickinson>i.e., that could be part of why bind is slow in benchmarks
06:01:43  <chrisdickinson>https://github.com/v8/v8/blob/master/src/hydrogen.cc#L6339-L6345
06:02:04  <chrisdickinson>specifically 6343
06:02:29  <chapel>a good explanation why bind is slower http://stackoverflow.com/questions/17638305/why-is-bind-slower-than-a-closure
06:03:41  <chapel>jcrugzz: your closure based bind is much simpler than native bind
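The closure-based bind being compared here can be this small — it fixes `this` but skips native bind's currying, length handling, and argument checks (the name is made up):

```javascript
// Fix `this` for fn; forward whatever arguments the caller passes.
function simpleBind(fn, ctx) {
  return function () {
    return fn.apply(ctx, arguments);
  };
}

const obj = { x: 42 };
function getX() { return this.x; }
const bound = simpleBind(getX, obj);
console.log(bound()); // 42
```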
06:05:44  * shamaquit (Remote host closed the connection)
06:05:52  <jcrugzz>chapel: but im guessing not by much
06:06:01  <chapel>by quite a bit
06:06:01  <jcrugzz>using a var self = this; and using a pure closure will always be faster
06:06:22  <chapel>native bind has currying, and lots of checks
06:06:36  <mikolalysenko>chapel: the benchmark from that stack overflow page isn't very good
06:06:42  <mikolalysenko>it just binds the functions and runs them once
06:06:49  <chapel>jcrugzz: yes, always faster, just saying to emulate native bind (as much as is possible) is more complicated
06:07:03  <mikolalysenko>so you are really just measuring the overhead of constructing each bound closure, not how efficiently v8 optimizes them
06:07:03  <chapel>mikolalysenko: yeah, I've seen better, but the explanation is good
06:07:09  <mikolalysenko>not sure I buy it
06:07:31  <mikolalysenko>the scope is only related to inlining
06:07:36  <mikolalysenko>but that doesn't even matter in that example
06:07:59  <mikolalysenko>also if you look at the example I have here: https://github.com/mikolalysenko/specialize/blob/master/benchmark/ca.js
06:08:05  <mikolalysenko>both bind and a closure inline it
06:08:14  <mikolalysenko>or at least they get to comparable levels of performance
06:08:21  <mikolalysenko>with a slight edge to bind actually in the long run
06:08:50  <mikolalysenko>anyway, there are two things: inlining and partial evaluation
06:09:00  <mikolalysenko>inlining only matters when you are binding a function argument
06:09:06  <chapel>this is a better test, but not complete emulation http://jsperf.com/bind-vs-jquery-proxy/23
06:09:23  <chrisdickinson>jcrugzz: https://gist.github.com/chrisdickinson/7dbab0c37892aca7200d
06:09:29  <chrisdickinson>run with node --trace_inlining
06:09:43  <mikolalysenko>chapel: that is still not a good benchmark
06:09:54  <chapel>how is it any worse than yours?
06:09:56  <mikolalysenko>you are just measuring the time required to construct each bound function, not how fast they are
06:10:22  <chapel>the 3 at the bottom are saved
06:10:24  <chapel>e.g. already bound
06:10:49  <chapel>native bind in that case is half as fast as no bind
06:11:02  <chapel>and a little more than half as fast as custom bind
06:12:05  <mikolalysenko>well they are using bind wrong first of all
06:12:21  <mikolalysenko>they are setting 1 as the object parameter which is probably screwing something up in the v8 type inference
06:12:33  <chrisdickinson>mikolalysenko: it'll just get boxed
06:12:38  <chapel>how are they using it wrong?
06:12:46  <chapel>a number is an object
06:13:04  * dguttmanquit (Quit: dguttman)
06:13:13  <mikolalysenko>yeah, but it is going to be adding a number to an undefined
06:13:20  <mikolalysenko>and there is a big penalty for doing that in terms of perf
06:13:34  <mikolalysenko>since it is only binding "this" not the rest of the arguments
06:14:02  <chrisdickinson>mikolalysenko: interesting, the "warmup" of manualInline gets inlined, but the main loop doesn't
06:15:45  <chapel>mikolalysenko: bind doesn't require arguments to be passed
06:16:08  <chapel>not sure your point there?
06:16:42  <mikolalysenko>chapel: I think I am just getting a bit tired, this is somewhat confusing
06:16:50  <chapel>mikolalysenko: here, I changed it to passing an object as this http://jsperf.com/bind-vs-jquery-proxy/25
06:17:09  <chapel>passing a number definitely affects performance
06:19:20  <mikolalysenko>wow those numbers are crazy
06:19:41  <jcrugzz>as a hypothetical, what would be the performance in a case like https://gist.github.com/jcrugzz/38c08b14d91e4a19f60f if you guys can help me better conceptualize it
06:20:48  <chrisdickinson>jcrugzz: from trev's article, he would much rather you make the callback for getData a named function outside of the expression :)
06:20:56  <chapel>well, you'll hit the limits of the network layer and or sockets before bind would cause a performance impact there I'd guess
06:21:01  <chrisdickinson>yeah
06:21:23  <jcrugzz>chrisdickinson: haha of course ;)
06:21:38  <chapel>now if you used a custom scope based bind, you'd not have any perf issues
06:21:46  <chrisdickinson>jcrugzz: also, the fact that bound functions can't be inlined doesn't really matter in this case, because no matter what the handler you pass to http.createServer will be in a different context than the source of createServer
06:21:58  <chapel>yeah
06:22:00  <mikolalysenko>I think what it comes down to is that for small functions the overhead of bind() is likely to be pretty large
06:22:08  <jcrugzz>chrisdickinson: ok that makes sense
06:22:10  <chrisdickinson>i.e., anything you pass in there will not be inlined
06:22:18  <chapel>mikolalysenko: for sync hot code yeah definitely
06:22:28  <chapel>for actual usage, probably not an issue
06:22:32  <chrisdickinson>mikolalysenko: yeah, you double the number of calls and returns, + associated garbage generation for calls
06:22:59  <chapel>for a game though, I'd say bind is bad
06:23:08  <mikolalysenko>depends where you use it
06:23:11  <chapel>mikolalysenko: yes
06:23:30  <chapel>if its in any of the main loops or timing critical code, probably not good
06:23:30  <mikolalysenko>like you obviously don't want to do bind() on little message handler functions
06:24:01  <chapel>anyways, bind is slow, but rarely is it an issue for user code
06:25:00  <mikolalysenko>though what is puzzling me is the performance of bind here:
06:25:05  <mikolalysenko>https://github.com/mikolalysenko/specialize/blob/master/benchmark/ca.js
06:25:14  <mikolalysenko>I just ran this benchmark and here are the times I got:
06:25:18  <mikolalysenko>Time for manual inline --- 18687 ms
06:25:18  <mikolalysenko>Time for bind() --- 18532 ms
06:25:18  <mikolalysenko>Time for closure --- 19303 ms
06:25:19  <mikolalysenko>Time for specialize() --- 18613 ms
06:25:31  <mikolalysenko>and that is pretty typical, with bind() usually being the fastest
06:25:40  * dguttmanjoined
06:25:52  <mikolalysenko>I remember finding that and getting pretty irritated actually since I was hoping that I could get specialize() to beat bind()
06:26:01  <chrisdickinson>mikolalysenko: well, your warm-up loop has deleterious effects on manualInline
06:26:10  <mikolalysenko>I can take it out
06:27:01  <mikolalysenko>the point of that warmup loop was to account for cache issues which would bias against the first test cases
06:27:52  <mikolalysenko>here are the times with no warmup:
06:27:52  <mikolalysenko>Time for manual inline --- 18559 ms
06:27:52  <mikolalysenko>Time for bind() --- 18468 ms
06:27:52  <mikolalysenko>Time for closure --- 19225 ms
06:27:53  <mikolalysenko>Time for specialize() --- 18512 ms
06:27:55  <mikolalysenko>same story again
06:27:58  <chapel>this is the article the original jsperf was from http://mrale.ph/blog/2013/08/14/hidden-classes-vs-jsperf.html?utm_source=javascriptweekly&utm_medium=email
06:28:03  <chapel>well my variation of it that is
06:30:13  <chrisdickinson>mikolalysenko: yeah, rule gets inlined into updateCA, so manualinline and bind should be about the same
06:30:17  * chapelpart ("Textual IRC Client: www.textualapp.com")
06:30:22  * chapeljoined
06:30:26  <chrisdickinson>i.e., they're basically the same code
06:30:33  <mikolalysenko>yeah
06:30:39  <mikolalysenko>so why is the closure version slower?
06:30:48  <mikolalysenko>also specialize() basically does the inlining too
06:31:05  <mikolalysenko>it is basically a naive partial evaluator for js written in js
06:31:39  <chapel>night guys, have a good one
06:33:59  * dguttmanquit (Quit: dguttman)
06:33:59  <chrisdickinson>aaah
06:34:03  <chrisdickinson>i think i see what might be an issue
06:34:20  <chrisdickinson>well, not an issue per se
06:35:33  * dguttmanjoined
06:35:40  <chrisdickinson>but something makes more sense to me. for every run through the bench'd loop, we're doing 0xFFFF * 2 `rule` ops
06:36:17  <chrisdickinson>so we do 0xFFFF * 2 * 45000 rule ops -- so if that gets inlined, things get *way* speedier. the actual outer call pales in comparison to that
06:36:29  <mikolalysenko>yeah
06:36:48  <mikolalysenko>basically it seems to me that all the overhead in bind() is in that constant outer call
06:36:53  <chrisdickinson>to see "bind" fail you, you just have to call `wolfram30.bind(null)` and pass that in instead of `wolfram30`
06:37:15  <chrisdickinson>because then it won't be inline-able
06:37:24  <mikolalysenko>I see
06:37:37  * dguttmanquit (Client Quit)
06:37:43  <mikolalysenko>but the actual bound result is itself pretty fast...
06:37:54  <mikolalysenko>I wonder if bind() does something different than the closure here
06:38:18  <mikolalysenko>or if they amount to the same thing after you go through whatever extra rigamarole bind does when you call it
06:39:13  <chrisdickinson>bind's inlining of `rule` might be more intelligent than the manual inlining
06:39:34  <mikolalysenko>maybe
06:39:53  <mikolalysenko>but it also seems more intelligent than what the closure does too
06:42:34  <mikolalysenko>ok. I am going to sleep
06:42:43  <chrisdickinson>kk. night!
06:43:27  <jcrugzz>night! thanks for the discussion guys, im trying to absorb ;)
06:46:41  * mikolalysenkoquit (Ping timeout: 245 seconds)
06:51:54  * stagasjoined
06:52:38  * damonoehlmanjoined
07:05:29  * calvinfoquit (Quit: Leaving.)
07:23:38  * kumavis_quit (Quit: kumavis_)
07:52:51  * mikolalysenkojoined
07:58:03  * mikolalysenkoquit (Ping timeout: 256 seconds)
08:06:01  * calvinfojoined
08:11:03  * calvinfoquit (Ping timeout: 276 seconds)
08:54:14  * defunctzombiechanged nick to defunctzombie_zz
08:59:30  * jcrugzzquit (Ping timeout: 264 seconds)
09:03:39  * jibayjoined
09:18:40  * nicholas_joined
09:18:41  * nicholasfquit (Read error: Connection reset by peer)
09:26:00  * tilgoviquit (Remote host closed the connection)
09:37:35  * mcollinajoined
09:56:08  * mcollinaquit (Read error: No route to host)
10:00:22  * dominictarrjoined
10:01:35  * mcollinajoined
10:04:55  * jcrugzzjoined
10:13:15  * dominictarr_joined
10:13:28  <dominictarr_>you can't nest yield?
10:15:01  * ins0mniajoined
10:16:23  * dominictarrquit (Ping timeout: 245 seconds)
10:16:23  * dominictarr_changed nick to dominictarr
10:17:52  * mcollinaquit (Read error: Connection reset by peer)
10:17:58  * mcollina_joined
10:23:42  * jibayquit (Remote host closed the connection)
10:49:04  <dominictarr>jez0990: https://github.com/dominictarr/cspaas
10:49:13  <dominictarr>^ push deploy into iframes
10:49:25  <dominictarr>aha client side platform as a service
11:18:03  * mcollina_quit (Read error: Connection reset by peer)
11:18:26  * mirkokieferjoined
11:27:36  * ins0mniapart
11:29:34  * timoxleyjoined
11:46:48  * stagasquit (Ping timeout: 245 seconds)
12:01:19  * mirkokieferquit (Quit: mirkokiefer)
12:07:33  * whit537joined
12:12:14  * timoxleyquit (Ping timeout: 264 seconds)
12:43:46  * whit537quit (Ping timeout: 245 seconds)
12:46:33  * mcollinajoined
12:47:40  * whit537joined
12:53:54  * whit537quit (Quit: whit537)
12:57:10  * coderzachjoined
12:59:30  * jcrugzzquit (Ping timeout: 264 seconds)
13:06:51  * yorickjoined
13:10:52  * AvianFlujoined
13:21:27  * mcollinaquit (Remote host closed the connection)
13:41:22  * damonoehlmanquit (Quit: WeeChat 0.4.1)
13:41:31  * yorickquit (Remote host closed the connection)
13:41:43  * mikolalysenkojoined
13:54:48  * coderzachquit (Quit: coderzach)
14:13:27  * coderzachjoined
14:20:41  * vitorpachecojoined
14:26:58  * whit537joined
14:35:21  * i_m_cajoined
14:39:12  * cpettittjoined
14:39:43  * mikolalysenkoquit (Ping timeout: 245 seconds)
14:40:30  * i_m_caquit (Ping timeout: 240 seconds)
14:41:44  * i_m_cajoined
15:04:24  * mikolalysenkojoined
15:06:55  * ednapiranhajoined
15:08:34  * ednapiranhaquit (Remote host closed the connection)
15:17:46  * cpettittquit (Quit: cpettitt)
15:25:03  * thlorenzjoined
15:42:34  * dguttmanjoined
15:45:20  * dguttmanquit (Client Quit)
15:49:13  * dguttmanjoined
15:57:45  * whit537quit (Ping timeout: 276 seconds)
15:59:55  * dguttmanquit (Quit: dguttman)
16:02:33  * i_m_caquit (Ping timeout: 256 seconds)
16:11:45  * whit537joined
16:16:49  * dguttmanjoined
16:18:49  * thisandagainjoined
16:21:34  * timoxleyjoined
16:24:41  * mirkokieferjoined
16:26:43  * dguttmanquit (Quit: dguttman)
16:27:06  * mikolalysenkoquit (Ping timeout: 245 seconds)
16:37:41  * mikolalysenkojoined
16:39:54  * ins0mniajoined
16:42:44  * shamajoined
16:43:40  * calvinfojoined
16:48:03  * calvinfoquit (Ping timeout: 245 seconds)
16:53:08  <mirkokiefer>@dominictarr yes, coding graph algos without realtime graph rendering is a pain
16:53:29  <dominictarr>haha, yeah
16:53:43  <dominictarr>looks like it's all coming together
16:53:58  <dominictarr>cpettitt is getting into browserify
16:54:11  <mirkokiefer>I'd love to have a live-coding environment of the type bret victor keeps showing :)
16:54:16  <mirkokiefer>for algo coding
16:54:30  <dominictarr>my plan is to draw in a bunch of graph people and get them to do compatible graph stuff
16:54:35  <dominictarr>mirkokiefer: totally
16:54:57  <mirkokiefer>sounds great
16:54:59  <dominictarr>getting a api we can all use is the first step
16:55:17  <mirkokiefer>I've mainly been working with graphs for commit history
16:55:21  <mirkokiefer>the type of stuff git does
16:56:15  <dominictarr>yeah, that is probably a common case of things that people want to do with graphs
16:57:50  <mirkokiefer>I think that the current state of algo re-use is actually quite bad - there are so many great algo implementations hidden inside large libraries
16:58:28  <dominictarr>yes, agree.
16:58:29  <mirkokiefer>every little database is re-implementing the same stuff
16:59:52  <mirkokiefer>this testling-ci stuff is pretty awesome - didn't know it already supports mobile browsers
17:04:40  <dominictarr>that is fairly recent, last few months
17:05:18  <jaz303>dominictarr: are you going to jsconfeu in sept?
17:05:39  * evboguejoined
17:05:50  <dominictarr>jaz303: no. but I am going to nodeconf and lxjs though!
17:06:08  <jaz303>are they in the us?
17:09:59  <dominictarr>nodeconf is in ireland, and lxjs is in portugal
17:10:21  <jaz303>oh nice
17:10:44  <jaz303>should make the most of this passport now that finally i've gone to the effort to get one
17:13:55  <dominictarr>yes indeedy
17:14:11  <mirkokiefer>lisbon sounds nice
17:15:51  * kumavis_joined
17:16:27  * kumavis_quit (Client Quit)
17:20:02  <mirkokiefer>@dominictarr didn't know about this nodeland conference - it sounds super awesome
17:20:20  <mirkokiefer>are there still tickets left?
17:21:08  * kumavis_joined
17:25:57  * itprojoined
17:25:57  * itprochanged nick to ITpro
17:36:10  * ins0mnia_joined
17:37:06  * ins0mniaquit (Ping timeout: 245 seconds)
17:39:39  * whit537quit (Quit: whit537)
17:41:09  * whit537joined
17:54:34  * thlorenzquit (Remote host closed the connection)
17:55:18  * whit537quit (Ping timeout: 264 seconds)
17:57:35  * whit537joined
18:10:59  * mirkokieferquit (Quit: mirkokiefer)
18:21:27  * timoxleyquit (Remote host closed the connection)
18:23:41  * yorickjoined
18:24:38  * mikolalysenkoquit (Ping timeout: 240 seconds)
18:27:17  * mikolalysenkojoined
18:33:17  * mirkokieferjoined
18:33:31  * i_m_cajoined
18:36:10  <dominictarr>mirkokiefer: there are only day tickets left - you have to organize your own accommodation
18:38:40  <mirkokiefer>@dominictarr hm that's unfortunate - is it hard to find accommodation?
18:39:53  <substack>dominictarr: well I'm not going so that's one more spot available
18:40:10  <dominictarr>substack: huh, how come?
18:41:30  * tilgovijoined
18:42:34  <substack>no cash available for airfare right now and lxjs already booked a flight from sf to lisbon that I would have to juggle around or else take 2 transatlantic flights right next to each other
18:44:37  * i_m_caquit (Ping timeout: 240 seconds)
18:48:23  <mirkokiefer>I would definitely be interested to join if there is a chance to get a spot
18:54:10  * jcrugzzjoined
18:57:57  * coderzachquit (Quit: coderzach)
18:58:20  * cpettittjoined
19:05:37  * dominictarr_joined
19:06:29  <dominictarr_>substack: new proof of concept for crazy idea: https://github.com/dominictarr/cspaas
19:08:51  * dominictarrquit (Ping timeout: 276 seconds)
19:08:52  * dominictarr_changed nick to dominictarr
19:29:56  * jibayjoined
19:41:48  * whit537quit (Ping timeout: 245 seconds)
19:43:54  <dominictarr>cpettitt: hey, just published 3 graphlib modules today! https://npm.im/graphlib-dot https://npm.im/graphlib-adjacency and https://npm.im/graphlib-git
19:44:53  <cpettitt>you've been busy!
19:45:07  <cpettitt>I'm going to check out graphlib-git, that sounds interesting
19:45:14  <dominictarr>indeed.
19:45:57  <dominictarr>what I need though, is a simple module that I can just pass a graphlib instance to and get an svg or some rendering, with minimal configuration.
19:47:02  <dominictarr>looking at the examples I can see in dagre there is lots of other code, like this d3 adapter https://github.com/cpettitt/dagre/blob/master/demo/dagre-d3-simple.js
19:47:43  * coderzachjoined
19:47:53  * coderzachquit (Client Quit)
19:48:00  <cpettitt>yeah, I want to get a simple frontend added to dagre. It would probably be something based on dagre-d3-simple. The idea would be to just give it the graph and have it produce the output. Pretty much all of the configuration should be optional
19:48:28  <cpettitt>this would go into core dagre instead of into the demos did
19:48:45  <cpettitt>s/did/dir
19:51:37  * coderzachjoined
19:51:57  <cpettitt>I've pulled down all of the north graphs and found that the parser is failing on a few cases. I tweaked the benchmark script in dagre (score) to track down the failure cases.
19:52:39  * whit537joined
19:53:26  <dominictarr>hmm, how does dagre create the node positions?
19:53:43  <dominictarr>does it set them as properties of the nodes and edges?
19:54:00  <cpettitt>Yeah, they get set on the input objects, under a dagre property
19:54:24  <dominictarr>what is called a node value?
19:56:20  * calvinfojoined
19:56:32  * ins0mnia_changed nick to ins0mnia
19:57:36  <cpettitt>sorry, not quite following your question. Are you asking what the property is in the node value?
20:00:37  * kenperkinsquit (Quit: Computer has gone to sleep.)
20:01:56  * kenperkinsjoined
20:02:14  * kenperkinsquit (Client Quit)
20:09:39  <dominictarr>cpettitt: yeah, does it set a dagre property on the user's value?
20:20:15  <cpettitt>dominictarr: yeah, it sets it on the user's object under the "dagre" property
20:22:01  * vitorpachecoquit (Quit: Saindo)
20:29:58  * jergasonjoined
20:41:17  * coderzachquit (Quit: coderzach)
20:53:28  * jergasonquit (Remote host closed the connection)
20:56:38  * dominictarrquit (Ping timeout: 240 seconds)
21:15:06  * thisandagainquit (Ping timeout: 264 seconds)
21:16:18  * pkruminsquit (Ping timeout: 264 seconds)
21:16:52  * pkruminsjoined
21:18:10  * nicholas_quit (Read error: Connection reset by peer)
21:18:42  * nicholasfjoined
21:20:37  * thisandagainjoined
21:22:02  * defunctzombie_zzchanged nick to defunctzombie
21:25:18  * yorickquit (Remote host closed the connection)
21:31:11  * gwenbelljoined
21:52:20  * coderzachjoined
21:52:51  * calvinfoquit (Quit: Leaving.)
21:53:21  * whit537quit (Quit: whit537)
21:56:36  * kenperkinsjoined
21:58:33  * ITproquit
22:09:09  * kenperkinsquit (Quit: Computer has gone to sleep.)
22:19:34  * kriskowalquit (Quit: kriskowal)
22:20:20  * cpettittquit (Quit: cpettitt)
22:20:49  * cpettittjoined
22:33:31  * cpettittquit (Quit: cpettitt)
22:47:12  * kenperkinsjoined
22:53:51  * kenperkinsquit (Quit: Computer has gone to sleep.)
23:06:46  * calvinfojoined
23:08:12  * gwenbellquit (Ping timeout: 260 seconds)
23:11:41  * thlorenzjoined
23:12:09  * jergasonjoined
23:12:35  <thlorenz>substack: defunctzombie so is the dedupe approach ok with both of you?
23:12:52  <thlorenz>cause then I'm gonna clean up, add more tests and PR in a few hours
23:13:38  <thlorenz>fully externalizing this is probably not possible since it needs to execute inside the bundle chain
23:16:21  * AvianFluquit (Ping timeout: 248 seconds)
23:16:35  * AvianFlujoined
23:18:35  * AvianFluquit (Remote host closed the connection)
23:21:20  <jesusabdullah>is this in the context of browserify?
23:21:26  <jesusabdullah>I was just hacking on browserify-cdn
23:21:37  <jesusabdullah>ooh, I should work on deploy tooling today :/
23:23:16  * fallsemojoined
23:23:29  * fallsemoquit (Client Quit)
23:31:43  <thlorenz>jesusabdullah: yes - it dedupes just like npm except you can tell it the criteria, i.e. minor has to match or patch, etc. https://github.com/thlorenz/browser-dedupe#dedupecriteria-id-pack
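A hypothetical sketch of what such a criteria check might look like (the actual logic lives in browser-dedupe; names and behavior here are guesses for illustration):

```javascript
// Two versions are dedupe-compatible when they agree up to the chosen
// criteria: 'major' (1.x.x), 'minor' (1.2.x), or exact 'patch'.
function compatible(a, b, criteria) {
  const [amaj, amin, apat] = a.split('.');
  const [bmaj, bmin, bpat] = b.split('.');
  if (criteria === 'major') return amaj === bmaj;
  if (criteria === 'minor') return amaj === bmaj && amin === bmin;
  return amaj === bmaj && amin === bmin && apat === bpat; // 'patch'
}

console.log(compatible('1.2.3', '1.2.9', 'minor')); // true
console.log(compatible('1.2.3', '1.3.0', 'minor')); // false
```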
23:32:31  <thlorenz>this module goes along with some minor changes to browser-resolve and a bit more to browserify itself which I hopefully can get pulled tonite
23:32:32  <jesusabdullah>nice :)
23:32:49  <jesusabdullah>I addressed a few browserify-cdn issues
23:32:55  <jesusabdullah>I'm gonna get a pull request in a few days
23:33:07  <jesusabdullah>at which point I'll probably publish and try to deploy
23:33:08  <thlorenz>cool
23:38:17  <jesusabdullah>yeah
23:39:17  * evboguequit (Ping timeout: 248 seconds)
23:47:37  * jergasonquit (Remote host closed the connection)