00:01:51  <dan336>oh I used the binary.
00:03:26  <dan336>I haven't used the new luvi/luvit mix, just because there were a few issues with it a while back on osx that I just didn't have time to look into. but it probably works now.
00:03:35  <dan336>but I'll start looking into it soon :)
00:03:51  <dan336>luvit really is awesome just FYI.
00:05:42  <dan336>the coroutine based parser was really cool though. I had to write an async parser for a redis save file once; that was horrible, and having a stateful coroutine-based parser would have made it 1000x easier.
00:06:32  * dan336 quit (Quit: Leaving.)
00:25:43  * travis-ci joined
00:25:43  <travis-ci>luvit/luvit#1255 (process - c17b373 : Ryan Phillips): The build passed.
00:25:43  <travis-ci>Change view : https://github.com/luvit/luvit/compare/c6edd210c13a...c17b373241c1
00:25:43  <travis-ci>Build details : http://travis-ci.org/luvit/luvit/builds/42027448
00:25:43  * travis-ci part
00:28:35  * a_le quit (Remote host closed the connection)
00:29:11  * a_le joined
00:33:10  * a_le quit (Remote host closed the connection)
00:33:47  * a_le joined
00:33:49  <rphillips>hmm. the spawn on exit handler doesn't seem to be consistently called on windows
00:35:36  <rphillips>might be the stdin issue, let's see
01:01:41  * cledev quit (Ping timeout: 264 seconds)
01:17:12  * a_le quit (Remote host closed the connection)
01:20:35  * UniOn quit (Remote host closed the connection)
01:22:01  * dan336 joined
01:35:43  * cledev joined
01:37:28  * a_le joined
01:59:27  * cledev quit (Ping timeout: 255 seconds)
02:02:30  * dan336 quit (Quit: Leaving.)
02:30:21  <rphillips>I'm going to pull in luvit-streams I think
02:30:43  <rphillips>any objections?
02:47:28  <jirwin>hdms: glad to see you made it here
02:54:07  * a_le quit (Remote host closed the connection)
02:55:05  * cledev joined
03:27:02  <rphillips>hdms: welcome!
03:36:40  * cledev quit (Ping timeout: 264 seconds)
04:05:03  * travis-ci joined
04:05:03  <travis-ci>luvit/luvi#172 (master - 4d4f375 : Ryan Phillips): The build passed.
04:05:03  <travis-ci>Change view : https://github.com/luvit/luvi/compare/177f066e4a5e...4d4f37503787
04:05:03  <travis-ci>Build details : http://travis-ci.org/luvit/luvi/builds/42039990
04:05:03  * travis-ci part
04:27:38  * phore joined
04:29:59  * phore quit (Client Quit)
04:45:19  * hdms quit (Quit: hdms)
05:39:14  <rch>rphillips: awesome
05:39:16  <rch>songgao: ^
06:52:11  * a_le joined
08:06:04  * a_le quit (Remote host closed the connection)
08:23:34  * DarkGod joined
08:39:04  * cledev joined
10:19:04  * torpor quit (Quit: Leaving.)
11:05:21  * cledev quit (Ping timeout: 244 seconds)
11:32:06  * cledev joined
13:04:44  * hdms joined
13:33:53  <hdms>jirwin: rphillips thank you
14:50:28  * torpor joined
15:37:13  * dan336 joined
15:41:53  * UniOn joined
15:43:07  * a_le joined
16:09:36  <songgao>rch rphillips I'm all in for pulling in luvit-stream :D
16:12:12  <creationix>luvit-stream is fine
17:05:48  * cledev quit (Ping timeout: 250 seconds)
17:07:17  * cledev joined
17:20:22  * a_le quit (Remote host closed the connection)
17:30:57  * a_le joined
17:37:36  * torpor1 joined
17:38:32  * torpor quit (Ping timeout: 256 seconds)
17:51:01  * mattlgy joined
17:56:05  * hdms quit (Quit: hdms)
18:06:02  * a_le quit (Read error: Connection reset by peer)
18:14:29  * DarkGod quit (Ping timeout: 264 seconds)
18:21:18  * travis-ci joined
18:21:18  <travis-ci>luvit/luvi#174 (master - ea5883a : Tim Caswell): The build passed.
18:21:18  <travis-ci>Change view : https://github.com/luvit/luvi/compare/4d4f37503787...ea5883ac1a2f
18:21:18  <travis-ci>Build details : http://travis-ci.org/luvit/luvi/builds/42110314
18:21:18  * travis-ci part
18:21:48  * cledev quit (Ping timeout: 255 seconds)
18:22:28  * phore joined
18:24:51  * cledev joined
19:38:33  <creationix>dan336: Yeah, being able to suspend coroutines makes parser logic a lot simpler
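The idea creationix is describing can be sketched in plain Lua: a coroutine parser keeps its position in ordinary local variables and simply yields whenever it needs more bytes, so callers can feed chunks as they arrive off the wire. The framing here (a 4-digit decimal length prefix before each message) is invented for the example and is not luvit's actual decoder:

```lua
-- Minimal sketch of a coroutine-based stream parser. No explicit state
-- machine: the parse position is just where the coroutine is suspended.
local function newParser(onMessage)
  local buffer = ""
  local co = coroutine.create(function ()
    while true do
      while #buffer < 4 do coroutine.yield() end        -- wait for length prefix
      local len = tonumber(buffer:sub(1, 4))
      while #buffer < 4 + len do coroutine.yield() end  -- wait for full body
      onMessage(buffer:sub(5, 4 + len))
      buffer = buffer:sub(5 + len)
    end
  end)
  return function (chunk)  -- feed raw bytes in arbitrarily sized pieces
    buffer = buffer .. chunk
    assert(coroutine.resume(co))
  end
end
```

Feeding `"0005hello"` split across several chunks still emits a single `"hello"` message, which is exactly what makes this style so much easier than hand-rolled async parsers.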
19:39:07  <creationix>though I’m really liking the new decoder style from imzyxwvu
19:39:48  <creationix>https://groups.google.com/forum/#!topic/luvit/y6ZB7dyAwzo
19:40:09  <creationix>here is my decoder for a binary protocol I designed over the weekend https://github.com/luvit/lit/blob/d8034de01b90cf6805e85324955045c9c1f4590a/server.lua#L19-L86
19:45:21  <dan336>cool, that looks nice. very simple
19:46:04  <dan336>creationix: so lit is going to be the pkg repo for luvit then?
19:46:09  <creationix>perhaps
19:46:29  <creationix>I’ve got a pretty good design for blob storage and syncing, still figuring out the high-level semantics
19:46:48  <creationix>also feel free to think of a new name
19:47:02  <dan336>:) alright
19:49:17  <dan336>creationix: how complete is the git-fs implementation? I'm just noticing that you included those files in the repo
19:50:31  <creationix>git core is pretty simple, what were you wanting to use it for
19:50:42  <creationix>this implementation doesn’t read or write packed refs or objects
19:51:06  <creationix>(the older JS version does though, it’s not too hard to port over)
19:51:43  <creationix>so the basic idea of lit currently is I have two services:
19:52:51  <creationix>one is the selective replicating database. It’s very granular and content-addressable so that repeated files are only stored and synced once. (Updating to a newer version of a package you already have will only sync down the changed files)
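That dedup property falls out of content addressing: the key *is* the hash of the bytes, so storing the same file twice is a no-op. A toy in-memory sketch, with `hashFn` standing in for sha1 (which stock Lua doesn't ship):

```lua
-- Sketch of a content-addressed object store. Identical content always
-- hashes to the same key, so duplicates cost nothing to store or sync.
local function newStore(hashFn)
  local objects = {}
  return {
    save = function (data)
      local h = hashFn(data)
      objects[h] = data        -- saving the same bytes twice is a no-op
      return h
    end,
    load = function (h) return objects[h] end,
    count = function ()
      local n = 0
      for _ in pairs(objects) do n = n + 1 end
      return n
    end,
  }
end
```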
19:53:02  <creationix>the actual sync protocol is very simple and should be very fast
19:53:44  <creationix>publishing to the central database will require a one-time access token from the other service to prevent people from using it as arbitrary storage and to provide some level of assurance of package contents.
19:54:04  <creationix>the high-level service will handle user accounts, package names and versions and token creation for publishing to the data store
19:54:33  <creationix>currently, the idea is to authenticate the web site using github oauth and the cli tool using your ssh private keys that you’ve registered via github
19:54:45  <dan336>thats a good idea.
19:55:26  <dan336>the ssh keys i mean
19:55:34  <creationix>publishing a package will import the files into your local git db, create a signed tag using your private key and send the tag to the high-level server to get the token. Once you actually sync up the files the server doesn’t have yet, it will become available in the general registry
19:56:03  <creationix>the db is git-like. It uses compatible hashes for tags, trees and blobs, but isn’t based on refs and commits like git
19:56:44  <creationix>so if we decide later to mirror the central repository on github, it should be pretty trivial to convert it to an actual git repo using namespaced refs for the tags
19:56:56  <creationix>refs/tags/foo/v0.1.2 for example
19:57:10  <dan336>so will all packages be signed using the publishing users priv key?
19:57:20  <creationix>that’s the idea currently
19:57:33  <creationix>and I’ll also use that signature to authenticate publishes to the central repository
19:58:04  <creationix>the cli client will handle the importing, tagging and signing
19:58:13  * DarkGod joined
19:58:16  <dan336>thats good.
19:58:34  <creationix>for now it will just assume $HOME/.ssh/id_rsa
19:58:40  <creationix>which will be the 95% case I think
19:58:54  <dan336>good security without forcing bothersome authentication is always very nice.
19:59:01  <dan336>yeah it will.
19:59:04  <creationix>gpg is too much, we’ll have terrible adoption
19:59:13  <creationix>but I do want to add it as an option later on
19:59:28  <creationix>it’s got a much better web of trust than just using github keys
19:59:40  <dan336>I was just going to ask :)
20:00:35  <creationix>I have ideas around integrating with projects’ git repos using submodules or something, but for now I think I’ll just keep the git-compatible db as an implementation detail
20:00:45  <creationix>the user doesn’t even need git installed to use this tool
20:01:19  <dan336>thats cool.
20:01:36  <creationix>`lit install foo` will fetch the latest tag hash from the high-level server, sync down the missing objects into the local db and export the tree to the disk at modules/foo
20:02:27  <creationix>and if you don’t have internet, but have a cached result locally, it can work offline
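The `lit install` flow described above reduces to three calls. The `remote`/`db` method names below are hypothetical, not lit's real API; the point is the shape: resolve name to tag hash, sync only the missing objects, export the tree:

```lua
-- Hypothetical sketch of the install flow. All three method names
-- (fetchTagHash, sync, exportTree) are illustrative stand-ins.
local function install(db, remote, name)
  local hash = remote.fetchTagHash(name)    -- ask high-level server for latest tag
  db.sync(remote, hash)                     -- pull only objects we don't have yet
  db.exportTree(hash, "modules/" .. name)   -- write the tree out to disk
  return hash
end
```

Because the second step is a no-op for objects already in the local db, a cached package installs fine with no network at all, which is the offline case mentioned above.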
20:02:41  <dan336>that makes sense.
20:02:59  <dan336>so it really is similar to git-server then, but without the nasty protocol.
20:03:08  <dan336>and some other things thrown in.
20:03:14  <creationix>yeah, much simpler and more granular protocol
20:03:25  <creationix>probably not quite as efficient on the wire, but much lighter load on the server
20:03:34  <creationix>pack-protocol is killer on the server
20:03:50  <creationix>and much easier to implement or debug
20:03:53  <rch>not using package.lua or anything to track which packages should be installed? i.e. the install model is still check in packages to the project repo
20:04:16  <creationix>still thinking through that part
20:04:25  <dan336>personally I think it's a little too early to think about efficiency on the wire :) you will hit other things much sooner.
20:04:51  <rch>for me it's about correctness, a package manifest lets me use semver or something to make sure i have the right version of everything
20:04:53  <dan336>like the pack-protocol :)
20:05:20  <rch>oh heh
20:06:18  <dan336>how are the objects stored in the db? is there a 1:1 relation between the git objects and the objects stored in the db?
20:06:25  <dan336>aka, the hashes are the same?
20:13:42  <creationix>yes, the hashes are git compatible
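For reference, git's blob ids are just sha1 over a tiny header plus the content, which is why an independent implementation can produce compatible hashes. A sketch of the framing (the `sha1` call itself would come from a C binding and is omitted here):

```lua
-- git computes a blob's id as sha1("blob <size>\0" .. data).
-- Building that preimage needs no git at all:
local function frameBlob(data)
  return "blob " .. #data .. "\0" .. data
end

-- hash = sha1(frameBlob("hello\n"))
-- which matches `echo hello | git hash-object --stdin`:
-- ce013625030ba8dba906f756967f9e9ca394464a
```

Trees and tags work the same way with their own headers, so a tree of identical files hashes identically no matter which tool wrote it.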
20:14:14  <creationix>rch: so yes, I’ll be supporting semver, and an npm shrinkwrap style thing by default for top-level apps
20:14:44  <creationix>libraries will declare what semver range they support and the top app will grab the newest versions of everything that fits and then lock it all to git hashes
20:15:02  <creationix>(this is why I was considering git submodules since that’s how they natively work)
20:15:43  <creationix>and when the user runs `lit update` or something it will unlock the versions, grab the latest versions of app deps recursively and then re-lock. The programmer will re-test everything before committing the new locked versions
20:16:15  <creationix>but that way library authors don’t have to bump their version just to get patch versions of deps
20:16:25  <creationix>should be the best of both worlds
20:16:30  <creationix>safe and convenient
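The resolve-then-lock step creationix describes can be sketched as: pick the newest available version that satisfies each library's declared range, then record the winner. The caret-style range check below is a simplification of real semver (no prerelease tags or comparators), so treat it as illustrative only:

```lua
-- Toy semver resolution: newest version compatible with a "^maj.min.pat"
-- style range (same major, at least the given minor/patch).
local function parse(v)
  local maj, min, pat = v:match("^(%d+)%.(%d+)%.(%d+)$")
  return tonumber(maj), tonumber(min), tonumber(pat)
end

local function newer(a, b)  -- true if version a > version b
  local a1, a2, a3 = parse(a)
  local b1, b2, b3 = parse(b)
  if a1 ~= b1 then return a1 > b1 end
  if a2 ~= b2 then return a2 > b2 end
  return a3 > b3
end

local function satisfies(v, range)
  local v1, v2, v3 = parse(v)
  local r1, r2, r3 = parse(range)
  if v1 ~= r1 then return false end
  if v2 ~= r2 then return v2 > r2 end
  return v3 >= r3
end

local function resolve(available, range)
  local best
  for _, v in ipairs(available) do
    if satisfies(v, range) and (best == nil or newer(v, best)) then
      best = v
    end
  end
  return best  -- the version that would then be locked to a git hash
end
```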
20:16:35  <rch>creationix: cool
20:17:03  <creationix>what I’m still figuring out is how exactly to store the ranges in a package.json style file and the locked down deps in a shrinkwrap style file
20:17:24  <creationix>or maybe same file for both, but I’d rather separate hand-written files from auto-generated stuff
20:17:44  <creationix>but then again, I really like how `npm --save` adds the dep for you in package.json
20:17:48  <rch>me too
20:18:08  <creationix>one thing different from npm is I’ll install deps flat by default
20:18:14  <rch>weird
20:18:22  <rch>so only branch if versions conflict?
20:18:41  <rch>not sure if you saw https://github.com/virgo-agent-toolkit/luvit-pkg/pull/1/files btw, we worked on this a bit
20:19:21  <creationix>right, if there are conflicts ask the user if it should install both as nested deps or override the version of the older one
20:20:17  <creationix>flat is much better whenever possible for many reasons. It shouldn’t default to nested just to cover some edge case
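A sketch of that flat-by-default policy: keep one copy per package name, and collect conflicts for the user to resolve (nest or override) instead of silently nesting everything. The table shapes here are invented for the example:

```lua
-- Flat install resolution: dedupe by name, surface version conflicts.
-- deps is a list of { name = ..., version = ... } records.
local function flatten(deps)
  local installed, conflicts = {}, {}
  for _, d in ipairs(deps) do
    local have = installed[d.name]
    if have == nil then
      installed[d.name] = d.version
    elseif have ~= d.version then
      conflicts[#conflicts + 1] = d.name  -- this is where lit would prompt
    end
  end
  return installed, conflicts
end
```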
20:20:30  <rch>interesting
20:20:40  <dan336>yeah you gotta be careful about that, I've had a few projects where I had to fork a dep just so that everything lined up correctly with the deps' versions. kind of annoying, but asking should be enough.
20:21:07  <creationix>also I plan on making pulling in deps from arbitrary git repos easy too
20:21:19  <creationix>so if you have some deps not in the central repository, they should meld well
20:21:27  <dan336>cool, even local remotes?
20:21:37  <dan336>like a file path "file://repo"?
20:21:40  <creationix>sure, why not
20:21:55  <creationix>js-git can already read local repos perfectly, it wouldn’t take much to add it here
20:22:15  <creationix>the hardest part is figuring out the mismatch between tags pointing to trees and commits pointing to trees
20:22:29  <creationix>nevermind, that’s not hard
20:22:35  <rch>heh
20:22:58  <creationix>(at one point I had allowed tags pointing to files, but if it’s always trees, then it’s the same as commits)
20:23:16  <creationix>annotated tags can point to anything
20:23:37  * hdms joined
20:23:38  <creationix>commits, other annotated tags, trees, or blobs
20:24:30  <creationix>dan336: and since the central db server is super simple to setup, running a custom repository should be more common
20:24:35  <creationix>or a local one
20:24:42  <creationix>(on disk)
20:25:17  * cledev quit (Ping timeout: 264 seconds)
20:25:37  <dan336>yeah, i remember when we tried setting up our own npm db.
20:25:39  <dan336>that was fun.
20:26:18  <dan336>so how will the central server work? is it going to be a single node or will it be a multi node system with a replication layer?
20:26:32  <dan336>and are you going to develop it yourself?
20:26:55  <dan336>or is it still being planned?
20:27:05  <creationix>yeah, I’m working on it now. I’m not too worried about scaling it, the object store will easily shard since it’s just hashes
20:27:18  <creationix>the high-level front-end is a fairly traditional web app style thing
20:27:56  <creationix>the first version will probably just be direct hashes and skip the high-level server
20:28:01  <creationix>should have that this week
20:28:13  <dan336>sounds good.
20:28:24  <dan336>so then each shard will have a backup?
20:28:50  <creationix>yeah, replicating is simply a matter of walking the hashes and downloading anything you don’t have yet
20:29:07  <creationix>I’m using files for now, but plan on moving to leveldb later
20:29:54  <creationix>hmm, might need a more efficient replication algo once it gets bigger
20:30:09  <creationix>or just shard to keep individual databases small
20:30:55  <creationix>could also do like the distributed protocols and write to both when new data is written and read from both when files are read (and detecting missing files and syncing them)
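The "walk the hashes and download anything you don't have" replication from a few lines up is naturally a recursion over an object's outgoing links. `store`/`remote` below are stub interfaces, and `links` is an assumed field naming the hashes an object references:

```lua
-- Sketch of hash-walking replication over an immutable acyclic graph:
-- fetch any object the local store is missing, then recurse into the
-- hashes it references. Already-present objects prune the whole subtree.
local function replicate(store, remote, hash)
  if store.has(hash) then return end     -- subtree already synced
  local obj = remote.load(hash)
  store.save(hash, obj)
  for _, child in ipairs(obj.links or {}) do
    replicate(store, remote, child)
  end
end
```

This is also why immutability matters: once a hash is present locally it can never change remotely, so the prune is always safe.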
20:31:06  <dan336>what about locating the data you want, if you connect to shard1 and the data is on shard2… do you connect to a proxy?
20:31:21  <dan336>or will requests be forwarded?
20:31:25  <creationix>the data would be sharded by hash so you should always know where it goes
20:31:47  <creationix>but I guess during transitions we could add something like that
20:31:58  <creationix>especially if data moves between machines instead of just a simple split
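Because placement is a pure function of the hash, any client (or stateless proxy) can compute the owning shard with no lookup table. A sketch using the first two hex characters, like git's `objects/aa/` directory fanout:

```lua
-- Deterministic shard placement for a content-addressed store.
local function shardFor(hash, numShards)
  local prefix = tonumber(hash:sub(1, 2), 16)  -- first hash byte, 0..255
  return prefix % numShards + 1                -- 1-based shard index
end
```

A proxy in front of the shards runs the same function, which is what makes clustering the proxies trivial: they hold no state of their own.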
20:33:10  <dan336>well what I mean is if you connect up to the back end db, and ask for a,b,c and a is on node1 and b,c is on node2, does the db I connect to proxy the requests to the storage nodes where the data is located, or does the client have to connect to node1,node2,nodex… ?
20:33:40  <creationix>oh right, I would prefer a proxy as long as it’s not a bottleneck
20:33:48  <creationix>shouldn’t be hard to cluster the proxies since they are stateless
20:34:24  <dan336>yeah that makes sense.
20:34:54  <creationix>I obviously haven’t figured out all the scaling and reliability issues, but I’m fairly confident it won’t be super hard
20:35:07  <creationix>having the data in an immutable acyclic graph is nice
20:35:28  <dan336>of course, I was just curious as to what the plans were :)
20:35:49  <creationix>yeah, current plans are to get a prototype out as soon as possible so *I* can start using it
20:36:00  <creationix>I want a package manager. I feel it’s slowing us down and causing luvi and luvit to bloat.
20:36:38  <creationix>also I’ve been designing this git-style object database for years and am excited to actually implement the core of it
20:36:57  <creationix>it fits really well to the problem of a package repository
20:37:13  <dan336>I bet :) I thought about designing one for the company I work for, but we don't have a need yet.
20:38:28  <dan336>our current solution works just fine.
20:39:25  <dan336>well I'll stop bothering you so you can get something done today :)
20:40:00  <dan336>keep up the good work on luvit, I have really been enjoying writing a few useful utilities with it.
20:47:17  <creationix>thanks for helping me think through stuff
20:57:57  <dan336>no problem
21:39:06  * a_le joined
21:40:50  * cledev joined
22:04:06  <dan336>creationix: I don't know if you are interested in this, but this could be a decent alternative to leveldb -> http://symas.com/mdb/
22:10:19  <creationix>dan336: that could be useful. I think our workload will be read heavy
22:10:26  <creationix>I imagine npm has a lot more reads than writes
22:12:12  <dan336>thats what I was thinking too, it would be heavy on the reads.
22:12:55  <creationix>I’m trying to remember if leveldb is optimized for writes or reads
22:13:04  <creationix>I know it optimizes for random access (not sequential)
22:13:28  * ^v joined
22:13:37  <creationix>the multi-process concurrency is nice for scaling
22:13:48  <creationix>(assuming cpu is the bottleneck and not disk I/O)
22:15:30  <dan336>I think it's optimized for sequential reads.
22:16:25  <creationix>mdb, yeah
22:16:39  <creationix>my workload will be quite random, but mdb does well for random reads too
22:16:55  <creationix>I think leveldb was optimized for random writes, that’s where it seems faster than most others
22:16:59  <creationix>http://symas.com/mdb/microbench/
22:17:24  <dan336>yeah thats where I'm looking too
22:18:40  <dan336>did you see the section on 'performance using large values'?
22:19:13  <creationix>yep, though my values shouldn’t be too large. It’s a single entry per file or folder
22:19:47  <dan336>I guess that's true, git objects tend to not be in the 100k range.
22:20:20  <creationix>I expect most to be around 2-10k
22:22:41  <rphillips>not including the c++ abi would improve memory usage
22:25:51  <creationix>good point. I wasn’t keen on level being c++
22:26:04  <creationix>plus lmdb is super tiny to begin with
22:29:27  * mattlgy quit (Remote host closed the connection)
22:30:09  <creationix>I kinda wish mem-mapping was available to scripting languages. It seems so useful
22:30:15  <creationix>I wonder why libuv didn’t include that
22:36:01  <dan336>so here is a question, I am starting to test out the new luvi based luvit, and I can't manage to grab the hrtime function
22:36:02  <creationix>heh, if you strip the luvit binary it strips the zip file off the end
22:36:07  <creationix>that’s one way to get the luvi binary inside it
22:36:14  <dan336>I was doing this: `require('uv').Process.hrtime` before
22:36:18  <dan336>but now that doesn't work...
22:36:45  <dan336>it throws this error: 'attempt to index field 'Process' (a nil value)'
22:36:58  <creationix>dan336: right, the uv module is probably completely different since it’s luv and not luvit’s old C bindings
22:37:23  <dan336>alright, so I need to look in luv to see what it is then?
22:37:35  <creationix>the new API is a flat list of C bindings to libuv functions https://github.com/luvit/luv/blob/8c10a1338917d7234adbac609c48a78fc360f396/src/luv.c#L48-L234
22:37:50  <creationix>so it’s just `require('uv').hrtime()`
22:37:53  <dan336>you are right, I just found it
22:38:04  <dan336>thanks
22:55:26  * dan336 quit (Ping timeout: 258 seconds)
22:55:34  * dan336 joined
22:57:16  * DarkGod quit (Remote host closed the connection)
23:05:57  * piernov quit (Ping timeout: 265 seconds)
23:06:56  * piernov joined
23:18:41  * cledev quit (Ping timeout: 264 seconds)
23:54:55  * a_le quit (Remote host closed the connection)