00:00:00  * ircretary quit (Remote host closed the connection)
00:00:08  * ircretary joined
00:07:41  * calvinfo joined
00:09:06  * calvinfo1 joined
00:09:07  * calvinfo quit (Read error: Connection reset by peer)
00:19:01  * calvinfo1 quit (Quit: Leaving.)
00:24:04  <rowbit>Hourly usage stats: [developer: 0, free: 8]
00:25:09  * AvianPhone joined
00:38:11  * kriskowal joined
00:50:25  * calvinfo joined
00:54:53  * thlorenz quit (Remote host closed the connection)
01:07:34  * timoxley joined
01:11:00  * timoxley quit (Client Quit)
01:16:12  * st_luke joined
01:16:19  * mikolalysenko quit (Ping timeout: 246 seconds)
01:18:34  * oky changed nick to xko
01:24:04  <rowbit>Hourly usage stats: [developer: 0, free: 11]
01:28:11  * st_luke quit (Remote host closed the connection)
01:28:57  * calvinfo quit (Quit: Leaving.)
01:30:56  * calvinfo joined
01:34:44  * st_luke joined
01:54:59  * jcrugzz quit (Ping timeout: 272 seconds)
01:58:41  * jhizzle4rizzle joined
01:59:14  * jhizzle4rizzle quit (Client Quit)
01:59:27  * thlorenz joined
02:00:10  * calvinfo quit (Quit: Leaving.)
02:03:27  * thlorenz quit (Remote host closed the connection)
02:20:34  * jcrugzz joined
02:24:02  * calvinfo joined
02:24:04  <rowbit>Hourly usage stats: [developer: 0, free: 32]
02:27:30  * jlord joined
02:28:20  * pkrumins_ joined
02:29:27  * jcrugzz quit (Ping timeout: 272 seconds)
02:30:09  * jcrugzz joined
02:30:25  * pfraze joined
02:32:41  * kenperkins_ joined
02:33:58  * pkrumins quit (*.net *.split)
02:33:58  * jlord_ quit (*.net *.split)
02:33:59  * kenperkins quit (*.net *.split)
02:33:59  * Altreus quit (*.net *.split)
02:39:50  * ralphtheninja quit (Ping timeout: 264 seconds)
02:40:27  * Altreus joined
02:47:54  * kriskowal_ joined
02:49:43  * kriskowal quit (Ping timeout: 272 seconds)
02:49:44  * kriskowal_ changed nick to kriskowal
02:54:28  * thlorenz joined
02:54:57  * calvinfo quit (Quit: Leaving.)
03:04:45  * thlorenz quit (Remote host closed the connection)
03:05:43  * calvinfo joined
03:06:15  * AvianPhone quit (Ping timeout: 240 seconds)
03:06:29  * yorick quit (Remote host closed the connection)
03:10:12  * thlorenz joined
03:17:39  * mikolalysenko joined
03:22:26  * mikolalysenko quit (Ping timeout: 264 seconds)
03:24:04  <rowbit>Hourly usage stats: [developer: 0, free: 13]
03:28:05  * st_luke quit (Remote host closed the connection)
03:29:14  * pkrumins_ changed nick to pkrumins
03:29:27  * jcrugzz quit (Ping timeout: 240 seconds)
03:46:16  * hughsk joined
03:48:17  * calvinfo quit (Quit: Leaving.)
04:10:16  <rowbit>substack, pkrumins: These encoders are STILL down: 50.57.103.88(dev)
04:16:12  * fronx joined
04:21:14  * fronx quit (Ping timeout: 264 seconds)
04:24:04  <rowbit>Hourly usage stats: [developer: 0, free: 8]
05:09:59  * calvinfo joined
05:19:13  * pfraze quit (Ping timeout: 246 seconds)
05:24:04  <rowbit>Hourly usage stats: [developer: 0, free: 35]
05:24:18  * calvinfo quit (Quit: Leaving.)
05:25:11  * fronx joined
05:29:41  * fronx quit (Ping timeout: 248 seconds)
05:41:03  * dominictarr joined
05:44:21  <dominictarr>substack, so they discuss that in the paper. they are only measuring functional groups
05:44:35  <dominictarr>not repetitions, or junk, etc
05:45:03  <dominictarr>I haven't finished the whole paper yet, though.
05:46:09  * thlorenz quit (Remote host closed the connection)
05:50:02  * dominictarr quit (Ping timeout: 264 seconds)
06:13:18  <grncdr>I just made a thing that might be useful to others: http://npm.im/assert-in-order
06:13:35  <grncdr>I'm also curious whether I'm missing out on a better solution
06:16:41  * fronx joined
06:21:03  * fronx quit (Ping timeout: 240 seconds)
06:24:04  <rowbit>Hourly usage stats: [developer: 0, free: 12]
07:13:04  * Maciek416 quit (Remote host closed the connection)
07:16:43  * fronx joined
07:21:14  * fronx quit (Ping timeout: 264 seconds)
07:24:04  <rowbit>Hourly usage stats: [developer: 0, free: 17]
07:25:38  * jcrugzz joined
07:29:56  * jcrugzz quit (Ping timeout: 240 seconds)
07:30:43  * st_luke joined
08:03:20  * shama quit (Remote host closed the connection)
08:15:05  * shama joined
08:16:36  * shama quit (Client Quit)
08:16:43  * fronx joined
08:20:16  * chromakode joined
08:20:53  * fronx quit (Ping timeout: 248 seconds)
08:24:04  <rowbit>Hourly usage stats: [developer: 0, free: 28]
08:31:24  * dominictarr joined
08:32:17  <dominictarr>substack, I think the paper mentions how genome match is not a suitable measure of complexity, and it uses functional groups instead of total length
08:32:25  <dominictarr>I haven't finished the paper yet, though.
08:34:29  <dominictarr>pkrumins, okay, looking through it
08:36:25  <dominictarr>substack, that graph you have is interesting... maybe if you take the lowest bound? of course, over time protozoa will have evolved as well, we don't have any samples from a billion years ago.
09:01:30  <dominictarr>pkrumins, okay, problem was it was using an old version of tape.
09:03:06  <dominictarr>okay, I think it should be good now.
09:10:45  * dominictarr quit (Remote host closed the connection)
09:16:43  * fronx joined
09:18:13  * calvinfo joined
09:21:04  * fronx quit (Ping timeout: 246 seconds)
09:24:04  <rowbit>Hourly usage stats: [developer: 0, free: 25]
09:24:28  * dominictarr joined
09:27:24  * indexzero joined
09:38:30  * ralphtheninja joined
09:59:02  * calvinfo quit (Quit: Leaving.)
10:10:17  <rowbit>substack, pkrumins: These encoders are STILL down: 50.57.103.88(dev)
10:24:04  <rowbit>Hourly usage stats: [developer: 0, free: 23]
11:08:35  <isaacs>dominictarr: hey, i owe you a response to your email
11:08:54  <isaacs>dominictarr: but i'm unable to sleep atm, so maybe we can chat about it now if you're around
11:11:49  <dominictarr>isaacs, okay great!
11:23:07  <dominictarr>isaacs, if you are still awake...?
11:23:13  <isaacs>dominictarr: yeah
11:23:26  <isaacs>dominictarr: so, i was thinking it'd be nice for some of npmd's featureset to be merged into a npm 2.0
11:24:01  <isaacs>but, of course, that means making things backwards compatible in some ways
11:24:04  <rowbit>Hourly usage stats: [developer: 0, free: 11]
11:24:17  <isaacs>and, it kinda sucks having a binary dep in npm
11:24:37  <dominictarr>isaacs, agreed.
11:24:54  <isaacs>but those are minor details. i mean, the level bits can be used strictly for search or something, and bootstrapped using the other bits.
11:25:05  <dominictarr>that is part of the reason for splitting npmd into two parts
11:25:10  <isaacs>right
11:25:14  <dominictarr>and the client part is pure javascript.
11:25:35  <isaacs>also, *fetching* packages from anywhere that has the shasum = great. however, we do need a naming authority
11:25:45  <isaacs>that's a huge part of what makes npm easy to use
11:25:57  <dominictarr>exactly.
11:26:32  <dominictarr>I thought about if you could make it totally p2p... but you'd need to name modules as hashes or have a bitcoin like block chain...
11:26:41  <isaacs>also, i have some secret plans, of which i've told you a few bits.
11:26:44  <dominictarr>both total non starters...
11:27:00  <isaacs>so the centrality of authority is something that is actually kinda important to making that work
11:27:34  <isaacs>ironically, if you want to push software development itself in a more p2p direction, having a centralized marketplace is actually a benefit.
11:28:01  <isaacs>it's the internet problem all over again, right?
11:28:05  <dominictarr>yes, because unique names is consistency.
11:28:17  <isaacs>i mean, if we could all just speak IP addresses, we'd never need DNS, and every network could be its own authority
11:28:39  * fronx joined
11:28:56  <isaacs>with DNS, you do have centrality of authority, but you ALSO have clear rules for delegating that authority
11:28:59  <dominictarr>well, ip isn't that accurate. the router can just connect you to someone who claims to be that ip address.
11:29:06  <isaacs>oh, sure
11:29:27  <isaacs>but like, 10.0.0.7 can be anything anywhere.
11:29:52  <isaacs>we COULD do the same for 197.64.2.1, but we all agree that only these certain blocks are "localizable"
11:29:53  <dominictarr>yeah, anyways, that is too hard for now. we can leave that problem for those that come after us.
11:29:55  * fronx quit (Remote host closed the connection)
11:29:59  <isaacs>haha, sure
11:30:17  <dominictarr>just put a //TODO comment in the code...
11:30:20  <isaacs>npm 3.0
11:30:53  <dominictarr>TODO: reinvent internet so it's easier to implement npm
11:31:33  <dominictarr>one way to appease both the anarchists and the enterprise crowd here would be to support multiple registries
11:31:53  <isaacs>meh
11:32:17  <isaacs>i'd rather implement private namespaced packages within the main registry and charge for them
11:32:55  <dominictarr>how does that work from the client?
11:32:56  <isaacs>property tax in exchange for restricting the use of the commons
11:33:00  <isaacs>not sure yet :)
11:33:38  <isaacs>probably, it's going to require some serious changes to the server.
11:34:18  <isaacs>so, if you request the $mycompany.somepkg package, then it'll check to see if you're on the approved list, and then decode it for you or something
11:34:37  <dominictarr>encrypted by the client?
11:34:45  <isaacs>i dunno, maybe
11:34:49  <dominictarr>+ authorization?
11:35:02  <dominictarr>encrypted on the client is totally in right now
11:35:02  <isaacs>the biggest change will have to be that the client auths when presented with a 401 response
11:36:03  <dominictarr>yes. and maybe provide multiple auths
11:36:24  <dominictarr>actually. I like joyent's http-signature
11:36:38  <dominictarr>obviously the right approach.
11:37:04  <dominictarr>that is approaching signed packages, which I'd also like to see.
11:39:02  <isaacs>signed packages wouldn't be terribly hard to do now
11:39:13  <isaacs>but yeah, http-sig would also be great.
11:39:27  <dominictarr>yeah, you'd just put the public keys in the user doc
11:39:48  <isaacs>and have some protection against changing them
11:40:02  <isaacs>like, if a pubkey is changed, email the user to tell them it happened, is probably enough
11:40:14  <isaacs>we already protect email like that
11:40:16  <dominictarr>and you should sign the new key
11:40:30  <dominictarr>so prove you own it, at least
11:40:45  <isaacs>oh, you mean, upload the key + the signature of the pubkey?
11:40:59  <dominictarr>yes.
11:41:01  <isaacs>i dunno. that sounds kinda annoying
11:41:13  <isaacs>i'd like a button that you can click to import your github keys
11:41:15  <dominictarr>it would just be a single command
11:41:49  <dominictarr>npm authkey < ~/.ssh/id_rsa
11:42:03  <dominictarr>no, pass in the path...
11:42:41  <dominictarr>I have ideas about this... but they aren't implemented yet. email is simple for now.
11:43:15  <dominictarr>just described: https://github.com/hij1nx/pkp/issues/9
11:43:59  <dominictarr>I think you'd need to sign the tarball, and the deps.
11:44:18  <dominictarr>(at least if you had a smart server that did the dependency resolution for you)
11:44:51  <dominictarr>it would return a shrinkwrap tree with inline signatures, so you could verify everything locally
11:45:22  <dominictarr>and the dependencies are an essential part of that.
11:46:45  <dominictarr>hmm, or, you could just check the resolved dependencies against the package.jsons that you unpack.
11:53:29  <isaacs>the only thing you should ever have to sign is the shasum of a package.
11:53:33  <isaacs>never the whole package.
11:53:46  <isaacs>that's absurdly expensive, and only increases the attack surface area to eventually crack your key.
11:54:06  <isaacs>you never ever sign big messages with a private key. if you want to sign a big message, you generate a shared secret using Diffie-Hellman, and use that.
11:54:41  <isaacs>when you publish, along with the dist.tarball and dist.shasum, there'll be a dist.signature
11:54:46  <isaacs>and that's the signature of the shasum
11:56:07  <dominictarr>well, of course you hash the thing you are signing, and then sign that. That goes without saying.
11:57:08  <dominictarr>so, while we are on the subject of signing...
11:57:57  <dominictarr>I've been using node-tar with the timestamps all set to zero (0 - timezoneoffset, to get true unix zero)
11:58:14  <isaacs>k
11:58:24  <dominictarr>so that you can pack and repack a package, and the hash is determined by the contents
11:58:28  <isaacs>so, 1970-01-01T00:00:00Z so to speak?
11:58:30  <dominictarr>not when you hash it.
11:58:43  <dominictarr>isaacs, exactly. the beginning of time.
11:58:44  <isaacs>hm. interesting
11:58:58  <dominictarr>although, I have been considering a new zero in 2009.
11:59:06  <isaacs>hahah
11:59:10  <isaacs>Feb 16?
11:59:15  <isaacs>or Sept 29?
11:59:16  <dominictarr>(the birth year of both node and bitcoin)
11:59:22  <isaacs>and npm!
11:59:31  <dominictarr>indeed!
12:00:03  <dominictarr>feb 16 is the first commit in node?
12:00:34  <dominictarr>that is a pretty reasonable zero, because you can't have possibly written a node module before that time.
12:01:02  <dominictarr>it doesn't really matter when the zero is, because the files get a new created time when they are unpacked.
12:01:56  <dominictarr>some things that look at the timestamps of the files to decide whether or not to freshen wouldn't work.
12:02:10  <dominictarr>but these things don't work in a distributed system anyway, which is why we use git.
12:03:34  <dominictarr>I guess if we are gonna do something like that, we should probably choose something that is an in-joke, or some outlandish statement.
12:03:59  <isaacs>sure, sure
12:04:14  <isaacs>well, you know, node-tar WILL set the mtime and atime i think, if you let it
12:04:22  <dominictarr>and the assertion that 2009 is a new era, as if node : unix :: new testament : old testament
12:04:51  <dominictarr>yes, I override it.
12:06:46  <isaacs>k, i see
12:06:53  <isaacs>so the unpack time is always now
12:06:57  <isaacs>and the pack time is always 0
12:07:15  <isaacs>i dunno how i feel about that
12:07:20  <isaacs>but i guess determinism there is nice
12:07:34  <isaacs>otoh, it's nice to know that it's definitely not the SAME thing, if you packed it on a different day
12:07:41  <isaacs>it's like a freshness seal
12:07:43  <dominictarr>https://github.com/dominictarr/npmd-pack/blob/master/index.js#L23-L31
12:08:34  <dominictarr>determinism is so helpful in distributed systems
12:08:47  <dominictarr>also, in cases like, say there is a url dep
12:08:55  <dominictarr>and you do npm shrinkwrap
12:09:21  <dominictarr>(and lets say that shrinkwrap includes the tarball)
12:09:27  <dominictarr>I mean hash(tarball)
12:09:57  <dominictarr>then you can transform the tarballs to remove the dates.
12:10:33  <dominictarr>then if you install it, and your friend installs it, you can see that you installed the same thing.
12:11:14  <dominictarr>you could have it error if you didn't get the same thing, rather than trusting a url
12:11:38  <dominictarr>(btw, github tarballs are always different, freshly packed)
12:12:02  <dominictarr>best to transform them...
12:12:20  * mikolalysenko joined
12:13:26  <dominictarr>then npm will be good enough to deploy with, and you won't have to check in deps, just check in the shrinkwrap.json
12:15:54  <dominictarr>here is a fascinating article on the importance of determinism re: security: https://blog.torproject.org/blog/deterministic-builds-part-one-cyberwar-and-global-compromise
12:16:52  <isaacs>yeah
12:17:05  <isaacs>i guess that's a good idea anyway. i'd take a patch to add that to npm
12:17:13  <isaacs>it rebuilds packages anyway
12:17:13  <dominictarr>sweet!
12:18:18  <dominictarr>I'd leave the packages that already match the shasum, (so, use deterministic packing when you publish them)
12:18:53  <isaacs>well, it checks the shasum on download, and then repacks them anyway
12:19:02  <isaacs>may as well just start always packing with 0-date
12:19:11  <isaacs>also, should be epoch
12:19:22  <isaacs>node comes not to destroy the prophecy, but to fulfill it
12:19:38  <dominictarr>haha, yes.
12:20:26  <dominictarr>so, this is the most important thing in all this, migrating to a content store as cache.
12:20:45  <dominictarr>you'd still keep the {modulename}/.cache.json the same, though
12:21:14  <dominictarr>so it's possible to resolve offline, and can optimistically skip some requests on publish sometimes.
12:21:32  <dominictarr>and of course, not download things twice.
12:21:33  <isaacs>right
12:23:20  <dominictarr>this means rewriting the cache, but also means a chance to squash the repack thing too.
12:23:43  * fronx joined
12:24:04  <rowbit>Hourly usage stats: [developer: 0, free: 9]
12:28:52  <dominictarr>hmm, so the npm cache api is mostly... like cache.add({url,module@version}) and cache.get({url,module@version}, cb) ?
12:29:56  <dominictarr>ah, it's cache.read
12:30:27  <dominictarr>so that would need to change to cache.read(hash)
12:32:09  <dominictarr>and should return a stream
12:32:49  <isaacs>if you want to completely rip out the existing cache usage entirely, and make that a separate package that npm depends on, be my guest.
12:33:20  <dominictarr>perfect.
12:33:30  <isaacs>note that it's currently *both* a local package tarball repository, and a registry etag repository
12:34:09  <dominictarr>right, but with this there is more of a distinction between tarballs and metadata.
12:34:12  <isaacs>also, currently you can do stuff like set the cache-min-timeout to a big number, and then it won't even look for a 304
12:34:28  <isaacs>it'll just say, "do i have anything that was at that url any time in the last N"
12:34:46  <dominictarr>what if there is nothing?
12:35:01  <isaacs>so it might be worthwhile to have at least a reference from /registry.npmjs.org/foo/-/foo-1.2.3.tgz/... to some specific hash
12:35:07  <isaacs>ie, reinvent symlinks
12:35:09  <isaacs>like git does
12:35:20  <isaacs>if you don't have anything there, then fetch it
12:35:32  <isaacs>but if you do have something, and otherwise it seems fine, just take it
12:35:44  <dominictarr>I think you'd need something like that for giturls, etc
12:35:52  <isaacs>but what i'm saying is, you don't always have the shasum along with the url, and sometimes still need to avoid fetching
12:36:19  <isaacs>in the package management game, every conceivable edge case is someone's build-breaking critical dependency
12:36:23  <isaacs>it sucks :)
12:36:29  <dominictarr>like, if this dep is used more than once in this installation.
12:37:07  <dominictarr>currently imagining a venn diagram
12:37:33  <isaacs>right
12:38:01  <dominictarr>one circle is "problems that turn out harder than they look", and the other is "problems that turn out easy"
12:38:18  * fronx quit (Remote host closed the connection)
12:38:41  <dominictarr>except problems that turn out easy is tiny, and the circles are tangential, but don't overlap.
12:38:51  <isaacs>they overlap a little
12:38:57  <isaacs>sometimes things are harder than they look, but still pretty easy
12:39:05  <isaacs>just, not trivial
12:40:29  <isaacs>dominictarr: hey, where are you right now, on earth?
12:40:41  <isaacs>speaking of distributed systems...
12:40:48  <isaacs>can you do this? dig +short a.sni.fastly.net
12:41:18  <isaacs>and then, take the url you get, and put a line in /etc/hosts: $that_url_you_got registry.npmjs.org
12:41:32  <isaacs>for me, it's 199.27.77.162 registry.npmjs.org
12:43:58  <isaacs>i'm gonna go to bed, but when i get up, I'll try flipping the switch to make the registry go to fastly again
12:44:04  <dominictarr>isaacs, I'm in Phnom Pehn, Cambodia
12:44:09  <isaacs>perfect
12:44:21  <isaacs>it'll probably make the registry a lot faster
12:44:34  <isaacs>and probably it won't be the same IP address i'm getting
12:45:09  <dominictarr>what is the dig command?
12:46:01  * yorick joined
12:46:20  <dominictarr>oh, this is for the deps in CDN
12:47:28  <dominictarr>hmm, I gotta go, this place is closing. catch you later!
12:50:55  <isaacs>dominictarr: the dig command is to get the local url for fastly
12:51:12  <isaacs>er, IP address for fastly's closest POP, rather
12:51:25  <isaacs>for me, it's in san jose or lax, most of the time
12:52:01  * dominictarr quit (Ping timeout: 272 seconds)
12:52:03  * isaacs&
12:52:03  <LOUDBOT>I AM GONNA HAVE TO HAVE THAT BITCH SERVED
12:53:01  <rvagg>sleep dude
13:24:04  <rowbit>Hourly usage stats: [developer: 0, free: 21]
13:48:23  * thlorenz joined
14:05:23  * indexzero quit (Quit: indexzero)
14:12:52  * kevino80 joined
14:13:29  * dominictarr joined
14:23:46  * jcrugzz joined
14:24:04  <rowbit>Hourly usage stats: [developer: 0, free: 28]
14:53:37  * dominictarr quit (Ping timeout: 272 seconds)
14:59:45  * dominictarr joined
15:16:38  * pfraze joined
15:24:04  <rowbit>Hourly usage stats: [developer: 1, free: 18]
15:34:30  <dominictarr>so, currently implementing a streaming sha in javascript... this is extremely fiddly!
15:40:46  <johnkpaul>Is there a good convention in node world for how to handle package.json configuration that's environment specific?
15:42:00  <johnkpaul>I want to allow for config based on NODE_ENV and I can't think of anything but splitting up the properties in the package.json, but that seems ugly
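No one answers johnkpaul in the channel; one possible shape for what he describes — properties keyed by NODE_ENV inside package.json — is sketched below (the `config` field layout and values are hypothetical, not a node convention):

```javascript
// Hypothetical layout: a `config` field in package.json keyed by NODE_ENV.
const pkg = {
  config: {
    development: { apiUrl: 'http://localhost:3000' },
    production: { apiUrl: 'https://api.example.com' },
  },
};

// In real code this would be: const pkg = require('./package.json');
const env = process.env.NODE_ENV || 'development';
const config = pkg.config[env] || pkg.config.development;
console.log(config.apiUrl);
```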
16:00:46  * dominictarr quit (Ping timeout: 246 seconds)
16:03:20  * jibay joined
16:10:18  <rowbit>substack, pkrumins: These encoders are STILL down: 50.57.103.88(dev)
16:14:25  <johnkpaul>substack: https://github.com/johnkpaul/promethify outputDir work finished. I'm pretty happy with the configuration options
16:18:22  * Maciek416 joined
16:19:04  <substack>cool!
16:20:59  <substack>johnkpaul: did a once-over, looks really good!
16:21:31  <substack>and the examples provide enough information to dive right in
16:23:34  <johnkpaul>thanks!
16:23:50  <johnkpaul>I think that the new first example makes it really straightforward what the goal is
16:24:04  <rowbit>Hourly usage stats: [developer: 0, free: 24]
16:24:31  <johnkpaul>I have intentions to write a blog post sometime before I have to go back to work
16:24:37  <johnkpaul>covering both promethify and requireify
16:32:59  <johnkpaul>substack: I used a combination of https://gist.github.com/SlexAxton/4989674 and quicktime's included screen recording to make https://dl.dropboxusercontent.com/u/21266325/requireify.gif
16:33:32  <johnkpaul>it's not _one_ tool, but it was very straightforward
16:35:47  <substack>it sucks that I can't seem to find a converter from scriptreplay timing files to gifs directly
16:38:04  <greweb>johnkpaul: aha that's funny, I just did that too 2 hours ago, for work :D
16:38:45  <greweb>and nice tool btw, I'll use it because I always do it by hand xd
16:45:34  <johnkpaul>greweb: ah cool. it wasn't nearly as complicated as I was expecting
16:45:45  <johnkpaul>my demo wasn't just a terminal though, I needed to show the browser console too
16:46:51  <substack>byzanz-record is working well so far
17:00:14  <jcrugzz>dominictarr: can't we do that with the node crypto api? unless i'm missing your meaning of streaming sha
17:03:28  <rowbit>substack, pkrumins: Encoders down: 50.56.27.70 (dev-ie6-1), 50.57.171.229 (dev-ie6-2), 184.106.106.46 (dev-ie8-1), 50.57.174.105 (dev-ie8-2), 50.56.64.186 (dev-ie8-3)
17:05:22  * AvianFlu joined
17:07:07  * kevino80 quit (Remote host closed the connection)
17:22:49  * mikolalysenko quit (Ping timeout: 272 seconds)
17:23:38  <isaacs>substack: landed your thing and updated npm-registry-mock so that it's hella easy to add new fixtures.
17:24:04  <rowbit>Hourly usage stats: [free: 16]
17:34:09  <substack>hooray!~
17:35:22  * mikolalysenko joined
17:38:07  <isaacs>substack: are you still using the cdn IP address?
17:38:24  <isaacs>substack: i've rejiggered it to make logins and fast-repeated-publishing work properly
17:38:49  <isaacs>i'm gonna flip the switch soon. just wondering if you'd seen any other new issues.
17:44:03  * kevino80 joined
17:51:53  <substack>yes I'm still on it
17:52:12  <substack>nope, no issues except for the cache updates being slow after publishing
17:58:54  <isaacs>substack: that should be much better now
17:59:01  <isaacs>substack: the TTL on any json response is 1s
17:59:27  <isaacs>substack: for tgz's, it's much longer, but that'll only be a problem with force-publishing, which i'm considering just disabling anyhow
18:00:07  <isaacs>eg, you'll be able to unpublish a version, but still not able to re-publish that same version, ever. once used up, you MUST bump the version number
18:13:29  * jcrugzz quit (Ping timeout: 272 seconds)
18:13:55  <substack>hooray!
18:14:53  <chapel>isaacs: thats probably for the best
18:15:02  <chapel>more friction for the publisher, but better for users
18:15:05  <isaacs>yeah
18:15:13  <substack>isaacs: https://github.com/substack/faucet
18:15:21  <chapel>prevents breaking builds months later
18:15:22  <isaacs>chapel: of course, you can still delete the whole package, and then publish whatever the hell you want
18:15:22  <substack>tap-related, relevant to your interests
18:15:33  <substack>plus, I just published that on the IP npm
18:15:35  <chapel>isaacs: yeah, can only prevent so much
18:15:44  <isaacs>substack: oh, that is nice
18:15:54  <isaacs>chapel: well, i could make it so that only admins can completely delete packages
18:15:54  <substack>made it for defunctzombie: ^^^^
18:16:09  <isaacs>chapel: so that once a name is taken, it's taken for good
18:16:20  <defunctzombie>substack: \o/
18:16:39  <defunctzombie>humanity is saved!!
18:17:14  <substack>defunctzombie: I used your browser-resolve test suite for the mocha example gif :p
18:17:14  <defunctzombie>tap tests all the way now with such amazing pretty output :)
18:17:21  <isaacs>chapel: but that makes it harder for people to hand over names, i guess
18:17:21  <defunctzombie>hahaha nice!
18:17:31  <isaacs>and easier for squatters to do more damage
18:18:04  <defunctzombie>I should update zuul to make the tap ui prettier
18:18:06  <chapel>isaacs: not sure what is right, for users, locked down is best, for owners, its harder
18:18:08  <defunctzombie>in browser
18:18:30  <chapel>but then again transferring modules is probably an edge case more so than anything else
18:19:06  * shama joined
18:19:21  <isaacs>yeah
18:19:43  <isaacs>it would be nice to be able to say that pkg@version is guaranteed to be only one thing, ever, and if that thing isn't there, it's a 404, forever.
18:20:53  <chapel>that makes sense to me
18:20:56  <substack>defunctzombie: you could use thlorenz's ansi rendering module and just pipe faucet ansi output to the browser >:D
18:21:08  <defunctzombie>yep, I think I might do that
18:21:14  <defunctzombie>or get thlorenz to do it :p
18:21:30  * jcrugzz joined
18:23:07  * thlorenz quit (Remote host closed the connection)
18:24:04  <rowbit>Hourly usage stats: [free: 27]
18:33:25  <substack>thinking I should get code coverage working for faucet too
18:33:33  <substack>because now I have all the machinery to do that pretty easily
18:35:28  <rowbit>substack, pkrumins: Encoders down: 184.106.106.66 (dev-ie7-1)
18:37:30  <grncdr>substack: seems like covert tests/*.js | faucet just works
18:37:42  <grncdr>haven't tried other incantations yet
18:38:31  <substack>because unix!
18:39:07  <grncdr>indeed
18:39:41  <grncdr>just checked, I still get the covert error output as well (for when things aren't covered)
18:39:43  <substack>still a little bit of work needed to make the output prettier when combined with coverify
18:39:56  <substack>but the basic plumbing works
18:40:05  <grncdr>sure, there's also the exit codes to contend with
18:40:12  <grncdr>because unix :(
18:40:34  <substack>set -o pipefial
18:40:38  <substack>set -o pipefail
18:41:22  <grncdr>npm test don't care
18:41:30  <grncdr>cause subshells etc
18:41:46  <substack>`npm test` needs pipefail :(
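The pipefail behavior substack is invoking: by default a pipeline's exit status is that of its last command, so a failing test runner piped into a prettifier looks green; `set -o pipefail` propagates the failure (stand-in commands below, not covert/faucet themselves):

```shell
#!/usr/bin/env bash
# By default a pipeline's exit status is that of its LAST command,
# so a failing `covert` piped into `faucet` would still exit 0.
set -o pipefail

# Stand-ins: `false` for a failing test command, `cat` for the prettifier.
false | cat
status=$?
echo "pipeline exit: $status"
```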
18:42:59  <grncdr>yep, and now that I've got all the any-db stuff out in the wild @ 2.1 I'm going to get back on bash ports
18:48:58  <grncdr>also: https://twitter.com/grncdr/status/416279393707622401
19:19:28  * thlorenz joined
19:24:04  <rowbit>Hourly usage stats: [free: 29]
19:24:10  <defunctzombie>I feel like we need that dog meme with the pretty dog and some text saying
19:24:14  <defunctzombie>sooo unix
19:24:18  <defunctzombie>or something like that
19:24:19  <defunctzombie>haha
19:36:45  * calvinfo joined
19:37:09  <thlorenz>defunctzombie: do what?
19:40:06  <grncdr>http://hackingdistributed.com/2013/12/26/introducing-replicant/ interesting, might be nice to have node bindings for that one
19:58:31  * jcrugzz quit (Ping timeout: 260 seconds)
19:59:51  * jcrugzz joined
20:08:13  * ELLIOTTCABLE__ quit (Ping timeout: 246 seconds)
20:10:14  * ELLIOTTCABLE__ joined
20:17:06  <rowbit>/!\ ATTENTION: (default-local) dunbarb2@... successfully signed up for developer browserling plan ($20). Cash money! /!\
20:17:06  <rowbit>/!\ ATTENTION: (default-local) paid account successfully upgraded /!\
20:18:58  <rowbit>substack, pkrumins: Encoders down: 50.56.64.186 (dev-ie8-3)
20:19:58  <rowbit>substack, pkrumins: A developer is waiting in the queue for iexplore/9.0
20:20:54  <grncdr>isaacs: the tarball for a package I published last night appears to be missing: https://registry.npmjs.org/prepend-listener/-/prepend-listener-0.0.0.tgz
20:21:05  <grncdr>the publish succeeded, http://npm.im/prepend-listener
20:21:12  <isaacs>grncdr: seems to be there to me?
20:21:21  <grncdr>orly?
20:21:52  <isaacs>$ curl -I https://registry.npmjs.org/prepend-listener/-/prepend-listener-0.0.0.tgz
20:21:52  <grncdr>travis-ci can't find it either: https://travis-ci.org/grncdr/node-any-db-transaction/builds/16000027#L117
20:21:55  <isaacs>HTTP/1.1 200 OK
20:21:56  <LOUDBOT>DON'T SWEAT IT MAAAAN
20:22:20  <grncdr>thanks LOUDBOT
20:22:24  <grncdr>curl https://registry.npmjs.org/prepend-listener/-/prepend-listener-0.0.0.tgz
20:22:24  <grncdr>{"error":"not_found","reason":"missing"}
20:22:59  * calvinfo quit (Quit: Leaving.)
20:23:39  <isaacs>grncdr: that's weird.
20:23:41  <isaacs>one sec
20:23:53  * calvinfo joined
20:24:04  <rowbit>Hourly usage stats: [developer: 3, free: 40]
20:24:09  <isaacs>grncdr: try now?
20:25:12  <grncdr>seems better locally, I'll restart the travis build...
20:26:58  <grncdr>that worked too
20:27:00  <grncdr>what changed?
20:28:13  * calvinfo quit (Ping timeout: 252 seconds)
20:34:08  * calvinfo joined
20:36:28  <rowbit>substack, pkrumins: A developer is waiting in the queue for iexplore/9.0
20:37:06  <isaacs>grncdr: i clicked "save" a bunch of times in futon to make the replicator pick up a new version of it
20:37:28  <rowbit>substack, pkrumins: A developer is waiting in the queue for iexplore/9.0
20:38:28  <rowbit>substack, pkrumins: A developer is waiting in the queue for iexplore/9.0
20:39:58  <rowbit>substack, pkrumins: A developer is waiting in the queue for iexplore/9.0
20:40:58  <rowbit>substack, pkrumins: A developer is waiting in the queue for iexplore/9.0
20:45:28  <rowbit>substack, pkrumins: A developer is waiting in the queue for iexplore/9.0
20:46:07  * jcrugzz quit (Ping timeout: 260 seconds)
20:46:13  <pkrumins>jeez
20:48:07  <grncdr>rowbit's had a lot of caffeine today :P
20:48:38  <pkrumins>he's hyper
20:50:54  * jcrugzz joined
20:51:28  <rowbit>substack, pkrumins: A developer is waiting in the queue for iexplore/9.0
20:56:58  <rowbit>substack, pkrumins: A developer is waiting in the queue for iexplore/9.0
20:57:46  <pkrumins>chaddap
21:02:30  * calvinfo quit (Quit: Leaving.)
21:13:53  * st_luke quit (Remote host closed the connection)
21:19:34  * indexzero joined
21:24:04  <rowbit>Hourly usage stats: [developer: 19, free: 30]
21:34:34  * intabulas joined
21:43:00  <defunctzombie>thlorenz: make the tap support in zuul prettier :)
21:44:13  * kevino80 quit (Remote host closed the connection)
21:47:01  <thlorenz>defunctzombie: on the browser side?
21:47:05  <defunctzombie>yea
21:47:35  <thlorenz>yeah, you could use hypernal (what substack was referring to), but it's huge, so it would increase bundle time tremendously
21:47:45  <defunctzombie>ah
21:47:57  <rowbit>substack, pkrumins: Encoders down: 50.56.64.186 (dev-ie8-3)
21:47:58  <thlorenz>there oughta be a simpler solution for just the tap thing
21:48:11  <thlorenz>i.e. a browser version of faucet
21:48:24  <thlorenz>instead of piping faucet into hypernal
21:49:12  <thlorenz>defunctzombie: substack isn't there a tap parser out there already? i.e. isaacs tap must be using one
21:49:27  <defunctzombie>yea
21:49:31  <defunctzombie>thlorenz: already using one in zuul
21:49:39  <thlorenz>so we'd build some Tap syntax tree from that and rerender with html elements
21:49:40  <defunctzombie>I will just poke at it later
21:49:44  <defunctzombie>yep
21:49:45  <thlorenz>cool
21:50:01  <thlorenz>defunctzombie: needs to be streaming though
21:50:03  <substack>faucet uses https://github.com/substack/tap-parser
21:50:11  <defunctzombie>I think I use tap-parser as well
21:50:26  <thlorenz>ah, cool is that streaming substack?
21:50:37  <substack>yes
21:50:41  <substack>it's a writable stream
21:50:44  <substack>and it emits events
21:51:00  <thlorenz>cool, then the rest should be fairly simple
21:51:31  <defunctzombie>right now I am deep in the bowels of my cordova app :)
21:51:39  <defunctzombie>js all the things
21:51:47  <thlorenz>wouldn't a simple document.write work with a tap parser transform in the middle?
21:52:16  <thlorenz>i.e. tape output | tap parser | document.write
21:52:34  <thlorenz>substack: is testling using document.write or creating elements?
21:53:03  <substack>creating elements
21:53:27  <defunctzombie>all the cool kids create elements
21:57:29  <thlorenz>ok we'll do it the cool way
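("The cool way" might look something like this hypothetical renderer: one created element per parsed assert, no document.write. The `doc` parameter is whatever provides createElement, i.e. the real `document` in a browser; names here are illustrative, not zuul's actual code.)

```javascript
// Render one parsed TAP assert by creating an element instead of
// writing markup strings into the page.
function renderAssert(doc, container, a) {
  var div = doc.createElement('div');
  div.className = a.ok ? 'assert pass' : 'assert fail';
  div.textContent = (a.ok ? 'ok' : 'not ok') + ' ' + a.id + ' ' + a.name;
  container.appendChild(div);
  return div;
}
```

In a browser it would hang off the parser: `parser.on('assert', function (a) { renderAssert(document, resultsEl, a); })`.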
21:58:08  <thlorenz>defunctzombie: I could take a stab at it tomorrow or this weekend, could you point me to some sample zuul tap tests you want to make pretty?
21:58:37  <defunctzombie>thlorenz: any of substack's repos have tape tests
21:58:49  <defunctzombie>thlorenz: I don't have any repos with zuul and tap stuff right now
21:58:53  <defunctzombie>cause it wasn't well supported
21:58:53  <thlorenz>ah, but never used tap with zuul
21:59:01  <defunctzombie>ui: tape
21:59:06  <defunctzombie>that is all
21:59:11  <defunctzombie>instead of mocha or whatever
21:59:26  <thlorenz>awesome! wasn't aware that's in there already
21:59:34  <defunctzombie>I hacked it in
21:59:45  <defunctzombie>but it is ghetto I think haha
21:59:46  <thlorenz>may make us consider using tape instead of mocha for some things
21:59:59  <thlorenz>well, we'll make it look less ghetto
22:01:13  <defunctzombie>yep :)
22:01:21  <defunctzombie>I would deff start using tape for more things
22:01:35  <defunctzombie>since I already write using mocha-qunit style
22:08:16  * indexzero quit (Ping timeout: 240 seconds)
22:11:31  <thlorenz>defunctzombie: I think you are a hopeless case for tape since you are afraid of the `t` ;)
22:11:47  <defunctzombie>I am very afraid of the 't'
22:11:56  <defunctzombie>but luckily I can rename the t to assert
22:12:03  <substack>test(function (assert) {})
22:12:04  <defunctzombie>calling it 't' is promoting very bad habit
22:12:05  <substack>yep
22:12:15  * substack likes the `t`, whatevs
22:12:17  <thlorenz>i.e. can't grep it :) oh maybe you *could* grep /function \(t\) {/
22:12:28  * thlorenz likes it too
22:14:02  <thlorenz>defunctzombie: however calling it assert if you prefer more desc vars is totally fine, but if test functions are short enough using just t is fine IMHO
22:16:48  * intabulas quit (Quit: Leaving...)
22:24:04  <rowbit>Hourly usage stats: [developer: 1, free: 38]
22:24:41  <chrisdickinson>just use `jik 'id[name=t]` :)
22:35:32  <thlorenz>chrisdickinson: what kinda vim magic is that? jik ??
22:35:52  <chrisdickinson>thlorenz: ah, sorry: http://npm.im/jik
22:36:06  <thlorenz>ah, nice!
22:36:33  <thlorenz>the fact that 'jk' was in there turned my vim filter on ;)
22:36:35  <chrisdickinson>you can get more context using ! as well: `jik '!* * id[name=t]` selects the 2nd parent of any identifier named "t"
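(What `!* * id[name=t]` selects can be sketched as a plain tree walk: for every identifier named "t", return its second ancestor. This is an illustration of the selector's meaning over a toy `children`-based tree, not jik's or cssauron-falafel's actual implementation.)

```javascript
// Collect the grandparent (2nd parent) of every Identifier named "t",
// mimicking what the jik selector `!* * id[name=t]` would select.
function grandparentsOfT(node, parent, grandparent, out) {
  out = out || [];
  if (node.type === 'Identifier' && node.name === 't' && grandparent) {
    out.push(grandparent);
  }
  var kids = node.children || [];
  for (var i = 0; i < kids.length; i++) {
    grandparentsOfT(kids[i], node, parent, out);
  }
  return out;
}
```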
22:37:55  <thlorenz>chrisdickinson: I was playing with somewhat related idea way back, never finished, guess I can unpub this now ;) https://npmjs.org/package/see
22:38:19  <chrisdickinson>ah cool :)
22:38:52  <thlorenz>yeah, jik is cooler and it actually works :)
22:41:06  <thlorenz>chrisdickinson: actually I'm getting ENOENTs
22:41:13  <chrisdickinson>wuh-oh
22:41:22  <thlorenz>also can this work recursively for multiple files?
22:42:25  <chrisdickinson>thlorenz: by default it just does the equivalent of an `ls -R`, but you do have the ability to tell it to follow paths
22:42:36  <chrisdickinson>really that should be a flag -- `--follow-requires`
22:42:49  <thlorenz>yeah that'd be nice
22:43:12  <thlorenz>so cssauron similar to JSONSelect right (think I asked you that before)
22:43:22  <chrisdickinson>yep
22:43:32  <thlorenz>ok
22:43:53  <chrisdickinson>cssauron is an api for building a css selector api for any given nested tree structure
22:43:59  <thlorenz>cool so giving it my current dir '.' worked
22:44:01  <chrisdickinson>in this case jik uses cssauron-falafel
22:44:13  <chrisdickinson>jik --help is pretty full-featured
22:44:27  <thlorenz>just when I give it one file it throws an error, except it should be ignoring node_modules by default probably ;)
22:44:44  <chrisdickinson>hmm
22:44:56  <chrisdickinson>is the file a symlink?
22:45:50  <thlorenz>nope
22:45:59  <thlorenz>hey, but this works real nice: jik '!* * id[name=t]' ./test | pygmentize -O style=monokai -f console256 -g
22:46:34  <thlorenz>that way I can actually read the code :)
22:46:40  <chrisdickinson>nice :)
22:47:16  <chrisdickinson>you can also do arbitrary commands on the nodes using `jik '!* * id[name=t]' '{print($POS, $NODE)}' ./test`
22:47:28  <thlorenz>awesome tool!
22:47:53  <thlorenz>do you want me to file an issue about the single file error (it's pretty consistent)
22:49:46  <thlorenz>chrisdickinson: actually not sure, but it seems to have fixed itself :)
22:56:30  <chrisdickinson>thlorenz: that'd be awesome! thanks!
22:57:09  <thlorenz>chrisdickinson: not filing issue since it works now ;) not sure what was going on before, but if I see it again, I will
23:02:29  <chrisdickinson>thanks!
23:05:58  * jlord quit (Read error: Operation timed out)
23:06:06  * jlord joined
23:16:48  <isaacs>you can grep on t just fine: /\bt\b/
23:17:14  <isaacs>or in vim, <t>
23:18:00  <thlorenz>ah, nice - didn't know that
23:18:18  <thlorenz>^ defunctzombie see ? :)
23:22:33  <isaacs>oh, i think in vim it has to be either \s<t> or \<t\>
23:22:44  <isaacs>as long as you don't use t for anything ELSE, of course.
23:22:57  <isaacs>i like "assert" meaning asserts, not test cases.
23:23:21  <isaacs>assert = "if this isn't true, i cannot proceed, please throw an error, abort, dump core, flip over the server, set fire to the data center, as appropriate"
23:23:46  <isaacs>t.ok(cond) = "if this isn't true, the test doesn't pass, but you should still check the other things"
23:24:03  <rowbit>Daily usage stats: [developer: 17, free: 99]
23:24:04  <rowbit>Hourly usage stats: [developer: 0, free: 31]
23:24:10  <isaacs>also, t.* in tap-generating tests is an homage to perl, where tap is generated by *.t files
23:25:08  <jesusabdullah>suuuure
23:25:37  <isaacs>between atomic publishes and the fastly cdn, pushing a new version to npm is suddenly WAY faster than pushing the tagged commit to github
23:29:32  * mikolalysenko quit (Ping timeout: 252 seconds)
23:29:50  <jesusabdullah>isaacs: I found something, speaking of tap http://jc.ngo.org.uk/trac-bin/trac.cgi/browser/nik/libtap/trunk/src/tap.c
23:30:21  <isaacs>neato
23:30:22  <jesusabdullah>isaacs: trying to decide if it's even worthwhile, I mean, the important part of tap is in the report not the testing api no?
23:30:41  <jesusabdullah>isaacs: http://www.jera.com/techinfo/jtns/jtn002.html I thought this was pretty sweet
23:30:44  <isaacs>the important part is generating tap output
23:30:55  <jesusabdullah>which is what I mean by "the report"
23:32:01  <isaacs>right
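("Generating tap output" is a small job in any language; a minimal hypothetical reporter in JS might look like this — just a plan line plus one ok/not ok line per result.)

```javascript
// Emit a minimal TAP report for an array of { ok, name } results.
function tapReport(results) {
  var lines = ['TAP version 13', '1..' + results.length];
  results.forEach(function (r, i) {
    lines.push((r.ok ? 'ok ' : 'not ok ') + (i + 1) + ' ' + r.name);
  });
  return lines.join('\n');
}

var out = tapReport([
  { ok: true, name: 'parses' },
  { ok: false, name: 'renders' }
]);
// out:
// TAP version 13
// 1..2
// ok 1 parses
// not ok 2 renders
```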
23:34:23  <thlorenz>jesusabdullah: I made a tap.c thing you can install with clib
23:34:36  <thlorenz>basically just forked another tap C impl
23:34:55  <thlorenz>https://github.com/thlorenz/tap.c
23:35:48  <thlorenz>I tried to make something like this work with substack's dotc, but it kinda doesn't handle macros very well
23:37:09  * mikolalysenko joined
23:40:54  <thlorenz>jesusabdullah: if you like minimal have you seen http://c.learncodethehardway.org/book/ex30.html ?
23:40:55  * calvinfo joined
23:41:16  <thlorenz>ah, it's the same thing as you linked :)
23:46:49  <substack>thlorenz: scoped macros would be cool
23:47:32  * calvinfo quit (Quit: Leaving.)
23:47:52  <thlorenz>substack: yeah, I decided to shelve this until I understand C a bit better, but making the dotc idea work would be nice