00:03:05  <brianc> rvagg: you around?
00:03:47  <brianc> is anyone around? hehe
00:50:00  <brianc> am I hellbanned in this room or is this room just dead?
00:50:06  <brianc> hehe
00:50:22  <kenansulayman> brianc no. there's just not much up
00:50:31  <brianc> i wrote something for leveldb today!
00:50:33  <brianc> https://github.com/brianc/node-level-readable
00:50:36  <brianc> i haven't documented it yet
00:50:46  <kenansulayman> Yes I saw the messages from levelbot
00:51:11  <kenansulayman> That is cool. What does it do?
00:51:12  <brianc> it's like multilevel's "createReadStream()" except it doesn't leak memory, it's much more efficient, and it seems to handle back pressure better as well
00:51:26  <kenansulayman> ^ juliangruber ^
00:51:27  <brianc> basically it's a replacement for the one part of multilevel that wasn't working properly
00:51:55  <brianc> multilevel is awesome and has full RPC and all these awesome features, but the readable stream in particular was causing me all kinds of grief yesterday
00:51:57  <rescrv> brianc: why not fix that one part?
00:52:37  <brianc> I don't think it's fixable exactly
00:52:38  <brianc> i could be wrong
00:52:56  <brianc> but multilevel uses rpc-stream to kinda expose your db instance across the network
00:53:05  <brianc> it does all its readable-stream stuff in object mode
00:53:23  <brianc> I'm actually going down into leveldown to grab binary out of leveldb directly
00:53:29  <kenansulayman> then propose it and create a PR ;)
00:53:35  <kenansulayman> wait, what?
00:53:38  <brianc> since it's going right back into binary when it's shot out over the network
00:54:15  <kenansulayman> is it compatible with different leveldowns?
00:54:22  <brianc> so it's not really a patch for multilevel as much as a high-performance alternative for one particular use case
00:54:31  <brianc> I'm not sure - I think so? About to test it with leveldown-hyper
00:54:38  <brianc> https://github.com/brianc/node-level-readable/blob/master/index.js#L57
00:54:42  <brianc> https://github.com/brianc/node-level-readable/blob/master/index.js#L23
00:54:47  <brianc> that's the leveldown stuff I use
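A minimal sketch of pulling raw binary straight out of leveldown, bypassing levelup's encoding layer, which is roughly what brianc describes doing here. It follows the classic leveldown API (open, iterator, next/end); the './mydb' location and the stdout sink are placeholders, not details from node-level-readable:

```js
var leveldown = require('leveldown')

var db = leveldown('./mydb')
db.open(function (err) {
  if (err) throw err
  // keyAsBuffer/valueAsBuffer keep keys and values as raw Buffers
  var it = db.iterator({ keyAsBuffer: true, valueAsBuffer: true })
  ;(function next () {
    it.next(function (err, key, value) {
      if (err) throw err
      // the callback fires with no key when the iterator is exhausted
      if (key === undefined) return it.end(function () { db.close(function () {}) })
      process.stdout.write(value) // value is a Buffer, ready to frame onto a socket
      next()
    })
  })()
})
```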
00:54:52  <kenansulayman> ^ juliangruber ^ x2
00:55:17  <kenansulayman> I suggest that you PR this into multilevel
00:55:21  <brianc> I hope I'm not coming across as like "I AM BETTER THAN MULTILEVEL" because that's absolutely not what I'm trying to say
00:55:26  <brianc> okay I'd love to do that
00:55:27  <kenansulayman> so that it gets the reach it deserves
00:55:40  <kenansulayman> if it _is_ better, say it
00:55:40  <brianc> I'll open an issue/PR there and start a discussion
00:56:08  <brianc> it's better if you want to stream 5 million decently large records without having your process get eaten by the OOM monster
00:56:39  <brianc> opening PR now. :p
00:56:42  <kenansulayman> How does it transfer the data?
00:56:48  <kenansulayman> wait I'll check out the source
00:57:07  <brianc> it uses a simple protocol based on the postgres client/server protocol
00:57:07  <kenansulayman> I see
00:57:20  <kenansulayman> ^ rescrv ^
00:57:27  <brianc> 1 byte for "type" which I'm not using currently, 4 bytes for length, and then the rest is just a buffer
00:58:10  <brianc> I am not supporting keyEncoding & valueEncoding yet. I just whipped this up today because I'm in turbo-crunch mode @ work and HAD to process these 5 million records and it was too slow to do in a single node process
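A hedged sketch of the framing brianc just described: one type byte, four length bytes, then the raw payload. The big-endian length and the helper names are assumptions, not lifted from node-level-readable:

```js
// build one framed message
function frame (type, payload) {
  var header = Buffer.alloc(5)
  header.writeUInt8(type, 0)              // 1 byte: message type (unused per brianc)
  header.writeUInt32BE(payload.length, 1) // 4 bytes: payload length
  return Buffer.concat([header, payload])
}

// incremental parser: feed it socket chunks, complete messages come out
function parser (onMessage) {
  var buffered = Buffer.alloc(0)
  return function push (chunk) {
    buffered = Buffer.concat([buffered, chunk])
    while (buffered.length >= 5) {
      var len = buffered.readUInt32BE(1)
      if (buffered.length < 5 + len) break // wait for the rest of the payload
      onMessage(buffered.readUInt8(0), buffered.slice(5, 5 + len))
      buffered = buffered.slice(5 + len)
    }
  }
}
```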
00:59:00  <rescrv> brianc: it's quicker to stream over the network than to process in the same process?
00:59:04  <rescrv> what processing are you doing?
00:59:19  <brianc> the processing takes a while for each record
00:59:35  <brianc> like 10-15 seconds
00:59:42  <rescrv> makes sense then
01:00:02  <brianc> yeah i was using SQS
01:00:22  <brianc> and the worker would just take a key for the record & fetch it from postgres & do some initial calcs before the "work" started
01:00:41  <brianc> but I was like "man, I'm gonna cache the first part in leveldb so workers can start hammering right on the work"
01:00:46  <brianc> and then I went down a rabbit hole
01:00:52  <brianc> you know how it goes
01:01:52  <rescrv> if it's just a cache, why not memcached?
01:02:19  <brianc> i have no good answer for that other than I wanted to try out something in production w/ leveldb
01:02:31  <brianc> and didn't want to learn how to install & use memcached, though in hindsight it might have been faster
01:02:43  <brianc> also: the dataset does not fit into RAM - i reached for redis at first
01:03:13  <rescrv> then maybe leveldb was the right choice, unless you can do a sliding window
01:03:20  <rescrv> it's just really expensive as a cache
01:04:04  <kenansulayman> rescrv he could use lmdb for that
01:04:16  <kenansulayman> it's quicker than Kyoto Cabinet for realtime caching
01:04:24  <brianc> I thought about lmdb but i thought it was also memory bound?
01:04:44  <kenansulayman> rescrv suggested memcached <= it's even in the name
01:04:55  <rescrv> lmdb is memory bound for sure
01:05:21  <brianc> I'm kinda using leveldb as a cache of the data + a work queue. I have a bunch of nodes connect to the master and all start reading from a readable stream at different points in the key space
01:05:22  <rescrv> I'm not sure what you're doing, but you may benefit from just writing to a file (possibly on a distributed file system), and then just storing record id/offset pairs in *db
01:05:35  <brianc> they incrementally do stuff to the data & put it back into leveldb
01:05:45  <rescrv> leveldb is really not good with really large objects
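A sketch of rescrv's suggestion: append the large records to a flat file and keep only small id -> {offset, length} pointers in the key/value store. 'db' is assumed to be an open levelup instance, 'records.dat' would be the flat file, and single-writer access is assumed:

```js
var fs = require('fs')

function appendRecord (db, file, id, buf, cb) {
  fs.open(file, 'a', function (err, fd) {
    if (err) return cb(err)
    fs.fstat(fd, function (err, stat) {
      if (err) return cb(err)
      var offset = stat.size // append position == current file size (single writer)
      fs.write(fd, buf, 0, buf.length, offset, function (err) {
        if (err) return cb(err)
        fs.close(fd, function () {
          // only the tiny pointer goes into leveldb
          db.put(id, JSON.stringify({ offset: offset, length: buf.length }), cb)
        })
      })
    })
  })
}
```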
01:06:11  <brianc> hmm mkay i'll check out some other options
01:07:18  <brianc> the leveldb database is not too big at this point - 80 gigs
01:08:20  <rescrv> if you're doing many overwrites at that size, you may see a speedup with leveldown-hyper
01:09:31  <rescrv> it'll give higher write throughput, which sounds like it might help you
01:09:44  <brianc> cool. :)
01:10:03  <rescrv> beware that if your current db has *.ldb files, you'll need to rename them to *.sst
01:10:17  <rescrv> which the stock version will happily read
01:11:17  <brianc> oh damn that's all you have to do to move to hyper?
01:11:26  <brianc> so they're binary compatible except for the naming conventions?
01:11:26  <rescrv> yes
01:11:32  * brianc is happy
01:11:33  <rescrv> and that'll be fixed soon enough
01:12:01  <rescrv> the upstream folks made the s/sst/ldb/ change to please the Redmond gods and hyper hasn't caught up yet
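A one-off migration sketch for the rename rescrv mentions; './mydb' is a placeholder, and it assumes the database is closed while you do this:

```js
var fs = require('fs')
var path = require('path')

var dir = './mydb'
fs.readdirSync(dir)
  .filter(function (f) { return /\.ldb$/.test(f) }) // stock leveldb table files
  .forEach(function (f) {
    // leveldown-hyper (at the time) expected the older *.sst extension
    fs.renameSync(path.join(dir, f), path.join(dir, f.replace(/\.ldb$/, '.sst')))
  })
```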
01:40:48  <levelbot> [npm] level-http@0.1.0 <http://npm.im/level-http>: HTTP wrapper for LevelDB databases. (@fiveisprime)
09:46:07  <dominictarr> jcrugzz, hey, how's it going?
09:46:54  <jcrugzz> dominictarr: just got into bangkok today from hua hin
09:46:58  <jcrugzz> what's up?
10:02:24  <dominictarr> jcrugzz, I'm in Vietnam now
10:03:00  <dominictarr> I've had an idea for the npm conflicts thing, that I'd like you to pass on to jhs, since he doesn't seem to be responding to the npm issue
10:03:33  <dominictarr> the actual publish load is quite light - like, one every few seconds MAX
10:04:09  <dominictarr> so a simple way to avoid conflicts might be to always proxy writes to the same database?
10:04:44  <dominictarr> the proxy would just check if it's a p{u,os}t and then push it to the current master
10:05:24  <dominictarr> (and the proxy could just poll the servers and always make the lowest hash(ip) of the servers that are currently alive the current master.)
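A rough sketch of that idea: a thin HTTP proxy that spreads reads across backends but pins every PUT/POST to one deterministic "master", the backend whose hashed address sorts lowest. The backend list, port, and absence of real health checks are all assumptions:

```js
var http = require('http')
var crypto = require('crypto')

var backends = ['10.0.0.1', '10.0.0.2', '10.0.0.3'] // hypothetical couch nodes

function master (alive) {
  // lowest hash(ip) among live servers wins, as dominictarr suggests
  return alive.slice().sort(function (a, b) {
    var ha = crypto.createHash('sha1').update(a).digest('hex')
    var hb = crypto.createHash('sha1').update(b).digest('hex')
    return ha < hb ? -1 : 1
  })[0]
}

http.createServer(function (req, res) {
  var isWrite = req.method === 'PUT' || req.method === 'POST'
  // a real version would poll for liveness; here every backend counts as alive
  var target = isWrite ? master(backends)
                       : backends[Math.floor(Math.random() * backends.length)]
  var upstream = http.request({
    host: target, port: 5984,
    method: req.method, path: req.url, headers: req.headers
  }, function (upRes) {
    res.writeHead(upRes.statusCode, upRes.headers)
    upRes.pipe(res)
  })
  req.pipe(upstream)
}).listen(8080)
```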
10:07:57  <JasonSmith> dominictarr: jcrugzz is getting you a URL, standby
10:08:12  <JasonSmith> it is an internal Nodejitsu data layer (née Iris Couch) api of sorts
10:08:28  <JasonSmith> wherein you can force the proxies and routers to send to the same couchdb backend every time
10:08:37  <JasonSmith> There is no way to get a conflict in that case
10:08:57  <JasonSmith> Very sorry for all of your headache. FWIW we will be publishing the tooling we have, which is shaping up to be superb
10:09:09  <JasonSmith> jcrugzz: Please do the needful :)
10:09:21  <jcrugzz> dominictarr: couchdb.iris4.isaacs.npm-dal01.sl.cdn.iriscouch.net
10:09:37  <jcrugzz> host header that bitch and it will be deterministic
10:09:47  <JasonSmith> dominictarr: ^^ Set that as your registry. Thanks, and as we're in similar TZs ping me if you have problems
10:09:54  <dominictarr> you only need this for writes, reads should still be round-robined
10:10:19  <jcrugzz> dominictarr: yea this is for publishes
10:10:36  <jcrugzz> and yea we've talked about sticky sessions but it doesn't solve all cases
10:11:19  <JasonSmith> dominictarr: We will do this indeed
10:11:37  <dominictarr> JasonSmith, I'm not having a problem right now, mainly I'm just interested in npm working well. have you seen my npmd project?
10:11:42  <JasonSmith> Sticky sessions somewhat paper over the deeper issue though
10:12:04  <JasonSmith> we are about to release two packages, one greatly improves replication quality
10:12:17  <JasonSmith> and the other automatically resolves all conflicts in real time if they hit
10:12:32  <JasonSmith> Those will be free software
10:12:55  <JasonSmith> sticky sessions will basically be modifications to the private Iris Couch Erlang proxies and thus non-free and thus marginally useful to the community
10:13:14  <JasonSmith> I realize it would be more useful short-term, but we are within sniffing distance of the end on these other two projects
10:13:37  <dominictarr> I think the deepest issue is that npm creates _rev chains that are too long. 3 separate writes for a publish, and even npm star mutates the module's doc. Although, this is too deep in the thing to fix now
10:14:08  <jcrugzz> the npm star thing kills me
10:15:11  <dominictarr> we can fix some of this. I have this https://github.com/dominictarr/npm-atomic-publish
10:15:34  <dominictarr> which does a publish with a single PUT
10:15:45  <dominictarr> but it depends on this: https://github.com/isaacs/npmjs.org/pull/133 being rolled out.
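A sketch of what that single-PUT publish looks like: the metadata and the tarball travel together as a CouchDB inline attachment (base64 inside the JSON), so the registry sees one atomic write. The URL, doc shape, and function name are illustrative, not npm-atomic-publish's actual code:

```js
var http = require('http')

function publish (name, version, metadata, tarball, cb) {
  var doc = JSON.parse(JSON.stringify(metadata))
  doc._attachments = {}
  doc._attachments[name + '-' + version + '.tgz'] = {
    content_type: 'application/octet-stream',
    data: tarball.toString('base64') // inline attachments are base64 in the doc body
  }
  var body = JSON.stringify(doc)
  var req = http.request({
    host: 'localhost', port: 5984, method: 'PUT',
    path: '/registry/' + encodeURIComponent(name),
    headers: { 'content-type': 'application/json', 'content-length': Buffer.byteLength(body) }
  }, function (res) {
    cb(res.statusCode === 201 ? null : new Error('publish failed: ' + res.statusCode))
  })
  req.end(body)
}
```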
10:17:22  <JasonSmith> dominictarr: Yes but those types of changes are out of my control
10:17:41  <JasonSmith> I am actually not very knowledgeable about how the npm client works. I know the requests it makes to couch pretty well
10:18:01  <JasonSmith> but yeah, totally agree 3 writes is one cause of the problem
10:18:09  <JasonSmith> couchdb actually supports HTTP COPY
10:18:13  <JasonSmith> which is a verb we made up
10:18:25  <JasonSmith> instead of package "foo" you could "publish" to "prep-foo"
10:18:33  <dominictarr> right. having rewritten the npm client, I think I understand it pretty well.
10:18:35  <JasonSmith> or "xxx-prep-foo"
10:18:40  <dominictarr> JasonSmith, just use an inline attachment!
10:18:45  <JasonSmith> Let me finish
10:18:50  <JasonSmith> inline attachments will crash servers
10:19:06  <JasonSmith> We have packages that have over 100MB per attachment
10:19:20  <JasonSmith> we can outlaw packages named "xxx-" (for example)
10:19:27  <JasonSmith> when npm publishes, it can do as many writes as it wants
10:19:31  <dominictarr> right
10:19:31  <JasonSmith> to xxx-foo
10:19:34  <JasonSmith> update data
10:19:35  <JasonSmith> attach
10:19:39  <JasonSmith> count to thirty
10:19:41  <JasonSmith> update more
10:19:53  <JasonSmith> once it's all ready, you do COPY /registry/xxx-foo
10:19:57  <JasonSmith> Destination: foo
10:19:58  <JasonSmith> voila
10:20:00  <JasonSmith> atomic update
10:20:16  <dominictarr> it's more like mv though? does it keep the old version?
10:20:20  <JasonSmith> now you have /registry/foo atomically jumping from e.g. 12-abcdef to 13-fedcba
10:20:50  <JasonSmith> sure COPY is just a shortcut. It tells couch: make this document look like this other one in one atomic update. It's brilliant
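In CouchDB terms, the staging-then-promote dance JasonSmith is describing comes down to one request; a minimal sketch, with the 'xxx-foo'/'foo' names taken from the conversation and everything else (host, database name) assumed:

```js
var http = require('http')

// promote the fully-assembled staging doc onto the real package doc
function promote (stagingId, targetId, targetRev, cb) {
  var req = http.request({
    host: 'localhost', port: 5984, method: 'COPY',
    path: '/registry/' + encodeURIComponent(stagingId),
    // overwriting an existing doc requires its current rev in Destination
    headers: { Destination: targetId + (targetRev ? '?rev=' + targetRev : '') }
  }, function (res) {
    cb(res.statusCode === 201 ? null : new Error('COPY failed: ' + res.statusCode))
  })
  req.end()
}

// promote('xxx-foo', 'foo', '12-abcdef', function (err) { /* 13-xxxxxx now live */ })
```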
10:21:06  <JasonSmith> Now, if you said to use inline attachments as an optimization, e.g. for attachments <= X MB, I 100% agree
10:21:16  <JasonSmith> I have told Isaac these things :)
10:21:23  <JasonSmith> One problem is changing the server is much easier than changing the client
10:21:45  <dominictarr> right. so, what about just using inline attachments for small modules (since the vast majority are small), and falling back to the old way for large modules? It already works with my npmjs.org fork.
10:22:39  <dominictarr> you posted while i was writing, so we agree. better to write small modules anyway.
10:23:37  <dominictarr> I get the feeling that isaacs is a bit fatigued from maintaining npm. I can understand this, it's a huge responsibility, and there are MANY edge cases.
10:23:55  <JasonSmith> dominictarr: Yes if you fall back like that I would totally love that, although it's not my decision
10:24:00  <JasonSmith> you are talking about a PR to isaacs/npm
10:24:07  <JasonSmith> I may technically have commit access there though
10:24:33  <JasonSmith> Baidu has lots of software in npm that they seem happy with
10:24:33  <dominictarr> no, I'm asking you because it depends on a change to the database
10:24:35  <JasonSmith> and the modules are HUGE
10:24:42  <JasonSmith> oh, what is the change?
10:24:52  <dominictarr> this https://github.com/isaacs/npmjs.org/commit/c227944
10:25:05  <JasonSmith> (Also I am the host/steward of the database, it belongs to Isaac too; although yes I am quite involved and advise and stuff)
10:25:08  <dominictarr> isaacs has already merged it, but it needs to be rolled out to the public registry.
10:25:53  <dominictarr> I just need to know when it's out so I can get back to implementing npm-atomic-publish (with fallback for large modules)
10:32:23  <JasonSmith> dominictarr: incidentally, you can use HTTP COPY now
10:32:47  <JasonSmith> it's just that there will be a brief time where there is a package called "dominicpackage-please-ignore-me"
10:33:18  <JasonSmith> Even better, we change the views to not emit for documents that match /please-ignore-me$/
10:33:34  <JasonSmith> or the moral equivalent of /please-ignore-me$/
10:34:20  <JasonSmith> dominictarr: What do you need from me?
10:35:20  <dominictarr> all I need is to know when c227944 is rolled out to the public registry, or some estimate of when you think this might possibly be.
10:35:43  <jcrugzz> dominictarr: that's all on isaac
10:35:47  <jcrugzz> it's his couchapp
10:35:50  <JasonSmith> dominictarr: Unless I am misunderstanding, that is an Isaac question
10:36:05  <JasonSmith> I have never been involved in isaacs/npmjs.org
10:36:36  <dominictarr> okay, cool. I've hardly been running into isaacs online recently, I guess because of timezones, so if you talk to him, tell him I said hi and mentioned it. :)
10:36:41  <JasonSmith> I mean, I am happy to tell you if/when I notice, if you are talking about just reducing the psychic burden on Isaac
10:36:49  <JasonSmith> Sure!
10:37:27  <JasonSmith> I will set up a cron job to curl the ddoc | grep 'inline = true'
10:37:46  <JasonSmith> one that passes would indicate that your code had been pushed
10:37:51  <JasonSmith> and I'll ping you! :)
10:37:56  <dominictarr> okay, sure!
10:38:33  <JasonSmith> The only reason I would push isaacs/npmjs.org is if Isaac were to be trampled to death by a hippopotamus or something
10:39:09  <JasonSmith> It would be the first time I had done so; I am a total conservative sysadmin anti-changes Nazi for that box
10:39:27  <JasonSmith> to the extent that we are running half 1.3 and half 1.5 because I vetoed an upgrade
10:39:34  <JasonSmith> pending a more comprehensive one
10:39:39  <JasonSmith> </ramble> TTYL :)
10:39:53  <dominictarr> aha, okay, thanks. good to know anyway!
10:41:12  <JasonSmith> In fact once or twice he pushed a buggy design doc and npm "was down" and people blamed Iris Couch
10:41:21  <JasonSmith> but whatever, I owe him so many favors I'll never be able to catch up
10:42:17  <dominictarr> software is hard.
10:42:31  <dominictarr> they should call it hardware
10:42:38  <dominictarr> oh, wait...
10:45:38  <JasonSmith> Maybe hradware
10:45:41  <JasonSmith> pronounced "radware"
10:45:44  <JasonSmith> that would be rad
17:23:03  <levelbot> [npm] cipherhub@0.1.0 <http://npm.im/cipherhub>: encrypt messages based on github ssh public keys (@substack)
18:21:42  <brycebaril> ogd: just got your meatmessage (lurking, no camera today). multibuffer 2.1.0 breaks multibuffer-stream for some reason. Taking a looksee.
18:23:12  <ogd> ah
18:23:24  <ednapiranha> brycebaril: meatmessage!
18:24:28  <ogd> :P
18:25:24  <brycebaril> meatmessage would be the perfect form of cryptography. Write with edible ink on bacon... if the message was received, you can guarantee nobody inspected it in transit. Nobody can resist bacon...
18:26:29  <jerrysv> you had me at bacon
18:26:37  <jerrysv> (and no, bacon is not a highlight word for me)
18:26:46  <ogd> lol
18:26:46  <ednapiranha> brycebaril: some sort of wurst
18:27:16  <ogd> i'm trying to get my friend muan to write a new module for encrypted messaging... see my last 3 tweets https://twitter.com/maxogden
18:27:25  <ogd> tl;dr npm install dogesecrets -g; echo "hidden msg" | dogesecrets > doge.png, uses https://npmjs.org/package/lsb
18:30:12  <ogd> brycebaril: also multibuffer-stream doesn't work in the browser right now because it uses through2, which browserifies streams2 and does Buffer.isBuffer checks that fail on array buffers
18:30:36  <ogd> brycebaril: i think maybe the new buffer stuff in browserify 3.0 will fix it, though i'm not sure
18:30:51  <ogd> i guess i'd have to convert the array buffers to browserify Buffers first...
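A sketch of that conversion: normalize incoming ArrayBuffers and typed arrays to Buffers before they reach code doing Buffer.isBuffer checks. It assumes a Buffer implementation is present (e.g. the shim browserify injects):

```js
function toBuffer (chunk) {
  if (Buffer.isBuffer(chunk)) return chunk
  if (chunk instanceof ArrayBuffer) return Buffer.from(new Uint8Array(chunk))
  if (ArrayBuffer.isView(chunk)) { // Uint8Array etc: wrap the underlying bytes
    return Buffer.from(chunk.buffer, chunk.byteOffset, chunk.byteLength)
  }
  return Buffer.from(chunk)
}
```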
18:32:30  <brycebaril> huh
18:32:45  <ogd> :)
18:34:35  <dominictarr> what is the vegan alternative to a meat message?
18:34:50  <ogd> yosoymsg
18:35:11  <dominictarr> (sounds like the start of a corny joke, but I need a punch line)
18:44:48  <dominictarr> hey, on the bulk write speed problem: question: do the callbacks start coming in slower as you pile things into the database?
18:45:38  <ogd> dominictarr: yea
18:46:46  <dominictarr> hmm, okay... I have an idea
18:47:40  <ogd> dominictarr: what 'problem' are you referring to though
18:47:53  <ogd> dominictarr: e.g. unsolved problem
18:52:22  <dominictarr> ogd, well, there is a solution in some cases (tuning)
18:52:45  <dominictarr> (the problem of loading massive data rapidly)
18:53:06  <dominictarr> but it's difficult to apply in general, like, with lots of indexes and stuff
18:53:15  <dominictarr> (like the way that npmd uses level)
18:54:33  <ogd> i've found that https://npmjs.org/package/byte-stream set at the write buffer size is a good building block, i use it here https://github.com/maxogden/dat/blob/master/lib/write-stream.js
18:55:28  <ogd> but i don't have any perf issues anymore
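The core of that building block, sketched with plain levelup calls rather than byte-stream itself: accumulate puts until a byte budget (ideally the write buffer size) is reached, then flush them as a single db.batch(). 'db' is assumed to be an open levelup instance, and the 8 MB budget is an arbitrary example:

```js
function batcher (db, limit) {
  var ops = []
  var bytes = 0
  return {
    put: function (key, value, cb) {
      ops.push({ type: 'put', key: key, value: value })
      bytes += Buffer.byteLength(String(value))
      if (bytes < limit) return process.nextTick(cb) // still under budget: buffer it
      var flushOps = ops
      ops = []
      bytes = 0
      db.batch(flushOps, cb) // one big write instead of thousands of small ones
    },
    flush: function (cb) { db.batch(ops.splice(0), cb) } // drain whatever is left
  }
}

var writer = batcher(db, 8 * 1024 * 1024)
```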
18:59:34  <dominictarr> sometimes I wish that node had a process.on('tick', ...) that triggered as the program continued to run, but didn't block node from exiting.
19:02:39  <ednapiranha> dominictarr: vegan alt = soy massage
19:02:54  <ednapiranha> this is the kind of clever talk i come up with before i've fully ingested caffeine. carry on.
19:05:38  <ogd> brycebaril: hmm i don't think through2 works in the browser very well...
19:08:10  <brycebaril> ogd: I haven't used it there :/
19:08:17  <brycebaril> ogd at least not in a while
19:09:11  <ogd> brycebaril: i'm trying to write a binary version of meatspace chat using multibuffer-stream but shit is hella broke, i'll revisit this problem later
19:10:41  <brycebaril> aha I figured out the problem
19:10:54  <brycebaril> at least with multibuffer-stream's tests
19:11:50  <brycebaril> I was using through2-map, which calls the provided function like Array.map, i.e. with the index... which meant that it was passing the index as the new 'extra' stuff you added
19:12:00  <ogd> ah
19:12:10  <ogd> heh
19:12:22  <brycebaril> yep, silly me for trying to be clever :(
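The gotcha in miniature: through2-map invokes the mapper like Array#map, with (chunk, index), so a function expecting an "extra" second argument silently receives the running index instead. 'pack' and 'myExtra' are hypothetical names:

```js
var map = require('through2-map')

var myExtra = Buffer.from('header') // hypothetical extra payload

function pack (chunk, extra) {
  return chunk // real code would combine chunk with extra
}

var bad = map(pack)                                              // extra === chunk index!
var good = map(function (chunk) { return pack(chunk, myExtra) }) // bind extra explicitly
```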
19:15:02  <brycebaril> ogd: ok, pushed multibuffer-stream as 2.0.1
19:16:26  <ogd> kewl
19:19:05  <dominictarr> ogd, I'm having trouble reproducing the problem...
19:19:18  <dominictarr> writes seem to be super fast no matter how large they get.
19:22:55  <dominictarr> oh, hang on... if I write JSON I get 20 MB/second
19:23:08  <dominictarr> if I write strings it's 250 MB/s
19:26:23  <brycebaril> whoa that seems telling
19:37:12  <dominictarr> that is obviously the CPU-bound JSON
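A quick-and-dirty sketch of the comparison being made: time N puts of prebuilt strings against N puts that JSON.stringify on the way in, isolating serialization cost. 'db' is assumed to be an open levelup instance; run one benchmark at a time:

```js
function bench (db, n, makeValue, cb) {
  var start = Date.now()
  var done = 0
  for (var i = 0; i < n; i++) {
    db.put('key' + i, makeValue(i), function (err) {
      if (err) return cb(err)
      if (++done === n) {
        cb(null, Math.round(n / ((Date.now() - start) / 1000)) + ' writes/sec')
      }
    })
  }
}

var obj = { some: 'payload', with_: ['a', 'few', 'fields'] }
bench(db, 1e5, function () { return 'x'.repeat(512) }, function (err, rate) {
  console.log('strings:', rate)
})
// then, separately:
// bench(db, 1e5, function () { return JSON.stringify(obj) }, ...)  // the CPU-bound path
```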
21:02:24  <ogd> brycebaril: i think npm is buggy, can you publish -f multibuffer? https://registry.npmjs.org/multibuffer/-/multibuffer-2.1.0.tgz is a 404 but shouldn't be
21:11:08  <brycebaril> hmm. dunno. updated some deps to make daviddm happy and pushed 2.1.1. izzat working?
21:11:19  <brycebaril> (just published now)
21:11:49  <brycebaril> I was getting 2.1.0 from npm earlier just fine though, fyi
21:11:57  <ogd> brycebaril: yea their load balancer is buggin out lately
21:12:08  <ogd> 2.1.1 works for me now, thx
21:12:17  <brycebaril> \m/
21:12:28  <ogd> e.g. https://github.com/isaacs/npm/issues/4278
21:13:04  <levelbot> [npm] modella-level-relations@1.7.0 <http://npm.im/modella-level-relations>: levelup based modella relations (@ramitos)
21:18:04  <levelbot> [npm] modella-level-relations@1.7.1 <http://npm.im/modella-level-relations>: levelup based modella relations (@ramitos)
23:04:14  <rvagg> we need to get on a leveldb-based npm backend, STAT
23:04:43  <jerrysv> rvagg: i thought i saw one
23:05:31  <jerrysv> hm. maybe not.
23:06:17  <rvagg> well, there's npmd but it's not really a backend as such
23:12:19  <jerrysv> yeah, that's what i looked at right away - haven't looked at how decoupled npm is from couchdb
23:12:44  <jerrysv> might be interesting to just write a couchdb frontend to leveldb
23:13:06  <rvagg> jerrysv: mikeal did that already -- couchup
23:13:17  <jerrysv> ha, i'm way behind then
23:13:24  <rvagg> and regarding coupling... npm is TOTALLY coupled to couch, it basically IS a couch application
23:13:40  <rvagg> like writing an application that's totally stored procedures in MS SQL, it's that coupled
23:13:48  <mikeal> well...
23:13:52  <mikeal> i don't entirely agree
23:14:04  <jerrysv> i wonder if my sql front end for couchdb will run on couchup
23:14:04  <mikeal> most of npm is embedded in a couchapp, which couch supports but is actually quite terrible at
23:14:10  <mikeal> compared to, say, node
23:15:34  <mikeal> if you were going to write an alternative, the big things you need to worry about creating are 1) identical consistency guarantees on _rev changes for attachments, 2) a user system that works identically to couch, along with support for the validation function
23:15:55  <mikeal> TBH, there should be a node process in front of couch that handles 99% of what most requests do today
23:16:11  <mikeal> and then just falls back to couchdb for some of the harder bits
23:17:08  <mikeal> it's using _list and _show, which is pure fuckin insanity
23:17:30  <mikeal> and every tarball is directly served from couch's attachments
23:17:48  <mikeal> a CDN cache in front would go a long way
23:17:48  <jerrysv> yeah, that hurts
23:17:59  <mikeal> which i know isaacs is looking in to
23:19:36  <mikeal> i wish there was all this interest in making the registry better architecturally 2 years ago :)
23:19:42  <mikeal> at this point there's a limit to what you can do
23:19:56  <mikeal> existing clients have to remain supported for a few more years
23:20:54  <jerrysv> maybe that's ok though, push the reliance off and use the couchdb-based system as a secondary, and move the load with those that can update
23:31:02  <mikeal> yeah
23:31:16  <mikeal> and for people that do want to take the data offline, couchdb has a great solution.
23:45:10  <jerrysv> i wish i had time to work on something like this - working on a vows rewrite in my "copious spare time"