00:11:13  * jxson quit (Remote host closed the connection)
00:15:09  * jxson joined
00:34:01  * ednapiranha quit (Remote host closed the connection)
00:35:51  * julianduque quit (Ping timeout: 245 seconds)
00:38:31  * jxson quit (Remote host closed the connection)
00:39:16  * julianduque joined
00:40:27  * missinglink joined
00:54:16  <levelbot>[npm] level-subtree@1.0.0 <http://npm.im/level-subtree>: build and maintain a tree from the sublevels in a leveldb instance (@hij1nx)
01:20:51  * missinglink quit (Ping timeout: 245 seconds)
01:24:46  <levelbot>[npm] valuepack-core@0.3.7 <http://npm.im/valuepack-core>: Core utils and configurations for valuepack, not at all useful by itself. (@thlorenz)
01:34:44  * dominictarr quit (Quit: dominictarr)
01:50:49  * jerrysv quit (Remote host closed the connection)
02:06:10  * fallsemo quit (Quit: Leaving.)
02:13:37  * esundahl joined
02:22:46  * timoxley joined
02:26:24  * thlorenz quit (Remote host closed the connection)
02:27:40  * thlorenz joined
02:27:50  * thlorenz quit (Remote host closed the connection)
02:57:33  * timoxley quit (Remote host closed the connection)
02:59:16  * timoxley joined
03:26:37  * timoxley quit (Remote host closed the connection)
03:36:15  * werle quit (Ping timeout: 256 seconds)
04:02:16  * timoxley joined
04:06:40  * timoxley quit (Ping timeout: 245 seconds)
04:11:08  * tmcw joined
04:11:58  * tmcw quit (Remote host closed the connection)
04:32:05  * i_m_ca joined
04:33:28  <mbalho>rvagg: https://github.com/maxogden/level-bulk-load
04:40:45  <mbalho>OH CRAPPER wait i forgot to check in the batching version lol
04:45:25  * i_m_ca quit (Ping timeout: 245 seconds)
05:03:09  <mbalho>ok fixed it
05:13:09  * julianduque quit (Quit: leaving)
05:27:33  <brycebaril>mbalho: trying to figure out a couple details with level-bulk-load -- what's the significance of the bufferSize 16, and batch size of 1600?
05:28:05  <mbalho>brycebaril: see comment in issue #1
05:39:16  * jcrugzz joined
05:41:35  * esundahl quit (Remote host closed the connection)
06:02:56  <brycebaril>mbalho: pull request sent
06:03:32  <brycebaril>I'm not sure how interesting the various tunables I added are, but this way I can at least play around with them more easily
06:08:30  <mbalho>im pretty sure that small documents are wayyyy faster
06:08:50  <brycebaril>Well, not if you fill the 16mb write buffer with them
06:08:56  <mbalho>but 10kb isnt that big, i wonder where the dropoff point is
06:08:58  <brycebaril>that is actually much slower
06:09:19  <mbalho>interesting
06:09:26  <brycebaril>I am playing with 250k documents, which ends up about the same as 10k docs
06:09:51  <brycebaril>You can totally set it to hit the memory leak really fast
06:10:01  <mbalho>hah
06:10:09  <mbalho>apparently the memory leak is fixed now with npm install level
06:10:29  <brycebaril>This may actually be a different issue, it may just be pure v8 allocation limits
06:10:38  <brycebaril>time node load-batches.js -b 10 -n 2000 -l 250000
06:10:38  <brycebaril>loading 10 batches of 2000 records
06:10:38  <brycebaril>500000000 bytes per batch
06:10:38  <brycebaril>batch of 2000: 6891ms
06:10:38  <brycebaril>FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory
06:10:45  <brycebaril>500MB batches :)
06:10:50  <mbalho>hah
06:10:55  <mbalho>if you use buffers it wont do that
06:10:58  <brycebaril>doesn't even finish the second batch
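A minimal sketch of what "use buffers" could mean here, assuming a load script that builds its own batch ops (the helper, sizes, and db location below are illustrative, not the actual load-batches.js code): Buffer values are allocated outside the V8 heap, so a 500MB batch of payload doesn't count toward the heap limit that triggers the CALL_AND_RETRY allocation failure.

    // illustrative only: build batch values as Buffers instead of strings
    var level = require('level')
    var db = level('./bulk-test', { valueEncoding: 'binary' })

    function makeBatch(count, valueSize, startId) {
      var ops = []
      for (var i = 0; i < count; i++) {
        ops.push({
          type: 'put',
          key: 'row-' + (startId + i),
          value: new Buffer(valueSize)   // allocated outside the V8 heap
        })
      }
      return ops
    }

    db.batch(makeBatch(2000, 250000, 0), function (err) {
      if (err) throw err
    })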
06:11:31  <brycebaril>So I switched it to use hyperlevel and i'm not seeing the occasional pauses that you see with the standard level
06:11:38  <mbalho>oh cool
06:12:14  <mbalho>gotta run, but you should definitely leave a comment
06:12:18  <brycebaril>The average batch time isn't much different, but the slowest batch I've seen is 400ms vs 5000-10000ms
06:12:40  * esundahl joined
06:21:14  * esundahl quit (Ping timeout: 268 seconds)
06:58:23  * sveisvei_ joined
07:00:27  * timoxley joined
07:05:07  * rescrv1 joined
07:05:47  * alanhoff quit (Ping timeout: 260 seconds)
07:06:02  * alanhoff joined
07:10:15  * prettyrobots_ joined
07:11:25  * rescrv quit (*.net *.split)
07:11:26  * prettyrobots quit (*.net *.split)
07:11:26  * sveisvei quit (*.net *.split)
07:11:51  * sveisvei_ changed nick to sveisvei
08:04:34  * dominictarr joined
08:16:05  * jcrugzz quit (Ping timeout: 248 seconds)
08:17:53  * esundahl joined
08:22:02  * esundahl quit (Ping timeout: 240 seconds)
08:42:25  * jcrugzz joined
08:50:33  * jcrugzz quit (Ping timeout: 245 seconds)
09:11:43  * fb55 joined
09:16:39  * fb55 quit (Remote host closed the connection)
09:20:27  * fb55 joined
09:24:47  * fb55 quit (Remote host closed the connection)
09:31:12  * fb55 joined
09:55:53  <levelbot>[npm] level-json-wrapper@0.0.1 <http://npm.im/level-json-wrapper>: LevelDB JSON Wrapper (@azer)
10:00:45  <levelbot>[npm] level-json@0.0.1 <http://npm.im/level-json>: LevelDB JSON Wrapper (@azer)
10:05:54  * dominictarr quit (Quit: dominictarr)
10:09:08  * Acconut joined
10:10:01  * Acconut quit (Client Quit)
10:35:49  * dominictarr joined
10:37:02  * timoxley quit (Read error: Connection reset by peer)
10:37:18  * timoxley joined
11:16:20  * fb55 quit (Remote host closed the connection)
11:17:59  * missinglink joined
11:19:03  * esundahl joined
11:19:55  * kenansulayman joined
11:23:07  * esundahl quit (Ping timeout: 240 seconds)
11:29:24  * kenansulayman quit (Quit: ≈ and thus my mac took a subtle yet profound nap ≈)
11:48:28  * timoxley quit (Ping timeout: 240 seconds)
11:48:55  * fb55 joined
12:05:22  <alanhoff>hello guys, anyone here using leveldb inside docker?
12:11:21  * missinglink quit (Ping timeout: 256 seconds)
12:11:58  * jcrugzz joined
12:17:19  * jcrugzz quit (Ping timeout: 264 seconds)
12:33:26  * fb55 quit (Remote host closed the connection)
12:34:41  * fb55 joined
12:36:46  * fb55 quit (Remote host closed the connection)
13:11:14  * kenansulayman joined
13:12:36  <rescrv1>brycebaril: rather than reporting the 25/50/75 percentiles, look at the 95,99,99.9 percentiles, or, better yet, create a CDF of latency. It'll give you a better idea of what's going on.
13:14:08  * rescrv1 changed nick to rescrv
13:16:29  * kenansulayman quit (Remote host closed the connection)
13:30:18  * werle joined
13:38:31  * jcrugzz joined
13:41:50  * kenansulayman joined
13:42:53  * timoxley joined
13:43:22  * jcrugzz quit (Ping timeout: 256 seconds)
14:12:58  * missinglink joined
14:23:12  * esundahl_ joined
14:49:05  * jcrugzz joined
14:52:21  * thlorenz joined
14:53:55  * jcrugzz quit (Ping timeout: 264 seconds)
15:17:17  * fb55 joined
15:17:42  * fb55 quit (Remote host closed the connection)
15:51:50  <mbalho>wow the reverse order thing is hilarious
15:54:55  <rescrv>mbalho: how so?
16:09:13  <rescrv>I wrote that on intuition and sleep deprivation. Was I wrong?
16:09:38  <brycebaril>No you were definitely correct
16:09:52  <brycebaril>That completely eliminated the spike batches on vanilla leveldb
16:11:33  <rescrv>It's the absolute best case for LevelDB. Of course, it requires O(n log n) work up-front, which is not always feasible.
16:12:40  <brycebaril>Yeah, I cheated in the load tester though, since we're just assigning arbitrary keys, I just chose a decrementing key id
16:12:42  <rescrv>if you can always have sorted data, you shouldn't be using leveldb anyway
16:28:10  <mbalho>what if we did a reverse sort on insert (to the batch) in JS when preparing the batch, would there be marginal improvements? surely the leveldb sorter is more efficient...
16:28:46  * Acconut joined
16:29:13  * Acconut quit (Client Quit)
16:31:29  <brycebaril>I'm out for the rest of the day, interested to hear what anyone else finds
16:32:29  <mbalho>brycebaril: thanks!
16:32:51  <rescrv>mbalho: it depends on the size of your data set
16:33:19  <rescrv>if you have a small dataset, and it's in a format that's friendly to sorting in JS-land, you'll get a win sorting there.
16:33:32  <rescrv>If you have a big dataset, or sorts in JS are costly, leveldb will do better.
16:33:58  <rescrv>asymptotically, LevelDB is better because it can deal with the large dataset on disk, which JS would need the OS pager for.
16:34:07  <mbalho>ahh
16:34:49  <rescrv>I suggested it, not because I seriously think you should presort everything, but because I wanted to illustrate how you could use a similar technique to your advantage.
16:35:14  <rescrv>for instance, if you can construct your data in a way that is already going to reverse-sort in LevelDB without increased cost, do so.
16:35:47  <rescrv>if you can sort small batches, it may be a win. Internally, LevelDB uses a skiplist, so you've got, essentially, the cache perf of a linked list.
16:36:03  <rescrv>but if your app has arrays, and the JS impl lays them out contiguously, sort in JS.
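A rough sketch of the presort idea being discussed, assuming plain string keys that compare bytewise (the db location and sample ops below are illustrative): sort each in-memory batch so keys reach LevelDB in descending order, which is the same effect brycebaril got by generating decrementing key ids in the load tester.

    var level = require('level')
    var db = level('./presort-test')

    // illustrative: reverse-sort a batch's ops by key before db.batch()
    function reverseSortOps(ops) {
      return ops.slice().sort(function (a, b) {
        if (a.key === b.key) return 0
        return a.key < b.key ? 1 : -1   // descending key order
      })
    }

    var ops = [
      { type: 'put', key: 'a', value: '1' },
      { type: 'put', key: 'c', value: '3' },
      { type: 'put', key: 'b', value: '2' }
    ]

    db.batch(reverseSortOps(ops), function (err) {
      if (err) throw err
    })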
16:45:34  * rud joined
16:45:55  * kenansulayman quit (Remote host closed the connection)
17:18:45  <levelbot>[npm] valuepack-mine-npm@0.2.2 <http://npm.im/valuepack-mine-npm>: Mines the npm registry for user and package data used by valuepack. (@thlorenz)
17:21:15  * DTrejo joined
17:31:53  * timoxley quit (Remote host closed the connection)
17:40:15  <levelbot>[npm] valuepack-mine-npm@0.2.3 <http://npm.im/valuepack-mine-npm>: Mines the npm registry for user and package data used by valuepack. (@thlorenz)
17:40:45  <levelbot>[npm] valuepack-core@0.3.8 <http://npm.im/valuepack-core>: Core utils and configurations for valuepack, not at all useful by itself. (@thlorenz)
17:43:17  * mikeal joined
17:47:47  * DTrejo quit (Remote host closed the connection)
17:49:42  * DTrejo joined
17:56:08  * dominictarr quit (Quit: dominictarr)
17:56:40  * dominictarr joined
18:13:16  <levelbot>[npm] valuepack-mine-github@0.2.1 <http://npm.im/valuepack-mine-github>: Mines github for user and repository data used by valuepack. (@thlorenz)
18:16:45  <levelbot>[npm] valuepack-mine-npm@0.2.3 <http://npm.im/valuepack-mine-npm>: Mines the npm registry for user and package data used by valuepack. (@thlorenz)
18:22:26  * thlorenz quit (Remote host closed the connection)
18:30:13  * DTrejo quit (Remote host closed the connection)
18:55:58  * rud quit (Quit: rud)
18:57:31  * werle quit (Ping timeout: 264 seconds)
19:05:52  * tmcw joined
19:38:00  * jcrugzz joined
19:42:16  * thlorenz joined
19:43:24  * rud joined
19:43:24  * rud quit (Changing host)
19:43:24  * rud joined
19:46:44  * tmcw quit (Remote host closed the connection)
19:50:39  * DTrejo joined
19:52:38  * jcrugzz quit (Ping timeout: 256 seconds)
19:59:15  <levelbot>[npm] valuepack-core@0.3.9 <http://npm.im/valuepack-core>: Core utils and configurations for valuepack, not at all useful by itself. (@thlorenz)
20:02:20  * DTrejo quit (Remote host closed the connection)
20:03:45  <levelbot>[npm] valuepack-mine-npm@0.2.4 <http://npm.im/valuepack-mine-npm>: Mines the npm registry for user and package data used by valuepack. (@thlorenz)
20:03:45  <levelbot>[npm] valuepack-mine-github@0.2.2 <http://npm.im/valuepack-mine-github>: Mines github for user and repository data used by valuepack. (@thlorenz)
20:04:31  <mbalho>thlorenz: youre on a roll
20:04:37  <thlorenz>sorry for spamming this channel guys, linking modules that depend on level is not working
20:04:49  <thlorenz>so I gotta publish and upgrade all the time
20:05:13  <mbalho>thlorenz: what does work?
20:05:15  <thlorenz>mbalho: it has problems finding leveldown when I link things together ;)
20:05:17  <mbalho>err i mean what doesnt
20:06:18  <thlorenz>still has problems - I only depend on level in core, but I get 'leveldown not found' in other modules, kinda weird (actually even if I install core)
20:06:46  <mbalho>weird
20:07:39  <thlorenz>shoot, even after npm dedupe it still barfs at me :(
20:16:51  * kenansulayman joined
20:18:33  * mikeal quit (Quit: Leaving.)
20:23:35  * Acconut joined
20:23:52  * Acconut quit (Client Quit)
20:24:42  * Acconut joined
20:25:00  <thlorenz>mbalho: dominictarr interesting, got it narrowed down to using node 0.8
20:25:09  <thlorenz>leveldown lookup breaks at that point
20:25:47  <thlorenz>using 0.10 works fine (except then I get my famous "[Error: UNABLE_TO_VERIFY_LEAF_SIGNATURE]", but that's stream related I think)
20:28:27  <thlorenz>rvagg: dominictarr: mbalho: made an issue in my own repo that shows the leveldown lookup problem when using node0.8 https://github.com/thlorenz/valuepack-mine/issues/1
20:28:50  <mbalho>thlorenz: is it still there on node 0.10?
20:28:57  <thlorenz>nop
20:29:14  <thlorenz>I just used 0.8 to work around that other error I got
20:29:22  <thlorenz>very weird
20:29:25  <kenansulayman>thlorenz Level only does require("leveldown")
20:29:30  * jcrugzz joined
20:30:03  <thlorenz>kenansulayman: not exactly -- it does require('leveldown/package'), then does some version checks etc.
20:30:18  <thlorenz>it breaks at that point
20:30:19  * Acconut quit (Quit: Acconut)
20:30:42  <kenansulayman>Ya right https://github.com/rvagg/node-levelup/blob/master/lib/util.js#L103
20:31:18  <kenansulayman>That's because it's compiled and referenced using that entrypoint
20:31:18  <thlorenz>kenansulayman: exactly, thats it
20:31:45  <kenansulayman>wait
20:32:24  <thlorenz>not sure what would break there, as far as I know, node0.8 will also look for '.json' if you give no extension
20:32:27  <kenansulayman>What OS are you on?
20:32:30  <thlorenz>Mac
20:32:42  <kenansulayman>Version?
20:33:13  <thlorenz>(Darwin) MacOSX 10.7.5
20:33:25  <kenansulayman>let me simulate the env, one sec
20:33:29  <mbalho>dominictarr: wrong channel
20:33:32  <thlorenz>which is LION
20:33:45  <dominictarr>it's very bursty
20:34:01  <kenansulayman>What specific node version?
20:34:03  <mbalho>dominictarr: the bursts are caused by sorting as bryce + rescrv found out
20:34:16  <thlorenz>kenansulayman: breaks with 0.8.25
20:34:28  <dominictarr>aha, so for large batches you want to presort them?
20:34:36  * jcrugzz quit (Ping timeout: 245 seconds)
20:34:39  <thlorenz>kenansulayman: works with v0.10.15
20:34:43  <mbalho>dominictarr: well sometimes that doesnt make computational sense
20:34:58  <mbalho>dominictarr: i havent thought about it enough though
20:35:16  <mbalho>dominictarr: maybe in my use case i can download the entire source file, then sort it, then insert it
20:35:24  <dominictarr>hmm, this seems like something that can be fixed in leveldb?
20:35:26  <thlorenz>btw I don't think it is a good idea in general to magically look up a module somewhere if it should be configurable
20:35:40  <thlorenz>why don't I just always pass in my x-down into levelup?
20:35:42  <dominictarr>mbalho: I have a feeling that in your use case, you'll often have appending data
20:35:43  <mbalho>dominictarr: hyperleveldb solves much of the problem but sometimes data will be coming in faster than it can be inserted
20:35:48  <kenansulayman>thlorenz Give me a second.
20:35:51  <thlorenz>ok
20:35:58  <mbalho>dominictarr: im focused right now on the initial bulk import use case
20:36:11  <kenansulayman>mbalho hyperlevel ftw
20:36:11  <mbalho>dominictarr: e.g. 30 years of flight data or something, millions of rows
20:36:13  <kenansulayman>:D
20:36:40  <mbalho>kenansulayman: its great but also there are no silver bullets :)
20:36:53  <kenansulayman>mbalho Alas :(
20:36:57  <dominictarr>hmm, so, need a transfer scheme optimized for import.
20:37:22  <rescrv>mbalho: it's not the sorting. It's that compaction is more work than not having compaction. The data is already sorted before compaction happens. I would suggest that you not spend too much time on importing data unless you can make specific assumptions about the structure of the data
20:37:49  <kenansulayman>rescrv You have a listener on hyper, right? ;)
20:38:31  <mbalho>rescrv: thanks, i am not considering trying to optimize import by presorting or anything, just trying to get dominictarr up to speed cause he doesnt read irc backlogs :)
20:39:33  <mbalho>i do think though that for a use case like importing a 1gb csv or something like that, there needs to be a way to split the import data into the optimal batch size and then make sure tcp/fs backpressure signals are being sent correctly
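A sketch of the shape mbalho is describing, assuming an object-mode row stream coming out of a csv parser (the source stream, batch size, and row fields below are made up): the writable only calls its callback once db.batch has finished, so pipe() propagates backpressure all the way back to the fs/tcp source.

    var stream = require('stream')
    var level = require('level')
    var db = level('./import-test')

    // illustrative: buffer rows into fixed-size batches and defer the write
    // callback until db.batch completes, so the source stream slows down
    function batchWriter(db, batchSize) {
      var ops = []
      var ws = new stream.Writable({ objectMode: true })

      ws._write = function (row, enc, done) {
        ops.push({ type: 'put', key: row.key, value: row.value })
        if (ops.length < batchSize) return done()
        var batch = ops
        ops = []
        db.batch(batch, done)
      }

      ws.on('finish', function () {
        // flush whatever is left over after the source ends
        if (ops.length) db.batch(ops, function (err) { if (err) throw err })
      })

      return ws
    }

    // usage: csvRowStream.pipe(batchWriter(db, 1000))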
20:39:38  * jcrugzz joined
20:39:52  <kenansulayman>thlorenz Try a rm -rf node_modules ; npm i — for me it works
20:40:08  <thlorenz>ok, will do
20:40:27  <thlorenz>otherwise I'll just manually hook things up via opts.db = leveldown()
20:40:36  <kenansulayman>thlorenz no wait!
20:40:45  <kenansulayman>Getting it too
20:40:57  <thlorenz>ok weird eh?
20:41:13  <kenansulayman>nah seems to be the /package from inside a module
20:41:16  <kenansulayman>let me check it
20:41:33  <thlorenz>already blew away my node_modules, moving on with the manual hook approach
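For the record, the manual hookup thlorenz is falling back to is roughly this, a sketch using the db option that the levelup of that era exposes for swapping backends (not his actual code): pass the backing-store factory in explicitly instead of letting levelup resolve leveldown by name.

    var levelup = require('levelup')
    var leveldown = require('leveldown')   // or e.g. memdown

    // hand levelup the store factory directly, bypassing its own
    // require('leveldown/package') lookup
    var db = levelup('./mydb', { db: leveldown })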
20:41:43  <kenansulayman>thlorenz chill a second
20:41:51  <thlorenz>kenansulayman: the core module which is the only place where I mention level by name
20:41:53  <rescrv>kenansulayman: I read everything +- 20 lines of where I'm mentioned
20:42:09  <kenansulayman>rescrv Ok :)
20:42:51  <mbalho>i also host logs https://www.dropbox.com/s/7gz3779sxg6ygkw/%23%23leveldb.log
20:43:04  <mbalho>(i would add to the channel topic but i'm not an admin). cc rescrv
20:43:07  <mbalho>oops meant rvagg
20:43:49  <rescrv>mbalho: sounds good. We're going to be making hyper even faster in the coming months to help HyperDex get an even bigger speed boost. That may satisfy your need for speed.
20:45:29  <kenansulayman>thlorenz The issue is that the referenced leveldown triggers an error in module.js which is caught in util.js:128 as a failure to load leveldown
20:45:44  <mbalho>rescrv: nice! speed is always nice, what im really trying to figure out is how to avoid huge compaction times e.g. when inserting batches that are too big. i'd like to be able to stream parse a 100gb csv in and have it chug along at some dependable rate
20:45:58  <thlorenz>kenansulayman: what's the error?
20:46:13  <kenansulayman>thlorenz Error: Module version mismatch, refusing to load.
20:46:39  <thlorenz>kenansulayman: that is odd since I upgraded all modules (since I tried figuring this out)
20:46:54  <thlorenz>could it be that one of them has a node>=0.10 requirement?
20:47:00  <kenansulayman>thlorenz No it's 0.8 exclusive
20:47:05  <thlorenz>odd
20:47:11  <kenansulayman>thlorenz wait
20:48:22  <rescrv>mbalho: that's part of what we're targeting. The problem now is just mutex contention. Compaction holds the mutex when it doesn't have to, limiting throughput in those cases.
20:48:36  <rescrv>there's more subtlety to it than that, but that's the gist of it
20:50:11  <kenansulayman>thlorenz Could you post your npm ls?
20:50:17  <mbalho>rescrv: oh very interesting
20:50:52  <thlorenz>kenansulayman: hold on - I now am manually hooking things together and am getting the error you were talking about
20:51:09  <kenansulayman>thlorenz I know that's why I'm asking
20:51:20  <kenansulayman>thlorenz Just do require("leveldown")
20:52:22  <thlorenz>kenansulayman: from core that works; anywhere else I don't even install it, it should just use my leveldb module in core
20:52:59  <kenansulayman>thlorenz"From core that works"?
20:53:00  <thlorenz>kenansulayman: will npm ls be ok even if I linked all my modules (ln -s)
20:53:42  <thlorenz>kenansulayman: https://github.com/thlorenz/valuepack-core/blob/master/mine/leveldb.js#L10-L15
20:54:04  <thlorenz>kenansulayman: https://github.com/thlorenz/valuepack-core/blob/master/leveldb.js#L3-L4
20:55:09  <kenansulayman>thlorenz Ya well there's not much you can do from your module since the error is thrown by the global module.js
20:56:33  <thlorenz>kenansulayman: added npm ls output: https://github.com/thlorenz/valuepack-mine/issues/1
20:56:35  <kenansulayman>thlorenz But it's specific to leveldown — hyperlevel works
20:56:45  <kenansulayman>mysterious
20:57:16  <thlorenz>btw - the invalids are there b/c I linked those modules (I think)
20:57:34  <thlorenz>kenansulayman: thanks for helping me track this down - will stick to 0.10 ;)
20:57:55  <kenansulayman>thlorenz Well yes, but I'd like to find the issue anyway :)
21:00:57  * Acconut joined
21:02:10  <thlorenz>kenansulayman: me too, but this is really hard to debug let me know if you find anything
21:02:30  <thlorenz>I'll go and try to fix my node0.10 stream issue so I can use that instead
21:03:37  <thlorenz>kenansulayman: btw where in Berlin? I lived in Potsdam for most of my life
21:03:55  <mbalho>ill be in berlin in 2 weeks :D
21:04:11  <kenansulayman>thlorenz I used to live in Kreuzberg, now in Tempelhof-Mariendorf
21:04:11  <thlorenz>mbalho: jsconf I guess? lucky you :)
21:04:17  <thlorenz>kenansulayman: nice
21:04:37  <thlorenz>mbalho: I'm only gonna make it as far as Ireland ;)
21:04:43  <mbalho>thlorenz: thats still pretty good
21:04:46  <thlorenz>:)
21:04:57  <kenansulayman>mbalho 999€ for a ticket is overkill in terms of "javascript community"
21:05:01  <thlorenz>mbalho: oh and Lisbon a month later
21:05:02  <kenansulayman>:(
21:05:51  <thlorenz>kenansulayman: yeah, you gotta get yourself sponsored for those (i.e. your employer)
21:05:51  * Acconut quit (Quit: Acconut)
21:05:57  <mbalho>kenansulayman: reject.js, nodecopter are both cheap and good
21:06:06  <mbalho>kenansulayman: and berlin has a good js meetup every month anyway
21:06:21  <kenansulayman>thlorenz Haha, I'm the cto there's no employer :x
21:06:26  <mbalho>kenansulayman: jsconfeu sells out and is mostly made possible by companies buying employee tickets and sponsorships
21:07:03  <thlorenz>kenansulayman: well then you enjoy other benefits that people like me don't :)
21:07:08  <kenansulayman>I find it pretty uncool if you gotta be sponsored in order to be part of a community^
21:07:51  <mbalho>events cost a lot of money to produce
21:08:02  <kenansulayman>mbalho That's true though
21:08:14  <mbalho>especially when youre paying for international flights and hotels for speakers
21:08:56  <thlorenz>kenansulayman: another way to get in is do a lot of cool things, submit a lot of talks and get in as a speaker
21:09:18  <mbalho>yea thats a good point
21:09:29  <mbalho>also the night time activities are usually open to the public
21:09:39  <mbalho>and the events before and after like reject.js
21:09:56  <kenansulayman>We should start our own conference
21:10:29  <mbalho>running a conference is a great way to learn about how hard it is to run a conference :)
21:10:37  <kenansulayman>"ubercode" :D
21:11:06  <kenansulayman>thlorenz Still in Brandenburg?
21:11:20  <thlorenz>kenansulayman: nope, I've been living in NYC for 5 years now
21:11:25  <kenansulayman>eewww
21:11:40  <thlorenz>that's what happens when you get married ;)
21:11:50  <kenansulayman>Wanted to propose a coffee but that's done then :D
21:12:15  <thlorenz>I can send you a coffee-script, but wait, I don't like coffee-script ;)
21:12:30  <kenansulayman>Even more ewww
21:12:52  <kenansulayman>There's a nodereaction I found to like
21:13:33  <kenansulayman>thlorenz http://data.sly.mn/250J3h250f0k :D
21:13:51  * esundahl_ quit (Remote host closed the connection)
21:13:58  <thlorenz>kenansulayman: LOL - yeah I've seen that one :)
21:15:08  <kenansulayman>thlorenz Your scriptie-talkie is broken :(
21:15:09  * werle joined
21:15:19  <thlorenz>how?
21:15:23  <kenansulayman>Error: SecurityError: DOM Exception 18
21:15:31  <thlorenz>what browser?
21:15:51  <kenansulayman>Version 7.0 (9537.59) / OSX 10.9
21:15:58  <thlorenz>try chrome
21:16:16  <thlorenz>safari doesn't like it too much - I'm doing some hacky things to make this work
21:16:31  <thlorenz>kenansulayman: also scriptie-talkie is best consumed with replpad ;)
21:16:38  * jmartins quit (Ping timeout: 246 seconds)
21:16:52  <kenansulayman>thlorenz haha. Why does console.log("y") yield: Proxy: {?
21:17:25  <thlorenz>weird, probably b/c your browser adds magic properties on the fly
21:17:34  <thlorenz>FF I guess?
21:17:35  <kenansulayman>31.0.1614.0 canary
21:17:40  <thlorenz>ah interesting
21:18:15  <kenansulayman>But I'm sad Chrome still doesn't allow generators
21:18:28  <kenansulayman>Even though v8 had it for ages (half a year at least)
21:18:39  <thlorenz>kenansulayman: odd, it doesn't do this with my canary: http://thlorenz.github.io/scriptie-talkie/?code=console.log(%22y%22)
21:19:12  <kenansulayman>thlorenz http://data.sly.mn/image/0I2Z3Y150L3k
21:19:32  <thlorenz>kenansulayman: yeah, it didn't yield that
21:19:43  <thlorenz>the + means that Proxy was added to the context
21:19:55  <thlorenz>it's some magic that chrome seems to do in your case
21:19:56  <kenansulayman>hm
21:20:03  <kenansulayman>let me kill the exts
21:20:10  <thlorenz>I bet the second console.log will not do that
21:21:10  <thlorenz>anyhoo gotta focus on this stupid error that I get with node0.10 streams
21:21:11  <kenansulayman>thlorenz Right
21:21:38  <thlorenz>if anyone knows about what UNABLE_TO_VERIFY_LEAF_SIGNATURE means I'd appreciate any help on this: https://github.com/thlorenz/valuepack-mine-npm/issues/1
21:22:00  <thlorenz>^ mbalho dominictarr Raynos
21:22:36  <thlorenz>it happens when I pipe a leveldb readstream into a fs.writeStream
21:22:54  <mbalho>i think the npm cert expired, try http instead of https
21:22:59  <kenansulayman>Is it over https? since UNABLE_TO_VERIFY_LEAF_SIGNATURE is usually a cert error
21:23:14  * wolfeidau joined
21:23:35  <thlorenz>mbalho: kenansulayman you guys are right, it's an https stream
21:23:39  <thlorenz>https://github.com/thlorenz/valuepack-mine-npm/blob/master/scripts/fetch-npm-users.js#L13
21:23:54  <kenansulayman>;)
21:24:06  <thlorenz>userStream comes from here: https://github.com/thlorenz/valuepack-mine-npm/blob/master/scripts/fetch-npm-users.js#L13
21:24:35  <dominictarr>thlorenz: what is a leaf signature?
21:24:52  <dominictarr>assuming this is something to do with CA
21:24:54  <kenansulayman>dominictarr It's when you want to verify a certificate against a CA
21:25:03  <thlorenz>dominictarr: didn't know until now, but I confirmed that this has to do with my certificate like mbalho said
21:25:10  <mbalho>its npms certs not yours
21:25:21  <mbalho>open https://isaacs.ic.ht/registry in chrome
21:25:40  <kenansulayman>mbalho That's because they're self-signing the certificate
21:25:49  <thlorenz>mbalho: awesome, looks like I can reach npm registry with http as well
21:25:56  <kenansulayman>thlorenz +1
21:26:11  <kenansulayman>gotta get some dinner. afk
21:26:23  * kenansulayman changed nick to apex|afk
21:26:30  <dominictarr>if you look in the npm install script you'll see a hard coded cert for npm
21:26:40  <dominictarr>that it uses with curl
21:26:47  <thlorenz>dominictarr: mbalho: so two questions
21:26:54  <dominictarr>sure
21:27:02  <thlorenz>why does it work when I do 'nave use 0.8' ?
21:27:19  <thlorenz>and how would I renew my certificate without doing it via the browser?
21:27:30  <dominictarr>maybe different versions of openssl?
21:27:48  <dominictarr>I think you can provide a specific cert to trust in tls
21:28:02  <dominictarr>there is probably something within npm
21:28:11  <dominictarr>ask isaacs
21:28:14  <thlorenz>that makes sense
21:28:41  <thlorenz>thanks, also I'm not entirely positive yet if http works as well - this is taking a very long time already ;)
21:35:46  <dominictarr>thlorenz: how are you getting data out?
21:36:26  <thlorenz>dominictarr: not sure what you mean, I'm requesting from: https://registry.npmjs.org/-/users/ to get all users
21:36:34  <thlorenz>changing https to http
21:37:06  <thlorenz>since I am positive this works, i.e.: curl http://registry.npmjs.org/levelup does
21:37:10  <dominictarr>that is what I mean, more or less
21:37:32  <thlorenz>ok, so it seems like http works as well - npmjs is just reallllly slow right now
21:38:20  <thlorenz>so I'll switch to that in order to avoid the certificate problems, so thanks for pointing me in the right direction
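A sketch of the workaround thlorenz lands on, fetching the user index over plain http so the unverifiable https cert never comes into play (the core-http call and output filename are assumptions; his actual fetch code may differ):

    var http = require('http')
    var fs = require('fs')

    // same registry endpoint as before, just http instead of https
    http.get('http://registry.npmjs.org/-/users/', function (res) {
      res.pipe(fs.createWriteStream('npm-users.json'))
    })

    // alternative (less safe): stay on https but skip cert verification by
    // passing rejectUnauthorized: false in the https request options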
21:44:44  <levelbot>[npm] valuepack-mine-npm@0.2.5 <http://npm.im/valuepack-mine-npm>: Mines the npm registry for user and package data used by valuepack. (@thlorenz)
21:54:46  * dominictarr quit (Ping timeout: 246 seconds)
21:55:32  * thlorenz quit (Remote host closed the connection)
21:58:15  <levelbot>[npm] polyclay-levelup@0.0.6 <http://npm.im/polyclay-levelup>: levelup persistence adapter for polyclay, the schema-enforcing document mapper (@ceejbot)
22:03:20  * dominictarr joined
22:14:03  * mikeal joined
22:15:23  * apex|afk quit (Remote host closed the connection)
22:18:00  * kenansulayman joined
22:18:32  * timoxley joined
22:35:49  * mcollina joined
22:36:26  * timoxley quit (Remote host closed the connection)
22:36:36  * kenansulayman quit (Remote host closed the connection)
22:37:07  * jcrugzz quit (Ping timeout: 264 seconds)
23:09:41  * dominictarr quit (Quit: dominictarr)
23:11:18  * mcollina quit (Remote host closed the connection)
23:16:41  * mcollina joined
23:23:27  * mcollina quit (Remote host closed the connection)
23:32:30  * missinglink quit (Ping timeout: 264 seconds)
23:43:20  * jcrugzz joined
23:45:41  * mikeal quit (Quit: Leaving.)
23:47:50  * jcrugzz quit (Ping timeout: 246 seconds)
23:48:49  * mikeal joined
23:57:05  * esundahl joined