00:00:35  * esundahl quit (Ping timeout: 268 seconds)
00:02:15  * thlorenz joined
00:05:21  * mikeal quit (Quit: Leaving.)
00:08:44  * tmcw joined
00:10:18  * mikeal joined
00:13:15  * tmcw quit (Ping timeout: 248 seconds)
00:18:11  * ramitos quit (Ping timeout: 260 seconds)
00:22:54  * fallsemo joined
00:29:47  * tmcw joined
00:32:21  * eugeneware joined
00:51:45  * mikeal quit (Quit: Leaving.)
01:06:35  * tmcw quit (Remote host closed the connection)
01:06:42  * thlorenz quit (Remote host closed the connection)
01:18:28  * kenansulayman quit (Ping timeout: 264 seconds)
01:20:28  * kenansulayman joined
01:25:14  * esundahl joined
01:25:48  * esundahl_ joined
01:29:54  * esundahl quit (Ping timeout: 252 seconds)
01:32:16  * thlorenz joined
01:32:36  * eugeneware quit (Remote host closed the connection)
01:40:06  * dguttman quit (Quit: dguttman)
02:28:21  * dguttman joined
02:36:29  * dguttman quit (Quit: dguttman)
03:16:40  * mikeal joined
03:22:35  * ehd quit (*.net *.split)
03:23:08  * eugeneware joined
03:25:04  * fallsemo quit (Ping timeout: 264 seconds)
03:28:07  * thlorenz quit (Remote host closed the connection)
03:36:31  * fallsemo joined
03:38:50  * eugeneware quit (Remote host closed the connection)
03:40:43  * fallsemo quit (Ping timeout: 248 seconds)
03:47:33  * mikeal quit (Quit: Leaving.)
03:49:25  * timoxley quit (Remote host closed the connection)
03:50:30  * mikeal joined
03:54:21  * fallsemo joined
04:10:40  * fallsemo quit (Ping timeout: 248 seconds)
04:19:54  * jxson_ joined
04:23:28  * jxson quit (Ping timeout: 248 seconds)
04:24:03  * jxson_ quit (Ping timeout: 240 seconds)
04:25:57  * jxson joined
04:43:11  * fallsemo joined
04:47:52  * fallsemo quit (Ping timeout: 264 seconds)
05:11:29  * jxson quit (Remote host closed the connection)
05:12:32  * ehd joined
05:29:47  * thlorenz joined
05:33:07  * dguttman joined
05:34:24  * thlorenz quit (Ping timeout: 248 seconds)
05:40:12  * dguttman quit (Quit: dguttman)
05:50:19  * esundahl_ quit (Remote host closed the connection)
06:00:40  * dguttman joined
06:02:23  * dguttman quit (Client Quit)
06:18:08  * Guest82035 quit (Ping timeout: 240 seconds)
06:19:10  * Guest36601 joined
06:26:39  * niftylettuce quit (Ping timeout: 240 seconds)
06:30:24  * thlorenz joined
06:34:47  * thlorenz quit (Ping timeout: 260 seconds)
06:56:49  * kenansulayman quit (Quit: ≈ and thus my mac took a subtle yet profound nap ≈)
07:17:03  * eugeneware joined
07:19:23  * eugeneware quit (Remote host closed the connection)
07:21:46  <levelbot>[npm] leveldown-hyper@0.9.1 <http://npm.im/leveldown-hyper>: A Node.js LevelDB binding, primary backend for LevelUP (HyperDex fork) (@rvagg)
07:31:03  * thlorenz joined
07:35:27  * thlorenz quit (Ping timeout: 260 seconds)
07:50:37  * frankblizzard joined
08:18:43  <levelbot>[npm] level-packager@0.17.0 <http://npm.im/level-packager>: LevelUP package helper for distributing with a LevelDOWN-compatible back-end (@rvagg)
08:21:44  <levelbot>[npm] level-packager@0.17.0-1 <http://npm.im/level-packager>: LevelUP package helper for distributing with a LevelDOWN-compatible back-end (@rvagg)
08:22:37  * niftylettuce joined
08:22:46  <levelbot>[npm] level@0.17.0-1 <http://npm.im/level>: Fast & simple storage - a Node.js-style LevelDB wrapper (a convenience package bundling LevelUP & LevelDOWN) (@rvagg)
08:24:15  <levelbot>[npm] level-hyper@0.17.0 <http://npm.im/level-hyper>: Fast & simple storage - a Node.js-style LevelDB wrapper (a convenience package bundling LevelUP & LevelDOWN) [HyperDex fork] (@rvagg)
08:24:44  <levelbot>[npm] level-lmdb@0.17.0 <http://npm.im/level-lmdb>: Fast & simple storage - a Node.js-style LMDB wrapper (a convenience package bundling LevelUP & LMDB) (@rvagg)
08:31:17  * timoxley joined
08:52:43  <levelbot>[npm] level-lmdb@0.17.0 <http://npm.im/level-lmdb>: Fast & simple storage - a Node.js-style LMDB wrapper (a convenience package bundling LevelUP & LMDB) (@rvagg)
08:54:14  <levelbot>[npm] level@0.17.0-1 <http://npm.im/level>: Fast & simple storage - a Node.js-style LevelDB wrapper (a convenience package bundling LevelUP & LevelDOWN) (@rvagg)
08:55:14  <levelbot>[npm] level-hyper@0.17.0 <http://npm.im/level-hyper>: Fast & simple storage - a Node.js-style LevelDB wrapper (a convenience package bundling LevelUP & LevelDOWN) [HyperDex fork] (@rvagg)
09:05:14  <levelbot>[npm] leveldown-basho@0.9.1 <http://npm.im/leveldown-basho>: A Node.js LevelDB (Basho fork) binding, primary backend for LevelUP (@rvagg)
09:12:43  <levelbot>[npm] level-basho@0.17.0 <http://npm.im/level-basho>: Fast & simple storage - a Node.js-style LevelDB wrapper (a convenience package bundling LevelUP & LevelDOWN) [Basho fork] (@rvagg)
09:20:44  <levelbot>[npm] levelmeup@0.1.4 <http://npm.im/levelmeup>: Level Me Up Scotty! An intro to Node.js databases via a set of self-guided workshops. (@rvagg)
09:22:48  <rvagg>level all the things
09:32:16  * thlorenz joined
09:36:28  * thlorenz quit (Ping timeout: 240 seconds)
10:12:13  <levelbot>[npm] level-pubsub@0.1.1 <http://npm.im/level-pubsub>: leveldb based pub-sub (@hij1nx)
10:25:28  * brycebaril quit (Read error: Connection reset by peer)
10:32:53  * thlorenz joined
10:37:20  * thlorenz quit (Ping timeout: 248 seconds)
11:01:56  * tarruda joined
11:14:24  * jcrugzz quit (Ping timeout: 256 seconds)
11:33:28  * thlorenz joined
11:33:30  * missinglink joined
11:34:11  * missinglink changed nick to insertcoffee
11:35:37  * insertcoffee quit (Client Quit)
11:35:44  * insertcoffee joined
11:37:46  * insertcoffee quit (Client Quit)
11:38:05  * insertcoffee joined
11:41:41  * thlorenz quit (Ping timeout: 245 seconds)
11:49:14  * eugeneware joined
11:53:11  * eugeneware quit (Remote host closed the connection)
11:53:19  * eugeneware joined
11:53:41  * eugeneware quit (Remote host closed the connection)
12:06:14  * kenansulayman joined
12:23:58  * eugeneware joined
12:28:08  * eugeneware quit (Ping timeout: 240 seconds)
12:29:27  <kenansulayman>rescrv Any updates on HyperDex + Node?
12:30:14  <rescrv>kenansulayman: nothing public. I'm working on the next release, and I'm behind on that. Cannot focus on Node until that is done
12:30:31  <kenansulayman>rescrv sure, ty for briefing
12:31:29  * insertcoffee quit (Remote host closed the connection)
12:33:10  * insertcoffee joined
12:54:34  * eugeneware joined
13:09:38  * rud quit (Quit: rud)
13:12:33  * nnnnathann joined
13:25:14  <levelbot>[npm] connect-leveldb@0.1.4 <http://npm.im/connect-leveldb>: This module provides a session store for connect which uses leveldb via levelup. (@wolfeidau)
13:25:38  <wolfeidau>woot #shipping
13:28:04  * jmartins joined
13:34:25  <kenansulayman>wolfeidau gj
13:39:57  * Acconut joined
13:44:21  * tmcw joined
13:46:13  <levelbot>[npm] level-channels@0.1.1 <http://npm.im/level-channels>: leveldb based pub-sub (@hij1nx)
13:46:21  <hij1nx>shit
13:46:53  <kenansulayman>hij1nx That sounds promising
13:46:56  <kenansulayman>what's it doing?
13:47:22  <kenansulayman>ah I see, something like level-hooks?
13:48:17  <kenansulayman>That drives creativity. Let's implement some server-side extensions? :)
13:48:43  <levelbot>[npm] level-channels@0.1.0 <http://npm.im/level-channels>: A simpler way to selectively stream future or historic changes in leveldb. (@hij1nx)
13:49:08  <hij1nx>kenansulayman: more like level-live-stream, but much simpler
13:49:37  * Acconut quit (Quit: Acconut)
13:50:10  <hij1nx>level-live-stream is good for just creating a stream, you can do whatever you want with it
13:50:24  * thlorenz joined
13:50:50  <kenansulayman>hij1nx Sounds like one could do some log-file propagation with it? :)
13:51:05  <hij1nx>possibly
13:51:13  <hij1nx>i was writing a chat thingy
13:51:25  <hij1nx>one that i actually need to use in production
13:51:34  * thlorenz_ joined
13:51:41  <kenansulayman>Where you bind a channel as key?
13:52:37  <hij1nx>pretty much
13:52:40  <hij1nx>similar
13:54:36  * thlorenz quit (Ping timeout: 252 seconds)
13:54:36  * ramitos joined
13:55:37  <hij1nx>i called it level-pubsub but that confused the hell out of people because it didnt delete values after publishing them.
13:55:42  <hij1nx>so technically its not pubsub
13:55:44  * thlorenz_ quit (Ping timeout: 248 seconds)
13:56:03  <hij1nx>even though you publish and subscribe
13:57:45  * thlorenz joined
13:57:49  <hij1nx>s/didnt delete value after publishing/keep everything in memory/
13:59:06  <kenansulayman>hij1nx sorry, call
13:59:20  <kenansulayman>Well that's not necessarily bad
13:59:31  <kenansulayman>PubSub with history
13:59:35  <kenansulayman>hm
14:00:28  <kenansulayman>that could be in fact extremely usable for inter-datacenter communication
14:00:35  <kenansulayman>I'll give it a spin!
14:00:40  <kenansulayman>hij1nx btw 3 days to go ;)
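hij1nx's module above is essentially publish/subscribe where published values are persisted instead of deleted, so late subscribers can replay history. A minimal sketch of that idea (hypothetical, not level-channels' actual API; a plain array stands in for leveldb's ordered key space so the example is self-contained):

```javascript
// Pub-sub with history: messages live under ordered keys (channel!seq)
// and are never deleted, so subscribe() replays the past before
// delivering live publishes.
class ChannelStore {
  constructor() {
    this.rows = [];      // stand-in for leveldb: { key, value } in key order
    this.listeners = {}; // channel -> [callback]
    this.seq = 0;
  }
  publish(channel, value) {
    const key = channel + '!' + String(++this.seq).padStart(6, '0');
    this.rows.push({ key, value }); // persisted, not consumed
    (this.listeners[channel] || []).forEach(fn => fn(value));
  }
  subscribe(channel, fn) {
    // replay history first -- the part that makes it "technically not pubsub"
    this.rows
      .filter(r => r.key.startsWith(channel + '!'))
      .forEach(r => fn(r.value));
    // then receive everything published from now on
    (this.listeners[channel] = this.listeners[channel] || []).push(fn);
  }
}
```

With a real levelup instance the history replay would be a key-range read over `channel!` keys and the live half a hook on puts, which is roughly what level-live-stream automates.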
14:05:16  * jjmalina joined
14:10:15  * eugeneware quit (Ping timeout: 260 seconds)
14:30:40  * kenansulayman quit (Remote host closed the connection)
14:35:29  * eugeneware joined
14:36:42  * kenansulayman joined
14:37:33  * Acconut joined
14:40:05  * Acconut quit (Client Quit)
14:41:46  * brycebaril joined
14:41:51  * esundahl joined
14:44:38  * Acconut joined
14:50:13  * ryan_ramage joined
14:51:01  * thlorenz_ joined
14:51:40  * Acconut quit (Ping timeout: 268 seconds)
14:52:13  * mikeal quit (Quit: Leaving.)
14:55:22  * thlorenz_ quit (Ping timeout: 240 seconds)
14:58:23  * dguttman joined
14:59:59  * julianduque joined
15:00:44  * Acconut joined
15:00:51  * Acconut quit (Client Quit)
15:02:58  * mikeal joined
15:05:51  * jerrysv joined
15:07:17  * ednapiranha joined
15:10:37  * mikeal quit (Quit: Leaving.)
15:10:40  * julianduque quit (Ping timeout: 264 seconds)
15:14:25  * tarruda quit (Ping timeout: 250 seconds)
15:14:53  * kenansulayman quit (Quit: ≈ and thus my mac took a subtle yet profound nap ≈)
15:16:09  * kenansulayman joined
15:38:19  * timoxley quit (Remote host closed the connection)
15:45:35  * timoxley joined
15:50:47  <levelbot>[npm] nitrogen@0.1.44 <http://npm.im/nitrogen>: Nitrogen is a platform for building connected devices. Nitrogen provides the authentication, authorization, and real time message passing framework so that you can focus on your device and application. All with a consistent development platform that leverages the ubiquity of Javascript. (@tpark)
15:58:39  * tarruda joined
15:58:54  * wolfeidau quit (Ping timeout: 264 seconds)
15:59:32  * mikeal joined
15:59:35  <hij1nx>ooh! Is nitrogen built with leveldb?
16:00:19  <ryan_ramage>weird I was just watching that lxjs video on nitrogen 5 min ago
16:01:22  <ryan_ramage>the client does seem to have level as a depend
16:02:14  <hij1nx>its nicely branded
16:05:33  * wolfeidau joined
16:10:00  * rud joined
16:10:00  * rud quit (Changing host)
16:10:00  * rud joined
16:10:00  <ednapiranha>hij1nx: hi!
16:13:31  * frankblizzard quit (Remote host closed the connection)
16:17:14  <mbalho>i open this issue so often https://github.com/nitrogenjs/client/issues/10
16:24:28  <thlorenz>mbalho: I usually do 'on-github-PRs' for these kind of one liner fixes
16:24:56  <mbalho>yea but im lazy this morning
16:26:27  <ednapiranha>mbalho: see you next week!
16:26:31  <thlorenz>mbalho: https://github.com/nitrogenjs/client/pull/11
16:27:11  <mbalho>ednapiranha: WOOOOOT. i get there on the 16th, was gonna hack for a day or two, where do you work out of?
16:27:27  <ednapiranha>moz office
16:27:32  <ednapiranha>i can let you in as a guest
16:27:41  <ednapiranha>on the open area
16:27:47  <mbalho>kewl
16:27:57  <ednapiranha>mbalho: just lmk if you want to do that
16:28:48  <mikeal>ednapiranha: btw, sorry about stepping on your namespace :)
16:29:04  <ednapiranha>mikeal: lol
16:29:06  <ednapiranha>i dont care
16:29:16  <ednapiranha>it's not even related .. but should confuse the hell out of people :)
16:29:36  <mikeal>naming things in node.js === confusing :)
16:30:36  <mikeal>https://npmjs.org/search?q=meat
16:30:55  <mikeal>meat vs nodejs-meat
16:31:15  <ednapiranha>mikeal: obviously you need to create node-space
16:31:24  <ednapiranha>is that even taken? i bet it is
16:31:30  <mbalho>The Nodeiverse
16:31:37  <ednapiranha>the new domain name hoarding
16:31:43  <mbalho>Mikeal Rogers: putting the Ive in Nodeiverse
16:32:00  <ednapiranha>gahhh he didnt name it node-space on his gh! https://npmjs.org/package/space
16:32:02  <ednapiranha>THE HORROR
16:32:08  <ednapiranha>now everything is fucked up!
16:32:09  <mikeal>i've had better luck getting domain names lately than node.js module name
16:32:10  * rud quit (Quit: rud)
16:32:22  <mbalho>https://npmjs.org/package/spaceballs
16:32:26  <ednapiranha>mikeal: starting name modules after rivers
16:32:36  <ednapiranha>node-nile
16:32:42  <mikeal>the one that i actually fuck up all the time is "uuid" vs "node-uuid"
16:32:42  <mbalho>module prefixes are the way to go
16:32:45  <mbalho>like npm sublevels
16:32:48  <mikeal>uuid is a compiled module
16:32:56  <mikeal>node-uuid is pure js and the one you actually want
16:33:02  <mikeal>bleh!
16:33:05  <ednapiranha>there should be a law
16:33:15  <mbalho>i tried to bitch about this 3 years ago and nobody cared
16:33:21  <mikeal>i'm going to prefix all my modules with "best"
16:33:32  <mbalho>or suffix with 2.0
16:33:35  <mikeal>npm install best-timezone
16:33:46  <ednapiranha>mikeal: i started out with a prefix of 'noodle' for a while.. then 'general'.. and now 'meatspace'
16:33:50  <ednapiranha>it's worked out ok so far
16:34:22  <mbalho>isaac convinced me to switch to boring + long descriptive names
16:34:22  <mikeal>i try to find names so general they are confusing OR names that are a strange reference
16:34:26  <mikeal>max prefers PUNs
16:34:33  <mbalho>e.g. http://npmjs.org/lexicographic-integer
16:34:36  <ryan_ramage>hah, 'best' is open as a package, then can make a meta best- depend system
16:34:45  <mikeal>yeah, isaac published a module called http-duplex-client
16:34:49  <ednapiranha>mikeal: prefix with 'wurst'
16:34:52  <mikeal>and then i published an extension
16:34:59  <mikeal>http-duplex-gzip-client
16:35:01  <mikeal>ridiculous!
16:35:06  <mbalho>no thats perfect
16:35:12  <mbalho>super descriptive
16:35:29  <mbalho>authenticated-duplex-stream-abstraction
16:35:46  <ednapiranha>mbalho: mikeal: did you guys sign up for that dinner thing at rtconf?
16:35:50  <ednapiranha>i didn't
16:35:53  <mbalho>the french one
16:35:58  <mbalho>you should go
16:36:03  <mikeal>but what poor bastard has to come along and publish a new duplex gzip client of his own CAUSE SEMICOLONS
16:36:03  <ednapiranha>so priceyyy
16:36:07  <ednapiranha>:)
16:36:09  <mikeal>what does he/she name the new one
16:36:20  <ednapiranha>mbalho: do we get a free hang out time or something?
16:36:29  <mbalho>during the conf? not sure
16:36:29  <mikeal>ednapiranha: which one? i did the friends of Eran Hammer's cute fake name
16:36:37  <ednapiranha>is it sold out yet
16:36:41  <ednapiranha>you guys are peer pressuring me
16:36:58  <mbalho>eat the food, do it
16:37:03  <mbalho>http://i.imgur.com/RV2uz2H.jpg
16:37:06  * insertcoffee quit (Ping timeout: 245 seconds)
16:37:22  <mikeal>well, i figured, i'm not paying for a ticket to the conference, so i can buy the fancy dinner
16:37:32  <mbalho>its 60 bucks for dinner https://tito.io/&yet/realtime-week-2013?release_id=gmahxsotkg8
16:37:34  <mikeal>also, i expense it
16:37:41  <mbalho>which is like 200 bucks in SF dinner dollars
16:37:44  <ednapiranha>i thought it was $200
16:37:53  <mikeal>Guests of Moses Driver is $200
16:38:00  <mikeal>and is sold out
16:38:02  <mbalho>ednapiranha: do option 1
16:38:07  <ednapiranha>the poor person one?
16:38:09  <ednapiranha>lolol
16:38:13  <mikeal>i think Adam opened up a few extra tickets for speakers to buy tho
16:38:16  <mbalho>no its the 'i live in portland' one
16:38:17  <ednapiranha>din din?
16:38:19  <mbalho>ednapiranha: yes
16:38:28  <ednapiranha>DRINKS NOT INCLUDED
16:38:31  * ednapiranha's head explodes
16:38:34  <ednapiranha>it's ok i'll deal
16:38:40  <mikeal>max, you're going to regret not doing the Moses Diver one
16:38:46  <mbalho>i missed the window
16:39:09  <ednapiranha>mbalho: we can invite whoever to the dinners right?
16:39:09  <mbalho>and already bought a din din ticket
16:39:19  <mbalho>ednapiranha: dunno
16:39:30  <ednapiranha>says you can order multiple tickets
16:39:46  <mikeal>oh shit
16:39:56  <mikeal>i need to make sure Anna can get in to this dinner
16:40:06  <mikeal>it won't be good for my marriage if i eat a fancier dinner than my wife
16:40:07  <ednapiranha>mikeal: try ordering another ticket
16:40:10  <ednapiranha>and i'll order two too
16:40:12  <mikeal>it is sold out
16:40:17  <ednapiranha>din din!
16:40:18  <ednapiranha>is not
16:40:19  <mbalho>lol
16:40:24  <mikeal>i'm doing moses diver
16:40:38  <ednapiranha>fine mikeal. FINE
16:40:40  <ednapiranha>see if we care.
16:40:48  <mbalho>http://obiedog.com/
16:40:54  <mbalho>#portlandia
16:41:43  <ednapiranha>i should probably just email adam or the organizers
16:41:44  <ednapiranha>and verify
16:41:47  <ednapiranha>im sure it's ok
16:41:47  <mikeal>i'm pinging adam about it
16:41:51  <ednapiranha>cool
16:41:55  <ednapiranha>i'll just wait for you to respond
16:41:55  <ednapiranha>lol
16:42:03  <mikeal>well, mine is just to see if i can sneak in my wife
16:42:10  <mikeal>you may want to send him your own thing :)
16:42:21  <ednapiranha>mikeal: can you ask him if we can invite whoever to the dinners if we buy an extra ticket?
16:42:22  <ednapiranha>dammit
16:42:49  <mikeal>i wonder if the SO track tickets include the dinner allotment ?
16:42:54  <mikeal>so confused!
16:43:34  * ramitos quit (Quit: Computer has gone to sleep.)
16:43:41  <ednapiranha>yeah cuz .. my SO lives in pdx
16:43:46  <ednapiranha>i dont think he needs to see anything here
16:44:11  <mikeal>actually… doing a tourist thing of your own city is super fun
16:44:18  <mikeal>but yeah, not strictly necessary :)
16:44:21  <mbalho>ednapiranha: have you taken the tram to ohsu yet
16:44:33  <ednapiranha>mbalho: i have! but in 2010 or something
16:46:25  <ednapiranha>mikeal: yeah it's not clear on the website whether SO is allowed in on the dinner outside of the track
16:47:01  <jerrysv>mikeal: the food at mwl is also very good
16:47:19  <jerrysv>so if you need to help up the wife's game ...
16:49:43  <levelbot>[npm] level-channels@0.1.1 <http://npm.im/level-channels>: A simpler way to selectively stream future or historic changes in leveldb. (@hij1nx)
16:49:55  * insertcoffee joined
16:50:32  <mikeal>there are other issues
16:51:03  <mikeal>like the fact that my wife is friends with Eran's husband, so even if i get us in to another dinner she's going to hear all about how amazing this dinner was and how it's my fault we didn't go :)
16:51:53  <jerrysv>hm. not sure i can help you there then. if you get a chance, raven and rose (and the rookery upstairs) are also phenomenal
16:52:17  * thlorenz_ joined
16:54:18  <ednapiranha>mikeal: mbalho: what about the thurs night party
16:54:22  <ednapiranha>can we invite people there?
16:54:36  * tmcw quit (Remote host closed the connection)
16:55:02  <mikeal>ednapiranha: yeah, SO's are always welcome at the parties
16:55:11  * tmcw joined
16:55:21  <ednapiranha>sweet
16:55:21  <ednapiranha>ok
16:55:30  * tmcw quit (Read error: Connection reset by peer)
16:55:52  * tmcw joined
16:56:27  * thlorenz_ quit (Ping timeout: 248 seconds)
17:01:36  * mikeal quit (Quit: Leaving.)
17:05:30  <jerrysv>ednapiranha: you're in pdx now, right?
17:05:47  <ednapiranha>yep
17:05:47  * tarruda quit (Ping timeout: 250 seconds)
17:07:30  * tmcw quit
17:08:16  <jerrysv>do you hit any of the beer bars with other mozilla folks?
17:08:32  <ednapiranha>jerrysv: not yet unfortunately :\
17:08:46  <jerrysv>you should change that :)
17:09:29  <ednapiranha>i live by amnesia brewing though so i go there from time to time
17:09:32  <ednapiranha>:)
17:10:27  <jerrysv>i rarely make it outside of downtown on a regular basis at this point
17:10:47  <jerrysv>but our whole team will be at RTC
17:13:21  <ednapiranha>dammit mikeal
17:13:28  <ednapiranha>mbalho: when he gets back tell him SO IS OK FOR DINNERS
17:13:32  <ednapiranha>without the track thing
17:18:01  * ramitos joined
17:27:00  * ednapira_ joined
17:27:26  * ednapiranha quit (Read error: Connection reset by peer)
17:28:19  * ednapira_ changed nick to ednapiranha
17:32:22  * ramitos quit (Quit: Computer has gone to sleep.)
17:49:04  * rud joined
17:49:04  * rud quit (Changing host)
17:49:04  * rud joined
17:49:37  * jerrysv quit (Remote host closed the connection)
17:53:03  * mikeal joined
17:54:12  * ramitos joined
18:02:29  * dominictarr joined
18:17:03  * jxson joined
18:22:59  * brycebaril is also going to be at rtc
18:23:04  * thlorenz_ joined
18:23:44  <brycebaril>I managed to get into the Dinner with Moses, though I'm gonna have to turn some tricks or something to be able to afford it.
18:27:31  * thlorenz_ quit (Ping timeout: 245 seconds)
18:30:12  * jxson quit (Remote host closed the connection)
18:30:24  <mbalho>brycebaril: lol
18:30:32  <mbalho>realtimeconf, AKA ##leveldb IRL
18:30:47  <ednapiranha>mbalho: lol
18:31:51  <brycebaril>helps that it's a short drive from Seattle and I can crash at my parent's place while there :)
18:36:55  * jxson joined
18:38:45  * rud quit (Quit: rud)
18:39:20  * jxson_ joined
18:39:52  * jxson_ quit (Read error: Connection reset by peer)
18:40:07  * jxson_ joined
18:40:16  * jxson_ quit (Remote host closed the connection)
18:40:44  * jxson_ joined
18:41:12  * jxson quit (Read error: Connection reset by peer)
18:53:11  * jxson_ changed nick to jxson
18:55:52  * insertcoffee quit (Ping timeout: 268 seconds)
19:03:47  <mbalho>oh yea i need to find a place to crash one of the nights i think
19:08:47  * esundahl quit (Read error: Connection reset by peer)
19:08:58  * esundahl joined
19:20:16  * ednapiranha quit (Remote host closed the connection)
19:22:36  * Acconut joined
19:22:49  * Acconut quit (Client Quit)
19:23:43  * thlorenz_ joined
19:27:56  * thlorenz_ quit (Ping timeout: 245 seconds)
19:34:26  * dominictarr quit (Quit: dominictarr)
19:36:59  * fallsemo joined
19:45:03  * jcrugzz joined
19:49:34  * dominictarr joined
19:51:16  * ednapiranha joined
19:53:14  <thlorenz>dominictarr: having problems with live stream - getting a stack overflow inside json-buffer on JSON.stringify
19:53:36  <thlorenz>have you seen something like this before?
19:55:06  <dominictarr>it's probably a circular reference
19:55:14  <dominictarr>can you add a test case?
19:55:39  <dominictarr>thlorenz: ^
19:56:22  <thlorenz>dominictarr: I think so, hard to track it down since even catching the error results in another error :(
19:56:58  <thlorenz>dominictarr: who would generate the circ. reference - live-stream itself?
19:57:00  <dominictarr>hmm, can you stringify your data with normal JSON?
19:57:09  <thlorenz>it's not my data
19:57:21  <dominictarr>well, "the data"
19:57:35  <thlorenz>my data works fine without live-stream, it's just when I install it and then use live stream with multilevel that I run into it
19:58:13  <thlorenz>node.js:204
19:58:14  <thlorenz> global.__defineGetter__('console', function() {
19:58:42  <thlorenz>dominictarr: is what I'm getting when trying to catch the error and logging it
19:59:12  <thlorenz>with a max stack-size exceeded - guess I need to log earlier ;)
19:59:29  <dominictarr>do you have a runnable example?
19:59:36  * ednapiranha quit (Ping timeout: 245 seconds)
19:59:54  <thlorenz>not really, you'd need to have my data - I'll see if I can isolate it
20:02:21  <thlorenz>dominictarr: ah - looks like it happens due to sublevel
20:02:25  <thlorenz>the _parent ;)
20:02:51  <thlorenz>that'll cause some circularity (if that is a word)
20:03:17  <dominictarr>can you make a script to reproduce?
20:03:42  <thlorenz>dominictarr: I'll try - is this how it is meant to be used though?
20:03:57  <thlorenz>or should I just install a live stream on each sublevel?
20:04:17  <dominictarr>I'm not sure what you are trying to do.. at least, post an issue with your example code
20:04:32  <dominictarr>yes - you should install live on each sublevel
20:06:58  <thlorenz>dominictarr: I may be using it wrong - was doing install(db) which had multiple sublevels
20:07:30  <thlorenz>I'll try to install on each sublevel and see if that fixes it, otherwise I'll make a repro script
20:07:50  <dominictarr>thlorenz: cool
20:15:45  * gwenbell joined
20:18:09  * ednapiranha joined
20:21:44  <thlorenz>dominictarr: working great now - so I guess was my mistake
20:22:02  <dominictarr>ah, okay cool
20:22:07  <thlorenz>maybe we can add a big NOTE to not install a live stream on a root db with sublevels
20:22:13  <thlorenz>thanks for your help :)
20:23:58  <dominictarr>can you post an issue with the code you were attempting to use before?
20:24:19  <thlorenz>I can make an example - the code is way too big
20:28:23  * esundahl quit (Remote host closed the connection)
20:28:49  * esundahl joined
20:31:37  <thlorenz>dominictarr: https://github.com/dominictarr/level-live-stream/issues/7
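The stack overflow thlorenz traced comes from sublevel's `_parent` back-reference: the root db object holds its sublevels and each sublevel points back at the root, so any naive serialization walks the cycle forever. A stand-alone reproduction of just that shape with plain objects (no leveldb needed; the property names mirror what the discussion above describes, not sublevel's exact internals):

```javascript
// Root db holding a sublevel that points back at it, as sublevel's
// _parent does -- JSON.stringify cannot serialize this shape.
const root = { name: 'root', sublevels: {} };
const posts = { name: 'posts', _parent: root }; // back-reference
root.sublevels.posts = posts;

function isCircular(obj) {
  try {
    JSON.stringify(obj);
    return false;
  } catch (e) {
    return /circular/i.test(e.message); // "Converting circular structure to JSON"
  }
}
```

This is why installing the live stream on each sublevel worked in the end, while installing it on the root db that owns the sublevels did not.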
20:33:23  * esundahl_ joined
20:33:28  * esundahl quit (Ping timeout: 264 seconds)
20:34:40  * tmcw joined
21:10:45  * gwenbell quit (Quit: Lost terminal)
21:37:04  * dominictarr quit (Quit: dominictarr)
21:40:47  * ramitos quit (Quit: Textual IRC Client: www.textualapp.com)
21:47:35  * rud joined
21:57:13  <levelbot>[npm] level-json-edit@0.0.1 <http://npm.im/level-json-edit>: Taking editing json to the next level with multilevel. (@thlorenz)
22:04:16  * mikeal quit (Quit: Leaving.)
22:05:13  * mikeal joined
22:13:36  * thlorenz quit (Remote host closed the connection)
22:23:15  * ramitos joined
22:24:56  * mikeal quit (Quit: Leaving.)
22:28:27  * mikeal joined
22:32:20  <jmartins>hi
22:32:42  <kenansulayman>h
22:33:00  <jmartins>which npm level index is recommended to use with multilevel?
22:33:21  <kenansulayman>Could you elaborate?
22:35:10  * alanhoff joined
22:36:11  <alanhoff>We are looking for a good index engine for leveldb, does someone have a tip?
22:36:39  <jmartins>I need to use an index with multilevel and I would like to know if there is a performance difference between level-indico, subindex and node-level-mapped-index?
22:36:45  <kenansulayman>alanhoff There's that Eugene creation and mine
22:37:05  <alanhoff>kenansulayman, which one is yours?
22:37:13  <kenansulayman>namequery
22:37:21  <alanhoff>let me take a look
22:38:24  <kenansulayman>It's primarily for indexing an ID using "free text"
22:38:52  <kenansulayman>So if you're going to want to index whole documents with other stuff, Eugeneware's might be better suited
22:38:56  <alanhoff>Does it only have text search?
22:39:09  <kenansulayman>What do you mean?
22:39:41  <alanhoff>"nq.search" only search for text patterns?
22:40:09  <kenansulayman>jmartins I'm not too deep with that. But I'm sure mbalho or eugeneware
22:41:09  <kenansulayman>alanhoff nq.search allows search like "Beautiful Beast" and will yield all matching results (id, matching terms, sift3 distance to the search query)
22:41:31  <alanhoff>hmm k
22:41:42  <kenansulayman>alanhoff given it was indexed alike. Let's say we index 123kenansulayman as "Woodchoppers Paradise"
22:42:08  <kenansulayman>It would map W, Wo, Woo, Wood..., Woodchoppers
22:42:11  <kenansulayman>match*
22:42:25  <kenansulayman>(same applies to paradise of course)
22:42:31  <kenansulayman>What are you trying to deploy? :)
22:42:38  <alanhoff>Understood, thx
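The indexing scheme kenansulayman sketches above -- every prefix of every indexed term maps back to the id -- can be written down as a small key generator. The `prefix\x00id` key layout here is an assumption for illustration, not namequery's actual on-disk format:

```javascript
// Emit one index key per prefix of each term, each pointing at the id.
// Indexing '123kenansulayman' as "Woodchoppers Paradise" yields keys for
// w, wo, woo, ..., woodchoppers and p, pa, ..., paradise.
function prefixKeys(id, text) {
  const keys = [];
  for (const term of text.toLowerCase().split(/\s+/)) {
    for (let i = 1; i <= term.length; i++) {
      keys.push(term.slice(0, i) + '\x00' + id); // prefix -> id
    }
  }
  return keys;
}
```

Batch-putting these keys into leveldb turns an as-you-type lookup into a single key-range scan on whatever prefix the user has typed so far.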
22:43:49  * thlorenz joined
22:44:40  <alanhoff>kenansulayman, we are trying to deploy an app, and we are thinking of adopting leveldb as its main db
22:45:03  <kenansulayman>"We are looking for some good index engine for leveldb"
22:45:15  <kenansulayman>What are you trying to do I mean :)
22:45:35  <alanhoff>haha :D
22:46:15  <alanhoff>We are trying to index our database (leveldb) to speed up queries
22:46:59  <kenansulayman>... what type of input and what are you trying to index?
22:47:14  <mbalho>chrisdickinson: question for you --- im trying to decide on an on-disk format for rows of data for dat
22:47:26  <mbalho>chrisdickinson: http://npmjs.org/multibuffer is one option
22:47:32  <alanhoff>mostly user profiles, names and data ranges
22:48:01  <alanhoff>all json
22:48:06  <mbalho>chrisdickinson: but it means for a csv row like foo,bar,baz it would add 4 bytes for each of the 3 byte cells (since it has a 4 byte precision length integer)
22:48:38  <mbalho>chrisdickinson: part of me wants to just convert everything to csv and store csv on disk
22:49:16  <mbalho>chrisdickinson: basically i see 2 options: delimited or framed
22:49:27  <mbalho>chrisdickinson: i wanted a sanity check to see if you had other ideas
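The overhead mbalho is weighing is easy to make concrete: length-prefixed framing spends a fixed 4 bytes per cell (multibuffer-style, per the figures in the discussion), while csv spends one delimiter byte but must scan for it. A toy sketch of the framed side, not multibuffer's actual implementation:

```javascript
// Frame each cell as a 4-byte big-endian length followed by the bytes.
function pack(cells) {
  const parts = [];
  for (const cell of cells) {
    const data = Buffer.from(cell);
    const len = Buffer.alloc(4);
    len.writeUInt32BE(data.length, 0); // fixed 4-byte prefix per cell
    parts.push(len, data);
  }
  return Buffer.concat(parts);
}

// Walk the frames back out: no scanning for delimiters, just length hops.
function unpack(buf) {
  const cells = [];
  let pos = 0;
  while (pos < buf.length) {
    const len = buf.readUInt32BE(pos);
    cells.push(buf.slice(pos + 4, pos + 4 + len).toString());
    pos += 4 + len;
  }
  return cells;
}
```

For `foo,bar,baz` this produces 3 × (4 + 3) = 21 bytes against csv's 11, which is the 4-bytes-per-3-byte-cell cost mentioned above.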
22:49:31  <kenansulayman>alanhoff You might fit with eugeneware's implementation then
22:49:44  * tmcw quit (Remote host closed the connection)
22:49:49  <alanhoff>k, I'll take a look
22:50:29  <kenansulayman>alanhoff https://npmjs.org/package/fulltext-engine
22:50:57  <kenansulayman>We're deploying namequery since we identify users by id
22:51:21  <kenansulayman>then rank by mutual connections and the distance to the users query
22:52:40  * thlorenz quit (Ping timeout: 264 seconds)
22:55:26  * ednapiranha quit (Remote host closed the connection)
22:58:55  <chrisdickinson>mbalho: hm, hm. does dat have a conception of data types?
22:59:44  <chrisdickinson>also, i'm not too familiar with dat-in-the-large -- i seem to recall it was sort of like git / the cypherlinks stuff (or i might be remembering wrong)?
23:00:24  * alanhoff quit (Remote host closed the connection)
23:01:09  <chrisdickinson>also: could you use something with a varint size specifier to pack instead of a dedicated 4 bytes?
23:01:11  <chrisdickinson>http://npm.im/varint
23:01:43  * fallsemo quit (Quit: Leaving.)
23:06:16  * jjmalina quit (Quit: Leaving.)
23:11:16  * ryan_ramage quit (Ping timeout: 264 seconds)
23:16:44  <mbalho>chrisdickinson: i wrote up my thoughts here https://github.com/maxogden/dat/blob/master/notes.md
23:17:29  <mbalho>chrisdickinson: you can think of dat as a thing that stores excel spreadsheets with many many rows and then handles synchronization for updates
23:17:58  <mbalho>chrisdickinson: you wouldnt store blobs/large binary data, all of the data is going to be spreadsheet cell sized, so usually pretty small, but there will be many rows
23:19:25  <mbalho>im checking out varint now
23:23:19  * esundahl_ quit (Remote host closed the connection)
23:23:46  * esundahl joined
23:27:06  <brycebaril>mbalho: one other option you could consider is some sort of fixed-width format with a header. You lose the streaming, though.
23:28:00  * esundahl quit (Ping timeout: 248 seconds)
23:28:22  <mbalho>i should really benchmark this but i have a gut feeling that having to scan all my rows for delimiters every time won't be that bad
23:28:46  <mbalho>i guess ultimately its a tradeoff between disk space and speed, so i should probably err on the side of being greedy with disk space to get more speed
23:29:29  <brycebaril>True. I bet multibuffers where the columns are typically similar lengths compresses almost equivalently to csv
23:30:30  * fallsemo joined
23:31:02  <brycebaril>If you don't want to support fields over 64kB it'd be pretty easy to make a 16 bit multibuffer, too. My biggest worry with that is having two specs & no easy way to immediately differentiate them
23:31:43  <mbalho>yea i was thinking of that too
23:32:23  <mbalho>and i dont wanna impose a cell length
23:33:05  <brycebaril>Yep. I considered making a 24bit version of buffer.readUInt32BE but decided that wouldn't save much
23:33:57  <mbalho>im trying to wrap my head around varint
23:34:12  <mbalho>> varint.encode(127)
23:34:13  <mbalho>[ 127 ]
23:34:21  <mbalho>varint.encode(5000)
23:34:21  <mbalho>[ 136, 39 ]
23:38:25  * rud quit (Quit: rud)
23:38:36  <mbalho>ah so if you set the 256th bit it knows to terminate i guess
23:38:56  <mbalho>err 128th
23:39:47  <brycebaril>Hmm, I'm not sure that varints will help for this though, it's almost going to end up a combination of scanning for delimiters + framing
23:40:38  * eugeneware quit (Remote host closed the connection)
23:40:45  * eugeneware joined
23:40:45  <mbalho>its a nice middle ground though, because the same approach works for small and large buffers (i think)
23:44:12  <mbalho>like if i went with either framing or delimited, it would work well with small or large but not both
23:46:35  <mbalho>brycebaril: can you see any argument why multibuffer couldn't use varint?
23:47:53  <mbalho>the perf difference is that you can slice(4) up front with the current implementation to get the length but with varint you'd have to read one byte at a time until the varint returned. think its worth benchmarking?
23:51:01  <brycebaril>How does varint know when it is done?
23:52:08  <mbalho>i think, though i may be wrong here, that for every 128 bytes it receives it uses the 128th bit as a done flag
23:52:16  <mbalho>sorry 128 bits
23:52:35  <rvagg>while (b >= 0x80)
23:52:44  <rvagg>if less than that then it's the last byte to consume
23:53:12  <mbalho>ohhh
23:53:18  <mbalho>that makes sense
23:53:40  <rvagg>here's my impl: https://github.com/rvagg/leveljs-coding/blob/master/protobuf.js#L20-L35
23:53:49  <rvagg>there's a 64-bit int version above that
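rvagg's `while (b >= 0x80)` is the whole decoder: each byte carries seven payload bits and the high bit flags "more bytes follow", so the last byte of a number is the first one under 0x80. A compact sketch of the scheme, written to reproduce the `[ 127 ]` and `[ 136, 39 ]` outputs mbalho pasted earlier:

```javascript
// Base-128 varint: 7 payload bits per byte, high bit = continuation.
function encodeVarint(n) {
  const out = [];
  while (n >= 0x80) {
    out.push((n & 0x7f) | 0x80); // low 7 bits with continuation flag set
    n >>>= 7;
  }
  out.push(n); // final byte has the high bit clear, terminating the number
  return out;
}

function decodeVarint(bytes) {
  let n = 0, shift = 0, i = 0, b;
  do {
    b = bytes[i++];
    n += (b & 0x7f) * Math.pow(2, shift); // multiply, not <<, past 32 bits
    shift += 7;
  } while (b >= 0x80); // rvagg's loop condition: keep consuming
  return n;
}
```

So a multibuffer-style frame using varint lengths would cost one prefix byte per cell up to 127 bytes, two up to 16383, and so on, instead of a flat four.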
23:54:41  <mbalho>nice
23:55:03  <rvagg>http://www.reactiongifs.com/wp-content/uploads/2013/10/woah.gif
23:55:16  <mbalho>haha
23:56:13  <rvagg>heh, last commit on that file has the desc: "bugfix! .. can't explain this one" .. super professional
23:56:56  <mbalho>lol