00:19:03  * ralphtheninja quit (Ping timeout: 276 seconds)
00:19:10  * ralphtheninja joined
02:15:25  * jerrysv quit (Remote host closed the connection)
02:16:04  * jerrysv joined
02:16:48  * jerrysv_ joined
02:20:26  * jerrysv quit (Ping timeout: 244 seconds)
02:23:14  * jerrysv_ quit (Remote host closed the connection)
02:23:51  * jerrysv joined
03:10:40  * jerrysv_ joined
03:13:54  * jerrysv quit (Ping timeout: 276 seconds)
03:15:02  * jerrysv_ changed nick to jerrysv
03:26:22  * jerrysv quit (Remote host closed the connection)
13:22:03  * topi` joined
13:22:43  <topi`> how does Snappy work inside LevelDB? If I store a lot of tiny bits of JSON data in LevelDB, is Snappy able to "cross the boundaries", so to speak, and hence compress better?
13:22:59  <topi`> let's say I'm storing temperature data, like {"temperature":20.0}
13:23:16  <topi`> that's a lot of bytes when you only need the "20.0" bit
15:08:17  <lennon> topi`: afaik snappy compresses the files (so lots of records at once), so I guess it can compress those JSON objects very well
15:09:23  <lennon> also, snappy aims for speed, not compression ratio
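A minimal sketch of the behaviour lennon describes, assuming the Node.js level package (a recent release with a promise API): LevelDB groups records into table blocks of roughly 4 KB and runs Snappy over each block as a unit, so the repeated "temperature" field name across many tiny records is compressed together. The database path and key scheme below are invented for illustration.

    // Store many tiny JSON readings; Snappy (on by default in LevelDB)
    // compresses each ~4 KB table block as one unit, so the repeated
    // '{"temperature":' prefix across records compresses away.
    const level = require('level')

    async function main () {
      const db = level('./temps')
      for (let i = 0; i < 10000; i++) {
        await db.put('reading!' + String(i).padStart(6, '0'),
                     JSON.stringify({ temperature: 20.0 + (i % 10) / 10 }))
      }
      await db.close()
    }

    main()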
18:01:31  * serapath joined
18:37:18  * dguttman joined
18:41:46  <jameskyburz> mafintosh: I am creating a new module using an rpc solution along the lines of multileveldown, using protocol-buffers. Do you have time for some questions?
19:13:38  <mafintosh> jameskyburz: shoot
19:23:40  <jameskyburz> mafintosh: thanks! I am writing an rpc event emitter using ltgt ranges, using the same internals as multileveldown: subscribe(range, (key) => {}), publish(key)
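A rough sketch of the subscribe/publish shape jameskyburz describes, using the ltgt module's contains() to match a published key against each subscriber's range; the in-memory subscription list and key scheme here are assumptions, not his actual module.

    // Match published keys against ltgt-style ranges ({ gte, lt, ... }).
    const ltgt = require('ltgt')

    const subscriptions = []

    function subscribe (range, onKey) {
      subscriptions.push({ range, onKey })
    }

    function publish (key) {
      for (const sub of subscriptions) {
        // ltgt.contains(range, key) tests the key against the range bounds
        if (ltgt.contains(sub.range, key)) sub.onKey(key)
      }
    }

    subscribe({ gte: 'reading!', lt: 'reading!\xff' }, key => console.log('got', key))
    publish('reading!000001') // logs: got reading!000001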
19:27:17  <jameskyburz> mafintosh: All is going to plan except for encoding. When the messages are encoded with protocol-buffers, ranges and keys end up as buffers when decoded by the server.
19:28:47  <mafintosh> jameskyburz: is your schema using bytes as the type?
19:28:53  <jameskyburz> mafintosh: I have looked into level-codec for using the same key encoding as the leveldb instance; however, on the server side there is no decode for buffers.
19:29:05  <jameskyburz> mafintosh: yes, the schema is using bytes
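To illustrate the problem being described, a small sketch with mafintosh's protocol-buffers module: a field declared as bytes always decodes to a Buffer, so string keys lose their type by the time the server sees them. The message and field names are invented here.

    const protobuf = require('protocol-buffers')

    const messages = protobuf(`
      message Publish {
        required bytes key = 1;
      }
    `)

    // Encode a key (as a Buffer, since the field type is bytes)...
    const wire = messages.Publish.encode({ key: Buffer.from('reading!000001') })

    // ...and the decoder hands back a Buffer, which is what the server sees.
    const decoded = messages.Publish.decode(wire)
    console.log(Buffer.isBuffer(decoded.key)) // true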
19:30:05  <jameskyburz> mafintosh: I will only be using utf-8 keys, but thought it would be nice to publish a module where binary-key pubsub was also possible.
19:30:51  <mafintosh> jameskyburz: in multileveldown i just always use buffers and use a leveldown instead of a levelup
19:31:17  <mafintosh> jameskyburz: then you don't need to worry about any encoding
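A sketch of the approach mafintosh is describing, assuming the leveldown package: talking to leveldown directly keeps keys and values as raw Buffers, with no codec layer to round-trip through. The path and key/value here are made up.

    const leveldown = require('leveldown')

    const db = leveldown('./raw')

    db.open(function (err) {
      if (err) throw err
      db.put(Buffer.from('a-key'), Buffer.from('a-value'), function (err) {
        if (err) throw err
        // leveldown returns Buffers by default (asBuffer: true)
        db.get(Buffer.from('a-key'), function (err, value) {
          if (err) throw err
          console.log(Buffer.isBuffer(value)) // true
        })
      })
    })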
19:35:19  <jameskyburz> mafintosh: :) My idea is to have this publish/subscribe using ltgt ranges but not tied to level. The problem is decoding publish(key) and subscribe(range) messages when they are sent as buffers.
19:36:38  <mafintosh> ah okay.
19:37:13  <mafintosh> jameskyburz: and you cannot just toString the buffer? :)
19:39:11  <jameskyburz> mafintosh: I can, with the exception of json and custom encodings. Am wondering whether the best solution isn't using string types :)
19:41:45  <mafintosh> jameskyburz: so the cool thing about protobuf is that strings and bytes are encoded the same way on the wire
19:42:11  <mafintosh> jameskyburz: so you can always use strings now and change it later
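mafintosh's point can be checked directly: protobuf puts string and bytes on the wire with the same length-delimited encoding, so a message written under one schema decodes under the other. The message and field names below are made up.

    const protobuf = require('protocol-buffers')

    const asBytes  = protobuf('message Key { required bytes  key = 1; }')
    const asString = protobuf('message Key { required string key = 1; }')

    // Encode with the bytes schema, decode with the string schema:
    const wire = asBytes.Key.encode({ key: Buffer.from('reading!000001') })
    console.log(asString.Key.decode(wire).key) // 'reading!000001' (a string now)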
19:42:28  <jameskyburz> mafintosh: awesome, thanks!!!!
19:45:42  <jameskyburz> mafintosh: Am loving playing with multileveldown. I tried an experiment using the server code in the browser (using memdown) and the client code on the server. Worked like a charm :) just for fun... Also, thanks to your length-prefixed-stream and such, I managed to have a double duplex stream over the same websocket. Not sure if it works 100% yet, but the fact it worked at all is pretty cool!
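What jameskyburz describes could look roughly like this sketch with mafintosh's length-prefixed-stream: tag every frame with a channel byte so two logical streams share one transport. The channel-byte framing is a guess at the idea, not his actual code, and the pipe below stands in for the real websocket.

    const lps = require('length-prefixed-stream')

    const encoder = lps.encode()
    const decoder = lps.decode()
    encoder.pipe(decoder) // stand-in for the shared websocket

    function send (channel, payload) {
      // one varint-length-prefixed frame: [channel byte][payload]
      encoder.write(Buffer.concat([Buffer.from([channel]), Buffer.from(payload)]))
    }

    decoder.on('data', function (frame) {
      const channel = frame[0]
      const payload = frame.slice(1).toString()
      console.log('channel %d: %s', channel, payload)
    })

    send(0, 'multileveldown rpc')   // e.g. the db stream
    send(1, 'pubsub notification')  // e.g. the event-emitter stream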
19:46:50  <jameskyburz> mafintosh: Thanks for your help; will use string types for now and forget about encoding until I need to!!
23:45:35  * jerrysv_ joined