00:00:51  * eugenewarequit (Ping timeout: 260 seconds)
00:03:42  * esundahlquit (Ping timeout: 264 seconds)
00:06:28  * fallsemoquit (Quit: Leaving.)
00:10:15  * timoxleyquit (Remote host closed the connection)
00:12:45  * timoxley_joined
00:14:40  * i_m_caquit (Ping timeout: 256 seconds)
00:16:47  * mikealquit (Quit: Leaving.)
00:22:44  <mbalho>recommendations on how to get batch sizes? im trying to figure out if i'm exceeding my writeBufferSize
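One way to estimate a batch's size in bytes before writing it, to compare against writeBufferSize; a minimal sketch, and the helper name is made up (not a levelup API), assuming string or Buffer keys/values:

    // hypothetical helper: sum key + value bytes of a batch's operations
    // to compare against the db's writeBufferSize
    function batchBytes (ops) {
      return ops.reduce(function (sum, op) {
        return sum +
          Buffer.byteLength(String(op.key)) +
          (op.value ? Buffer.byteLength(String(op.value)) : 0)
      }, 0)
    }

    var ops = [{ type: 'put', key: 'a', value: 'hello' }]
    console.log(batchBytes(ops)) // 6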
00:23:20  * timoxley_quit (Remote host closed the connection)
00:27:09  * kenansulaymanquit (Quit: ∞♡∞)
00:34:25  * thlorenzjoined
00:38:55  * thlorenzquit (Ping timeout: 264 seconds)
00:42:31  * soldairquit (Quit: Page closed)
00:44:33  * dguttmanquit (Quit: dguttman)
00:46:54  * jxsonquit (Remote host closed the connection)
00:52:36  <Raynos>rvagg: levelup does not fsync by default. If fsync fails (i.e. the value being put at a key was not actually committed to the database), how do I get an error for that failure?
00:52:51  <Raynos>I genuinely don't understand how one handles errors if you dont wait for fsync
00:54:58  * timoxleyjoined
00:58:32  * thlorenzjoined
01:04:43  * chapelquit (Ping timeout: 264 seconds)
01:11:28  * eugenewarejoined
01:16:55  * jcrugzz_joined
01:19:40  * jcrugzzquit (Ping timeout: 264 seconds)
01:27:33  * eugenewarequit (Remote host closed the connection)
01:28:37  <rvagg>Raynos: all you have available to you is the 'sync' flag for writes (batch and put and writestream)
01:28:55  <rvagg>if you don't use fsync then you're probably going to have to wait for an error from the next i/o operation that has a problem
01:29:29  <rvagg>apart from that... I'm not sure if errors bubble up from the OS when you don't do an fsync
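For reference, a minimal levelup sketch of the 'sync' flag rvagg describes; the option is passed through to LevelDB's write options:

    var levelup = require('levelup')
    var db = levelup('./db')

    // sync: true -> callback fires only after LevelDB has asked the OS
    // to fsync; sync: false (the default) returns once the OS has the data
    db.put('key', 'value', { sync: true }, function (err) {
      if (err) return console.error('write (or fsync) failed:', err)
    })

    // the same flag applies to batch()
    db.batch([{ type: 'put', key: 'k2', value: 'v2' }], { sync: true },
      function (err) {
        if (err) return console.error(err)
      })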
01:29:54  <brycebaril>I bet you could test it pretty easily
01:30:22  <brycebaril>just start writing every few ms and in the middle of that change the write perms on the db
01:30:22  <rvagg>mbalho: Riak chooses a random writeBuffer between 30M and 60M for each of its nodes
01:30:37  <rvagg>mbalho: so feel free to go up to that order of magnitude; seems a little high to me but they've obviously found that it works
01:30:42  * esundahljoined
01:30:51  <rvagg>I probably wouldn't push beyond 128M unless you can prove perf improvement
01:31:13  <rvagg>mbalho: one minor thing about write buffer size is that it increases the time it takes to *open* a database the next time because it does a flush on open
01:31:35  <rvagg>s/128/64
01:31:42  <rvagg>not sure why I said 128...
01:32:02  * esundahlquit (Remote host closed the connection)
01:32:59  * eugenewarejoined
01:34:07  <rescrv>brycebaril, rvagg: OS errors bubble up and are returned on subsequent writes
01:34:43  <rescrv>brycebaril: I found 16M to be a sweet-spot for HyperLevelDB and LevelDB. Basho's design needs a little higher write buffer.
01:34:48  * thlorenz_joined
01:35:08  <rescrv>Anything above 64M is useless as the implementation will internally cap it at ~64MB
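Putting those numbers together, a sketch of setting writeBufferSize at open time; 16M is rescrv's suggested sweet spot, and per the discussion above anything past ~64M is capped internally:

    var levelup = require('levelup')

    // writeBufferSize is an open option, in bytes; benchmark before going
    // higher, and remember a bigger buffer slows the *next* open
    // (LevelDB flushes the log on open)
    var db = levelup('./db', { writeBufferSize: 16 * 1024 * 1024 })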
01:35:30  * jmartinsquit (Ping timeout: 264 seconds)
01:35:37  <brycebaril>rescrv: so in hyperdex does it always sync (or default to fsync)?
01:36:53  <rescrv>Raynos: by default, LevelDB will have passed the operation off to the OS for flushing. It's not a guarantee (even with fsync, you can have gaps if the fs is bad), but it's "good enough" for some apps.
01:37:43  <rescrv>brycebaril: HyperDex does not employ local fsync. Fsync provides durability on one node, which is a useful property, but HyperDex provides f-fault-tolerance, which is a stronger guarantee.
01:38:07  <rescrv>Of course, it'd be trivial to enable it.
01:38:27  <brycebaril>interesting, is there a paper about f-fault-tolerance I can check out?
01:39:14  <rescrv>We're in talks with some people researching nvram (and who have contacts with big vendors, who are willing to provide early prototypes) and will be leveraging those contacts to push the bleeding edge further.
01:39:17  * thlorenz_quit (Ping timeout: 248 seconds)
01:41:03  <rescrv>brycebaril: there's no one paper about "f-fault-tolerance," primarily because it's a convention used in almost all fault tolerance papers. You can think of it as the answer to the question, "When can I remain available?"
01:41:21  * alanhoffquit
01:41:38  <rescrv>If you want to remain available despite $f$ faults, you'll need to have N nodes where N is some function of $f$.
01:41:53  * alanhoffjoined
01:41:58  <brycebaril>Ahh, ok.
01:42:12  <rescrv>For example, Paxos, Raft, ZooKeeper, all are able to remain available in the presence of $f$ failures if you have $2f + 1$ servers.
01:42:18  <rescrv>HyperDex's coordinator is the same way
01:43:05  <brycebaril>So I guess my next question still applies -- does it acknowledge writes prior to ensuring the adequate number of copies?
01:43:09  <rescrv>For HyperDex itself, we replicate within partitions. To safeguard data, you need $f + 1$ nodes. Any $f$ can fail, and there'll still be one node remaining with your data, and that node is available.
01:43:46  <rescrv>brycebaril: It guarantees that the data has made it to all $f + 1$ nodes (except in the active failure case, where up to $f$ of the nodes are failed).
01:43:58  <rescrv>so no, it does not acknowledge before it is safe to do so
01:44:38  <rescrv>that's one of the reasons why we're able to guarantee strong consistency
01:44:42  <brycebaril>Gotcha, so for the case of this leveldb library that's more or less analogous to fsync before reply
01:45:02  <rescrv>it depends upon your failure assumptions.
01:45:24  <rescrv>If you assume that spontaneous reboots are the norm, and not process crashes, then you'll want to fsync before reply.
01:45:57  <rescrv>If, instead, it's likely that your app is what will crash, and your machines are stable, you can assume that the machine will act safely.
01:46:02  <rescrv>Personally, I'd take neither option
01:46:05  <brycebaril>Of course, bullet to the drive won't matter if fsync or not :P
01:46:30  <Raynos>rvagg: I dont really understand what happens when fsync fails and you dont have sync set to true
01:46:50  <rescrv>You either have to rely on the entire stack to get fsync correct (LevelDB has been getting better at this), or it all falls apart.
01:47:07  <brycebaril>Raynos: I think the state of your app will be incorrect -- it will assume writes succeeded that actually didn't, and then subsequent operations will error.
01:47:13  * jxsonjoined
01:47:22  <rescrv>I don't have the citation handy, but there's a paper where automatic analysis of several Linux filesystems found serious bugs that could lose data on common fsync paths.
01:47:29  <Raynos>rescrv: what is f-fault-tolerance ?
01:47:55  <rescrv>Raynos: if sync is not set to true, then the only way fsync will be called is from the background threads
01:47:58  <rescrv>ls
01:48:05  <Raynos>brycebaril: by subsequent operations will error you mean that leveldb instance will error on all future mutation and read operations because it's in a corrupted state ?
01:48:49  <rescrv>Raynos: the error should be returned exactly once. Where the error is returned, you should be able to retry the op that returned it.
01:48:49  <brycebaril>That instance, no idea about reads, I have about 5 minutes, let me see if I can write a quick test
01:49:08  <rescrv>Raynos: I can look in the code and tell you if you gimme 5
01:50:26  * chapeljoined
01:50:39  <Raynos>rescrv: so if a write to key A with value A has sync: false and it has an fsync failure, then some future operation to key B with value B, sync: false, will fail with an error ?
01:51:16  <Raynos>I genuinely have no idea how this works or how disk tolerance and fault tolerance works in general
01:51:41  * jxsonquit (Ping timeout: 245 seconds)
01:52:18  <rescrv>Raynos: speaking strictly from a LevelDB standpoint (I don't touch the higher level sugar), the write to key(A) will either succeed, or fail. If it succeeds, then the fsync was successful. If anything, including the fsync, fails, then the write to A is considered failed as well.
01:53:04  <rescrv>the write to B will proceed independently from the standpoint of fsyncs. A's fsync status will not affect B's status unless the system has a bigger issue that affects all writes.
01:54:04  <rescrv>it's worth noting that LevelDB is going to perform poorly when fsync'ing every write. Even if you fsync only every thousandth write, that write will hold up every non-synchronous write that comes after until the fsync completes.
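A sketch of that mixed workload in levelup terms; the every-1000th-write policy is just an illustration, and the levelup handle `db` is assumed to be open:

    var count = 0
    function putNext (db, key, value, cb) {
      // fsync only every 1000th write; note rescrv's caveat above: the
      // sync'd write still stalls the non-sync writes queued behind it
      var opts = { sync: ++count % 1000 === 0 }
      db.put(key, value, opts, cb)
    }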
01:54:34  * esundahl_joined
01:54:35  <rescrv>We've got a partial patch to HyperLevelDB that significantly improves fsync/non-fsync workloads, but I never finished it because we have no use case for it
01:55:25  <Raynos>rescrv: so levelDB itself always fsyncs ?
01:56:31  <rescrv>Raynos: there are times where LevelDB will fsync for internal structures, but it won't fsync after each write unless you request it to do so.
01:57:08  <Raynos>rescrv: so if I write to A without asking it to fsync, how do I know whether it's on disk ?
01:57:43  <rescrv>I maybe jumped the gun, but you were proposing a scenario with mixed sync/non-sync writes, and I was simply pointing out that although such a scenario will have some performance benefit, it will still be almost as slow as an all-sync workload.
01:57:46  <rescrv>Raynos: you won't
01:58:00  <Raynos>So, in my head I think of fsync on as meaning "delay the callback until we have an fsync result" and fsync off as meaning "call the callback immediately; we get the fsync result later and either discard it or give it to some global db error handler"
01:59:35  <rescrv>rvagg or others would need to clarify what the JS wrapper is doing, but I imagine that the sync option is passed through to LevelDB and the callback is called only once the LevelDB call completes.
02:00:36  <rescrv>If that is how the JS wrapper works, then "sync=true" -> "call the callback only after LevelDB calls 'write'/'fsync'" and "sync=false" -> "call the callback only after LevelDB calls 'write'"
02:01:33  <rescrv>note that in both cases, LevelDB has passed the data all the way to the OS.
02:01:46  <Raynos>rescrv: so if sync is false I have no way of knowing whether a thing is on disk and `db.put("a", "a", function (err) { if (err) throw err; db.get("a", function (err, v) { if (err) throw err; assert(v === null) }) })` can happen
02:02:30  * eugenewarequit (Remote host closed the connection)
02:02:57  <brycebaril>Raynos: it may be worse than that... my test is... interesting
02:03:41  <rescrv>Raynos: this is all above the level where I work. I'm strictly a LevelDB and below kind of guy. You'll have to talk to the others to see if they are wrapping the API in a non-standard way.
02:04:29  <Raynos>what im asking is can you PUT a key without sync=true and GET it and level returns whatever its "not found" / "key does not exist" error is ?
02:04:44  <brycebaril>Raynos: my test has left me with only questions... I'll make a gist later and see what people say
02:05:11  <brycebaril>Essentially it is just writing/reading every few ms and I'm deleting the db out from under it/etc. And it just keeps trucking no matter what (sync or not)
02:05:51  <brycebaril>but for now, I have to go :(
02:06:07  <rescrv>Raynos: LevelDB will return the stored object even if sync=false. Doing (pseudo-C) 'put("key-a", "val-a", sync=false); get("key-a");' should always return "val-a".
02:06:21  <rescrv>brycebaril: that's not surprising
02:06:23  <Raynos>rescrv: leveldb does this because it reads it from cache ?
02:06:24  <rescrv>it opens the log file
02:06:37  * jmartinsjoined
02:06:40  <rescrv>you can unlink this file, but the file still exists until it is closed
02:06:55  <rescrv>so no matter what you can sync to the file
02:06:56  <Raynos>if you were to close and restart the process, doing a put before the restart and a get afterwards, will it still always be consistent ?
02:07:00  <rescrv>the FS still has a reference
02:07:43  <rescrv>Raynos: leveldb does this because the written value is in the log (so it'll be there on a non-failure-case restart), and it's in the memtable (so the current process sees it).
02:08:06  <rescrv>Raynos: yes, LevelDB will allow you to close/open like that and it'll still be there (assuming no failures)
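In levelup terms, a sketch of both guarantees rescrv describes: the in-process read served from the memtable, and the value surviving a clean close/reopen via the log (assumes no failures):

    var levelup = require('levelup')
    var db = levelup('./db')

    db.put('key-a', 'val-a', { sync: false }, function (err) {
      if (err) throw err
      // served from the memtable: always sees the value
      db.get('key-a', function (err, value) {
        if (err) throw err
        console.log(value) // 'val-a'
        db.close(function (err) {
          if (err) throw err
          // recovered from the log on reopen
          var db2 = levelup('./db')
          db2.get('key-a', function (err, value) {
            if (err) throw err
            console.log(value) // still 'val-a'
          })
        })
      })
    })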
02:08:29  <Raynos>what happens if the disk failed ? it will just not be there
02:08:41  <rescrv>if the disk failed, nothing will be there. It's dead
02:08:45  <Raynos>I'm asking about the case where sync=false but if sync=true the operation would have failed
02:09:00  <rescrv>in that case, the close will fail
02:09:05  <Raynos>by nothing will be there you mean that key won't be, or that the entire disk / db is corrupted ?
02:09:08  <rescrv>s/will/should/
02:09:36  <Raynos>so if a leveldb is in an inconsistent state and you were not notified because you didn't wait for fsync, you will be notified on close() ?
02:09:36  <rescrv>if your disk has failed, then it has failed and doesn't work anymore
02:10:12  <rescrv>Raynos: I can confirm that in the code, but the design they've used and the care they've taken elsewhere lead me to believe that they'll do that, yes.
02:10:20  <rescrv>I can verify in the code.
02:10:30  <Raynos>Ok cool
02:11:32  <rescrv>Raynos: For what it's worth, we, the HyperDex team, offer commercial support for LevelDB and HyperLevelDB, even when not used in HyperDex. If you're just a hobbyist, that's cool. If you need someone to call in SHTF scenarios, we offer support and consulting for that.
02:12:35  <Raynos>rescrv: cool :)
02:12:55  * julianduquequit (Ping timeout: 260 seconds)
02:13:10  <rvagg>sorry, had to disappear for a while-- Raynos, rescrv, you get the callback from a .put() or .batch() after it gets handed back control from leveldb, which in turn returns when it gets handed back control from the OS; that last bit is what 'sync'=true/false determines so it's all in the hands of the OS - but that doesn't mean you will never get an I/O error if you sync=false (default), there are other kinds of things that can go wrong
02:13:51  <rescrv>Raynos: "Close" will always succeed. You'll need to do a final write to return any residual errors.
02:14:13  <rvagg>rescrv: we're not quite set up properly yet but I'm going to try and make sure that hyperleveldb is always available as an alternative to the default google leveldb in Node and I'll probably be promoting it as the ideal choice for perf in Node; there's a chance we may even switch to it as the default but that discussion is a bit of a way off yet
02:14:50  <rescrv>rvagg: sounds great! We're trying to keep as close to upstream while adding the features we need (see LiveBackup).
02:14:57  <rescrv>We've even pushed bug fixes upstream.
02:15:08  <rvagg>yerp and you've been much more responsive in dealing with reported bugs
02:15:27  <rvagg>the google guys are a little blinkered, focused on chromium as the primary target and they obviously have a whole lot of other things on their place
02:15:30  <rvagg>s/place/plate
02:15:36  <rescrv>If there's anything I can do to make it easy for you guys to keep HyperLevelDB as an option, please let me know before we become enough of a burden that you drop us ;-)
02:16:08  <rvagg>heh, no, it's actually pretty easy, much easier than the basho port which is a bit of a mess tbh
02:16:17  <rvagg>the only thing I need to do is stay on top of your versioning
02:16:26  <rescrv>rvagg: I honestly don't think Sanjay and Jeff care about Chromium much. If they were solely focused on that, LevelDB would be much worse off than it is.
02:16:47  <rescrv>rvagg: We've not cut a release. We'll do that with the next HyperDex release. I don't know how the numbers will go.
02:17:13  <rvagg>ok, well I'd better sync with you some time about how to obtain correct releases and make sure I'm not just grabbing an unstable branch
02:17:27  <rescrv>Sanjay and Jeff are cool guys though. I was at lunch with Jeff and someone asked him about Google's self-driving Prius, and why Google would invest in it. His answer was, "Why not?" with a big-ass grin.
02:17:31  <rvagg>it's easy with leveldb cause I can subscribe to the google code wiki feed for it
02:18:34  <rescrv>rvagg: anything unstable/experimental doesn't get pushed until it's better than what it replaces or is under a "dev/" prefix.
02:18:45  <rvagg>k
02:19:01  <rvagg>I'm still on leveldb-1.11.0, need to get around to pushing out a 1.13.0 version
02:19:43  <mbalho>rescrv: how did you implement livebackup?
02:19:52  <rescrv>Raynos: your final "write" need not be sync=true. And making it sync=true will not allow you to assume that other, previous operations completed.
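A sketch of that final-write pattern; the sentinel key is an assumption, and note it only surfaces residual errors, it says nothing about which earlier write failed:

    // assumes an open levelup handle `db`: a throwaway write before
    // close() to surface residual errors, since close() itself always
    // succeeds; per rescrv, sync: true is not required here
    db.put('__flush-check__', String(Date.now()), function (err) {
      if (err) console.error('residual write error:', err)
      db.close(function () { /* close always succeeds */ })
    })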
02:20:01  <rescrv>mbalho: you mean how does it work?
02:20:30  <mbalho>rescrv: yes. i was working on a similar problem this week
02:22:09  <rescrv>There's a LiveBackup(const Slice& name) call. It'll create the directory "backup-<name>" under the DB directory. Then it'll copy/link every LevelDB-related file in a manner that's consistent. What you end up with is a directory that is itself a LevelDB instance.
02:22:42  <rescrv>It's great for making incremental backups, because it copies at most 2*sizeof(write_buffer) + sizeof(metadata), and the rest is hard-links
02:23:05  <rvagg>you build on Env for that don't you?
02:23:20  <rescrv>combine it with rsync, and you have an efficient way to backup solely that data that's changed since the last backup
02:23:29  <rescrv>rvagg: lemme check, but I think I was as general as I could be
02:23:50  <rvagg>we have a custom Env for Windows, does hyperleveldb support windows at all?
02:24:30  <rescrv>rvagg: it uses env https://github.com/rescrv/HyperLevelDB/commit/486ca7f6e81c00796a5c24396039fd1a108b582f
02:24:55  <rescrv>I see now that I used some GCC intrinsics. I should fix those sometime
02:25:49  <rescrv>rvagg: we don't actively avoid Windows, but it's not a supported target, and will likely not become an active focus for us unless there's external financial backing.
02:26:23  <rvagg>ahhh, you've built the locking into db_impl.cc, that's nicer than the suggestions flying around on the leveldb mailing list, so it should be relatively easy for us to expose that too
02:26:24  <rescrv>enough to hire someone else to deal with it, because I don't touch Windows
02:26:42  <rvagg>tho it looks like we'll have to make sure our Env has CopyFile() and LinkFile()
02:26:50  <mbalho>rescrv: my backup solution was actually for initial bulk cloning (im working on a sort of distributed system). implementation is here https://github.com/maxogden/dat/blob/master/lib/commands.js#L65 and https://github.com/maxogden/dat/blob/master/lib/commands.js#L114-L120. haven't tackled consistency yet, was gonna start with a global write lock during cloning
02:26:59  <rvagg>yeah, we had to add Windows because a surprising number of people are using Windows + Node
02:27:08  <rescrv>rvagg: Yep. We are restructuring all of the locking to make sense from a performance perspective
02:28:15  <rvagg>https://github.com/rvagg/node-leveldown/tree/master/deps/leveldb/port-libuv actually wasn't too difficult in the end to do windows, uses libuv (native with Node) to do all the thread and locking stuff in port.h, then uses a borrowed env.cc from a guy that did a windows port himself
02:28:20  <rescrv>mbalho: you'll need to completely block both writers and compaction. It'll be expensive, but should be correct.
02:28:38  <rvagg>apart from that there's only a single line in the main port.h that we have to change to load our windows stuff when compiling on windows, otherwise leveldb is left alone
02:29:13  <mbalho>rvagg: is there any way to temporarily disable compaction from JS-land?
02:29:20  <rescrv>rvagg: the only thing we've done that could hurt Windows compilation then is the additional copy/link (does Windows even know what a hardlink is?) calls in the env, and reliance upon mmap.
02:29:22  <rvagg>mbalho: nope
02:29:34  <rvagg>mbalho: the best you can do is close()/backup/open()
02:29:42  <rescrv>mbalho: there's no way to temporarily disable from C++ land either
02:29:52  <mbalho>ah too bad
02:29:58  <rvagg>but if you wanted to then you could do a custom leveldown layer that does that for you and just buffers incoming reads & writes, it wouldn't be hard but the buffering could get out of control
02:30:18  <mbalho>is there any way to monitor compaction state?
02:30:30  <rescrv>mbalho: override the log and parse log messages?
02:30:48  <rvagg>rescrv: well, tbh, anyone that's seriously using it in node is deploying on linux or smartos/solaris, mostly developing in osx I think
02:31:04  <rvagg>there's a few people playing with node+leveldb+azure, but even then you can just use linux on azure
02:31:20  <rescrv>the HyperLevelDB code is BSD-licensed. You may be able to use our live-backup code to deal with that
02:31:37  <mbalho>rescrv: oh great to hear, i'll look into it
02:31:40  <rvagg>mbalho: hang on, dominic and I played with exposing the logging, let me see where that's at
02:32:36  <mbalho>rvagg: also you mentioned somewhere sometime about making hyperleveldb + basho leveldb easier to swap out in leveldown, did that ever happen?
02:32:48  <rvagg>mbalho: https://github.com/rvagg/node-leveldown/compare/logger-play
02:33:26  * niftylettucequit (Quit: Updating details, brb)
02:33:32  <rvagg>mbalho: it's available but a bit behind latest, it's just a matter of me dealing with the leveldown branches I have, syncing with their upstream repos for leveldb, compiling & testing & publishing
02:33:34  <mbalho>rvagg: ah nice. do you think it would impact performance to expose the log as a stream in JS?
02:33:41  * niftylettucejoined
02:34:34  <rvagg>mbalho: I honestly don't know, probably a little cause of lots of string copying that needs to go on, it'd probably have to be something you can turn off and on while you're using the db
02:34:49  <mbalho>ahh yea makes sense
02:35:16  * thlorenz_joined
02:35:41  <rescrv>mbalho: I'd recommend strongly against the logging-based approach. You'll have a TOCTTOU (time-of-check-to-time-of-use) race condition that'll make your backups prone to random corruption upon restore
02:36:32  <rvagg>mbalho: there's also this, don't know if you've looked at it at all: https://github.com/rvagg/node-leveldown#leveldown_getProperty
02:36:52  <mbalho>oh nice
02:37:04  * alanhoffquit (Ping timeout: 264 seconds)
02:37:10  <rvagg>it's off leveldown so, db.db.getProperty('foo')
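For example ('leveldb.stats' and 'leveldb.num-files-at-level0' are standard LevelDB property names; compaction activity shows up in the stats output):

    var levelup = require('levelup')
    var db = levelup('./db')

    // getProperty lives on the underlying leveldown handle (db.db)
    console.log(db.db.getProperty('leveldb.stats'))
    console.log(db.db.getProperty('leveldb.num-files-at-level0'))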
02:38:26  <mbalho>i have some crazy ideas like opening a second leveldb and directing all writes to that during the file copy, and then after the file copy is done, network-replicating the buffered writes into the main db again.
02:39:02  <rescrv>mbalho: is there a reason for copy vs hardlink?
02:39:30  <mbalho>rescrv: its a client server relationship, e.g. git clone
02:39:49  <mbalho>or multi-master i guess
02:40:07  * thlorenz_quit (Ping timeout: 264 seconds)
02:40:46  <rescrv>mbalho: I'm just trying to understand why you'd have two LevelDB instances if you're just going to use the second to load the first.
02:41:11  <mbalho>the secondary one is just for availability during the write lock while the client clones the files of the first one
02:41:56  <mbalho>because copying the leveldb files is a lot faster than iterating over them and serializing them
02:42:28  <rescrv>mbalho: so your overall goal is to create a snapshot of the files on the filesystem which may then be sent directly to the client. And, you want to do it without blocking future writes.
02:42:33  <rvagg>yar and you could transparently replace the storage engine at runtime without impacting on the exposed api... but it would get pretty messy
02:43:20  <rvagg>rescrv: do you have a page that talks about your hyperdex/leveldb support options? this might be good to mention to people wanting to deploy serious stuff with node
02:43:24  <mbalho>rescrv: right. the project im working on is for scientists to publish large datasets in a way that is syncable
02:44:02  <rvagg>https://github.com/maxogden/dat
02:44:04  <mbalho>rescrv: so if someone is downloading the data for the first time it would be nice if i could optimize that, but also would be nice to allow the source data to get updated while the clones are happening
02:44:53  <rescrv>mbalho: it sounds like HyperLevelDB's live-backup is exactly what you need. I can backup a 10GB DB with a constant stream of new writes on the order of milliseconds.
02:45:09  <rescrv>downloading the data would then just be rsync
02:45:13  <rescrv>it'll automatically optimize
02:45:32  <mbalho>i'm more keen on node gzip streams as they are more portable than rsync
02:45:36  <mbalho>but yea, that sounds great
02:46:47  <rvagg>you should include No9 in your discussions about this mbalho, if you make an issue on github anywhere CC him
02:46:55  <rvagg>he ended up just using zfs snapshots for backups
02:47:03  <rvagg>which isn't exactly a portable solution!
02:47:26  <rescrv>rvagg: it's also not likely correct unless you stop compaction and grab the internal lock.
02:47:32  <rescrv>or do something equivalent
02:47:47  <rvagg>yeah, but it's close-enough to serve as a just-in-case
02:49:07  <rescrv>yeah. especially if you're willing to fix it up manually
02:49:16  <mbalho>does the on disk format change between leveldb implementations?
02:49:33  <rvagg>no
02:49:41  <mbalho>nice
02:50:00  <rescrv>mbalho: HyperLevelDB is binary compatible and we will keep it that way well into the future
02:50:24  <rescrv>err... the files are binary compatible. It's not ABI compatible
02:54:07  <mbalho>for windows users (i expect lots of scientists + governments etc to be on windows using dat) as long as streaming the leveldb files from unix users works when they open them with google leveldb then i'm all good
02:54:18  <mbalho>and by unix users i mean hyperleveldb users
02:54:31  <mbalho>i can just disable live backup if you're a windows server
02:55:03  <rvagg>if I had time I'd tinker with getting hyperleveldb working for us in windows but I don't and I just don't care enough about windows!
02:55:24  <mbalho>yea i dont think many people will care
02:56:27  <mbalho>i just got a new big empty external hard drive, and github has a gigabit internet connection
02:56:37  <mbalho>gonna go there tomorrow and download some huge datasets to play with for dat :D
02:56:48  <rescrv>mbalho: you work for GitHub?
02:56:58  <mbalho>rescrv: my girlfriend does
02:57:09  <mbalho>rescrv: im grant funded, working on dat full time now
02:57:44  <rescrv>neat! I've chatted with someone who had an "in" there.
02:59:10  <mbalho>if you need anything let me know. mostly i can offer api rate limit increases :D
02:59:35  <mbalho>my secret plan is to get them to use dat down the road
02:59:40  <rescrv>rvagg: we've not yet put together a page with our support options. I'm just one guy though, and code comes first.
03:00:30  <rescrv>mbalho: I keep a list in the back of my mind of who has connections where, mainly because I'm a grad student/hacker and one of the reqs of the job is going around promoting my research
03:00:43  <rescrv>my research just happens to be much more implementation heavy than most
03:01:12  <rescrv>which is why we're moving toward the commercial support for both HyperDex and LevelDB
03:02:42  <rescrv>it'd be cool to talk with people at GitHub about HyperDex. I suspect they may be interested in some of the upcoming projects we're building around it.
03:03:43  <mbalho>they have some smart folks working on git and their data center ops + storage stuff
03:04:37  <mbalho>a lot of the programmers there do rails full time. but there is a contingent of low level hackers too
03:05:01  <rescrv>I had a great time chatting with the Dropbox folks who do much of Dropbox's work on HBase/HDFS and it inspired some of our current work. I'd love to know more about the backend structure, and where their pain points are.
03:05:38  <rvagg>DDoS #1 pain point I suspect
03:05:58  <rescrv>rvagg: I suspect that's a problem for the ruby guys, not so much the backend
03:06:13  <mbalho>rackspace is the #1 pain point, which prevents them from preventing the ddoses
03:06:14  <rvagg>although I've always suspected that "DDoS" is code for "our stuff is freaking out and can't handle what's going on!"
03:06:21  <mbalho>so theyre moving to their own hardware
03:06:28  <rvagg>or DDoS = Rails can't cope!
03:06:40  <rescrv>by the time you've got one person connected to the ruby process, it's ground to enough of a halt that it cannot hurt the backend (can you tell I dislike Ruby)
03:06:43  <mbalho>rvagg: nah the ddoses are definitely legimiate
03:06:56  <mbalho>legitimate*
03:07:04  <mbalho>le*git* hehehe
03:07:10  <rvagg>yeah, I'm sure they are legit, it's just such an odd thing to happen to github of all sites
03:07:15  <rescrv>can you give insight into why people are ddos'ing GitHub?
03:07:35  <mbalho>i dunno the answer to that sadly
03:07:37  <rescrv>is it just because it's a convenient target? are they masking intrusions? or is it just that people like taking down big sites?
03:07:59  <mbalho>i think the last one. ive seen some data about volume + geo ip distribution and its definitely botnets
03:08:35  * julianduquejoined
03:09:42  <rescrv>those who can, do; those who can't DDOS everyone else's infrastructure?
03:10:24  <rvagg>it must be Syria, we should bomb them
03:11:08  <mbalho>rvagg: since when are you an american!??
03:11:19  <rvagg>oh, sorry, forgot for a moment
03:12:01  <rescrv>I'd just like to take a moment to say hello to the NSA agent who's silently joined ##leveldb
03:18:40  * thlorenzquit (Remote host closed the connection)
03:22:41  * jmartinsquit (Remote host closed the connection)
03:35:40  * thlorenzjoined
03:37:07  * chapelquit (Ping timeout: 264 seconds)
03:40:03  * ryan_ramagejoined
03:40:07  * thlorenzquit (Ping timeout: 264 seconds)
03:48:31  * timoxleyquit (Ping timeout: 264 seconds)
03:50:43  <mbalho>inserting 20 million 10kb rows in 1600 row batches, gonna see how long it takes
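A minimal sketch of that kind of benchmark (key scheme and value contents are made up):

    var levelup = require('levelup')
    var db = levelup('./bench-db')

    var BATCH = 1600
    var TOTAL = 20e6
    var value = new Array(10 * 1024 + 1).join('a') // ~10kb per row
    var n = 0

    function writeBatch () {
      var ops = []
      for (var i = 0; i < BATCH && n < TOTAL; i++, n++) {
        ops.push({ type: 'put', key: 'row-' + n, value: value })
      }
      var start = Date.now()
      db.batch(ops, function (err) {
        if (err) throw err
        console.log('batch', Math.ceil(n / BATCH), 'took', Date.now() - start, 'ms')
        if (n < TOTAL) writeBatch()
        else db.close()
      })
    }
    writeBatch()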
03:51:52  * chapeljoined
03:54:51  <mbalho>wow, after 20 or so batches of 1600 the time per batch went from around 1 second to around 20 seconds
03:55:57  <mbalho>oooh seems i had a memory leak
03:56:31  <rvagg>ugh, could be that memory leak that we're discussing on levelup/leveldown that we haven't managed to find yet
03:56:56  * eugenewarejoined
03:57:16  <mbalho>good to know, i have to rule out my own code first
03:59:48  <mbalho>rvagg: in case you wanna reproduce https://github.com/maxogden/dat/blob/master/test/insert-junk-data.js
04:00:21  <mbalho>rvagg: (ignore the ugly dat programmatic api)
04:00:51  <mbalho>it uses level-mutex which automatically batches
04:02:08  <mbalho>this is the output i got https://gist.github.com/maxogden/0ddccdd28263391a2251
04:02:52  <rvagg>ouch
04:03:57  <rvagg>similar to what others have reported
04:04:03  <rvagg>darn it
04:04:46  * jxsonjoined
04:05:19  * chapelquit (Ping timeout: 264 seconds)
04:15:31  * jxsonquit (Remote host closed the connection)
04:20:30  * wolfeidauquit (Remote host closed the connection)
04:20:51  * wolfeidaujoined
04:25:40  * timoxleyjoined
04:30:14  * eugenewarequit (Remote host closed the connection)
04:36:04  * thlorenzjoined
04:40:40  * thlorenzquit (Ping timeout: 264 seconds)
04:42:29  * dguttmanjoined
04:44:11  * SomeoneWeirdjoined
04:50:41  * dguttmanquit (Quit: dguttman)
05:06:23  * esundahl_quit (Remote host closed the connection)
05:06:56  * esundahljoined
05:11:54  * esundahlquit (Ping timeout: 264 seconds)
05:36:28  * thlorenzjoined
05:37:31  * esundahljoined
05:40:51  * thlorenzquit (Ping timeout: 245 seconds)
05:42:19  * tomerdjoined
05:46:04  * esundahlquit (Ping timeout: 264 seconds)
05:46:11  * timoxleyquit (Ping timeout: 260 seconds)
05:49:48  * timoxleyjoined
05:51:51  * chapeljoined
05:53:50  * timoxleyquit (Read error: Connection reset by peer)
05:54:16  * timoxleyjoined
06:05:55  * tomerdquit (Remote host closed the connection)
06:06:30  * tomerdjoined
06:08:49  * ryan_ramagequit (Quit: ryan_ramage)
06:09:04  * jcrugzz_quit (Ping timeout: 268 seconds)
06:11:16  * tomerdquit (Ping timeout: 264 seconds)
06:21:49  * ryan_ramagejoined
06:22:00  * timoxleyquit (Read error: No route to host)
06:22:22  * ryan_ramagequit (Client Quit)
06:22:23  * timoxleyjoined
06:23:40  * timoxleyquit (Read error: Connection reset by peer)
06:24:01  * timoxleyjoined
06:36:55  * thlorenzjoined
06:41:12  * thlorenzquit (Ping timeout: 240 seconds)
06:42:18  * esundahljoined
06:46:19  <levelbot>[npm] abstract-leveldown@0.10.1 <http://npm.im/abstract-leveldown>: An abstract prototype matching the LevelDOWN API (@rvagg)
06:46:54  * esundahlquit (Ping timeout: 264 seconds)
06:49:44  * kenansulaymanjoined
06:59:54  * chapelquit (Ping timeout: 264 seconds)
07:02:28  * chapeljoined
07:04:21  * dominictarrjoined
07:12:56  * jcrugzzjoined
07:21:27  * tomerdjoined
07:25:21  * timoxleyquit (Remote host closed the connection)
07:33:30  * chapelquit (Ping timeout: 264 seconds)
07:37:20  * thlorenzjoined
07:42:11  * thlorenzquit (Ping timeout: 268 seconds)
07:42:50  * esundahljoined
07:43:03  * tomerdquit (Remote host closed the connection)
07:43:38  * tomerdjoined
07:47:44  * esundahlquit (Ping timeout: 268 seconds)
07:48:30  * tomerdquit (Ping timeout: 264 seconds)
07:51:34  * chapeljoined
07:59:25  * tomerdjoined
07:59:56  * tomerdquit (Remote host closed the connection)
08:15:00  * julianduquequit (Quit: leaving)
08:20:49  * dominictarrquit (Quit: dominictarr)
08:37:46  * thlorenzjoined
08:42:06  * thlorenzquit (Ping timeout: 245 seconds)
08:47:18  * dominictarrjoined
08:52:29  * jcrugzzquit (Ping timeout: 268 seconds)
08:53:55  * chapelquit (Ping timeout: 264 seconds)
08:56:36  <kenansulayman>substack gj @ archy
08:57:11  <kenansulayman>I'd be amazed if it'd be feasible to create flowcharts like that with node
09:01:20  <dominictarr>kenansulayman: sure! it's just a graphlayout problem
09:01:38  <dominictarr>there are already graph layout modules like dagre
09:02:09  <dominictarr>doing so in the terminal would also be possible, just more constrained.
09:02:34  <kenansulayman>I'd focus on terminal / CL apps
09:02:51  <kenansulayman>We're currently outputting Wolfram Mathematica code for debugging relations
09:03:26  <kenansulayman>That'd really be awesome, maybe I'll find some spare time for it :)
09:03:58  <substack>kenansulayman: see also https://github.com/substack/undirender
09:04:08  <substack>archy is not actually very interesting, graph-wise
09:04:26  <kenansulayman>I know
09:04:35  <kenansulayman>But it's inspiring ;)
09:05:12  <kenansulayman>undirender is cool
09:06:26  <kenansulayman>substack Does it support words?
09:07:42  <kenansulayman>That's roughly what we're currently generating for debugging: http://data.sly.mn/R5Yc
09:10:03  <kenansulayman>woah undirender is pure sex
09:10:09  <kenansulayman>gf
09:10:13  <kenansulayman>gj* :D
09:10:25  <kenansulayman>though some bugs
09:21:02  <kenansulayman>substack why is it that the lines are shifted?
09:21:03  <kenansulayman> \__ Lucas¯¯¯¯¯¯¯¯
09:21:03  <kenansulayman> \__ /¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
09:21:03  <kenansulayman> Laura
09:21:46  * jcrugzzjoined
09:22:05  <kenansulayman>oh I see
09:22:12  <kenansulayman>it's ok if I change the viewport size
09:27:31  * jcrugzzquit (Ping timeout: 264 seconds)
09:34:03  <kenansulayman>awesome it works!
09:34:12  <kenansulayman>0ebe2799b23c32ab5bf579895fb8227fe103ac8c290cafa653d8393b0b66d65316c5c91807014a4a7f20530ebf359f2cd194d8e150289a56cf6aa8edb75edf1e
09:34:13  <kenansulayman>|
09:34:13  <kenansulayman>| ffc45912c3796d7bad69390ce184d68be2e1776904be7237dddaad630442435b745f1925cc204dc29a50955f251623cb9634d9b4bb6e11a6fffa360825e5a6e5¯¯¯¯¯¯¯¯
09:34:13  <kenansulayman>| /¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
09:34:13  <kenansulayman>606fe34513ee3816f17b8734b735e360ca806eb37b32f19cd64d4594ac4104f34d51d79cab5d15846150adc4edbd8de07a1b3b5dc1530dbfc31e8c5979e30c39
09:38:28  * thlorenzjoined
09:43:06  * thlorenzquit (Ping timeout: 264 seconds)
09:50:21  * chapeljoined
10:04:42  * chapelquit (Ping timeout: 264 seconds)
10:09:38  * timoxleyjoined
10:11:29  * timoxleyquit (Remote host closed the connection)
10:17:48  * mcollinajoined
10:25:47  <dominictarr>kenansulayman: substack one thing that would be useful (graphlayoutwise) is archy + back links
10:26:10  <dominictarr>you could show the npm dep tree, but also show visually which modules resolved to what.
10:26:24  * timoxleyjoined
10:28:29  <kenansulayman>which modules resolved to what?
10:30:54  <kenansulayman>k gotta go. cheers
10:31:44  * kenansulaymanquit (Quit: ∞♡∞)
10:33:34  * timoxleyquit (Remote host closed the connection)
10:38:51  * thlorenzjoined
10:43:40  * thlorenzquit (Ping timeout: 264 seconds)
11:39:13  * thlorenzjoined
11:43:21  * thlorenzquit (Ping timeout: 245 seconds)
11:50:01  * chapeljoined
11:56:13  * thlorenzjoined
12:09:30  * chapelquit (Ping timeout: 264 seconds)
12:39:39  * thlorenz_joined
12:43:43  * thlorenz_quit (Ping timeout: 240 seconds)
12:58:44  * mcollinaquit (Read error: Connection reset by peer)
13:04:36  * mcollinajoined
13:13:00  * thlorenzquit (Remote host closed the connection)
13:14:43  * tmcwjoined
13:17:06  * timoxleyjoined
13:40:03  * thlorenzjoined
13:44:05  * mcollinaquit (Read error: Connection reset by peer)
13:44:47  * thlorenzquit (Ping timeout: 268 seconds)
13:50:13  * thlorenzjoined
13:51:47  * werlejoined
13:51:57  * werle
13:52:12  <werle>juliangruber: when are we going to do sealevel haha
13:52:31  <juliangruber>werle: that was the c rewrite of multilevel, right?
13:52:42  * chapeljoined
13:52:52  <werle>juliangruber: yeah
14:12:04  <juliangruber>werle: that's a _big_ project
14:23:55  * chapelquit (Ping timeout: 264 seconds)
14:25:12  * esundahljoined
14:27:04  * esundahlquit (Remote host closed the connection)
14:27:30  * esundahljoined
14:28:14  * esundahl_joined
14:32:16  * esundahlquit (Ping timeout: 264 seconds)
14:41:11  * fallsemojoined
14:44:07  * jerrysvjoined
14:48:43  <dominictarr>juliangruber: werle we need a way to make node_modules pattern work in c
14:49:17  <dominictarr>I think I could stand writing C if it was modular and collaborative
14:50:15  <juliangruber>dominictarr: yes
14:50:25  <juliangruber>dominictarr: i have this dream about npm being my operating system's package manager
14:51:11  <dominictarr>juliangruber: me too
14:51:35  <juliangruber>dominictarr: kickstarter?
14:51:38  <dominictarr>we are calling that idea "anarchy os"
14:51:43  * rickbergfalkjoined
14:52:07  <juliangruber>sweet
14:52:21  <dominictarr>juliangruber: it already has the best logo :)
14:52:31  <juliangruber>dominictarr: substacks' drawing?
14:52:44  <dominictarr>just the (A) thing in general
14:52:51  <dominictarr>you can draw it loads of ways
14:53:48  <dominictarr>juliangruber: I have other priorities (before c node_modules), but this needs to happen eventually
14:54:16  <juliangruber>yeah, same situation
14:55:07  * werlequit (Ping timeout: 264 seconds)
14:55:11  * julianduquejoined
14:57:31  * werlejoined
15:01:45  <jerrysv>but if dependency hell went away, what would we have to bitch about?
15:01:54  * werlequit (Ping timeout: 264 seconds)
15:10:00  * timoxleyquit (Remote host closed the connection)
15:10:09  * kenansulaymanjoined
15:14:23  <dominictarr>jerrysv: there would still be callbacks!
15:14:37  <dominictarr>jerrysv: also, pointer overruns
15:14:59  <dominictarr>and that all the good module names are already taken.
15:18:45  <jerrysv>hmph.
15:19:03  <dominictarr>also, stupid kickstarter projects that should never be funded
15:19:29  <dominictarr>and how twitter trends never reflects important current events
15:20:10  <dominictarr>or people using whitespace the wrong way
15:20:35  <dominictarr>spelling mistakes
15:21:03  <dominictarr>jerrysv: ! I know !
15:21:04  <jerrysv>c is my native language
15:21:24  <dominictarr>wait, we'll also give C optional semicolons!
15:22:00  <dominictarr>that will add a new thing, to replace dependency hell
15:23:09  <mbalho>good idea
15:31:41  <kenansulayman>I got some spare time, what about cloning logs.nodejs.org for level
15:39:50  * jxsonjoined
15:50:26  * chapeljoined
16:07:07  * chapelquit (Ping timeout: 264 seconds)
16:16:38  * in3rgr4mmjoined
16:17:06  * in3rgr4mmquit (Remote host closed the connection)
16:17:41  * in3rgr4mmjoined
16:18:19  * kenansulaymanchanged nick to apexpredator
16:18:21  <apexpredator>!reinit
16:18:26  * apexpredatorchanged nick to kenansulayman
16:19:39  * in3rgr4mmquit (Remote host closed the connection)
16:23:07  * alanhoffjoined
16:28:15  * jxsonquit (Remote host closed the connection)
16:30:56  * jcrugzzjoined
16:45:41  <mbalho>rvagg: have you thought about publishing levelhyperdown and levelbashodown to npm?
16:45:52  <mbalho>(or something along those lines)
16:51:08  <jerrysv>levelbashodown?
16:51:33  * dominictarrquit (Quit: dominictarr)
16:54:18  <mbalho>jerrysv: http://r.va.gg/presentations/sf.nodebase.meetup/leveldown_write_random_g+h+b_100M.png
16:54:52  <jerrysv>mbalho: thanks!
16:56:10  <jerrysv>so is that using riak as a backend as opposed to leveldb, or ?
16:56:54  <brycebaril>jerrysv: it uses basho's fork of leveldb
16:57:03  <jerrysv>brycebaril: aha, i see. thanks!
16:57:34  <jerrysv>we have a ton of basho people here in town, but i only talk to them about spatial indexes, riak, google glass, and beer.
17:00:51  * alanhoffquit (Ping timeout: 245 seconds)
17:01:06  * alanhoffjoined
17:10:37  * jxsonjoined
17:10:43  * jxsonquit (Remote host closed the connection)
17:11:17  * jxsonjoined
17:14:40  <kenansulayman>mbalho Why does Hyper LevelDB stop that early?
17:16:26  <brycebaril>kenansulayman: because it is that much faster, actually. That's why rvagg is considering making it the default implementation
17:16:58  <kenansulayman>Can't we just drop it in as a replacement of Google leveldb?
17:17:33  <kenansulayman>That speed is seriously astonishing
17:20:53  <nlacasse>I've got two read streams coming from leveldb. The objects in the streams are keyed by timestamp, and each stream is sorted by timestamp.
17:21:08  <nlacasse>I'd like to combine those two streams into one, and preserve the timestamp-sorting.
17:21:27  <nlacasse>Kind of like a "zipping" the two streams together.
17:21:51  <nlacasse>Does anybody know of a module or method that can do this stream "zip" operation?
17:22:07  <brycebaril>nlacasse: I've been working on something that does that, but it isn't released yet
17:22:17  <brycebaril>lemme get you a gist
17:23:20  <nlacasse>brycebaril: cool! i was about to start writing my own, but i figured somebody had to have done this already.
17:23:51  <kenansulayman>brycebaril The hyper leveldb compiles just fine with leveldown, let's run a bench
17:24:58  <brycebaril>kenansulayman: rvagg has done some of that, I think he may have a blog post...
17:25:17  <brycebaril>nlacasse: https://gist.github.com/brycebaril/6380931
17:25:46  <brycebaril>It still has some implementation specific stuff, but you can probably get a good idea of how it works
17:26:21  <brycebaril>Then I have other modules that work on top of it that do things like join/intersect/complement/etc. based on timestamp
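Pending brycebaril's release, a minimal sketch of the zip/merge itself, assuming two objectMode (streams2) readables whose objects carry a numeric `ts` timestamp; backpressure and error handling are kept minimal:

    var Readable = require('stream').Readable

    // merge two timestamp-sorted object streams into one sorted stream;
    // usage: mergeSorted(streamA, streamB).on('data', console.log)
    function mergeSorted (a, b) {
      var out = new Readable({ objectMode: true })
      var bufA = null, bufB = null
      var endedA = false, endedB = false, done = false

      out._read = pump

      function pump () {
        if (bufA === null && !endedA) bufA = a.read()
        if (bufB === null && !endedB) bufB = b.read()
        if (bufA === null && !endedA) return // wait for side A
        if (bufB === null && !endedB) return // wait for side B
        if (bufA === null && bufB === null) {
          if (!done) { done = true; out.push(null) } // both sides drained
          return
        }
        // emit whichever buffered object has the earlier timestamp
        if (bufB === null || (bufA !== null && bufA.ts <= bufB.ts)) {
          var next = bufA; bufA = null; out.push(next)
        } else {
          var next2 = bufB; bufB = null; out.push(next2)
        }
      }

      a.on('readable', pump); b.on('readable', pump)
      a.on('end', function () { endedA = true; pump() })
      b.on('end', function () { endedB = true; pump() })
      return out
    }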
17:29:51  <brycebaril>kenansulayman: http://r.va.gg/2013/06/leveldown-alternatives.html
17:30:05  <nlacasse>brycebaril: cool. this looks great!
17:30:06  <nlacasse>thanks
17:30:21  <kenansulayman>brycebaril looks like the API differs a bit: http://data.sly.mn/R6Ja
17:31:21  <brycebaril>nlacasse: sure thing :)
17:32:13  <brycebaril>kenansulayman: were you using https://npmjs.org/package/leveldown-hyper ?
17:32:25  <kenansulayman>haha no
17:32:31  <kenansulayman>just tried my own replacement ;)
17:32:35  <brycebaril>ahh! :)
17:33:23  <brycebaril>nlacasse: eventually (soon) that will be part of a level-tsdb library that sits on top of https://npmjs.org/package/level-version
17:34:53  * jxsonquit (Remote host closed the connection)
17:36:51  * jxsonjoined
17:42:27  <kenansulayman>brycebaril Is there a repo for leveldown-hyper? Otherwise I'll just grab the tar
17:43:12  <brycebaril>kenansulayman: it is a branch on the leveldown repo
17:43:22  <brycebaril>https://github.com/rvagg/node-leveldown/tree/hyper-leveldb
17:43:22  <kenansulayman>I see
17:46:31  * mikealjoined
17:47:43  * ryan_ramagejoined
17:51:11  * jxsonquit (Remote host closed the connection)
17:53:07  * jxsonjoined
17:53:10  * chapeljoined
18:03:59  * tmcwquit (Remote host closed the connection)
18:04:34  * tmcwjoined
18:08:13  * tmcwquit (Read error: Connection reset by peer)
18:08:30  * tmcwjoined
18:09:16  <kenansulayman>brycebaril So is HyperDex a new key/value store or is it only an optimization fork?
18:11:27  <brycebaril>an optimization fork as far as I know
18:14:18  <levelbot>[npm] hyperlevel@0.15.0 <http://npm.im/hyperlevel>: A HyperDex-LevelDB wrapper (a convenience package bundling LevelUP & LevelDOWN-hyper) (@kenansulayman)
18:14:37  <mikeal>entirely optimizations
18:14:56  <mikeal>if your work load looks like HyperDex it is definitely the best :)
18:16:37  <kenansulayman>After I looked at the benchmark I had to get this baby running
18:18:49  <kenansulayman>juliangruber Why is msgpack a multilevel dep? It should be up to the user to choose between either performance or traffic-load
18:18:51  <kenansulayman>:)
18:23:48  <levelbot>[npm] level-namequery@0.1.4 <http://npm.im/level-namequery>: An intelligent search engine on top of LevelDB for Name <-> User-ID relations. (@kenansulayman)
18:26:46  <juliangruber>kenansulayman: that's why the user can choose between msgpack and jsonb
18:27:05  <juliangruber>kenansulayman: it's just easier to bundle both
18:27:13  <juliangruber>in an ideal world it would just use msgpack
18:27:21  <juliangruber>but as websockets mostly aren't binary compatible
18:27:30  <juliangruber>jsonb needs to be supported too
18:29:38  <kenansulayman>You're right. But I just wanted to inspect why Multilevel fails hard. Turns out any combination of msgpack ~0.2.0 and leveldown breaks
18:31:32  <kenansulayman>juliangruber Hacking msgpack off multilevel fixes the issue :(
18:31:50  <juliangruber>oh
18:31:51  <juliangruber>shit
18:32:02  <juliangruber>hm
18:32:39  * dguttmanjoined
18:33:12  <juliangruber>kenansulayman: multilevel/mux-demux use msgpack-js
18:33:18  <juliangruber>there shouldn't be any binary involved?
18:33:41  <kenansulayman>juliangruber If MSGPACK isn't compiled at all, it works fine
18:33:49  <juliangruber>but
18:34:00  <juliangruber>which module has native msgpack as a dependency?
18:34:07  <kenansulayman>one moment
18:34:33  <kenansulayman>none
18:34:54  <kenansulayman>https://github.com/msgpack/msgpack-node/blob/master/package.json
18:38:17  <levelbot>[npm] level-namequery@0.1.5 <http://npm.im/level-namequery>: An intelligent search engine on top of LevelDB for Name <-> User-ID relations. (@kenansulayman)
18:39:40  <kenansulayman>juliangruber Still alive?
18:40:07  <juliangruber>kenansulayman: why do you need msgpack-node?
18:40:17  <juliangruber>kenansulayman: watching a movie, so don't expect sudden answers
18:40:22  <kenansulayman>Okay ;)
18:40:24  <juliangruber>:s/sudden/immediate/g
18:40:45  <kenansulayman>Need? Not really. It's just performance and I like SlowBuffers
18:40:58  <kenansulayman>Yet the C++ bridge overhead is bad
18:43:37  <juliangruber>kenansulayman: it's making your app crash, so sounds like msgpack-js would be the better deal ;)
18:44:18  * chapelquit (Ping timeout: 264 seconds)
18:44:33  <kenansulayman>yes.. it's just that we're starting to doubt msgpack since all our servers are unmetered (in terms of traffic) and JSON has been heavily optimized recently
18:55:23  <kenansulayman>brycebaril Woah. Checkout hyperlevel (level with leveldown-hyper) it nearly doubled our throughput. sick shit
19:01:55  * jmartinsjoined
19:02:22  <brycebaril>kenansulayman: awesome!
19:06:17  * Acconutjoined
19:17:28  * jxsonquit (Remote host closed the connection)
19:20:35  * alanhoffquit (Ping timeout: 240 seconds)
19:20:35  * Acconutquit (Read error: Connection reset by peer)
19:21:41  * alanhoffjoined
19:34:29  * jcrugzzquit (Ping timeout: 248 seconds)
19:34:51  * tmcwquit (Remote host closed the connection)
19:35:27  * tmcwjoined
19:36:26  <mbalho>ohh its called leveldown-hyper
19:40:18  * tmcwquit (Ping timeout: 264 seconds)
19:47:57  * jxsonjoined
19:48:21  <kenansulayman>mbalho well it's read as hyperlevel ;)
19:48:23  * dominictarrjoined
19:51:22  * chapeljoined
19:55:02  * tmcwjoined
19:57:00  * jxsonquit (Ping timeout: 276 seconds)
19:58:14  * chapelquit (Ping timeout: 240 seconds)
19:58:46  * ryan_ramagequit (Quit: ryan_ramage)
20:06:36  * ryan_ramagejoined
20:14:02  * ryan_ramagequit (Ping timeout: 264 seconds)
20:14:09  * Acconutjoined
20:14:26  * Acconutquit (Client Quit)
20:14:35  * ryan_ramagejoined
20:18:46  * jcrugzzjoined
20:22:04  * kenansulaymanquit (Quit: ∞♡∞)
20:28:52  * Acconutjoined
20:31:19  * chapeljoined
20:32:00  * jxsonjoined
20:39:24  * Acconutquit (Quit: Acconut)
20:49:32  <rescrv>brycebaril: HyperDex is a new key/value store. HyperLevelDB is our fork of LevelDB.
20:52:01  * missinglinkjoined
20:55:34  <rescrv>brycebaril, mikeal: HyperLevelDB is nearly a drop-in replacement for LevelDB. HyperDex uses HyperLevelDB to provide higher-level features such as fault tolerance, and scalability across multiple machines.
21:01:14  * Acconutjoined
21:01:14  * Acconutquit (Client Quit)
21:30:58  * dookquit (Ping timeout: 245 seconds)
21:31:42  * dookjoined
21:34:56  * Acconutjoined
21:36:33  * mikealquit (Quit: Leaving.)
21:36:53  * Acconutquit (Client Quit)
21:37:18  * mikealjoined
21:40:07  <mikeal>rescrv: it's an entirely new implementation?
21:41:29  <mbalho>rescrv: the version of hyperleveldb that rvagg published on npm with the leveldown api is 3 months old, how recent does it need to be for the live backup stuff to be in there?
21:42:23  <mikeal>someone should take over maintaining that, rvagg is busy :)
21:42:40  <mbalho>agreed
21:42:44  <mbalho>i dont know c++ though :D
21:53:02  * alanhoffquit (Ping timeout: 264 seconds)
21:53:42  * alanhoffjoined
22:05:17  <rescrv>mikeal: HyperDex is not a LevelDB implementation. Think of the HyperDex/HyperLevelDB relationship like the Riak/{bitcask,leveldb} relationship.
22:05:36  <rescrv>Checkout http://hyperdex.org/doc/03.quickstart/
22:07:19  <rescrv>mbalho: It's too old for LiveBackup. It should just be a matter of changing the pointer to the new git checkout and adding the one API call. You can probably mimic the syntax of everything around "GetProperty" and have something that works
22:07:46  <mbalho>cool
22:11:49  * Acconutjoined
22:12:01  * Acconutquit (Client Quit)
22:12:41  * thlorenzquit (Remote host closed the connection)
22:18:44  * jerrysvquit (Remote host closed the connection)
22:24:50  * jerrysvjoined
22:36:07  * ryan_ramagequit (Quit: ryan_ramage)
22:37:57  * alanhoffquit (Read error: Connection reset by peer)
22:38:05  * alanhoffjoined
22:41:17  <levelbot>[npm] level-jobs@0.2.0 <http://npm.im/level-jobs>: Job Queue in LevelDB (@pgte)
22:41:45  * dominictarrquit (Quit: dominictarr)
22:46:17  <levelbot>[npm] level-jobs@0.2.1 <http://npm.im/level-jobs>: Job Queue in LevelDB (@pgte)
22:50:42  * alanhoffquit (Read error: Connection reset by peer)
22:50:51  * alanhoffjoined
22:58:52  * dguttmanquit (Quit: dguttman)
23:17:38  * mikealquit (Quit: Leaving.)
23:18:38  * mikealjoined
23:27:32  * mikealquit (Quit: Leaving.)
23:31:55  <mbalho>rescrv: I successfully upgraded https://github.com/rvagg/node-leveldown/tree/hyper-leveldb to newest hyperleveldb + implemented a liveBackup function that works, woot!
23:32:03  <mbalho>rescrv: gonna send a pull req now
23:33:18  <mbalho>rescrv: the leveldb::Status that LiveBackup returns was confusing to me as a c++ noob, wasn't sure how to send it back to JS land
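For the curious, a hypothetical sketch of using such a binding; the liveBackup method name and its placement on the leveldown handle (db.db) are assumptions based on this discussion:

    var levelup = require('levelup')
    var db = levelup('./db', { db: require('leveldown-hyper') })

    db.put('key', 'value', function (err) {
      if (err) throw err
      // hypothetical JS binding for HyperLevelDB's LiveBackup(name)
      db.db.liveBackup('snapshot-1', function (err) {
        if (err) throw err
        // ./db/backup-snapshot-1 is now itself a complete LevelDB instance
      })
    })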
23:33:30  <jerrysv>mbalho: awesome!
23:33:34  <brycebaril>nice!
23:33:44  * jmartinsquit (Remote host closed the connection)
23:34:17  <brycebaril>So what is the liveBackup feature?
23:34:59  <mbalho>19:22 < rescrv> There's a LiveBackup(const Slice& name) call. It'll create the directory "backup-<name>" under the DB directory. Then it'll copy/link every LevelDB-related file in a manner that's consistent. What you end up with is a directory that is itself a LevelDB instance.
23:35:54  <brycebaril>That is ... awesome. I've wanted that!
23:37:27  * dguttmanjoined
23:42:19  * tmcwquit (Remote host closed the connection)
23:42:33  * fallsemoquit (Quit: Leaving.)
23:42:54  * tmcwjoined
23:45:41  * timoxleyjoined
23:46:59  * jmartinsjoined
23:47:35  * tmcwquit (Ping timeout: 260 seconds)
23:55:27  <jcrugzz>mbalho: that does sound pretty nice
23:58:30  * esundahl_quit (Remote host closed the connection)
23:58:33  <rvagg>C++ing mbalho, nice
23:58:41  * rvaggwill clean up and publish
23:58:55  * esundahljoined