13:59:25  <CIA-155>node: Ben Noordhuis v0.6 * ra64acd8 / test/simple/test-cluster-worker-death.js : test: cluster: add worker death event test - http://git.io/Q1CYgg
14:25:11  <piscisaureus_>isaacs: hey
14:31:04  <CIA-155>node: Kyle Robinson Young master * r491c8d9 / doc/api/http.markdown :
14:31:04  <CIA-155>node: doc: add deprecated function http.createClient()
14:31:05  <CIA-155>node: Appears in a lot of old code and core tests. Documented to show it
14:31:05  <CIA-155>node: is deprecated.
14:31:05  <CIA-155>node: Closes #1613. - http://git.io/knSJAA
15:15:54  <bnoordhuis_>piscisaureus_: seems you've scared isaacs away
15:16:04  <piscisaureus_>presumably
15:29:26  <isaacs>piscisaureus_: hola
15:29:34  <piscisaureus_>isaacs: hey
15:29:56  <piscisaureus_>isaacs: you mentioned at some point that there were issues with http proxy when using 0.6 ?
15:30:32  <piscisaureus_>isaacs: did you ever figure that out?
15:30:46  <isaacs>piscisaureus_: all the issues that i had with http-proxy were a result of write() sometimes happening when the socket had already closed, resulting in a throw
15:31:03  <piscisaureus_>isaacs: I thought there was some scalability problem?
15:31:16  <isaacs>well, it only happens under load, if that's what you mean
15:31:21  <piscisaureus_>isaacs: yeah
15:31:26  <piscisaureus_>so you never figured that out?
15:31:34  <isaacs>no, i just wrapped the offenders in try/catch
15:31:43  <isaacs>and if it threw, i killed the connection
15:31:53  <isaacs>it's been working fine for a while now
15:31:58  <piscisaureus_>isaacs: alright
15:32:07  <isaacs>it's kludgey and ugly, though
15:32:13  <isaacs>i mean, that should be a 5 line program if we did it right
15:32:30  <isaacs>except for the upgrade stuff, i guess, that gets a little more complicated
15:32:38  <piscisaureus_>isaacs: so what was that client_latency.js benchmark for?
15:33:00  <piscisaureus_>isaacs: so we're mainly having problems that http proxy is too slow
15:33:01  <isaacs>oh, that was just a problem with the http client, not with proxying
15:33:05  <piscisaureus_>ah, right
15:33:08  <piscisaureus_>good
15:33:21  <isaacs>oh, ok, yeah, now i'm on the same page.
15:33:45  <isaacs>the http client slows down dramatically if you make a bunch of requests that should all go in parallel
15:34:49  <piscisaureus_>hmm
15:35:03  <piscisaureus_>last weekend we had a serious issue with it
15:35:15  <piscisaureus_>some requests would just hang in there for a couple of minutes (!)
15:35:31  <piscisaureus_>while new requests would be serviced just fine
15:35:50  <piscisaureus_>that was under significant load
15:36:52  <piscisaureus_>but it seems that some serious starvation happens under load
15:44:36  <isaacs>hm. that does match what we're seeing occasionally at joyent.
15:44:47  <isaacs>i think we worked around it a different way
15:44:47  <isaacs>i'm not sure.
15:48:28  <bnoordhuis_>piscisaureus_: btw, i updated that watcher pr
15:51:01  <piscisaureus_>bnoordhuis_: what did you change?
15:51:23  <bnoordhuis_>piscisaureus_: i threw away the lseek commit
15:51:33  <bnoordhuis_>otherwise it's perfect as it is
15:57:21  <CIA-155>libuv: Ben Noordhuis master * r1f001fe / src/unix/kqueue.c :
15:57:21  <CIA-155>libuv: unix: remove kqueue cb == NULL check
15:57:22  <CIA-155>libuv: The other implementations don't check for it and it's making the counters_init
15:57:22  <CIA-155>libuv: test fail. - http://git.io/Ju8QCw
15:59:04  <travis-ci>[travis-ci] joyent/libuv#216 (master - 1f001fe : Ben Noordhuis): The build is still failing.
15:59:04  <travis-ci>[travis-ci] Change view : https://github.com/joyent/libuv/compare/1fa1c51...1f001fe
15:59:04  <travis-ci>[travis-ci] Build details : http://travis-ci.org/joyent/libuv/builds/1155493
16:46:11  <piscisaureus_>bnoordhuis_: ha!
16:46:22  <piscisaureus_>bnoordhuis_: is recv() not interruptible by a signal?
16:46:32  <bnoordhuis_>piscisaureus_: read the comments :)
16:46:43  <bnoordhuis_>in a nutshell, yes but not by SIGCHLD
16:47:00  <piscisaureus_>bnoordhuis_: the comments don't show up any more :-(
16:47:04  <piscisaureus_>bnoordhuis_: because you force pushed
16:47:12  <piscisaureus_>bnoordhuis_: that deletes all the comments
16:47:19  <bnoordhuis_>piscisaureus_: you get github notifications right?
16:48:18  <piscisaureus_>bnoordhuis_: yes, but so many :-/
16:48:55  <bnoordhuis_>piscisaureus_: okay, so recv() is interruptible but not by SIGCHLD at that point
16:49:08  <piscisaureus_>bnoordhuis_: is that also true for sunos?
16:49:14  <bnoordhuis_>yes
16:49:29  <bnoordhuis_>pselect() unblocks SIGCHLD while in the pselect() syscall
16:49:35  <bnoordhuis_>it's blocked again afterwards
16:50:13  <piscisaureus_>bnoordhuis_: ok, right
16:51:18  <piscisaureus_>bnoordhuis_: ah, so pselect was invented exactly for what we are doing
16:51:24  <piscisaureus_>bnoordhuis_: that is nice of those posix guys
16:51:48  <piscisaureus_>bnoordhuis_: the pull looks good to me
16:55:07  <bnoordhuis_>piscisaureus_: go ahead and merge it then :)
16:55:27  <piscisaureus_>bnoordhuis_: I am looking for the old commits
17:06:42  <bnoordhuis_>ben-doet-ook-een-duit-int-zakje <- haha
17:07:12  <piscisaureus_>bnoordhuis_: quid pro quo
17:08:50  <bnoordhuis_>piscisaureus_: question: do endgames run once and only once for a particular handle?
17:09:04  <bnoordhuis_>i want to add some code to uv_process_endgames
17:09:05  <piscisaureus_>bnoordhuis_: not on windows, no
17:09:09  <bnoordhuis_>damn
17:09:18  <piscisaureus_>bnoordhuis_: what is that?
17:09:35  <bnoordhuis_>piscisaureus_: i need a place to deref the handle when it's closed
17:09:40  <piscisaureus_>bnoordhuis_: ah
17:09:48  <piscisaureus_>bnoordhuis_: look for the place where the close callback is made :-)
17:10:06  <piscisaureus_>bnoordhuis_: ah I see
17:10:12  <piscisaureus_>bnoordhuis_: no, that won't really work
17:10:16  <bnoordhuis_>yes, close_cb... called in a lot of places
17:10:24  <piscisaureus_>bnoordhuis_: yeah
17:10:25  <bnoordhuis_>guess i'll add unref code to all handle endgame functions
17:10:33  <piscisaureus_>bnoordhuis_: yeah, it sucks I know
17:10:42  <piscisaureus_>bnoordhuis_: the endgame stuff is a little messy
17:10:54  <bnoordhuis_>oh well
17:10:56  <piscisaureus_>bnoordhuis_: but it is also where I handle shutdown() and flushing and stuff.
17:11:15  <piscisaureus_>bnoordhuis_: because the queueing mechanism is not flexible enough to queue a req after all the writes finish
17:12:34  <bnoordhuis_>on a side note, i think there's a fair bit of code that we can eventually merge
17:12:44  <bnoordhuis_>the windows and unix approaches aren't that dissimilar
17:13:11  <bnoordhuis_>a double negative... quite similar is what i mean
17:15:56  <bnoordhuis_>piscisaureus_: btw, you should look at the mingw patches. they lgtm
17:17:05  <bnoordhuis_>piscisaureus_: also, one more question: what does the req that uv_fs_event_init_handle() creates do?
17:17:33  <bnoordhuis_>i noticed other init functions do something similar
17:24:40  <piscisaureus_>bnoordhuis_: yeah, a lot of handle types do that
17:24:50  <piscisaureus_>bnoordhuis_: basically they are a wrapper for the OVERLAPPED
17:25:30  <piscisaureus_>bnoordhuis_: because ReadDirectoryChangesW takes an overlapped which is inserted into the iocp when a change happens
17:26:05  <piscisaureus_>bnoordhuis_: so when that comes out of the iocp we look up the req that contains the overlapped and we insert it in the request queue
17:26:41  <piscisaureus_>bnoordhuis_: that's also how the read_req works for streams
17:26:45  <piscisaureus_>and the shutdown_req for pipes
17:26:46  <piscisaureus_>etc
17:27:00  <bnoordhuis_>piscisaureus_: okay... so what i'm doing is ref'ing both the handle and the req
17:27:03  <bnoordhuis_>is that going to hurt?
17:27:17  <piscisaureus_>bnoordhuis_: well if we expose the refcount directly, yes
17:27:26  <bnoordhuis_>which we won't
17:27:41  <bnoordhuis_>so it doesn't matter otherwise?
17:27:46  <piscisaureus_>bnoordhuis_: well
17:27:48  <piscisaureus_>I think it does
17:28:01  <piscisaureus_>bnoordhuis_: because the fs watcher *always* has a req pending
17:28:06  <piscisaureus_>so you'd have to unref it twice
17:28:12  <bnoordhuis_>until you close it right?
17:28:16  <piscisaureus_>yeah
17:28:22  <bnoordhuis_>right, that's taken care of
17:28:30  <piscisaureus_>bnoordhuis_: well if you uv_unref it then it won't work
17:28:31  <bnoordhuis_>well...
17:28:44  <bnoordhuis_>it's the handle that gets unref'd, not the req
17:28:48  <bnoordhuis_>grr
17:28:52  <piscisaureus_>bnoordhuis_: I think I would not ref reqs by default
17:29:04  <bnoordhuis_>embedded reqs you mean?
17:29:05  <piscisaureus_>bnoordhuis_: just the ones that should - write, connect etc
17:29:11  <bnoordhuis_>right
17:29:37  <piscisaureus_>bnoordhuis_: there's a lot of embedded reqs in there, I think more than explicit ones.
17:37:18  <piscisaureus_>bnoordhuis_: is it difficult to make your way through uv-win in general btw?
17:39:41  <bnoordhuis_>piscisaureus_: not really, it's just... different
17:53:42  <CIA-155>libuv: Igor Zinkovsky v0.6 * re6d4bca / src/win/fs.c : remove left-over cast fixes #3160 - http://git.io/7DnvZw
17:55:27  <travis-ci>[travis-ci] joyent/libuv#217 (v0.6 - e6d4bca : Igor Zinkovsky): The build is still failing.
17:55:27  <travis-ci>[travis-ci] Change view : https://github.com/joyent/libuv/compare/21bee8c...e6d4bca
17:55:27  <travis-ci>[travis-ci] Build details : http://travis-ci.org/joyent/libuv/builds/1156355
18:09:57  <piscisaureus_>igorzi: there is something that concerns me a bit about this APC plan...
18:09:57  <piscisaureus_>if the loop is very busy then GetQueuedCompletionStatusEx will always return immediately. I don't know if APCs have a chance to run when that happens.
18:10:32  <piscisaureus_>igorzi: that would mean that all shared sockets are always processed with the lowest possible priority... I'm not sure whether that is a good idea.
18:17:19  <igorzi>piscisaureus_: yeah, that's a valid concern
18:18:42  <igorzi>piscisaureus_: with a busy server, that'll mean that sends/recvs will always be processed before new connections are accepted
18:19:40  <piscisaureus_>igorzi: that would not be so bad... but if we do tcp connection sharing the situation may be reversed
18:21:41  <igorzi>piscisaureus_: maybe we can do a conditional WaitForMultipleObjectsEx call in between? (only if we're expecting APCs)
18:22:31  <piscisaureus_>igorzi: well... if doing that with timeout 0 would call the APCs then that would be great
18:23:19  <piscisaureus_>igorzi: oh hmm - I mean, we could also set an event and wait for that...
18:23:57  <piscisaureus_>igorzi: but I don't know if any APCs will be called when the awaited event is already set when WaitFor... is called.
18:26:18  <igorzi>piscisaureus_: meh.. looks like timeout=0 doesn't put it into wait state (which probably means it can't process APCs)
18:26:39  <piscisaureus_>igorzi: so if we do this
18:26:39  <piscisaureus_>SetEvent(e)
18:26:51  <piscisaureus_>WaitForSingleObjectEx(e, TRUE, INFINITE)
18:27:02  <piscisaureus_>does that put it in the wait state?
18:27:40  <igorzi>piscisaureus_: i'd expect no.. but not sure
18:27:46  <piscisaureus_>hmm
18:27:54  <piscisaureus_>:'-(
18:30:24  <piscisaureus_>igorzi: actually, it seems to work :-)
18:31:43  <piscisaureus_>igorzi: https://gist.github.com/2472912
18:32:49  <piscisaureus_>meh
18:32:53  <piscisaureus_>wrong args order
18:33:16  <piscisaureus_>although that doesn't matter
18:34:17  <piscisaureus_>so WaitForSingleObjectEx == +1
18:34:56  <piscisaureus_>It will even process *all* the APCs before returning
18:35:54  <igorzi>piscisaureus_: nice!
19:04:47  <piscisaureus_>bnoordhuis_: hey
19:04:57  <piscisaureus_>bnoordhuis_: you broke cloud9... can you talk to ruben on skype?
19:44:54  <mjr_>So we rolled out the no idle GC notification patch to our entire operation over the weekend.
19:51:23  <isaacs>mjr_: any effects visible yet?
19:53:02  <mjr_>Not really. We have to restart our processes every day because of a memory leak and the TLS hang.
20:02:04  <piscisaureus_>heh
20:02:07  <piscisaureus_>that sucks
20:05:37  <mjr_>Yes it does. Any help would be appreciated.
20:08:33  <bnoordhuis_>piscisaureus_: it should be fixed already
20:08:53  <bnoordhuis_>also, both my girls are down with a fever so i don't have time to chat
20:10:33  <mraleph>mjr_: which v8 version are you using?
20:13:09  <piscisaureus_>mraleph: when creating a context, most time is spent starting up and deserializing the snapshot, right?
20:13:33  <mraleph>piscisaureus_: should be
20:13:58  <mraleph>piscisaureus_: in preparation for deserializations there can be some gcs
20:14:21  <piscisaureus_>mraleph: so why don't you guys deserialize the snapshot into an immutable space and use that for all isolates/contexts?
20:14:56  <mraleph>because it's not immutable :-)
20:14:57  <piscisaureus_>mraleph: so an object in that space would never refer to an object outside it
20:15:23  <piscisaureus_>mraleph: well... what is not immutable?
20:15:35  <mraleph>e.g. ics in code objects
20:15:40  <mraleph>prototypes of builtin objects
20:16:09  <mraleph>there are some immutable objects there, sure… but I don't think it's an easy ride
20:16:32  <piscisaureus_>mraleph: right... so what if you just made a copy whenever you mutate it?
20:17:06  <mraleph>yeah sure :-)
20:17:12  <mraleph>but that requires plumbing
20:17:24  <mraleph>COW arrays were not so easy to get.
20:17:49  <piscisaureus_>mraleph: so just fmi... what objects *do* get mutated?
20:18:08  <piscisaureus_>mraleph: I hear arrays, and I know that a string gets mutated when it is flattened
20:18:42  <mraleph>piscisaureus_: code objects can be patched
20:19:09  <mraleph>arrays can be mutated (and arrays can represent many different things, properties backing stores etc).
20:19:40  <piscisaureus_>yeah, but the arrays in the snapshot are typically small
20:19:46  <piscisaureus_>COW should not be so bad
20:19:58  <mraleph>yeah it should not.
20:20:07  <piscisaureus_>the biggest hurdle is probably checking whether something is in an immutable space and making a copy if it is
20:20:09  <mraleph>as I say: it is possible, but requires plumbing.
20:21:32  <piscisaureus_>mraleph: yeah, ok
20:21:36  <piscisaureus_>mraleph: too bad :-)
20:21:45  <piscisaureus_>mraleph: I was thinking about how to do erlang in node
20:21:56  <piscisaureus_>mraleph: it would be nice to have lightweight contexts
20:22:10  <piscisaureus_>mraleph: but I won't bother you no more...
20:23:53  <mjr_>mraleph: we are using - https://gist.github.com/2473576
20:25:46  <mraleph>mjr_: ok, I don't know about any GC related leaks in V8 of that version :-(
20:31:02  <mjr_>mraleph: now that we are on SmartOS, I think we have the tools to better isolate this problem, it just takes more human time than we have.
20:31:43  <mjr_>Sadly though, it seems like moving to SmartOS is partly what caused these problems.
20:32:12  <piscisaureus_>mjr_: rly? how?
20:32:25  <mjr_>Well, who knows. That's just frustration talking.
20:32:42  <mjr_>When we were on Linux, we did not have these memory growth problems, and we did not have to do rolling restarts of our entire operation every day.
20:33:12  <mjr_>But when we moved to SmartOS, we also changed lots of other things like how many processes were on each machine, how many total machines, etc.
20:33:13  <Guest57630>what version of nodejs were you on when you made that change?
20:33:19  <Guest57630>ack
20:33:45  <mjr_>I can't remember. It was about 2 months ago now.
20:33:51  <tjfontaine>oh ok
20:33:56  <mjr_>We generally track the v0.6 branch pretty closely.
20:34:29  <piscisaureus_>why does fucking nano not work properly on these smartmachines
20:34:51  <mjr_>My best theory is that the memory growth is some side effect of not handling back pressure properly.
20:35:47  <mjr_>And because things run at different speeds in slightly different ways on Joyent, we ended up changing some variables in a giant and complicated feedback loop that makes things queue up in a place where they didn't queue up before.
20:36:15  <mjr_>So it's not really fair to say that this is a SmartOS problem. It MIGHT be, but we just don't know.
21:37:22  <piscisaureus_>igorzi: hey
21:37:41  <piscisaureus_>igorzi: is the benchmark cluster available?
22:05:14  <piscisaureus_>igorzi: yt?
22:07:27  <igorzi>piscisaureus_: yep, it should be available.. let me check
22:07:31  <igorzi>piscisaureus_: i'll let you know
22:07:36  <piscisaureus_>igorzi: kewl
22:14:02  <piscisaureus_>igorzi: also, I lost the credentials... they are on my other computer
23:06:19  <piscisaureus_>anyone here knows how to get useful output from the v8 tick profiler on solaris?
23:06:29  <piscisaureus_>It shows 95% unknown :-/
23:06:45  <piscisaureus_>dap, isaacs, mjr_ ?
23:06:54  <piscisaureus_>orlandovftw ?
23:07:02  <mjr_>I do not.
23:07:10  <dap>sorry, I've never used the V8 tick profiler. I always use jstack :)
23:07:15  <mjr_>However, on SmartOS, you can use dap's winning jsstack
23:07:21  <piscisaureus_>ah, right
23:07:23  <piscisaureus_>how do I do that?
23:07:23  <isaacs>yes, that's the way to go. tick ftl
23:07:30  <orlandovftw>i did that a long time ago... but i dont remember offhand
23:07:39  <orlandovftw>it wasn't super useful
23:07:47  <piscisaureus_>is there a guide for jsstack?
23:07:49  <mjr_>I've actually never gotten useful information out of V8's profiler in node. It always sent me down the wrong path.
23:07:58  <dap># dtrace -n 'profile-97/pid == $target/{ @[jstack()] = count(); }' -p $yourpid
23:08:09  <dap>when you CTRL-C it, you'll get the results.
23:08:13  <mjr_>But jsstack + flame graph has been very enlightening.
23:08:35  <dap>you'll probably want to check out http://dtrace.org/blogs/dap/2012/01/05/where-does-your-node-program-spend-its-time/, which has a little more detail and explains how to visualize it with a flame graph.
23:08:43  <mjr_>The confusing thing is that V8's profiler SEEMS like it's telling you something useful, but in my experience it is not.
23:09:03  <piscisaureus_>dap: do I need to compile with particular flags to make it show C functions?
23:09:06  <dap>(you can skip steps (1) - (3) from that blog post, btw. I should probably update it to show that.)
23:09:18  <piscisaureus_>dap: 404
23:09:20  <dap>piscisaureus_: no. it always shows both C and JavaScript functions, translating only where it makes sense.
23:09:35  <dap>piscisaureus_: it works for me. did your client add the , ?
23:09:42  <mjr_>dap: have you demangled the C++ symbols successfully?
23:09:43  <piscisaureus_>yep
23:09:46  <piscisaureus_>:-)
23:09:49  <piscisaureus_>sorry, idiot alarm
23:09:51  <dap>mjr_: yeah, I pipe the output through c++filt to do that.
23:09:59  <mjr_>c++filt never does anything for me.
23:09:59  <dap>…I should update that example too :)
23:10:00  <kohai>c has 9 beers
23:10:11  <dap>mjr_: are you running it on the smartos system?
23:10:11  <mjr_>It ends up with identical output as input.
23:10:28  <mjr_>Oh, no. I was doing post-processing on my mac.
23:10:32  <dap>I found that using MacOS's c++filt with SmartOS's output or vice versa doesn't work.
23:10:35  <dap>in exactly that way.
23:10:38  <mjr_>Ahhh
23:10:41  <mjr_>i will try that, thanks.
23:10:49  <dap>I assume it's because demangling is compiler-specific, though I've been told it's been standardized.
23:10:59  <mjr_>Different compiler versions for sure.
23:11:06  <mjr_>clang vs. gcc, etc.
23:11:09  <dap>yeah
23:11:49  <mjr_>I would like to give C more beers, but C++ keeps getting them.
23:12:03  <piscisaureus_>eh, in reverse you mean?
23:12:05  <piscisaureus_>c++++
23:12:07  <kohai>c has 10 beers
23:13:39  <piscisaureus_>dtrace: 55 jstack()/ustack() string table overflows
23:13:48  <piscisaureus_>dap: ^-- those errors are normal?
23:14:17  <mjr_>piscisaureus_: @[jstack(100, 8000)] = count();
23:14:26  <dap>piscisaureus_: they're harmless, but means you'll be missing some translations. What mjr_ said will fix it.
23:14:35  <dap>100 = 100 stack frames, and 8000 is a buffer size.
23:14:39  <piscisaureus_>aha
23:14:40  <piscisaureus_>thanks
23:14:56  <mjr_>Look at me, I'm answering questions about dtrace.
23:15:02  <dap>:)
23:15:04  <piscisaureus_>mjr_: yeah, thanks a lot ;-)
23:15:22  <mjr_>Seriously though, this jsstack thing is a game changer for node.
23:15:25  <piscisaureus_>now unfortunately it blasts stack frames through my console
23:15:59  <mjr_>piscisaureus_: dtrace -n 'profile-97 /pid == 1234/ { @[jstack(100, 8000)] = count(); } tick-30s { exit(0); }' -o out.stacks
23:16:36  <mjr_>That'll sample for 30 seconds, count the stacks, then write to out.stacks on exit.
23:17:08  <piscisaureus_>wow
23:17:10  <mjr_>Or use -p, etc.
23:17:29  <piscisaureus_>so how do I generate the flame graph? There's another option for that?
23:17:50  <piscisaureus_>seems kind of unexpected to have that in dtrace
23:17:56  <piscisaureus_>so what do I get for that?
23:17:59  <mjr_>The world must learn about this jsstack + flame graph technology.
23:18:05  <piscisaureus_>yes
23:18:23  <piscisaureus_>ah let's just read dap's blog :-)
23:19:11  <igorzi>piscisaureus_: i'll send you an email with the credential as soon as i verify that the test cluster is up
23:19:17  <piscisaureus_>igorzi: cool
23:30:10  <piscisaureus_>ah, got my first flame graph
23:30:11  <piscisaureus_>nice
23:30:14  <piscisaureus_>dap++
23:30:15  <kohai>dap has 2 beers
23:30:19  <piscisaureus_>and mjr_ thanks!
23:30:25  <dap>great!
23:30:36  <piscisaureus_>that wasn't so hard
23:30:47  <piscisaureus_>but the manual needs a tl;dr version
23:30:51  <piscisaureus_>aka hello world
23:30:54  <dap>yeah, agreed.
23:31:09  <piscisaureus_>people get interested once they first succeed
23:40:57  <isaacs>making actual websites in node is way way too hard.
23:41:23  <isaacs>not even because of the callbackitis or whatever.
23:52:18  <bnoordhuis_>piscisaureus__: okay, back. you were saying?
23:52:28  <bnoordhuis_>also, your log bot is broken
23:57:59  <piscisaureus__>bnoordhuis_: never mind. Talk to Jos on skype
23:58:09  <piscisaureus__>bnoordhuis_: also, why did you break my bot?
23:59:32  <bnoordhuis>piscisaureus__: jos or ruben?
23:59:39  <piscisaureus__>bnoordhuis: jos
23:59:57  <bnoordhuis>okay, i'll ping him tomorrow. what's it about?