00:06:51  * paulfryzel joined
00:09:07  * chorrell joined
00:10:35  * notmatt joined
00:11:17  * paulfryzel quit (Ping timeout: 246 seconds)
00:11:38  * therealkoopa joined
00:16:18  * therealkoopa quit (Ping timeout: 240 seconds)
00:17:01  * marsell joined
00:19:59  * therealkoopa joined
00:27:07  * therealkoopa quit (Ping timeout: 252 seconds)
00:44:30  * AvianFlu quit (Remote host closed the connection)
00:51:40  * marsell quit (Quit: marsell)
00:52:27  * nfitch joined
00:56:47  * nfitch quit (Ping timeout: 246 seconds)
00:58:49  * therealkoopa joined
01:04:00  * therealkoopa quit (Ping timeout: 268 seconds)
01:07:34  * paulfryzel joined
01:11:50  * paulfryzel quit (Ping timeout: 246 seconds)
01:16:04  * AvianFlu joined
01:20:00  * ed209 quit (Remote host closed the connection)
01:20:08  * ed209 joined
01:22:33  * therealkoopa joined
01:30:31  * therealkoopa quit (Ping timeout: 264 seconds)
01:41:42  * bixu_ joined
01:44:18  * bixu quit (Ping timeout: 240 seconds)
02:05:29  * chorrell quit (Quit: Textual IRC Client: www.textualapp.com)
02:08:19  * paulfryzel joined
02:12:38  * paulfryzel quit (Ping timeout: 240 seconds)
02:41:12  * nfitch joined
02:45:18  * nfitch quit (Ping timeout: 240 seconds)
02:53:15  * therealkoopa joined
02:55:16  * fredk joined
02:55:42  * fredk quit (Client Quit)
02:57:37  * therealkoopa quit (Ping timeout: 240 seconds)
03:01:26  * therealkoopa joined
03:05:55  * therealkoopa quit (Ping timeout: 264 seconds)
03:09:09  * paulfryzel joined
03:12:39  * therealkoopa joined
03:14:07  * paulfryzel quit (Ping timeout: 268 seconds)
03:16:58  * therealkoopa quit (Ping timeout: 240 seconds)
03:41:02  * therealkoopa joined
03:45:31  * therealkoopa quit (Ping timeout: 264 seconds)
03:51:35  * marsell joined
04:09:53  * paulfryzel joined
04:13:58  * paulfryzel quit (Ping timeout: 240 seconds)
04:22:41  * therealkoopa joined
04:27:30  * therealkoopa quit (Ping timeout: 268 seconds)
04:29:57  * nfitch joined
04:31:41  * notmatt quit (Remote host closed the connection)
04:34:01  * nfitch quit (Ping timeout: 240 seconds)
04:35:31  * notmatt_ joined
05:02:30  * marsell quit (Quit: marsell)
05:10:39  * paulfryzel joined
05:14:49  * paulfryzel quit (Ping timeout: 240 seconds)
05:49:10  * notmatt_ quit (Remote host closed the connection)
05:58:54  * ins0mnia joined
05:58:56  * AvianFlu quit (Remote host closed the connection)
05:59:26  * AvianFlu joined
06:04:19  * AvianFlu quit (Ping timeout: 268 seconds)
06:14:07  * therealkoopa joined
06:18:41  * nfitch joined
06:18:49  * therealkoopa quit (Ping timeout: 240 seconds)
06:22:47  * therealkoopa joined
06:22:58  * nfitch quit (Ping timeout: 240 seconds)
06:27:31  * therealkoopa quit (Ping timeout: 264 seconds)
06:29:24  * ghostbar quit
06:52:29  * marsell joined
07:12:18  * paulfryzel joined
07:16:32  * paulfryzel quit (Ping timeout: 246 seconds)
07:22:37  * therealkoopa joined
07:25:19  * notmatt joined
07:29:26  * notmatt quit (Ping timeout: 252 seconds)
07:30:32  * therealkoopa quit (Ping timeout: 246 seconds)
07:46:29  <bsdguru>good morning guys
07:46:39  <bsdguru>was manta having an issue this morning?
07:47:05  <bsdguru>been seeing around 6.6k 503 errors for package tarballs from manta
07:53:12  <bsdguru>any ideas what caused the blip on manta?
08:07:31  * nfitch joined
08:12:35  * nfitch quit (Ping timeout: 268 seconds)
08:12:59  * paulfryzel joined
08:15:12  * bsdguru quit (Quit: bsdguru)
08:17:31  * paulfryzel quit (Ping timeout: 268 seconds)
08:30:52  * mamash joined
08:31:04  * bsdguru joined
09:13:21  * notmatt joined
09:13:43  * paulfryzel joined
09:17:59  * paulfryzel quit (Ping timeout: 246 seconds)
09:18:05  * notmatt quit (Ping timeout: 265 seconds)
09:39:08  * therealkoopa joined
09:43:51  * therealkoopa quit (Ping timeout: 268 seconds)
09:49:47  * notmatt joined
09:54:57  * notmatt quit (Ping timeout: 268 seconds)
09:55:55  * bixu_ changed nick to bixu
09:56:13  * nfitch joined
10:00:30  * nfitch quit (Ping timeout: 252 seconds)
10:14:42  * paulfryzel joined
10:18:53  * paulfryzel quit (Ping timeout: 246 seconds)
10:27:50  * bixu_ joined
10:29:02  * bixu quit (Ping timeout: 246 seconds)
10:50:51  * therealkoopa joined
10:55:17  * therealkoopa quit (Ping timeout: 246 seconds)
11:02:41  * bixu joined
11:03:11  * bixu_ quit (Ping timeout: 252 seconds)
11:15:19  * paulfryzel joined
11:15:54  * therealkoopa joined
11:20:03  * paulfryzel quit (Ping timeout: 268 seconds)
11:20:55  * therealkoopa quit (Ping timeout: 264 seconds)
11:22:50  * therealkoopa joined
11:27:08  * therealkoopa quit (Ping timeout: 246 seconds)
11:45:02  * nfitch joined
11:49:11  * nfitch quit (Ping timeout: 246 seconds)
12:16:03  * paulfryzel joined
12:20:55  * paulfryzel quit (Ping timeout: 264 seconds)
12:22:41  * therealkoopa joined
12:26:58  * therealkoopa quit (Ping timeout: 240 seconds)
12:44:25  * AvianFlu joined
13:00:31  * chorrell joined
13:00:56  * bixu quit (Ping timeout: 246 seconds)
13:03:19  * therealkoopa joined
13:16:51  * paulfryzel joined
13:21:14  * paulfryzel quit (Ping timeout: 246 seconds)
13:29:30  * mamash part
13:33:37  * bixu joined
13:33:49  * nfitch joined
13:38:48  * nfitch quit (Ping timeout: 268 seconds)
14:17:33  * paulfryzel joined
14:22:08  * paulfryzel quit (Ping timeout: 246 seconds)
14:48:29  * paulfryzel joined
14:56:50  * bsdguru quit (Quit: bsdguru)
15:41:39  * paulfryzel quit (Read error: Connection reset by peer)
15:42:01  * ryancnelson joined
15:42:04  * paulfryzel joined
15:42:46  * seldo joined
15:49:23  * seldo quit (Remote host closed the connection)
15:50:44  * nfitch joined
15:51:28  * seldo joined
15:51:49  * fredk joined
15:55:28  * seldo quit (Remote host closed the connection)
15:55:47  * seldo joined
16:01:38  * notmatt joined
16:04:09  * notmatt quit (Remote host closed the connection)
16:11:51  * chorrell changed nick to chorrell-away
16:18:23  * dap_ joined
16:20:42  * seldo quit (Remote host closed the connection)
16:21:08  * seldo joined
16:22:24  * chorrell-away changed nick to chorrell
16:24:35  * seldo quit (Remote host closed the connection)
16:28:55  * yunong joined
16:34:59  * chorrell changed nick to chorrell-away
16:36:20  * chorrell-away changed nick to chorrell
16:38:25  * bsdguru joined
16:46:54  * notmatt joined
17:14:23  * chorrell changed nick to chorrell-away
17:17:29  * yunong quit (Ping timeout: 246 seconds)
17:18:21  * yunong joined
17:20:16  * chorrell-away changed nick to chorrell
17:25:09  * nshalman changed nick to nahamu_
17:25:46  * nahamu changed nick to nshalman
17:25:57  * yunong quit (Quit: Leaving.)
17:26:20  * nahamu_ changed nick to nahamu
17:33:10  * seldo joined
18:03:45  * ringzero joined
18:04:01  * ryancnelson quit (Quit: Leaving.)
18:07:07  * marsell quit (Ping timeout: 264 seconds)
18:28:56  * marsell joined
18:32:28  * bsdguru quit (Quit: bsdguru)
18:35:00  * yunong joined
18:50:07  * AvianFlu quit (Remote host closed the connection)
18:56:59  <isaacs>We just saw a weird burst of 500 errors.
18:56:59  <seldo>Hello manta folks
18:57:03  <isaacs>any alarms going off over there?
18:57:13  <isaacs>yunong dap_ fredk et al.
18:57:38  * ceejbot joined
18:58:13  <yunong>isaacs: We're upgrading our metadata tier this morning -- you should see intermittent 500s for about 30s
18:58:39  <isaacs>yunong: can you tell us which hosts you're doing stuff on, preferably ahead of time?
18:59:06  <isaacs>yunong: we can easily take manta hosts out of rotation to avoid anyone seeing a 500 on a tgz request.
18:59:34  <isaacs>but if there's a burst of download errors, we get a bunch of people complaining about it
18:59:48  <yunong>isaacs: I'm not sure how that would help. If you're familiar with the architecture of manta, your keys are consistently hashed to a set of shards; when we upgrade a shard, it goes offline for about 30s, which means keys that hash to that shard are unavailable.
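[A minimal sketch of the key-to-shard mapping yunong describes, in illustrative TypeScript. This is not Manta's actual code; the shard names, vnode count, and example key are made up. It shows why taking front-end hosts out of rotation doesn't help: a key hashes to one metadata shard no matter which host serves the request, so while that shard is offline the key is unavailable.]

```typescript
import { createHash } from "crypto";

// Illustrative consistent-hash ring: each metadata shard owns several virtual
// nodes on a ring, and a key belongs to the first vnode at or after its hash.
// If that shard is offline (e.g. during an upgrade), every key that lands on
// it is unavailable until the shard comes back.
type Shard = { id: string; up: boolean };
type RingEntry = { point: number; shard: Shard };

function hash32(value: string): number {
  return parseInt(createHash("md5").update(value).digest("hex").slice(0, 8), 16);
}

function buildRing(shards: Shard[], vnodesPerShard = 128): RingEntry[] {
  const ring = shards.flatMap((shard) =>
    Array.from({ length: vnodesPerShard }, (_, i) => ({
      point: hash32(`${shard.id}#${i}`),
      shard,
    }))
  );
  return ring.sort((a, b) => a.point - b.point);
}

function shardFor(key: string, ring: RingEntry[]): Shard {
  const h = hash32(key);
  const entry = ring.find((e) => e.point >= h) ?? ring[0]; // wrap around the ring
  return entry.shard;
}

// Hypothetical two-shard ring with one shard mid-upgrade.
const ring = buildRing([
  { id: "shard-1", up: true },
  { id: "shard-2", up: false },
]);
const owner = shardFor("/someaccount/public/some-package-1.2.3.tgz", ring);
console.log(owner.id, owner.up ? "metadata available" : "503 until the shard is back");
```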
19:02:16  <seldo>yunong: is the interruption we saw just now a manta thing, or did something happen to us-east?
19:02:43  <yunong>seldo: this is isolated to manta, and to be specific a subset of keys in manta.
19:02:43  <seldo>We saw 8 minutes where our request volume dropped by half
19:02:51  <seldo>And errors spiked across the board
19:03:07  <seldo>Which is roughly what would happen if we lost the half of our capacity that's in us-east
19:03:20  * bsdguru joined
19:10:14  * ringzero quit
19:15:25  <yunong>We're pausing the manta upgrades for now. We'll be continuing upgrades in about 90 mins. I'll send out a message to the mailing list and in this channel when we resume.
19:15:40  <nahamu>there's a mailing list?
19:15:59  <nahamu>or do you mean customers?
19:20:54  <bsdguru>yunong: what was the cause of the 8 minutes of downtime a short while ago?
19:21:37  <yunong>nahamu: sorry, we had a mailing list for beta customers. We'll be setting up a better channel for communicating upgrades going forward.
19:21:59  <nahamu>I might be on that list.
19:22:58  <yunong>bsdguru: If you check the IRC logs, you'll see what the outage was caused by.
19:24:23  <yunong>bsdguru: http://logs.libuv.org/manta/latest#18:58:13.729
19:34:21  <yunong>Apologies for not communicating the upgrade in advance; that's something we'll be doing going forward.
19:37:35  * ceejbot quit (Remote host closed the connection)
19:40:42  <bsdguru>also, some sort of guide to how manta is put together, so we can gauge how service-impacting a maintenance is going to be, would be useful
19:41:19  <bsdguru>understanding that doing stuff on the metadata tier takes out binary serving by half would be useful
19:42:24  <nahamu>https://www.usenix.org/conference/lisa13/manta-storage-system-internals
19:43:23  <nahamu>I wonder if his slides from that are posted somewhere public...
19:44:45  <nahamu>http://dtrace.org/blogs/dap/2013/07/03/fault-tolerance-in-manta/ has a picture that helps too
19:49:33  <bsdguru>yeah I prob need to add that to my big picture of how registry.npmjs.org works
19:49:44  <bsdguru>my whiteboard is full of the diagram at the moment
19:53:20  <nahamu>https://us-east.manta.joyent.com/mark.cavage/public/surge2013_manta_final.pdf
19:53:31  <nahamu>Those look similar to the USENIX slides.
19:54:33  <nahamu>see pages 18 and 21
19:54:43  * ringzero joined
19:55:31  * ringzero quit (Client Quit)
19:57:17  * chorrell changed nick to chorrell-away
19:58:58  * ringzero joined
20:06:36  * quijote joined
20:09:58  * quijote quit (Quit: Textual IRC Client: www.textualapp.com)
20:10:11  * quijote joined
20:15:10  * quijote quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
20:18:00  * chorrell-away quit (Quit: My Mac has gone to sleep. ZZZzzz…)
20:31:06  <yunong>We are resuming upgrades to the Manta metadata tier. You'll see intermittent 500 errors. I'll give another heads up when we're done.
20:31:42  <nahamu>yunong: I hope it goes smoothly. :)
20:31:49  <yunong>thanks!
20:32:30  * chorrell joined
20:42:40  <nahamu>seldo: ^
20:42:52  <seldo>thanks
20:44:28  * chorrell changed nick to chorrell-away
20:44:54  <bsdguru>thanks
20:45:01  <bsdguru>how is the maint going?
20:45:40  <seldo>How long is this going to last?
20:45:55  <seldo>This is causing serious service disruption to npm
20:46:00  <seldo>This is not "30s downtime"
20:50:22  <seldo>That was 13 minutes of downtime, plus the 9 minutes this morning
20:50:36  <seldo>We get a ton of flak from users when this happens in the middle of the day
20:51:02  <yunong>seldo: we hit an unexpected issue during the upgrade.
20:51:21  <seldo>We have the manta hosts individually in our load balancer
20:51:33  <seldo>Can you tell us which ones you're working on when, so we can take them out of rotation?
20:52:08  <yunong>That won't help in this situation since we're not updating the front end web hosts.
20:53:33  <seldo>Is there anything else we can do to mitigate this? We cannot just have half an hour of random downtime as part of routine maintenance.
20:53:37  <dap_>Right. Manta's strongly consistent (we choose C in CAP, not A). There's a lot of detail here:
20:53:38  <dap_>http://dtrace.org/blogs/dap/2013/07/03/fault-tolerance-in-manta/
20:53:56  <nahamu>is zookeeper still part of the metadata layer or has that been ripped out already?
20:54:04  <dap_>nahamu: it's still part of it, unfortunately.
20:54:11  <seldo>Okay, so we should just take manta out of the critical path for serving binaries.
20:54:12  <yunong>nahamu: ZK is still part of the metadata layer, but it's not the culprit in this case.
20:54:24  <dap_>seldo: did you see my comment in #joyent in response to your question earlier
20:54:24  <dap_>?
20:54:38  <dap_>I said:
20:54:39  <dap_>As Yunong mentioned, we were doing some upgrades to the metadata tier, and the errors are expected for that kind of upgrade.  Sorry for the disruption, and especially for not communicating that ahead of time.  We're working internally on improving that.  (Literally — we're talking about how best to handle this now.)  The vast majority of our upgrades aren't noticeable from the outside, and most of the time when it is, the period of disruption has been so bee
20:54:41  <dap_>then:
20:54:46  <dap_>The other piece of this is that we pitch 500s instead of blocking when things are transitioning internally.  We do this to provide better visibility into the underlying service's state — IME, it's much worse to have requests black-hole for minutes at a time than to fail frankly and fast.  But to deal with this, it's recommended that clients use retries with backoff (especially 503s).
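[A minimal sketch of the client behavior dap_ recommends: retry 500/503 responses with exponential backoff and jitter rather than surfacing the first failure. Illustrative only; it assumes a runtime with a global fetch (e.g. Node 18+), and the attempt count and delays are made up.]

```typescript
// Retry idempotent GETs on 500/503 with exponential backoff plus jitter,
// rather than failing on the first 5xx. Limits here are illustrative, not a
// recommendation for any particular service.
async function fetchWithRetry(url: string, maxAttempts = 5): Promise<Response> {
  let delayMs = 500;
  for (let attempt = 1; ; attempt++) {
    const res = await fetch(url);
    if (res.status !== 500 && res.status !== 503) return res;
    if (attempt === maxAttempts) return res; // give up and surface the 5xx
    const jitter = Math.random() * delayMs;
    await new Promise((resolve) => setTimeout(resolve, delayMs + jitter));
    delayMs *= 2; // 0.5s, 1s, 2s, 4s, ...
  }
}
```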
20:55:34  <dap_>Crap, it looks like IRC is dropping these messages, and adium isn't telling me.
20:55:53  <nahamu>dap_: those are showing up in my client.
20:56:03  <dap_>I'm also hearing they were cut off.
20:56:18  <nahamu>oh, yes, that does look cut off...
20:56:34  <dap_>Ugh. Sorry.
20:56:47  <nahamu>Cuts off after "(Literally"
20:57:17  <dap_>What I meant to say:
20:57:18  <dap_>https://gist.github.com/davepacheco/c09fcd2fd1e101a9ac8c
20:57:28  * chorrell-away changed nick to chorrell
20:57:57  <dap_>And finally: we can make improvements to reduce the impact, potentially even avoiding a read outage for these situations. We can also consider scheduling them differently.
20:58:29  <dap_>seldo: Another option is to build a cache over Manta that chooses the A in CAP instead of C. (I thought that's what fast.ly did, actually.)
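[A rough sketch of the "choose A over C" cache dap_ suggests: serve the last successfully fetched copy when the origin returns a 5xx. Purely illustrative; this is not how fast.ly or npm actually implemented it, the in-memory Map stands in for a real cache, and it again assumes a global fetch.]

```typescript
// Tiny read-through cache that prefers availability: on a 5xx from the
// origin, serve the last successfully fetched body (possibly stale) instead
// of propagating the error.
const cache = new Map<string, Uint8Array>();

async function readThrough(url: string): Promise<Uint8Array> {
  const res = await fetch(url);
  if (res.ok) {
    const body = new Uint8Array(await res.arrayBuffer());
    cache.set(url, body); // refresh the cached copy on every successful read
    return body;
  }
  const stale = cache.get(url);
  if (res.status >= 500 && stale) return stale; // A over C: maybe stale, but available
  throw new Error(`upstream returned ${res.status} for ${url}`);
}
```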
20:58:49  <seldo>dap_: your original post gets truncated after "the period of disruption has been"
20:59:03  <yunong>seldo: see the gist dap_ just posted.
21:00:04  <seldo>dap_: I understand that if you pick C over A then I shouldn't be relying on manta as part of our critical path
21:00:07  <isaacs>Choosing C rather than A is fine for writes. But for reads, it makes basically no sense, especially if writes are guaranteed to be consistent.
21:00:29  <isaacs>If writes are always consistent, then you can guarantee that a read from anywhere is as good as a read from anywhere else, right?
21:00:47  <yunong>isaacs: you can't have both A and C.
21:00:57  <seldo>See, https://twitter.com/alan_hoff/status/448202693488939009 is the kind of thing I'm getting
21:01:01  <dap_>It doesn't make sense to distinguish between reads and writes that way. You can't be consistent for writes and not for reads. (What does "consistent" mean in the absence of reads?)
21:01:01  <nahamu>isaacs: the bits are all consistent, but if the metadata layer can't tell you where they are, they aren't available.
21:01:01  <seldo>And it's really driving me nuts
21:01:44  <dap_>seldo: What can we do to help now?
21:01:46  <isaacs>Yeah, basically, we're using Manta in the way it's sold: as an S3 replacement, a good place to store big files.
21:02:09  <isaacs>what you're saying is, it's not suited to that task, and should not be depended on to serve content in production.
21:02:28  <nahamu>I have a not-fully-formed thought:
21:02:32  <dap_>isaacs: We've been very upfront about the architecture of Manta, including the ways it differs from S3, which include *exactly* the point about strong consistency.
21:02:44  <nahamu>the metadata API is what lets the frontend figure out which backend has the bits.
21:02:47  <seldo>dap_: ideally, stop going down for reads, and halt "maintenance" work until you're sure the disruption cannot be customer-affecting for > 1m
21:03:09  <nahamu>what if there was a way to expose that data to a client so that I could hit a URL that encodes directly a backend server to try.
21:03:33  <nahamu>so I mput a file, it gets dumped onto, e.g. 2 backend servers
21:03:37  <isaacs>dap_: yes, there have been many communications about the architecture of Manta. However, as a product, it's been sold as ostensibly a good place to serve files from.
21:03:50  <isaacs>dap_: that needs to change Joyent's priorities wrt when and how to cause outages.
21:04:02  <nahamu>I do some special query that spits out two urls that are able to bypass the metadata api
21:04:07  <isaacs>dap_: either (a) state upfront "We will break your site from time to time, so don't depend on this", or (b) don't break my site.
21:04:22  * AvianFlu joined
21:04:27  <yunong>nahamu: it's not the backend server that's the issue here. The metadata service was down for a specific shard in the hash ring, which means keys mapped to that shard aren't available until it's back up.
21:04:48  <nahamu>yunong: right. If at upload time I could have generated that pair of URLs...
21:05:01  <nahamu>then when the metadata service is down I have a way to bypass it.
21:05:12  <nahamu>I guess permissions are tricky for that... :(
21:05:16  <isaacs>and ideally, even if it does break my site from time to time, schedule that on a saturday night
21:05:25  <nahamu>if you only allowed it for things that are under /public, maybe.
21:05:33  <dap_>isaacs, seldo: To be clear, we take downtime seriously, and we've already apologized for the lack of notice, and I've said we're discussing internally how best to communicate this better.
21:06:21  <seldo>okay, but you apologized for the lack of notice this morning, and then you did it again this afternoon
21:06:31  <yunong>seldo: we gave notice in this channel.
21:06:39  <yunong>please see the IRC logs.
21:06:54  <isaacs>dap_: thanks. i'm not personally upset, i'm just pointing out that, from a business pov, it makes no sense to be spending as much as we are on Manta, when S3 costs the same, and emails their users weeks in advance about downtime.
21:06:56  <seldo>yunong: a notice in an IRC channel does not count as "adequate notice" in my book
21:07:25  <seldo>I want an email, 24 hours in advance, so I can tell my users and reconfigure my systems to expect the change in load pattern this causes
21:07:31  <isaacs>i mean, the takeaway seems to be that Manta is not a good fit for our use-case.
21:07:33  <isaacs>and that's fine.
21:07:49  <isaacs>but, i do wish i'd known that a few months ago.
21:08:40  <seldo>question: is there more of this maintenance to do? If so, can it be halted while we scramble to change our load-balancer so we don't throw 503s for every binary?
21:09:06  <yunong>seldo: we're going to hold off from more maintenance during business hours today.
21:09:27  <isaacs>yunong: thank you.
21:09:33  <isaacs>yunong: when do "business hours" end?
21:10:01  <nahamu>.oO(the sun never sets on the node.js empire)
21:10:19  <isaacs>nahamu: actually it does, for an hour or so, before Japan wakes up
21:10:32  <nahamu>isaacs: good to know. ;)
21:15:17  <nahamu>oh... bsdguru, do you work at npm Inc now?
21:15:31  * chorrell changed nick to chorrell-away
21:16:22  <dap_>isaacs, seldo: Sorry again, and I know it sucks when users are angry at you about it. The expected behavior on a 503 is to retry — it seems that would help a lot in this situation.
21:17:18  <nahamu>dap_: does manta set a Retry-After header?
21:17:41  <dap_>nahamu: it does not
21:18:40  <nahamu>http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html says "If no Retry-After is given, the client SHOULD handle the response as it would for a 500 response."
21:19:10  <dap_>Yeah, but it doesn't really say what that means. It *does* say, just before that:
21:19:10  <dap_>The server is currently unable to handle the request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay.
21:19:29  <nahamu>not that setting that to 1 minute would help with a 13 minute unforeseen outage...
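[For completeness, a client could honor Retry-After when the server does set it, and fall back to its own backoff delay otherwise. A small sketch, assuming the delay-seconds form of the header (RFC 2616 also allows an HTTP date, which this ignores):]

```typescript
// Use the server's Retry-After hint (delay-seconds form) when present on a
// 503; otherwise fall back to the caller's own backoff delay.
function retryDelayMs(res: Response, fallbackMs: number): number {
  const header = res.headers.get("retry-after");
  const seconds = header === null ? NaN : Number(header);
  return Number.isFinite(seconds) && seconds >= 0 ? seconds * 1000 : fallbackMs;
}
```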
21:19:59  <seldo>dap_: the client *does* retry after a 503, but when the service is down for 10 minutes that really doesn't make a difference
21:20:04  <dap_>Were you guys seeing 503s *continuously* for 13 minutes?
21:20:08  <seldo>We just get a firestorm of retries, all of which fail
21:20:34  <seldo>dap_: 50% of our traffic was 503s for 13 minutes
21:20:44  <dap_>seldo: It depends on what behavior you want. You can retry not at all, for a fixed number of minutes before giving up, or indefinitely.
21:21:20  <seldo>dap_: retrying for 10 minutes is not something our users can be reasonably expected to do
21:21:35  <seldo>60 seconds is pretty much the outer bound there
21:21:40  <seldo>After that we just look broken
21:21:53  <seldo>These are actual people, sitting at the command line, watching 503s stream past as they try to npm install stuff
21:22:29  <nahamu>seldo: are the end users getting them from a CDN which is passing them through from Manta or do the users get directed straight to Manta itself?
21:22:32  <dap_>That's up to you to decide. Most of my npm uses are for builds, and if npm said it was retrying, and the build just took 10 minutes instead of a few seconds, I'd much prefer that to failing immediately.
21:23:10  <dap_>If that's not helpful, that's okay. I'm trying to offer options. There are likely a lot of situations in which 500s are more transient than 13 minutes.
21:26:05  * mamash joined
21:27:01  <nahamu>I think part of the issue here is a lack of a clear SLA.
21:27:45  <nahamu>both for people like isaacs to use to decide whether or not to use Manta as a building block of a business
21:28:10  <nahamu>but also for Joyent to use to help them guide when and how maintenance is done, etc.
21:29:21  * ryancnelson joined
21:31:26  <nahamu>and from what dap_ said, it sounds like a bunch of those conversations have already started inside Joyent.
21:32:21  * therealkoopa quit (Remote host closed the connection)
21:32:46  <nahamu>anyway, I should get going, but much love to all the Joyent and npm Inc folks!
21:33:53  <seldo>dap_: for truly transient downtime I think we are doing okay. I'm just worried there are going to be more of these 5-10 minute gaps today.
21:36:21  * chorrell-away quit (Quit: My Mac has gone to sleep. ZZZzzz…)
21:37:28  <nahamu>I'm hoping a/the postmortem will reveal ways to prevent them (and it would be really cool if it's the kind of thing that could be shared with people outside Joyent; I for one would find it interesting.)
21:38:12  * ryancnelson part
21:43:09  <dap_>seldo: we're discussing options now, but we'll check with you or someone from your team before doing anything
21:43:22  <seldo>dap_: much appreciated, thank you
21:50:21  * therealkoopa joined
22:00:49  * ringzero quit
22:04:17  * ghostbar joined
22:04:43  * ringzero joined
22:11:53  * mamash part
22:13:52  <dap_>seldo: I believe Bryan's reaching out to Isaac with details.
22:14:01  <seldo>dap_: thanks
22:15:18  * therealkoopa quit (Remote host closed the connection)
22:20:52  * therealkoopa joined
22:47:14  <tjfontaine>mtr or traceroute
22:47:19  <tjfontaine>er ww
22:59:20  * ringzero quit
23:02:34  * therealkoopa quit (Remote host closed the connection)
23:46:05  * AvianFlu quit (Remote host closed the connection)
23:46:35  * AvianFlu joined
23:47:37  * chorrell joined
23:50:55  * AvianFlu quit (Ping timeout: 264 seconds)