01:10:55  <wangbus>ogd: thanks for the reply on the twitters
01:15:11  <wangbus>anyone have reading material for someone that's coming from nano/couch
17:44:37  <SuperPhly>What's the best way to go about updating an index in level? How do I go about updating a value in the database pretty frequently...
18:29:49  <nrw>SuperPhly: generally, you want to update your indexes in a batch with the change you're indexing.
18:30:14  <SuperPhly>what if i'm doing that several times a second?
18:30:59  <SuperPhly>I get the feeling that I'm sorta approaching this from the wrong direction...
18:31:07  <nrw>SuperPhly: is there any reason calling put repeatedly won't work for your use case?
18:31:30  <nrw>SuperPhly: every time you 'put', you totally overwrite the key/value pair.
18:31:40  <SuperPhly>right, so i'd have to read, update, put
18:31:49  <nrw>SuperPhly: why are you reading?
18:31:50  <SuperPhly>if i were adding to the index
18:32:04  <SuperPhly>because i need to know the value that's in there in order to add
18:32:33  <nrw>SuperPhly: that sounds more like reducing than indexing.
18:32:42  <SuperPhly>may i explain the situation?
18:32:53  <nrw>SuperPhly: drop it in a gist.
18:33:35  <nrw>SuperPhly: ... yes, you may. we're in this channel to talk about stuff like this. :)
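[editor's note: a minimal sketch of the "update your indexes in a batch with the change" advice above. The `genre-` index prefix, the artist object, and the key shapes are illustrative assumptions, not from the conversation; the op objects match levelup's `batch()` array form.]

```javascript
// One document write plus its secondary-index entry, applied atomically.
// `db` would be a levelup instance; building the op array is plain JS.
const artist = { id: 'artist-42', name: 'Nina Simone', genre: 'jazz' };

const ops = [
  // primary record, keyed by the id the artist already carries
  { type: 'put', key: artist.id, value: JSON.stringify(artist) },
  // secondary index entry: a range read over 'genre-jazz-' finds this artist
  { type: 'put', key: `genre-${artist.genre}-${artist.id}`, value: artist.id },
];

// db.batch(ops, callback)  // both writes land together or not at all
```

Because both ops go through one `batch()`, a crash can never leave the index pointing at a document that was not written (or vice versa).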
18:36:40  <SuperPhly>https://gist.github.com/superphly/9bdadfc74d38f4715eed
18:39:30  <nrw>SuperPhly: are you just looking for a way to implement a primary key counter for numbering artists?
18:39:46  <nrw>SuperPhly: is there a reason you're not using some entropy-based id?
18:40:01  <SuperPhly>now you've lost me ;)
18:40:28  <nrw>SuperPhly: are you just numbering artists to make sure they each have a unique id?
18:40:45  <SuperPhly>well, they are associated with the ID that i'm pulling them from across the web
18:40:56  <SuperPhly>so i can match them back up later
18:41:30  <nrw>SuperPhly: so, something else assigns the id to an artist. not you. correct?
18:41:38  <SuperPhly>correct.
18:41:56  <SuperPhly>i'm caching data here locally so i can do my own calculations since i don't have direct access to the database.
18:42:20  <SuperPhly>hitting the API (1 request every 4 seconds)
18:43:28  <SuperPhly>nrw: i'm kind of a novice programmer. I may not know the best way to do things, but I can quickly understand when I'm doing something the wrong way ;)
18:43:48  <nrw>SuperPhly: no worries, i just don't see where there's a problem. :P this seems to handle all the issues you've pointed out. db.put('artist-1', {id: 'artist-1', other: 'properties'})
18:44:16  <SuperPhly>right
18:44:30  <SuperPhly>but say i want to iterate through them or do a count of how many artist-#'s there are
18:44:31  <nrw>SuperPhly: you can save a document by the known id. when you get it, you have the id (along with the rest of its data) so you can save any changes.
18:44:41  <nrw>SuperPhly: ah
18:45:00  <SuperPhly>I'd have to calculate that and store it right?
18:45:06  <nrw>SuperPhly: you are counting the number of artists, not asking which artist ids are in use, correct?
18:45:39  <nrw>SuperPhly: do you want the database to answer the question "how many artists are there?"
18:45:47  <SuperPhly>both. yes.
18:46:02  <SuperPhly>so i can do something like for each artist do x
18:46:16  <nrw>SuperPhly: that's a map-reduce problem https://github.com/dominictarr/map-reduce
18:47:20  <nrw>SuperPhly: if you're unfamiliar with map reduce: https://en.wikipedia.org/wiki/Map_reduce#Examples
18:47:53  <SuperPhly>so it sorta counts and stores that data on update?
18:48:10  <SuperPhly>keeping tabs on things without me having to do it manually?
18:48:26  <nrw>SuperPhly: that's not a bad explanation, but not very complete. :P
18:48:58  <SuperPhly>of course ;) it never is...
18:49:18  <nrw>SuperPhly: i'd summarize it as: for every change, the map function is run. data emitted by the map function is stored. that data can be reduced into a smaller, useful value.
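[editor's note: a tiny sketch of the map/reduce summary above ("data emitted by the map function is stored; that data can be reduced into a smaller, useful value"), using plain JS rather than the dominictarr/map-reduce API. The emitted `[key, value]` pairs are invented for illustration.]

```javascript
// The "map" step would run on every change and emit one pair per artist.
const emitted = [
  ['artist-1', 1],
  ['artist-2', 1],
  ['artist-3', 1],
];

// The "reduce" step collapses the emitted values into one useful number:
// here, a running count of artists, without scanning the raw documents.
const count = emitted.reduce((sum, [, value]) => sum + value, 0);
```

The point of the pattern is that the count is maintained incrementally as changes arrive, instead of being recomputed by iterating all 250,000 records on every query.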
18:49:43  <SuperPhly>gotcha. so if i had a list of 250,000 id's
18:49:50  <SuperPhly>how would i iterate through them?
18:49:59  <SuperPhly>an array with 250,000 values isn't a good idea
18:50:05  <nrw>SuperPhly: db.createReadStream()
18:50:19  <nrw>SuperPhly: what are you doing when you iterate through them?
18:50:54  <SuperPhly>pulling data out of the object and sending it to the console or file
18:51:07  <nrw>SuperPhly: that is a job for a readstream
18:51:21  <nrw>SuperPhly: db.createReadStream().pipe(process.stdout)
18:51:44  <nrw>SuperPhly: are you familiar with streams?
18:51:55  <SuperPhly>no, but i am getting where you're going i think.
18:52:02  <SuperPhly>do i specify some sorta "key" mask?
18:52:03  <nrw>SuperPhly: this is a good intro: https://github.com/substack/stream-handbook
18:52:11  <SuperPhly>"artist-"
18:52:44  <nrw>SuperPhly: everything is sorted lexicographically: db.readStream({start: 'artist-', end: 'artist-\xff'})
18:53:07  <SuperPhly>\xff?
18:53:23  <nrw>SuperPhly: that's the max value of a character
18:53:47  <SuperPhly>the "last" character in the alphabetical list of characters
18:53:49  <nrw>SuperPhly: ... that's not strictly true, but for this case, think of that as the biggest value
18:53:57  <nrw>SuperPhly: yes
18:54:02  <SuperPhly>yeah, i could say artist- through artist-9999999
18:54:05  <nrw>SuperPhly: see the levelup readstream api here: https://github.com/rvagg/node-levelup#createReadStream
18:54:07  <SuperPhly>and it would work for the most part
18:54:14  <SuperPhly>man, this is exciting
18:55:01  <nrw>SuperPhly: {start: 'artist-', end: 'artist-9999999'} would also return artist-99999999999999999
18:55:16  <SuperPhly>ah, right.
18:55:47  <SuperPhly>but that doesn't exist, so we'd be safe right?
18:55:57  <SuperPhly>artist-0 artist-9
18:56:00  <nrw>SuperPhly: of course, just know that it's not about the numbers
18:56:09  <SuperPhly>it's a string
18:56:25  <nrw>SuperPhly: yes
18:56:46  <nrw>SuperPhly: anything that starts with that string, or sorts between those strings will be emitted
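[editor's note: a self-contained sketch of the range semantics nrw describes above. Plain JS string comparison models LevelDB's byte-wise key ordering closely enough for ASCII keys; the sample keys are invented.]

```javascript
// Keys that a {start: 'artist-', end: 'artist-\xff'} range would emit:
// anything that sorts between those two strings, in sorted order.
const keys = ['artist-1', 'artist-10', 'artist-2', 'banana', 'aardvark'];

const inRange = keys
  .filter((k) => k >= 'artist-' && k <= 'artist-\xff')
  .sort();

// Note 'artist-10' sorts BEFORE 'artist-2': these are strings, not numbers.
// Zero-padding the numeric part (artist-000002) is the usual fix.
```

'\xff' works as an end bound here because it compares higher than any character that appears in the ids, which is exactly the "biggest value for this case" caveat from the conversation.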
18:57:09  <nrw>SuperPhly: also: if you're gonna be working with level, you definitely want to get comfortable with streams.
18:57:36  <SuperPhly>what order does it return things?
18:57:41  <SuperPhly>randomly or in sequence?
18:57:49  <nrw>SuperPhly: sequence
18:57:54  <SuperPhly>PERFECT.
18:58:09  <nrw>SuperPhly: pour yourself a cup of coffee and read that levelup readme. you won't have many questions left afterwards. :P
18:58:19  <SuperPhly>and the DB is efficient for these kinds of actions?
18:58:22  <SuperPhly>man, this is great.
18:59:22  <nrw>SuperPhly: level tries to expose things it's good at.
18:59:48  <nrw>SuperPhly: it's a log structured merge tree: http://en.wikipedia.org/wiki/Log-structured_merge-tree
19:00:56  <nrw>SuperPhly: i'd encourage you to make it work before you worry about efficiency (the Rule of Optimization) http://www.faqs.org/docs/artu/ch01s06.html
19:01:17  <SuperPhly>Did you send me that readme link?
19:01:29  <SuperPhly>I have the links you sent me but none of them are labeled as readme
19:01:40  <nrw>SuperPhly: this is the page: https://github.com/rvagg/node-levelup#createReadStream
19:01:53  <SuperPhly>Awesome.
19:01:55  <SuperPhly>I'm reading that one now.
19:15:27  <hij1nx>who has a multi-get module?
19:18:28  <hij1nx>annoying to write often
19:19:00  <wangbus>got a few questions about level.. if someone has some insight it'd be really helpful
19:19:34  <wangbus>been a couch dev for about half a year and the reason i'm looking at level is because i want more control on the database and deployment
19:20:25  <wangbus>for a huge web application, for our own database implementation via level/node would we expose the database via some kind of http interface like couch?
19:20:48  <nrw>hij1nx: i'd like to know the answer to that, too!
19:20:51  <Aria>Possibly. Or multilevel
19:21:09  <Aria>Or you'd replicate data, distributed style, depending on your use-case, wangbus
19:21:39  <wangbus>i'm leaning toward distributed style
19:22:14  <wangbus>i just want it to be easy to deploy
19:22:23  <wangbus>npm install and it installs all the reqs
19:22:27  <wangbus>no external deps
19:23:33  <wangbus>Aria: for my usecase it'd be for text data with versioning and timestamps
19:24:02  <wangbus>i'd like to be able to replicate and scale with ease via something that can be installed via npm install
19:26:21  <Aria>Update pattern and consistency requirements are the parts that usually affect how you shape your databases
19:27:10  <wangbus>ah.
19:27:19  <wangbus>i guess is it worth the effort.
19:27:31  <wangbus>to get rid of external dependencies
19:32:50  <wangbus>is there a good module for versioning
19:35:35  <nrw>wangbus: semver
19:35:38  <nrw>:)
19:35:46  <wangbus>lol
19:36:22  <wangbus>i guess couch just does key by id and a version #
19:36:27  <wangbus>not exactly rocket science
19:36:48  <Aria>Yeah. In level, you'd do similar: id first, version second, so you can retrieve all keys for the range of [id-0 to id-any]
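[editor's note: a hedged sketch of Aria's "id first, version second" key layout. The '!' separator and the 8-digit zero padding are assumptions chosen for illustration; any separator that sorts below the version digits would do.]

```javascript
// Compose a key as id + separator + zero-padded version, so that all
// versions of one document are contiguous and sort in numeric order.
function versionKey(id, version) {
  return `${id}!${String(version).padStart(8, '0')}`;
}

// Lexicographic sort now matches numeric version order (1, 2, 10):
const sorted = [
  versionKey('doc-a', 2),
  versionKey('doc-a', 10),
  versionKey('doc-a', 1),
].sort();
```

A range read from `'doc-a!'` to `'doc-a!\xff'` then retrieves every version of `doc-a` and nothing else, which is the [id-0 to id-any] retrieval described above.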
19:37:39  <wangbus>so different question
19:37:51  <wangbus>i understand that reads scale with # of cores
19:38:10  <wangbus>and writes are better single threaded
19:39:00  <wangbus>just curious, how would you scale writes if you had a high freq of writes
19:39:28  <nrw>wangbus: batch()?
19:40:48  <nrw>wangbus: if you are running level in a process (not over multilevel) you usually get in the neighborhood of 100k writes/second. have you measured that db performance is your bottleneck?
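[editor's note: a sketch of the `batch()` suggestion above for high write frequency. The `event-` key scheme and the batch size are invented for illustration; the op shape is levelup's `batch()` array form.]

```javascript
// Group many small writes into one batch() call instead of 1000 put()s:
// one syscall-ish round trip into leveldown, one atomic commit.
const ops = [];
for (let i = 0; i < 1000; i++) {
  ops.push({
    type: 'put',
    key: `event-${String(i).padStart(6, '0')}`, // padded so keys sort in order
    value: String(i),
  });
}

// db.batch(ops, (err) => { /* all 1000 writes committed together */ })
```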
19:41:13  <wangbus>i'm just asking
19:41:16  <wangbus>didn't have this problem yet
19:41:25  <wangbus>don't really think there would be a problem w/ #'s like that.
19:41:46  <wangbus>nrw: thanks for the insight though.
19:41:57  <nrw>wangbus: the Rule of Optimization! http://www.faqs.org/docs/artu/ch01s06.html
19:42:30  <wangbus>yes i always go back to this
19:43:01  <wangbus>i'm writing large scale forum software
19:43:30  <wangbus>nrw: do you think it's a good idea to use level to tailor things for my system?
19:44:09  <nrw>wangbus: that is a vague question. :P i would say "both" with the information i have. :)
19:45:14  <wangbus>what other information would you need to make that assessment
19:45:41  <nrw>wangbus: what do you want deployment to be like?
19:45:44  <wangbus>i'm just looking into level so i'm not sure what it's capable of. but the overhead of doing every piece is pretty daunting but probably rewarding
19:46:00  <nrw>wangbus: what do you mean by 'every piece'?
19:46:01  <wangbus>npm install -g foo
19:46:28  <wangbus>then you can control the system and even attach to other nodes w/ that entry point
19:47:11  <nrw>wangbus: so it's on multiple servers, each server has a writeable database and they sync
19:47:14  <nrw>wangbus: and you come from couch.
19:47:21  <wangbus>every piece meaning everything i'd need to build on top of level to use as this application's db.
19:47:34  <wangbus>yes exactly like that.
19:47:35  <nrw>wangbus: how is that different from any other database?
19:48:16  <wangbus>not sure
19:48:35  <wangbus>i believe having no external deps is pretty powerful
19:49:44  <nrw>wangbus: i think you'll find that you're just saving yourself the trouble of running build-couchdb when you deploy. which is fine. :)
19:49:48  <nrw>wangbus: it sounds like pouchdb-server is a good choice for you.
19:50:04  <nrw>wangbus: syncs just like couch. runs in process. is backed by level.
19:50:10  <wangbus>1 more thing i'd like to add
19:50:31  <wangbus>i want it to sync with other nodes through a tracker like interface
19:50:34  <wangbus>much like btsync
19:50:41  <wangbus>so you could do
19:50:46  <wangbus>foo sync <hash>
19:50:53  <wangbus>then you can sync w/ other nodes that provide the same hash
19:51:26  <nrw>wangbus: i think you're into an application layer problem, not database layer problem, there.
19:51:53  <nrw>wangbus: you'd do that with couch by following the changes feed of a database named 'hash', right?
19:51:57  <wangbus>so you're saying build that conectivity into the app layer?
19:52:03  <nrw>wangbus: yes.
19:52:08  <wangbus>guess i'm thinking about it wrong
19:52:37  <wangbus>k this is good insight talking about it
19:52:49  <wangbus>thanks for your input. it was very helpful.
19:53:03  <nrw>wangbus: glad to help. :)
19:53:09  <nrw>wangbus: i'm gonna say one more thing...
19:53:20  <wangbus>will bother you again sometime when i have code to show you
19:53:27  <wangbus>oh?
19:53:41  <nrw>wangbus: whatever protocol you're going to use to sync, you're gonna have that protocol be as dumb as possible. figuring out *what* to sync *when* is what your application is going to dictate.
19:54:00  <nrw>wangbus: perhaps that will save you a headache or two. :P
19:54:08  <wangbus>interesting.
19:54:10  <wangbus>thanks.
19:54:13  <Aria>I'd be looking into CRDTs, vector clocks and other distributed systems algorithms.
19:54:28  <Aria>There's some interesting experiments out there with CRDTs and LevelDB.
19:54:58  <wangbus>awesome
19:55:41  <wangbus>thanks for the help guys
19:55:48  <nrw>Aria wangbus: level-replicate uses scuttlebutt-style replication for the whole db. https://github.com/dominictarr/level-replicate
19:55:59  <Aria>Yup!
19:56:02  <nrw>... if you're looking at scuttlebutt.
19:56:03  <wangbus>nice
19:56:07  <wangbus>is that still in dev?
19:56:14  <wangbus>looked at scuttlebutt about a year ago.
19:56:59  <nrw>wangbus: scuttlebutt is done, as i last heard from dominictarr.
19:57:05  <wangbus>ah
19:57:13  <nrw>wangbus: "done" means "complete"
19:57:27  <wangbus>widely used?
19:58:10  <nrw>wangbus: as far as i can tell. i've used it. :P you'll need to try it for your use case to see how it fits.
19:58:17  <wangbus>cool
19:58:21  <wangbus>thanks i'll look into it