00:17:47  * Therstrium quit (Remote host closed the connection)
00:18:00  * Therstrium joined
01:03:07  * zju quit (Ping timeout: 256 seconds)
01:36:44  * MoZu4k_ part
03:12:13  * tunniclm_ quit (Ping timeout: 276 seconds)
08:48:08  * rendar joined
08:51:30  * tunniclm_ joined
09:41:35  * seishun joined
10:14:03  * a3f joined
10:31:38  * seishun quit (Disconnected by services)
10:31:45  * seishun joined
10:35:11  * thealphanerd quit (Quit: farewell for now)
10:35:42  * thealphanerd joined
10:55:44  * seishun quit (Ping timeout: 256 seconds)
11:32:18  * seishun joined
12:07:42  * seishun quit (Ping timeout: 256 seconds)
12:18:58  * seishun joined
12:31:52  * zju3 joined
12:32:11  * zju3 changed nick to zju_25
12:32:32  <seishun> what's the plan for the 1.10.0 release?
13:03:13  * a3f quit (Quit: Zzzzz..)
13:33:22  * mbroadst joined
13:53:23  * etnbrd joined
14:03:34  * Jacob843 quit (Remote host closed the connection)
14:03:58  * Jacob843 joined
14:08:46  * mbroadst quit (Changing host)
14:08:46  * mbroadst joined
15:04:30  <mbroadst> is it possible to poll a fifo character device with libuv?
15:49:57  * luka_ quit (Ping timeout: 248 seconds)
16:08:27  <rendar> mbroadst: why not?
16:08:46  <rendar> mbroadst: the operating system should treat it just like a regular file
16:10:13  <mbroadst> rendar: I'm actually attempting this in node, and trying to trace whether the error is there or in libuv itself
16:10:53  <mbroadst> the problem is that it isn't a regular file, right? so e.g. an inefficient stat-based approach doesn't work, since the OS will always report that it has a size of 0
16:11:17  <rendar> it's more like a char /dev
16:11:27  <mbroadst> yeah exactly
16:11:28  <rendar> it may block indefinitely
16:11:37  <rendar> that won't happen with a regular file
16:12:44  <mbroadst> so at this point I've just had to make a stream that periodically does a read on the device, which is pretty inefficient
16:12:47  <rendar> mbroadst: unfortunately you can't add a filesystem fd to the epoll subsystem
16:12:57  <mbroadst> but I guess what you're saying is that there just isn't a facility to even poll such a device
16:13:27  <rendar> not for filesystem fds, i mean, epoll() works with sockets and pipes, but actually you can try it
16:13:44  <rendar> try adding the fifo fd, and see if you receive poll notifications
16:15:33  <mbroadst> I think you're right about its potential to block indefinitely, which is perhaps the argument for not supporting polling these fds (as you might completely tie up the thread pool in node)
16:17:04  <rendar> mbroadst: exactly
16:17:22  <mbroadst> though in my case I wish they would let me be a little dangerous :)
16:17:35  <rendar> mbroadst: since libuv (and hence node) does i/o operations on files by queueing them onto secondary threads
16:18:13  <mbroadst> theoretically, if done carefully, that should only lock up as many threads as I call the polling operation on, right?
16:19:05  <rendar> in theory, yes
16:19:31  <rendar> that, or even fewer threads than that, if 2 fifo reads are enqueued on the same thread
16:19:54  <mbroadst> I wonder what the argument against supporting that would be then, lack of cancelability?
16:20:19  <mbroadst> we should be able to preempt those threads if a cancel is explicitly called
16:20:44  <mbroadst> (sorry, just brainstorming before I go start opening tickets)
16:21:00  <rendar> mbroadst: nope, it's just that when you open(2) a fifo, you go through the filesystem, which has different poll facilities than sockets and pipes.. so epoll() just doesn't support i/o alerts on fds opened through the filesystem, only Windows supports this
16:21:13  <rendar> now the thing is that fifos, i.e. stuff created with mkfifo, are basically pipes!
16:21:21  <rendar> so epoll() may work with fifos
16:21:37  <rendar> maybe someone here has already tried that before..
16:21:48  <rendar> or you can try it yourself and share the results here
16:21:52  <mbroadst> right, okay, so I should go experiment a little bit
16:22:05  <rendar> mbroadst: it's really easy
16:22:13  <mbroadst> I'll put that on the list, I need to finish implementing this very inefficient approach first :)
16:22:33  <rendar> mbroadst: just create an epoll() fd and associate a fifo fd with it, then see whether epoll() waits indefinitely or returns alerts
16:22:47  <rendar> you can do this in about ~30 lines of code! :)
16:23:49  <mbroadst> yeah, better yet, I can probably look at prior art on this
16:24:03  <mbroadst> glib has these "io channels" which operate on fifos
16:24:35  <rendar> mbroadst: maybe you can use the aio_* polling functions with fifos
16:24:53  <mbroadst> well, they operate on anything really, but fifos are supported so they must have some backend
16:26:27  <rendar> look, fifos are really pipes, just try the epoll thing, i'm sure it may actually work..
16:28:49  <mbroadst> I absolutely will, I just need about 10min to finish the other implementation here :)
16:29:07  <rendar> :)
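
A minimal sketch of the experiment rendar describes, assuming a FIFO created beforehand with mkfifo(1); the path /tmp/testfifo is illustrative only. It registers the FIFO's fd with epoll and waits for readiness notifications:

/* Roughly the ~30-line test: does epoll deliver readiness alerts for a
 * FIFO?  Create the FIFO first (mkfifo /tmp/testfifo), run this, then
 * from another shell:  echo hello > /tmp/testfifo  */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/epoll.h>

int main(void) {
    /* O_NONBLOCK so open() returns even before any writer connects. */
    int fd = open("/tmp/testfifo", O_RDONLY | O_NONBLOCK);
    if (fd < 0) { perror("open"); return 1; }

    int epfd = epoll_create1(0);
    if (epfd < 0) { perror("epoll_create1"); return 1; }

    struct epoll_event ev;
    memset(&ev, 0, sizeof(ev));
    ev.events = EPOLLIN;
    ev.data.fd = fd;
    /* A regular file would be rejected here with EPERM; a FIFO is
     * pipe-backed and should be accepted. */
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) < 0) {
        perror("epoll_ctl");
        return 1;
    }

    for (;;) {
        struct epoll_event out;
        if (epoll_wait(epfd, &out, 1, -1) < 0) { perror("epoll_wait"); return 1; }

        char buf[512];
        ssize_t r = read(fd, buf, sizeof(buf));
        if (r > 0) {
            printf("readable: %zd bytes\n", r);
        } else if (r == 0) {
            /* No writer (or the writer closed): the fd stays "readable"
             * forever at EOF, so stop rather than spin. */
            puts("EOF");
            break;
        }
    }
    close(epfd);
    close(fd);
    return 0;
}

On the libuv side, uv_poll_init() plus uv_poll_start() wrap this same mechanism on Unix, so if the raw epoll test behaves, watching the FIFO fd with a uv_poll_t would be the natural next experiment.
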
17:01:27  * pspi quit (Remote host closed the connection)
17:09:44  * seishun quit (Ping timeout: 256 seconds)
17:22:43  * mbroadst quit (Ping timeout: 252 seconds)
17:23:11  * mbroadst joined
17:26:11  * seishun joined
18:44:59  <AlsoDirkson> Hey all. I think I'm wrlock-ing a rwlock twice. Does anyone have any brilliant ideas on how to track that down? My usual tools are gdb, valgrind, and clang's sanitizers, but none of them seem to be able to tell me where I previously locked it.
18:47:30  <AlsoDirkson> (The reason I think I'm locking a rwlock twice is that my program crashes when trying to wrlock it, and the -only- way I was able to make that happen in my test case was to write-lock twice from the same thread without unlocking in between.)
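
One low-tech way to answer that, sketched here against plain pthreads (every name below, e.g. dbg_rwlock_t and WRLOCK, is made up for illustration): wrap the rwlock in a debug struct that records the file and line of the currently held write lock, so the second wrlock can report both sites instead of crashing inside pthreads:

/* Debug wrapper that remembers where the write lock was last taken.
 * Racy, best-effort reads of the bookkeeping fields are acceptable
 * for a debug-only tool; this tracks write locks only, not rdlocks. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    pthread_rwlock_t lock;
    const char *wr_file;   /* site of the held write lock, or NULL */
    int         wr_line;
    pthread_t   wr_owner;
} dbg_rwlock_t;

#define DBG_RWLOCK_INIT { PTHREAD_RWLOCK_INITIALIZER, NULL, 0 }

static int dbg_wrlock(dbg_rwlock_t *l, const char *file, int line) {
    /* If this thread already holds the write lock, report both sites
     * instead of deadlocking/crashing inside pthreads. */
    if (l->wr_file != NULL && pthread_equal(l->wr_owner, pthread_self())) {
        fprintf(stderr, "double wrlock: first taken at %s:%d, again at %s:%d\n",
                l->wr_file, l->wr_line, file, line);
        abort();
    }
    int rc = pthread_rwlock_wrlock(&l->lock);
    if (rc == 0) {
        l->wr_owner = pthread_self();
        l->wr_file  = file;
        l->wr_line  = line;
    }
    return rc;
}

static int dbg_wrunlock(dbg_rwlock_t *l) {
    l->wr_file = NULL;   /* caller must actually hold the write lock */
    return pthread_rwlock_unlock(&l->lock);
}

/* Use these in the suspect code paths instead of the raw pthread calls. */
#define WRLOCK(l)   dbg_wrlock((l), __FILE__, __LINE__)
#define WRUNLOCK(l) dbg_wrunlock((l))

POSIX also permits pthread_rwlock_wrlock() to return EDEADLK when the calling thread already holds the write lock, so asserting on the return value of every lock call is another cheap check, though not every implementation reports it.
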
19:05:54  * mbroadst quit (Ping timeout: 256 seconds)
19:24:29  * rendar quit (Ping timeout: 265 seconds)
19:53:46  * rendar joined
20:45:53  * seishun quit (Ping timeout: 245 seconds)
21:37:08  * mbroadst joined
21:44:00  * rendar quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
21:44:28  * mbroadst quit (Ping timeout: 244 seconds)
22:20:16  <indutny> trevnorris: hey mate
22:20:24  <indutny> trevnorris: I know you like perf stuff
22:20:26  <indutny> trevnorris: https://github.com/indutny/heatline