01:50:59  <duvnell>guide/eventloops.html mentions running libuv alongside other event loop systems (e.g. Qt)... I'm interested in that very thing. Is there an example of this, or something more in the docs about it?
01:51:41  <duvnell>All the calls I see take over the calling thread for some amount of wait-time
01:52:03  <duvnell>or, I don't know when new events arrive in order to call it with 0 wait-time
08:39:51  <Raziel>duvnell: http://docs.libuv.org/en/v1.x/idle.html
08:39:53  <Raziel>maybe this?
08:40:20  <Raziel>would allow you to empty events in the other queues
14:06:24  <duvnell>Raziel: thx, but I'm not sure that would work... I think you're suggesting to run the uv loop and, in some attached idle callback, drain and process any other event loop systems' queues. But that wouldn't process events from those queues when they arrive, i.e. they don't interrupt uv's waiting on its queue as soon as something arrives .. nor would it handle timed events from those other queues.
14:07:25  <duvnell>e.g. if I wanted to simultaneously run a uv loop on the gui thread of a win32 app, I can either pump the uv queue or the win32 queue, but not both at the same time.. at least not without some background thread and (inefficiently) proxying all the events.
14:07:46  <duvnell>unless uv does run atop the win32 event loop on win32, and likewise on other platforms
14:08:15  <Raziel>if I understand that page right (as it says having an active idle handle means the libuv loop will perform a zero timeout poll instead of blocking)
14:08:26  <Raziel>you could just do a while (PeekMessage()) in your idle handler
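[Editor's note: Raziel's idle-handle suggestion can be sketched as below. This is a hedged sketch, not a tested integration: it assumes a single thread owns both the window and the uv loop, and relies on the documented behaviour that an active idle handle makes uv_run() do a zero-timeout poll instead of blocking.]

```c
/* Sketch: drain the Win32 message queue from a libuv idle handle.
   Requires <windows.h> and libuv; single GUI+libuv thread assumed. */
#include <windows.h>
#include <uv.h>

static void pump_win32(uv_idle_t *handle) {
    MSG msg;
    /* PeekMessage does not block, so this only empties what is pending */
    while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
        if (msg.message == WM_QUIT)
            uv_stop(handle->loop);   /* make uv_run() return */
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}

int main(void) {
    uv_loop_t *loop = uv_default_loop();
    uv_idle_t idle;
    uv_idle_init(loop, &idle);
    uv_idle_start(&idle, pump_win32);
    /* ... create the window and uv handles here ... */
    return uv_run(loop, UV_RUN_DEFAULT);
}
```

As discussed below, the cost is that the loop never blocks, so some sleep/yield is needed to avoid spinning at 100% CPU.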
14:15:20  <duvnell>yeah but what if e.g. a mouse press occurs while uv is waiting for network i/o? .. it won't be handled until something with the network happens to wake it up.
14:15:49  <Raziel>it's not going to wait due to the zero timeout
14:16:04  <Raziel>it'll pick it up on the next (or 500 later) loop iteration
14:17:31  <Raziel>the downside is that you'll probably need to put some sleep/yield somewhere because of CPU usage if you keep doing nothing but not blocking
14:18:44  <Raziel>of course you could still just have 2 threads (I'd probably pick the UI one for the main thread, those UI libs tend to be picky especially on mobile OSes), but then there's the synchronization issues
14:19:11  <duvnell>well, yeah, I was gonna say that in that case it's the other way around: the idle handler would need to block on win32 events so that e.g. mouse events are handled on time, but then the network events are starved. Having to put in a sleep or timed event either way to poll just introduces a delay on one side or the other.. certainly don't want to spin at 100%
14:19:51  <Raziel>hm but mouse events would still be handled on time
14:20:07  <duvnell>there's uv_backend_fd() which indicates a background thread could do the "waiting" for uv events and then wake up the UI thread to do a UV_RUN_ONCE
14:20:11  <Raziel>(unless you do something that takes a lot of processing time before handling them, obviously)
14:20:19  <duvnell>that doesn't work on win32 however
14:20:32  <duvnell>(says it's only for kqueue and epoll)
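[Editor's note: for the kqueue/epoll platforms where uv_backend_fd() does apply, the background-waiter idea duvnell describes could look roughly like this. wake_ui_thread() is a hypothetical hook into the native loop (e.g. a wakeup pipe or XSendEvent); real code would also have to pause the waiter while the UI thread is inside uv_run().]

```c
/* Sketch (POSIX-only): a helper thread waits on libuv's backend fd
   and nudges the UI thread, which then runs the uv loop non-blocking. */
#include <poll.h>
#include <pthread.h>
#include <uv.h>

extern void wake_ui_thread(void);   /* hypothetical native-loop wakeup */

static void *backend_waiter(void *arg) {
    uv_loop_t *loop = arg;
    for (;;) {
        struct pollfd pfd = { .fd = uv_backend_fd(loop), .events = POLLIN };
        /* uv_backend_timeout() also accounts for pending uv timers */
        poll(&pfd, 1, uv_backend_timeout(loop));
        wake_ui_thread();  /* UI thread: uv_run(loop, UV_RUN_NOWAIT) */
    }
    return NULL;
}
```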
14:21:33  <duvnell>I was thinking that on win32 uv would have to use the win32 async io stuff.. which does pump the win32 queue IIRC
14:21:42  <duvnell>so perhaps it's already a non-issue on win32
14:21:54  <duvnell>uv and win32 will be pumped together
14:23:20  <duvnell>"IOCP on Windows"
14:24:32  <Raziel>hm io completion ports use thread pools
14:24:59  <duvnell>uv uses iocp
14:25:28  <Raziel>I'm just saying it doesn't need a PeekMessage/GetMessage loop ;)
14:26:26  <duvnell>does it not deliver your "completion" events back to some main thread?
14:26:35  <duvnell>via the event system I would suspect
14:26:43  <duvnell>perhaps not
14:26:52  <Raziel>iirc there's an option for that yeah
14:28:12  <duvnell>guess I can grep the uv src for TranslateMessage/DispatchMessage .. 1 sec
14:30:23  <duvnell>only in src/win/tty.c o_O?
14:34:47  <duvnell>at bottom uv seems to call GetQueuedCompletionStatusEx()
14:35:05  <duvnell>perhaps that's interrupted by win32 events.. not sure
14:36:21  <Raziel>hm it can only be interrupted by IO completion stuff iirc (and only when alertable is TRUE)
14:37:22  <Raziel>do your libuv and ui "threads" need to talk a lot?
14:37:32  <Raziel>because there's always uv_async_send
14:40:03  <duvnell>well the point would be to run the network i/o and gui on the same thread, so that e.g. something that comes in from the network might get translated into some change in the UI. Having to proxy that across threads is possible, but it's a considerable design element, having to be careful to do things on the correct thread
14:40:46  <duvnell>and at the same time not have arbitrary delays in event handling e.g. those that would be introduced by having to periodically handle events from one queue or the other
14:40:52  <Raziel>hm is it though? cause you're probably already separating the state from its rendering
14:41:26  <duvnell>that would mean that any event from the backend would have to be proxied to the frontend, and any action taken on the frontend would have to be proxied to the backend's thread
14:41:42  <Raziel>yup, aka a client-server design :D
14:42:15  <duvnell>the "server" is already elsewhere :-)
14:42:33  <duvnell>to get events from the server into some other process is what uv is already doing
14:42:45  <duvnell>now something else has to be done to get it from uv to ui thread.. seems unfortunate
14:43:54  <Raziel>what are you rendering, lists and stuff (like some "normal" office/ide/etc app) or game-ish graphics?
14:45:43  <Raziel>(btw I have libuv and UI in a single thread and can render game stuff just fine with no mouse/keyboard lag)
14:47:13  <duvnell>it's not specified yet.. I have an existing event processing system which runs atop Win32, Cocoa and X11 native event loops.. I was looking to add network i/o
14:47:33  <duvnell>.. and do so without getting into overlapped i/o on win32 myself
14:47:38  <duvnell>yuk
14:47:45  <Raziel>what's the existing system?
14:47:52  <Raziel>or it's something proprietary/custom?
14:47:59  <duvnell>something proprietary at my office
14:49:11  <Raziel>dunno, run libuv in a thread, post big chunk of network stuff into a shared queue and each time there's a new message in that queue you PostMessage() to that native event loop so it gets processed? :D
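[Editor's note: Raziel's queue-plus-PostMessage idea, sketched. g_main_hwnd and drain_net_queue() are hypothetical; PostMessage() is documented as callable from any thread, which is what makes this pattern work.]

```c
#include <windows.h>

#define WM_NET_READY (WM_APP + 1)

extern HWND g_main_hwnd;            /* hypothetical: the main window */
extern void drain_net_queue(void);  /* hypothetical: pops the shared queue */

/* libuv thread: called after pushing network data into the shared queue */
static void notify_gui(void) {
    PostMessage(g_main_hwnd, WM_NET_READY, 0, 0);
}

/* GUI thread: the native loop dispatches the nudge like any other message */
static LRESULT CALLBACK wndproc(HWND h, UINT m, WPARAM w, LPARAM l) {
    if (m == WM_NET_READY) {
        drain_net_queue();          /* apply network updates to the UI */
        return 0;
    }
    return DefWindowProc(h, m, w, l);
}
```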
14:50:21  <Raziel>or if that proprietary stuff already uses boost, there's asio (otherwise I wouldn't recommend boost on my worst enemy)
14:50:35  <duvnell>yeah I'm leaning toward having to have a background thread.. at least I think I can do the proxying between threads in a lower layer without the code on top having to know anything about that (e.g. wrapping all network i/o in some Stream interface)
14:51:04  <duvnell>we do use some boost, but no asio
14:52:38  <duvnell>it's just that the comment here: http://docs.libuv.org/en/v1.x/guide/eventloops.html?highlight=qt about running alongside qt perhaps was a hasty thought
14:52:50  <duvnell>I was hoping the author of that comment might .. um .. comment
14:53:33  <duvnell>"juggling multiple loops" it says
14:53:36  <Raziel>hm yeah not sure I see the link between that stop example and having Qt alongside
14:53:43  <duvnell>how? without some sort of arbitrary delay
14:54:36  <duvnell>aside from this, I've found everything to be pretty clearly stated
14:54:39  <duvnell>which is good