00:00:15  <txdv>jeez
00:00:20  <txdv>wasn't active on libuv for quite some time
00:00:21  <amurzeau_>so probably not fixed with all updates on windows 8.1 ...
00:00:35  <txdv>first day i check in on the irc chat and someone finds a bug which crashes windows
00:00:38  <txdv>:D
00:00:55  <amurzeau_>:p
00:01:33  <txdv>ill take a lookg on the test and how to use fast path correctly though first
00:01:42  <txdv>because you are setting fast path on the accepting socket
00:01:55  <txdv>i dont know if that is the correct way
00:02:12  <txdv>even if it were incorrect though, this is clearly a bug, crashing the entire system should not be possible
00:02:25  <amurzeau_>actually, I wouldn't ever think to find that behind enabling a small feature, actually I discovered the new BSOD style on windows 8+ with that issue
00:02:50  <txdv>to be honest, i haven't seen windows > 7 crash until now
00:03:04  <amurzeau_>same as you
00:03:39  <amurzeau_>I just read the the technet article
00:03:56  <amurzeau_>it says to enable it before calling connect() or before calling listen()
00:04:54  <amurzeau_>"The target of the connection request must set the SIO_LOOPBACK_FAST_PATH IOCTL on the listen socket, that is, prior to accepting the connection."
00:05:00  <txdv>can you give me the link?
00:05:07  <amurzeau_>http://blogs.technet.com/b/wincat/archive/2012/12/05/fast-tcp-loopback-performance-and-low-latency-with-windows-server-2012-tcp-loopback-fast-path.aspx
00:05:22  <txdv>o it is how to correctly use it
00:05:29  <txdv>you see, you don't set it before connect
00:05:46  <amurzeau_>bind is called before connect if I'm right
00:05:59  <txdv>no
00:06:15  <txdv>why would you
00:06:31  <txdv>you can actually like call bind on a socket before connecting, then it will use a specific port (the one you provided)
00:06:37  <txdv>if you don't it will just use whatever is free
00:07:08  <txdv>maybe i'm wrong in the context of libuvs try_bind function
00:07:15  <amurzeau_>yes but libuv actually always call bind even for client sockets
00:07:47  <amurzeau_>https://github.com/libuv/libuv/blob/master/src/win/tcp.c#L767
00:08:13  <txdv>it does
00:08:18  <amurzeau_>with either ipv4 or ipv6 any address
00:08:23  <txdv>you are right, i was just about to see when try_bind was called
00:08:57  <amurzeau_>running benchmarks clearly shows the performance boost
00:09:12  <amurzeau_>I get almost x3 boost
00:09:43  <txdv>with a free bsod
00:09:47  <txdv>once in a while
00:09:47  <txdv>:D
00:09:55  <txdv>what make that test so special though
00:09:59  <amurzeau_>yes, but only with ipv6 ^^
00:10:02  <amurzeau_>ipv4 works fine
00:10:35  <amurzeau_>and benchmarks use ipv4
00:10:49  <amurzeau_>(so they don't trigger the bsod)
00:11:36  <txdv>so a minimum test case should include ipv6 and io completion ports
00:11:43  <txdv>because you said that the synchronous code works
00:11:47  <amurzeau_>yes
00:12:23  <amurzeau_>and both client & server need iocp I think
00:12:41  <amurzeau_>synchronous echo server + the test don't trigger the issue
00:13:41  <txdv>i wonder when the code stops working
00:13:54  <txdv>is it possible to debug it?
00:14:01  <amurzeau_>I think the best way may to change libuv code to synchronous calls maybe
00:14:55  <amurzeau_>no, if the code is slower (because of debugger / printf / whatever), this issue is not triggered
00:15:12  <amurzeau_>and debugging it with windbg and kernel mode debugging doesn't help either
00:15:34  <amurzeau_>(the failing code seems to be in a kernel worker thread)
00:22:28  <txdv>what are you going to do with this now?
00:24:40  <amurzeau_>I will try to debug it by replacing/removing code in libuv but I'm not sure to find anything useful ...
00:24:56  * rendar quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
00:25:06  <txdv>best thing you can do is find out which line in the test triggers it
00:25:21  <txdv>rewriting everything with pure io completion port api will be another bigger challenge
00:25:24  <txdv>:D
00:26:12  <amurzeau_>and maybe even with IOCP, it could work just because calls are not ordered the same way as in libuv ...
00:28:08  <amurzeau_>if I can't figure out the cause of this crash, maybe I will make this feature available manually (like keepalive and nodelay) for ipv4 only
00:28:52  <txdv>I think that would have been the best approach anyway
00:28:55  <amurzeau_>or maybe just not add it to libuv
00:29:05  <txdv>although free speed for no negative side effects is nice too
00:29:29  <amurzeau_>there is one negative effects: the IOCTL call has a cost
00:29:49  <amurzeau_>the benchmark shows -15%/20% less accept/s
00:30:17  <txdv>o you have to bind on every connect
00:30:23  <txdv>i mean to ioctl call
00:30:59  <amurzeau_>yes
00:31:02  <txdv>well
00:31:22  <txdv>uv_tcp_fastpath(uv_tcp_t, bool) it is
00:31:27  <txdv>on linux it just returns ENOTSUP
00:31:40  <txdv>on windows it tries to do the ioctl call
00:31:45  <txdv>and does ENOTSUP if some error is there
00:31:48  <amurzeau_>yes, that's what I have in mind :)
00:33:36  <txdv>I set up the environment for deving for libuv on windows but was too lazy to do anything
00:33:43  <txdv>the fast path issue looked like an easy one
00:33:46  <txdv>but you already did it
00:34:17  <txdv>amurzeau_: https://github.com/libuv/libuv/issues/489
00:34:19  <amurzeau_>not as easy as expected in fact :
00:34:20  <amurzeau_>:p
00:34:30  <txdv>well it is easy, its just blows up with ipv6
00:34:38  <txdv>the addition of the function is not that big
00:34:46  <txdv>also add documentation when it has to be called
00:34:47  <amurzeau_>maybe it can blows up with ipv4 too
00:34:55  <amurzeau_>with a 200Mhz cpu
00:35:38  <amurzeau_>(just speculation ^^)
00:36:07  <amurzeau_>on my machines it seems to be fine with ipv4, I ran benchmarks a lot and never had an issue
00:39:05  <amurzeau_>I need to do more tests on that for sure
00:54:05  <txdv>if you open up a pull request reference it issue 489 in it
00:54:08  <txdv>(the link i provided above)
10:57:39  <txdv>amurzeau: is it possible to trace system calls on windows?
10:59:28  <amurzeau>txdv: I don't know a tool like strace on windows
11:00:31  <amurzeau>txdv: I tried tracepoints in gdb, but there weren't supported on windows, so tried breakpoint + continue action on hit but was too slow
11:03:00  <amurzeau>maybe DrMemory can do that
11:03:25  <txdv>http://www.howzatt.demon.co.uk/NtTrace/
11:05:58  <amurzeau>nice find :) redirect to a file if it doesn't trigger the crash
11:10:12  <txdv>im trying to understand the trace code right now though
11:38:29  <txdv>seems like its hooking the symbol tables
12:30:26  <amurzeau>txdv: you tried it against run-tests.exe ?
12:31:29  <amurzeau>(I don't have a windows 8 available actually so can't test anything for now)
15:21:15  <uvfan>what's the command line to use gyp generate vc projects ? I tried a lot of times.
15:23:19  <uvfan>hi, fellows
15:27:11  * Damn3dquit (Ping timeout: 264 seconds)
15:28:44  <amurzeau>try vbuild.bat, it creates projects and build them
18:01:41  <gabrielschulhof>Hey! Quick question about uv_async_t ... if I allocate a uv_async_t on the heap, init it with uv_async_init, and then free it, will that leak?
18:13:21  * magic joined
18:13:45  * magic changed nick to Guest51288
18:14:10  <Guest51288>is it feasible/supported to "merge" libuv's event loop with other event loops?
18:15:19  <gabrielschulhof>Guest51288: Which other event loop are you talking about?
18:15:39  <Guest51288>this one is a proprietary one from a proprietary OS
18:15:50  <gabrielschulhof>Oh ...
18:15:53  <Guest51288>but imagine for example the win32 APIs event loop
18:15:57  <gabrielschulhof>What event loop is running first?
18:16:15  <Guest51288>in this case it's trickier, because in this (*shitty*) OS multithreading is not allowed
18:16:40  <Guest51288>gabrielschulhof: i think that might be up for debate, but most likely the existing one from the application
18:17:00  <Guest51288>i wonder if one event loop could call back something that evaluates the other event loop instead of blocking
18:17:16  <Guest51288>and maybe after a timeout re-evaluate that
18:17:45  <gabrielschulhof>Guest51288: Then you can use uv_backend_fd() and friends to create an event source in the proprietary event loop and run the uv event loop with uv_run(..., UV_NOWAIT)
18:17:54  <gabrielschulhof>I forget the exact name of the constant.
18:18:11  <Guest51288>thank you!
18:18:23  <Guest51288>that sounds neat :)
18:18:34  <gabrielschulhof>... but uv_backend_timeout() informs you how long you should wait, uv_backend_fd() can be passed to the other event loop for polling, and you can run the uv event loop without blocking with UV_NOWAIT ...
18:18:42  <gabrielschulhof>Guest51288: Good luck!
18:19:28  <gabrielschulhof>Guest51288: node-gtk contains an example of running the uv event loop on top of the glib event loop.
22:48:04  <amurzeau_>txdv: setting setsockopt(IPV6_V6ONLY) to always 1 seems to prevent the crash (here: https://github.com/libuv/libuv/blob/master/src/win/tcp.c#L327 "on = 1;")
22:49:44  <amurzeau_>yeah reproduced the issue with libuv client and synchronous custom server :D
22:53:56  <txdv>em
22:54:05  <txdv>is windows by default dual stack?
22:59:04  <amurzeau_>no I don't thing
22:59:24  <amurzeau_>I narrowed down the issue to IPV6 dual stack + fast path
23:00:05  <amurzeau_>code here: https://gist.github.com/amurzeau/07eb6e7a62f8a3c4b971
23:01:42  <txdv>this example freezes the system?
23:01:52  <amurzeau_>msdn: "By default, an IPv6 socket created on Windows Vista and later only operates over the IPv6 protocol" (https://msdn.microsoft.com/en-us/library/windows/desktop/bb513665%28v=vs.85%29.aspx)
23:01:52  <amurzeau_>yes
23:02:08  <amurzeau_>server.c compiles to server.exe and client.c to client.exe
23:02:16  <amurzeau_>then first start server.exe then client.exe
23:02:20  <txdv>create a projet file so microsoftlers can compile it themselfs
23:02:23  <txdv>send it to microsoft
23:02:50  <amurzeau_>yes, actually searching where I can put this :)
23:03:22  <txdv>well played
23:03:31  <txdv>did you use nttrace or did you do this on your own?
23:04:18  <amurzeau_>no, I just incrementally disabling stuff in libuv
23:04:28  <amurzeau_>disabled*
23:05:28  <txdv>no one ever had this issue because nobody in the western society uses ipv6 yet
23:06:05  <amurzeau_>and even then, dual stack has to be explicitly enabled to trigger the issue
23:06:48  <txdv>and fastpath
23:07:16  <txdv>dual stack is enabled by default, fasthpath has to be enabled explicitly?
23:07:35  <amurzeau_>both fast path and dual stack is disabled by default
23:07:50  <txdv>o ok
23:08:51  <txdv>so probably some internal code doesnt check if this is a ipv6 struct or ipv6 struct and references over the structs limit
23:09:22  <txdv>in the fastpath code
23:11:57  <amurzeau_>I would think more of a asynchronous switch to dual stack or something like that
23:12:33  <amurzeau_>if slow enough, there is no issue even with both fast path and dual stack enabled
23:17:51  <txdv>did you find a place to report this?
23:20:07  <txdv>i tried to search but i found nothing
23:20:11  <txdv>except 'security issues'
23:20:27  <txdv>i dont think this can be exploited, but it will be annoying once the world switches to ipv6
23:23:16  <amurzeau_>found nothing too
23:24:32  <txdv>if i were you
23:25:16  <txdv>tweet to @davidfowl and ask him if were is the right place to report dualstack fastpath bugs
23:25:42  <txdv>he is an asp.net dev
23:26:04  <txdv>he is not that much into internal kernel dev of windows
23:26:15  <txdv>i just got nothing else
23:31:25  <txdv>he is though a microsoft dev
23:32:11  <txdv>so he probably knows who is responsible for this kind of issue
23:44:54  <rendar>amurzeau_: what about asking ReactOS developer? they know a lot about win kernel internals