00:00:03  * bnoordhuis joined
00:04:38  * bnoordhuis quit (Ping timeout: 256 seconds)
00:27:42  * fourq changed nick to fourq|away
00:29:36  * fourq|away changed nick to fourq
00:32:51  * fourq changed nick to fourq|away
00:33:33  * fourq|away changed nick to fourq
00:55:02  * rendar quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
01:11:28  * amurzeau quit (Ping timeout: 252 seconds)
01:13:08  * zju4 quit (Ping timeout: 272 seconds)
01:20:43  * fourq changed nick to fourq|away
01:22:58  * fourq|away changed nick to fourq
01:39:34  * jgi joined
01:44:47  * jgi quit (Ping timeout: 276 seconds)
03:21:35  * tunniclm quit (Ping timeout: 240 seconds)
03:23:20  * evanluca_ joined
03:25:54  * evanlucas quit (Ping timeout: 265 seconds)
05:51:41  * nathan7 quit (Quit: leaving)
05:54:30  * nathan7 joined
06:28:58  * jgi joined
06:40:52  * Ruyi-HomePC joined
06:42:23  * jgi quit (Quit: jgi)
08:10:57  * evanluca_ quit (Read error: Connection reset by peer)
08:10:59  * evanlucas joined
08:29:34  * Ruyi-HomePC quit (Quit: Leaving)
08:30:23  * bnoordhuis joined
08:36:19  * cimbo joined
08:40:51  * cimbo quit (Client Quit)
08:45:18  * evanlucas quit (Read error: Connection reset by peer)
08:45:41  * evanlucas joined
08:52:56  * davi joined
09:24:00  * zju1 joined
09:44:00  * rendar joined
10:01:40  * tunniclm joined
10:12:25  * jgi joined
10:45:15  * bnoordhuis quit (Ping timeout: 265 seconds)
11:11:07  * seishun joined
11:24:38  * jgi quit (Quit: jgi)
11:51:50  * bnoordhuis joined
11:56:16  * bnoordhuis quit (Ping timeout: 250 seconds)
12:10:40  * tunniclm quit (Ping timeout: 260 seconds)
12:22:39  * gabrielschulhof quit (Ping timeout: 245 seconds)
12:28:22  * gabrielschulhof joined
13:32:17  * bnoordhuis joined
13:52:45  * davi quit (Ping timeout: 260 seconds)
14:37:41  * warehouse13 quit (Remote host closed the connection)
15:08:45  * brrt joined
15:19:36  * Left_Turn joined
15:33:21  * komba joined
15:34:29  * amurzeau joined
15:44:13  <komba> ThreadPool Question: "even though a global thread pool which is shared across all events loops is used, the functions are not thread safe." Which functions? uv_queue_work()?
15:49:16  * bnoordhuis quit (Ping timeout: 256 seconds)
15:50:57  * komba quit (Quit: Page closed)
16:32:29  * brrt quit (Quit: brrt)
16:41:51  <amurzeau> saghul, txdv_: while trying to implement on-demand fast path activation (like keepalive and nodelay), I need to add a flag to the uv_tcp_t structure because this feature needs to be enabled between socket creation and the call to connect/accept
16:41:52  <amurzeau> as far as I can see, there is only one slot available: 0x80000000 according to src/win/internal.h
16:45:54  * brrt joined
16:45:58  <amurzeau> saghul: can I use this flag slot for the TCP loopback fast path feature, or should it be reserved for other uses?
16:48:06  * davi joined
16:48:06  * davi quit (Changing host)
16:48:06  * davi joined
16:53:11  <amurzeau> or maybe make this feature enabled/disabled at loop level (to match Java's behavior of using a global system property to control this feature)
16:54:10  * bnoordhuis joined
16:56:42  <rendar> amurzeau: what do you mean by "slot"?
16:56:52  <amurzeau> a flag bit
16:57:21  <amurzeau> like UV_HANDLE_TCP_NODELAY, which is used when the user enables nodelay before the socket is created
16:57:47  <amurzeau> when creating the socket, if the flag is set, setsockopt is called to enable TCP_NODELAY
16:58:58  * bnoordhuis quit (Ping timeout: 256 seconds)
16:58:59  <amurzeau> actually I'm a bit hesitant between a loop-global flag and a per-socket flag
17:00:10  <amurzeau> other implementations (like Redis, Java, KestrelHttpServer) either have a global flag or always enable it
17:00:51  <amurzeau> because it's just a performance boost feature (+/- bugs :) )
17:02:24  <rendar> hmm ok
17:02:49  <rendar> isn't that the flag which disables the Nagle algorithm?
17:03:14  <amurzeau> yes, it's that one
17:17:30  * brrt quit (Quit: brrt)
17:26:36  * bradleymeck joined
17:42:30  * bnoordhuis joined
17:50:42  * bradleymeck quit (Quit: bradleymeck)
17:52:02  * tunniclm joined
17:59:10  * jgi joined
18:03:26  * bradleymeck joined
18:04:38  * jgi quit (Quit: jgi)
18:06:44  * fourq changed nick to fourq|away
18:07:57  <kellabyte> how can I control how big the packets I'm sending over a TCP write are?
18:11:16  <amurzeau> AFAIK, packet fragmentation is controlled by the OS and the network card; TCP handles a stream of data
18:16:17  <kellabyte> hmm, well one application is sending fewer packets than another, and the one sending fewer has a higher throughput
18:16:21  <kellabyte> so it's sending larger packets
18:17:01  * fourq|away changed nick to fourq
18:17:50  <amurzeau> both have Nagle enabled (that is, TCP_NODELAY disabled)
18:17:52  <amurzeau> ?
18:18:35  <kellabyte> yup, believe so, haven't disabled it
18:20:10  <amurzeau> what's the size of the bigger and smaller packets?
18:20:28  <kellabyte> not sure how to tell that, I'm using dstat to monitor throughput and packets/second
18:20:41  <kellabyte> I guess throughput / packets would tell the size?
18:21:27  <amurzeau> yes
18:22:27  <kellabyte> ok, the slow application is sending 1.3K bytes per packet
18:25:24  * davi quit (Ping timeout: 250 seconds)
18:26:06  * davi joined
18:26:23  <kellabyte> the faster application is sending 8K bytes per packet
18:26:42  <amurzeau> they are not on the same network?
18:27:01  <kellabyte> same network, it's the same 2 machines I'm benchmarking against
18:27:06  <amurzeau> 1.3K would match the common MTU of around 1.5K / packet
18:27:30  <kellabyte> iperf is doing 2.2GB/s, my libuv application is maxing out at 1.4GB/s
18:28:53  <amurzeau> do you have an iperf report indicating the MSS and MTU size?
18:29:06  <amurzeau> like "MSS size 8948 bytes (MTU 8988 bytes, unknown interface)"
18:30:07  <amurzeau> you may need the -m option to iperf to print that report
18:32:17  <kellabyte> let me check
18:33:56  <kellabyte> hmm, doesn't seem -m does that; it lets you set it, not print the report
18:35:30  <amurzeau> seems to be an iperf2-only option
18:37:10  <amurzeau> I think iperf uses jumbo frames, but I'm not sure about that
18:37:43  <amurzeau> jumbo frames allow packets bigger than 1500 bytes on ethernet
18:38:25  <kellabyte> so how is this decided in the libuv api then?
18:39:02  <bnoordhuis> kellabyte: you mean tcp frame size?
18:39:30  <bnoordhuis> or is this about ethernet frame size?
18:39:50  <bnoordhuis> guess it doesn't matter; libuv leaves it up to the OS either way
18:40:30  <amurzeau> I don't know jumbo frames that well, but I think this is only handled by the OS itself (and other network components)
18:41:21  * amurzeau quit (Quit: Page closed)
18:47:21  <kellabyte> bnoordhuis: I guess I'm trying to figure out how redis or iperf can go 2GB/s but my application can't
18:47:50  <kellabyte> 8KB/packet versus 1.3KB/packet seems like where the difference may be, so I'm wondering how they are doing that versus what I'm doing wrong?
19:22:21  * yunong_ joined
19:24:40  * yunong quit (Ping timeout: 260 seconds)
20:05:25  * jgi joined
20:09:35  * davi quit (Ping timeout: 260 seconds)
20:42:53  <kellabyte> bnoordhuis: should I be using uv_write() or uv_try_write()?
20:49:43  * bradleymeck quit (Quit: bradleymeck)
20:53:31  * amurzeau joined
21:11:09  * happy-dude joined
21:29:42  * Guest76837 quit (*.net *.split)
21:29:45  * srl295 quit (*.net *.split)
21:29:45  * kkaefer quit (*.net *.split)
21:29:45  * lennartcl quit (*.net *.split)
21:36:17  * bradleymeck joined
21:40:35  * jgi quit (Quit: jgi)
21:44:14  * bradleymeck quit (Quit: bradleymeck)
21:46:26  * rendar quit (Ping timeout: 240 seconds)
21:53:19  * rendar joined
21:56:10  * jgi joined
22:05:46  * kkaefer joined
22:05:46  * kkaefer quit (Changing host)
22:05:46  * kkaefer joined
22:06:25  * kenansulayman joined
22:06:46  * srl295 joined
22:06:46  * srl295 quit (Changing host)
22:06:47  * srl295 joined
22:06:53  * kenansulayman changed nick to Guest64461
22:07:11  * lennartcl joined
22:29:18  * seishun quit (Ping timeout: 272 seconds)
22:32:08  * bnoordhuis quit (Ping timeout: 276 seconds)
22:42:27  * jgi quit (Quit: jgi)
22:47:35  * rendar quit (Ping timeout: 250 seconds)
22:53:27  * yunong_ quit (*.net *.split)
22:53:27  * gabrielschulhof quit (*.net *.split)
22:53:28  * devlaf quit (*.net *.split)
22:53:29  * eugeneware quit (*.net *.split)
22:53:30  * whitlockjc quit (*.net *.split)
22:57:43  * rendar joined
22:57:43  * yunong_ joined
22:57:43  * gabrielschulhof joined
22:57:43  * devlaf joined
22:57:43  * eugeneware joined
22:57:43  * whitlockjc joined
23:33:02  * rendar quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
23:37:26  * bnoordhuis joined
23:42:30  * bnoordhuis quit (Ping timeout: 260 seconds)
23:58:02  * amurzeau quit (Quit: Page closed)