06:47:54  <txdv>kellabyte: linking the line with the assert on github would make it easier for people to look it up
06:49:01  <txdv>kellabyte: it means, your server is on one loop and the client you want to accept is on another
06:50:22  <txdv>this is not allowed... though should be possible in theory at least on linux
06:50:33  <kellabyte>txdv: the weird thing is it doesn't always happen
06:50:46  <kellabyte>so I have no idea how the assert could sometimes be false and sometimes true
06:51:13  <txdv>the assert only triggers if the handles are not on the same loop
06:51:46  <txdv>you have the listener handle and you are creating a client handle
06:51:52  <txdv>during creation you have to use the same loop
06:53:25  <kellabyte>yeah I'm just really confused why it'll work many times then suddenly stop working
06:53:28  <kellabyte>the code isn't changing lol
06:54:10  <txdv>you are creating the handle you want to accept onto on a different loop?
06:54:20  <txdv>The memory that handle is using is corrupt?
06:54:36  <txdv>or changed somewhere?
06:54:52  <txdv>don't know, is your code open source?
06:55:36  <kellabyte>yeah it is, I've been trying to avoid just dumping code on someone like this but I've been stuck for a while, I tried a repro but couldn't once I stripped stuff out
06:56:57  <kellabyte>this is the related code: https://github.com/haywire/haywire/blob/master/src/haywire/http_server.c#L246
06:58:12  <txdv>are you calling uv_accept on a different thread?
07:01:18  <kellabyte>I don't think so
07:02:24  <txdv>you are creating a thread in that linked line
07:03:39  <kellabyte>right but thats to create an ipc worker thread
07:05:15  <txdv>luajit
07:05:18  <txdv>in your webbrowser
07:05:22  <kellabyte>this whole run around is just because I want to be able to load balance across multiple event loops, its always been a really complex piece of code for what I want it to do
07:05:42  <txdv>yeah, libuv is single threaded ...
07:05:50  <txdv>so you have to do all the logic with multi threading yourself
07:06:01  <kellabyte>yeah but I wish I could just move a socket to another event loop
07:06:14  <kellabyte>in an easier way
07:06:52  <kellabyte>its a common thing I see a lot of libuv http servers doing in various languages, each does it with varying quality
07:07:34  <txdv>well, you have to do it over ipc
07:08:05  <txdv>you could just pass the fd?
07:09:48  <txdv>what line of your code throws that assert?
07:09:58  <txdv>also, can you reproduce it reliably?
07:10:14  <txdv>your tcp listening code seems to be correct
07:10:42  <txdv>kellabyte: the line you linked me to just creates the thread, i want to know which uv_accept in your code blows up
07:12:00  <kellabyte>#5 0x000000000041074a in ipc_read_cb (handle=0x7ffff75e7de8,
07:12:00  <kellabyte> nread=<optimized out>, buf=<optimized out>)
07:12:00  <kellabyte> at /root/git/haywire/src/haywire/connection_consumer.c:36
07:12:21  <kellabyte>thank you for the help by the way :)
07:12:43  <txdv>i didnt help yet
07:13:29  <txdv>im on master right now, line 36 is uv_close((uv_handle_t*) &ctx->ipc_pipe, NULL);
07:14:46  <txdv>seems like you are closing the ipc pipe in the read callback
07:16:11  <txdv>https://github.com/haywire/haywire/blob/master/src/haywire/connection_consumer.c#L36
07:16:14  <txdv>why are you doing this?
07:16:57  <kellabyte>hmm good question, I don't remember why that was there
07:18:08  <kellabyte>ah right, I remember following this example which also does that: https://github.com/libuv/libuv/blob/v1.x/test/benchmark-multi-accept.c#L195
07:18:11  <txdv>it seems like you have mixed up what is the server handle and what is the client handle in that part of code
07:18:36  <txdv>literally the same code
07:18:38  <txdv>D
07:18:40  <txdv>;D
07:18:51  <kellabyte>yeah haha
07:19:03  <kellabyte>it was a couple years ago I followed the pattern :P
07:20:04  <kellabyte>which handle did I mix up?
07:20:30  <txdv>nevermind
07:20:31  <txdv>https://github.com/haywire/haywire/blob/master/src/haywire/connection_consumer.c#L23
07:20:34  <txdv>first fix you need to do
07:21:17  <txdv>this line returns the number of stuff you want to accept
07:21:25  <txdv>its not a procedure, it is a function
07:22:38  <kellabyte>so it should be 1? I'm not sure what you're saying needs to be fixed
07:23:54  <txdv>this function returns the number of handles queued in the pipe
07:24:20  <txdv>i put 2 connections into the pipe, this one returns the number 2
07:24:37  <txdv>in the benchmark it always just gets 1 returned, because this is the way the benchmark is coded
07:24:38  <kellabyte>oh I see, so I may have to loop and accept multiple?
07:25:00  <txdv>I think you should
07:28:49  <txdv>if you have a test to trigger your assertion
07:28:54  <txdv>please try to printf that number
07:29:07  <txdv>if it is 1 all the time, this wont fix the problem; if it is > 1 then this might fix the problem
07:30:01  <kellabyte>okay let me try that now that I know what that returns
07:30:07  <kellabyte>lets see if I can reproduce
07:31:23  <kellabyte>hmm I got some 0's
07:31:27  <kellabyte>no 1s and nothing >1
07:35:12  <txdv>are you sure you are sending stuff at all?
07:35:31  <txdv>you need to rethink that part of code
07:35:45  <txdv>that close there is in the test because it expects only ONE connection
07:35:52  <txdv>literally, it gets one connection, then the test is done
07:36:15  <txdv>it seems like you want to be able to accept more than one connection?
07:36:40  <kellabyte>yeah I misunderstood what that was for I guess, but like how is this working most of the time then? haha
07:36:56  <kellabyte>like I'm running this on osx right now and pending = 1 four times for the 4 threads started
07:36:59  <kellabyte>and it runs fine
07:37:11  <kellabyte>but on another linux machine its 0's and assert is failing
07:38:24  <txdv>you have a mac?
07:38:47  <kellabyte>yeah I'm debugging this in xcode and also running on linux because the linux server is the one having the troubles
07:39:22  <kellabyte>which parts of that function do I need to do N times besides the uv_accept()?
07:40:49  <txdv>can you give me a testcase so i can run it myself?
07:41:52  <txdv>i cant even compile haywire on linux debian
07:42:42  <kellabyte>whats it complaining about?
07:44:06  <kellabyte>its compiling some memory allocators and benchmarking tools, if its failing on those we can just comment those out
07:44:28  <kellabyte>in compile_dependencies.sh just delete everything but libuv
07:44:32  <kellabyte>then ./make.sh
07:45:33  <kellabyte>then to reproduce the error ./build/hello_world --threads 4
07:46:24  <kellabyte>but yeah on this linux machine thats now having issues, pending = 0 for all 20 threads
07:48:25  <txdv>you have a linux machines with 20 cores?
07:48:40  <kellabyte>40 cores but 20 are real, other 20 are hyperthreaded
07:49:03  <txdv>is that your personal server?
07:49:50  <kellabyte>I have control of it yeah, rackspace gave me 2 bare metal instances to benchmark with
07:50:18  <kellabyte>even if I do --threads 4 it has the same issue, so its not about number of threads
07:51:47  <txdv>http://paste.debian.net/898700
07:52:16  <txdv>im using ./build.sh
07:52:46  <kellabyte>oh no don't use that
07:52:50  <kellabyte>sorry I need to deprecate that
07:52:53  <kellabyte>use ./make.sh
07:53:00  <kellabyte>I'm moving from gyp to cmake
07:53:33  * kellabyte checks to make sure the README says make.sh lol
07:54:56  <txdv>i didnt read it
07:54:59  <txdv>;D
07:55:06  <txdv>just saw build in the dir and immediately used that
07:55:18  <kellabyte>haha yeah I should just delete it
07:55:46  <kellabyte>actually should just do a makefile now that I've learned how
07:56:30  <txdv>so i started it
07:56:44  <txdv>is there some special configuration i need to do?
07:56:55  <txdv>im doing ab -c 1000 -n 10000 right now and everything works
07:57:24  <kellabyte>you did --thread <N>?
07:57:29  <kellabyte>err --threads
07:57:55  <txdv>yeah
07:58:07  <kellabyte>yeah see now its working lol
07:59:09  <kellabyte>what would cause the ipc_read_cb() to be called but the pending pipes is 0?
08:00:29  <txdv>can you lsb_release -a on your server
08:00:59  <txdv>the pending_count() returns 1 on my machine
08:01:14  <kellabyte>Description: Ubuntu 15.10
08:01:30  <kellabyte>here's the thing though, its not machine specific, like I could reboot the box and it'll probably start working again
08:01:35  <txdv>uname -a
08:01:37  <kellabyte>sometimes it works and then it stops
08:01:54  <kellabyte>Linux server1 4.2.0-30-generic #36-Ubuntu SMP Fri Feb 26 00:58:07 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
08:02:16  <txdv>does hello world create a named pipe?
08:02:29  <txdv>on the file system?
08:03:33  <txdv>is that file present on your file system?
08:04:09  <kellabyte>oh yeah it is
08:04:14  <txdv>and that my friend
08:04:19  <txdv>is the source of the error
08:04:21  <txdv>delete it and it will work
08:04:45  <kellabyte>you're right! what is that doing?
08:04:58  <txdv>named pipes need a representation on the file system
08:05:05  <txdv>because ... they are named files after all
08:05:23  <txdv>if your app does not close cleanly, it may leave one around
08:05:32  <txdv>like if you dont close it appropriately or something
08:05:43  <txdv>i had the same issue in my tests for my libuv bindings
08:05:49  <kellabyte>ahhhhhh! that explains the trend that when I have a crash and fix it, it still seems broken
08:06:25  <txdv>so yeah, before binding to that name, do a check if the file exists
08:06:38  <txdv>delete it eventually or ask the user if it is ok to delete that named pipe
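The check txdv suggests is a one-liner on unix: unlink the stale file before binding. A minimal sketch, assuming POSIX; the path and helper name are just examples:

```c
/* Minimal sketch: remove a stale named pipe / unix socket file left
   behind by an unclean shutdown, before listening on that name again.
   "/tmp/hw_ipc.sock" is an illustrative path, not haywire's. */
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Returns 0 if the name is now free (either it did not exist or the
   stale file was removed), -1 on a real error. */
int clear_stale_pipe(const char* path) {
    if (unlink(path) == 0)
        return 0;            /* stale file removed */
    if (errno == ENOENT)
        return 0;            /* nothing to remove */
    perror("unlink");
    return -1;               /* e.g. a permission problem */
}
```

Calling this right before `uv_pipe_bind()` would have avoided the mysterious pending = 0 behavior, since the leftover file is exactly what the new listener was colliding with.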
08:06:50  <kellabyte>okay, is there a good place to put that file typically thats not current dir?
08:07:02  <txdv>tmp?
08:07:08  <kellabyte>yeah thats what I was thinking
08:07:13  <txdv>but that will make it global
08:07:15  <kellabyte>what happens if multiple processes use the same name?
08:07:18  <txdv>local dir is nice if you want to have separate directories
08:08:17  <kellabyte>damn I'm so glad you helped me on that, thank you so much, I feel like an idiot though haha
08:08:53  <txdv>what mac do you have?
08:09:05  <kellabyte>its a macbook pro, couple years old
08:09:09  <kellabyte>do you need me to test something?
08:09:16  <txdv>no, just curious
08:09:26  <txdv>you should read up on named pipes
08:09:36  <txdv>to understand what it is doing
08:10:28  <txdv>i think listening to the same named pipe file makes it fight for connections
08:10:52  <txdv>but i think you can connect with multiple clients and send whatever you want over there
08:11:42  <txdv>this mechanism you are using is for sending handles from one loop to another
08:12:00  <kellabyte>yeah, once I move the TCP connection to another event loop then it stays there
08:12:21  <kellabyte>its a lot of hard to reason about code with barriers and stuff though
08:12:27  <txdv>the thing is, on unix you could just pass the fd to another event loop and open it with uv_tcp_open
08:12:39  <kellabyte>yeah and recent versions of windows supports similar as well
08:12:40  <txdv>but that is not supported on windows
08:12:54  <txdv>so you are doing this named pipe trick which has something similar on windows
08:13:07  <txdv>also named pipes on windows can't send over udp connections if i remember correctly
08:14:00  <txdv>AFAIK
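The unix-only alternative txdv mentions, handing the raw fd to another loop and adopting it there (with `uv_tcp_open` in libuv), is done with `SCM_RIGHTS` ancillary data over a unix socket. A sketch under that assumption; the helper names are illustrative:

```c
/* Pass a file descriptor between processes/threads over a unix-domain
   socket using SCM_RIGHTS. The receiving event loop can then adopt the
   descriptor (uv_tcp_open in libuv). POSIX only. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Send fd over the connected unix socket `chan`; 0 on success. */
int send_fd(int chan, int fd) {
    char byte = 0;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    char ctrl[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = {0};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    struct cmsghdr* cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;        /* "this message carries fds" */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(chan, &msg, 0) == 1 ? 0 : -1;
}

/* Receive an fd from `chan`; returns the new descriptor or -1. */
int recv_fd(int chan) {
    char byte;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    char ctrl[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = {0};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    if (recvmsg(chan, &msg, 0) != 1)
        return -1;
    struct cmsghdr* cmsg = CMSG_FIRSTHDR(&msg);
    if (cmsg == NULL || cmsg->cmsg_type != SCM_RIGHTS)
        return -1;
    int fd;
    memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
    return fd;                /* a fresh descriptor in this process */
}
```

As the conversation notes, this shortcut is not portable to windows, which is why libuv's handle passing goes through pipes instead.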
08:15:50  <kellabyte>yup, yeah I'll read up on it, thank you SO much for your time, I really appreciate it
08:16:02  <kellabyte>I've gotten SO_REUSEPORT to work with libuv too which has been really nice
08:16:23  <txdv>what does that do?
08:16:31  <txdv>like reuse client connections?
08:16:43  <kellabyte>it allows me to avoid the IPC worker threads and allows multiple threads to open sockets to the same port
08:16:49  <kellabyte>and the linux kernel will load balance
08:17:21  <txdv>o yeah
08:17:25  <txdv>i member
08:17:34  <kellabyte>so it allows me to do things like run 20 haywire instances with 1 thread, or run 1 haywire instance with 20 threads
08:18:30  <kellabyte>I'm hitting 6.8M req/s with 20 1 thread processes, but only 3.5M req/s with 1 process and 20 threads
08:18:48  <txdv>https://travis-ci.org/haywire/haywire#L5886
08:19:12  <kellabyte>yeah, I'm not sure why its failing on travisci
08:19:50  <txdv>so what are you using haywire for?
08:20:23  <kellabyte>learning how to write fast code and learning C, it's a couple year old project now, one company decided to contribute to it and they use it in production
08:20:36  <txdv>what company?
08:20:39  <kellabyte>they helped polish some things up since it was crazy experimental (still is in some ways)
08:21:10  <kellabyte>umm I don't think they wanted to be public about it, but they use haywire as a HTTP API fronting an Aerospike cluster which is an in-memory cache cluster
08:21:25  <kellabyte>so they use it for low latency cache hits
08:22:00  <kellabyte>https://twitter.com/kellabyte/status/697812564793802752
08:22:34  <txdv>thats you
08:23:17  <kellabyte>but mostly its just me learning how to write faster code, I like trying to get as high up the techempower benchmarks as I can, its a fun challenge that drives me to learn things
08:24:25  <txdv>you are up there
08:24:45  <txdv>aspnet-core is higher though
08:25:02  <kellabyte>well, depends, for some reason round 13 for haywire was really bad, no idea why
08:25:26  <txdv>is it possible to view older rounds?
08:25:44  <kellabyte>yeah hold on
08:26:01  <kellabyte>also click the "cloud" tab, for some reason this time around I was faster in the cloud than on bare metal haha
08:26:14  <kellabyte>I'm assuming an anomaly happened
08:27:17  <kellabyte>https://www.techempower.com/benchmarks/#section=data-r10&hw=ph&test=plaintext
08:28:41  <txdv>strange
08:29:19  <txdv>that h2o server though
08:29:48  <kellabyte>yeah, I'm working on some pretty big gains, if I can figure out why 20 threads in-process goes 2x slower than 20 processes, that'll unlock a lot
08:29:51  <txdv>he got rid of libuv because it was a layer of abstraction which was slowing him down
08:30:08  <kellabyte>I don't think libuv is a big problem to be honest
08:30:33  <kellabyte>I know some major gains, I just don't know how to polish them or this one multi-threaded bottleneck
08:31:03  <kellabyte>h2o also uses a SIMD accelerated http parser, I'm working on integrating that too
08:31:31  <kellabyte>but if I can get multi-threaded to run as fast as multi-process, then add tcp batching in the responses, I'll be right at the top
08:34:01  <txdv>i remember indutny investigating that http parser
08:34:20  <kellabyte>it works a lot differently, so I'm trying to sort out how to integrate it with haywires buffer manager
08:34:57  <kellabyte>I have an old experimental branch that adds that parser + tcp batch sends, and it got me up to 9.8M req/sec
08:35:24  <kellabyte>it was hacky, but it proved to me what I could achieve if I polished it and didn't lose the perf
08:38:41  <txdv>you are ambitious
08:38:46  <kellabyte>its fun :)
08:39:47  <kellabyte>I wish I had a contributor or two to learn from though, I've learned doing something solo isn't the best learning environment
08:39:54  <txdv>"IM TOP 10 IN TECHEMPOWER SO STFU" is way to assert geek dominance
08:40:00  <txdv>is a nice way *
08:40:01  <kellabyte>haha
08:40:30  <txdv>what exactly do you want to learn?
08:41:20  <kellabyte>well, I know my C is really sloppy, I want to learn how to just write faster things and get better at investigating bottlenecks
08:41:53  <kellabyte>like this isn't the greatest investigation and I'm not terribly confident at what the issue is: https://github.com/gperftools/gperftools/issues/847
08:44:49  <txdv>threads fighting for the same heap?
08:44:59  <txdv>don't know, I'm not an expert either
08:47:31  <txdv>but tcmalloc should literally fight that problem, according to the description
08:49:19  <txdv>lol it was fun chasing that bug down
08:49:34  <txdv>the fun part is that I literally had the same problem with my own tests
08:50:18  <kellabyte>haha really?
08:50:27  <kellabyte>thank you so very much for your time, I really appreciate it :)
08:51:06  <txdv>a test would break or something like that and that pipe file would just be there and afterwards the test would fail again and again
08:55:37  <txdv>also, it had nothing to do with bad code and such, it literally was an environmental issue
09:06:32  <kellabyte>ahh yeah
09:06:43  <kellabyte>this really explains why it was so confusing to track down
09:08:26  <txdv>linux actually does the worst thing with those named pipes
09:08:30  <txdv>because that file is literally useless
09:08:54  <txdv>you listen to it again and it does nothing? you send something to it and it just goes to /dev/null
09:12:05  <kellabyte>lol
09:12:16  <kellabyte>can you make an anonymous pipe?
09:12:46  <txdv>on linux, yeah
09:12:57  <txdv>uv_async uses anonymous pipes
09:17:13  <txdv>windows has CreatePipe
09:21:28  <txdv>http://stackoverflow.com/questions/14769638/what-are-the-main-disadvantages-of-windows-pipes-for-inter-process-communication?answertab=votes#tab-top
09:21:37  <txdv>unnamed pipes have no IOCP support
09:21:53  <txdv>that is probably why it is not supported by libuv
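The unix anonymous pipe being discussed is just `pipe()`: two fds with no filesystem name, so nothing can go stale on disk the way the named pipe file did. A tiny self-contained sketch (the function name is illustrative):

```c
/* An anonymous pipe has no name on the filesystem, so there is no
   leftover file to clean up; the two fds are its only handles. */
#include <unistd.h>

/* Create a pipe, shuttle one byte through it, and return that byte
   (or -1 on failure). */
int demo_anon_pipe(void) {
    int fds[2];
    if (pipe(fds) != 0)           /* fds[0] = read end, fds[1] = write end */
        return -1;
    char out = 'x', in = 0;
    if (write(fds[1], &out, 1) != 1)
        return -1;
    if (read(fds[0], &in, 1) != 1)
        return -1;
    close(fds[0]);
    close(fds[1]);
    return in;
}
```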
09:23:05  <txdv>Everything is so confusing with those APIs
09:23:15  <txdv>file reads writes on windows are supported with IOCP
09:23:29  <txdv>however opening a file (which might be an IO blocking operation) is not
09:23:48  <txdv>close on a file blocks on some unix platforms, on others it doesn't
09:24:44  <txdv>the linux async io primitives are not used in libuv because they are too buggy
09:25:00  <txdv>then there are the small differences between kqueue, epoll, whatever solaris uses
09:33:03  <kellabyte>oh weird, that is strange
09:33:57  <txdv>libuv basically abstracts that entire clusterfuck
09:34:00  <txdv>there are many more corner cases
09:34:59  <txdv>i remember at first I was like "It would be cool to write a libuv counterpart in language X" but then I realized it is just wasted effort recreating and catching all the corner cases
09:54:34  <kellabyte>yeah lol
09:54:45  <kellabyte>getting things polished really is a tough task, redoing it isn't often worth it