01:48:03  * jwolfe quit (Ping timeout: 240 seconds)
02:01:16  * jwolfe joined
02:34:15  * BobGneu quit (Ping timeout: 272 seconds)
03:48:10  * olalonde joined
03:58:32  * plutoniix joined
03:58:36  * plutoniix quit (Max SendQ exceeded)
07:02:19  * etnbrd joined
08:12:48  * etnbrd quit (Quit: Ping timeout (120 seconds))
08:53:30  * thefourtheye joined
09:21:04  * NewNewbie joined
09:25:06  * NewNewbie quit (Client Quit)
09:56:33  * koldbrutality quit (Ping timeout: 240 seconds)
10:10:01  * olalonde quit (Quit: Connection closed for inactivity)
11:23:55  * RT|Chatzilla quit (Read error: Connection reset by peer)
11:28:55  * RT|Chatzilla joined
13:04:44  * Net147 quit (Quit: Quit)
13:06:03  * bradleymeck joined
13:11:18  * Net147 joined
13:21:50  * thefourtheye quit (Quit: Connection closed for inactivity)
13:32:26  * seventh joined
15:16:12  * seventh quit (Ping timeout: 265 seconds)
15:53:42  <bradleymeck>buu: v8 heap allocated
15:54:24  <bradleymeck>total is heap available
15:56:16  <buu>=[
15:56:46  <buu>Can I change it?
15:56:52  <bradleymeck>change... "it"?
15:57:00  <bradleymeck>the heap available?
15:57:47  <buu>The amount allocated
15:57:56  <bradleymeck>`node --v8-options | grep space` will tell you the options for manipulating the sizes used by v8 allocations
15:58:01  <buu>I want to reduce the RAM required for creating new isolates
15:58:04  <bradleymeck>buu: well you can just allocate objects
15:58:25  <buu>huh?
15:58:48  <bradleymeck>v8 pre-allocates a heap of some size in Node, that is the available amount
15:59:03  <bradleymeck>the total
15:59:08  <buu>Even for individual isolates?
15:59:30  <bradleymeck>yes, each isolate gets its own heap
16:00:18  <bradleymeck>buu: check out the v8-options and fiddle as you want
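For reference, the same knobs can be set from an embedder before any isolate exists; a minimal sketch, assuming flag names taken from `node --v8-options` output of this era (note the flags are process-global, not per-isolate):

    // Sketch: set v8 heap-sizing flags programmatically. Flags are
    // process-global and must be set before the first Isolate is created;
    // exact names and units vary by v8 version (check
    // `node --v8-options | grep space`). The 64/2 MB values are illustrative.
    #include "v8.h"

    void ConfigureSmallHeaps() {
      const char flags[] = "--max-old-space-size=64 --max-semi-space-size=2";
      v8::V8::SetFlagsFromString(flags, static_cast<int>(sizeof(flags) - 1));
    }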
16:00:24  <buu>Yeah I meant pre-allocated
16:00:48  <buu>I'm using Isolate::New() ..
16:01:55  <bradleymeck>buu: there are no per-isolate settings...
16:02:07  <bradleymeck>https://github.com/v8/v8/blob/master/include/v8.h#L7151 is on the v8 engine itself
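Regardless of where that master link points today, in v8 versions of roughly this vintage Isolate::CreateParams carries a ResourceConstraints member, which is the closest thing to a per-isolate size knob. A minimal sketch, assuming such a version (setter names and units vary across releases):

    // Sketch: per-isolate heap limits via ResourceConstraints, assuming a
    // v8 version where Isolate::CreateParams exposes a `constraints` member.
    // Units are MB in headers of this era; newer headers use
    // set_max_old_generation_size_in_bytes(). Values are illustrative.
    #include "v8.h"

    v8::Isolate* NewSmallIsolate(v8::ArrayBuffer::Allocator* allocator) {
      v8::Isolate::CreateParams params;
      params.array_buffer_allocator = allocator;
      params.constraints.set_max_old_space_size(32);  // MB
      params.constraints.set_max_semi_space_size(1);  // MB
      return v8::Isolate::New(params);
    }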
16:02:21  <buu>=[
16:02:34  <buu>Well that ruins my life
16:02:36  <bradleymeck>use processes
16:02:40  <buu>no
16:02:43  <bradleymeck>k
16:02:53  <buu>I just want cheap isolates
16:03:17  <bradleymeck>isolates aren't super cheap
16:04:03  <bradleymeck>you can make a pool of them in a worker process that has the right settings and do stuff from there
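A minimal sketch of that pool idea: pre-create the isolates up front, in a process launched with the desired flags, and lease them out on demand. IsolatePool and its method names are illustrative, not a v8 or Node API:

    // Sketch of an isolate pool. IsolatePool/Acquire/Release are
    // hypothetical names; only Isolate::New and CreateParams are real v8 API.
    #include <mutex>
    #include <vector>
    #include "v8.h"

    class IsolatePool {
     public:
      IsolatePool(size_t n, v8::ArrayBuffer::Allocator* allocator) {
        v8::Isolate::CreateParams params;
        params.array_buffer_allocator = allocator;
        for (size_t i = 0; i < n; ++i)
          free_.push_back(v8::Isolate::New(params));
      }
      v8::Isolate* Acquire() {  // returns nullptr when the pool is exhausted
        std::lock_guard<std::mutex> lock(mu_);
        if (free_.empty()) return nullptr;
        v8::Isolate* iso = free_.back();
        free_.pop_back();
        return iso;
      }
      void Release(v8::Isolate* iso) {
        std::lock_guard<std::mutex> lock(mu_);
        free_.push_back(iso);
      }
     private:
      std::mutex mu_;
      std::vector<v8::Isolate*> free_;
    };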
16:05:11  <bradleymeck>if you feel saucy you can hook into allocation callbacks and enforce sizing yourself
16:09:42  <buu>I don't really want to enforce it
16:09:44  * bradleymeck quit (Quit: bradleymeck)
16:09:55  <buu>I just want to
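One concrete reading of the "allocation callbacks" suggestion: every isolate is given an embedder-supplied v8::ArrayBuffer::Allocator, and a wrapping allocator can account for and cap those allocations. A sketch; LimitedAllocator and the byte limit are illustrative, while the three virtual methods are the real Allocator interface:

    // Sketch: cap ArrayBuffer memory by wrapping the embedder-supplied
    // allocator. Allocate() must return zero-initialized memory, hence calloc.
    #include <atomic>
    #include <cstdlib>
    #include "v8.h"

    class LimitedAllocator : public v8::ArrayBuffer::Allocator {
     public:
      explicit LimitedAllocator(size_t limit) : limit_(limit), used_(0) {}
      void* Allocate(size_t length) override {
        if (!Reserve(length)) return nullptr;
        return calloc(length, 1);
      }
      void* AllocateUninitialized(size_t length) override {
        if (!Reserve(length)) return nullptr;
        return malloc(length);
      }
      void Free(void* data, size_t length) override {
        free(data);
        used_.fetch_sub(length);
      }
     private:
      bool Reserve(size_t length) {  // refuse allocations past the cap
        size_t prev = used_.fetch_add(length);
        if (prev + length > limit_) {
          used_.fetch_sub(length);
          return false;
        }
        return true;
      }
      const size_t limit_;
      std::atomic<size_t> used_;
    };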
16:36:31  * bradleymeck joined
16:41:47  * seventh joined
17:20:04  * RT|Chatzilla quit (Quit: ChatZilla 0.9.86.1 [Firefox 2.0.0.22pre/2010030309])
17:36:20  * koldbrutality joined
17:41:35  * olalonde joined
18:15:45  <caitp>I'm not sure what the benefits of multiple isolates really are anyway, other than _complete_ sandboxing
18:16:07  <caitp>I'm pretty sure chromium doesn't set up too many isolates
18:16:21  <caitp>although admittedly I haven't investigated that, but
18:27:44  <bradleymeck>caitp: in-process workers are about it
18:28:43  <bradleymeck>Node has a massive PR, which will probably never land, that uses them for threaded workers
18:36:28  <caitp>well, at most I think chromium is like one-per-frame, possibly with a separate one for shared web workers
18:36:35  <caitp>but I'm not even sure if it's one per frame
18:36:49  <caitp>and by frame I mean like top level frame
18:37:43  <bradleymeck>correct, frames are done via contexts
18:37:59  <caitp>I guess, one per render process? but that sounds more like a webkit thing than a chromium thing
18:38:14  <bradleymeck>well Workers get their own
19:14:41  <aklein>the most straightforward, intended way to use Isolates is one-per-thread-that-needs-to-run-in-parallel
19:14:53  <aklein>and that's how Chromium uses them
19:14:58  <aklein>one for the main thread, one per worker thread
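A minimal embedder sketch of that one-isolate-per-thread pattern (v8 platform/initialization boilerplate omitted; the function below is meant to be the body of each worker thread):

    // Sketch: one isolate per worker thread, as described above. The isolate
    // lives and dies on its own thread, so no v8::Locker is needed.
    #include "v8.h"

    void WorkerThread(v8::ArrayBuffer::Allocator* allocator) {
      v8::Isolate::CreateParams params;
      params.array_buffer_allocator = allocator;
      v8::Isolate* isolate = v8::Isolate::New(params);
      {
        v8::Isolate::Scope isolate_scope(isolate);
        v8::HandleScope handle_scope(isolate);
        v8::Local<v8::Context> context = v8::Context::New(isolate);
        v8::Context::Scope context_scope(context);
        // ... run this worker's scripts against `context` ...
      }
      isolate->Dispose();
    }

Each thread would run this body (e.g. via std::thread), matching the main-thread-plus-workers layout described for Chromium.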
19:24:35  <trungl-bot`>Tree closed by buildbot@chromium.org: Tree is closed (Automatic: "Mjsunit" on http://build.chromium.org/p/client.v8/builders/V8%20Linux64%20GC%20Stress%20-%20custom%20snapshot/builds/8024 "V8 Linux64 GC Stress - custom snapshot" from e51482f01f26e0013e6377e85c4d2c41900e403c: littledan@chromium.org)
19:34:19  * Cube joined
19:36:30  <caitp>hmm
19:55:03  * seventh quit (Ping timeout: 265 seconds)
20:05:53  <trungl-bot`>Tree opened by dehrenberg@google.com: Tree is open (reverted)
20:49:17  <trungl-bot`>Tree closed by machenbach@chromium.org: closed (gnumbd down?)
20:55:21  <trungl-bot`>Tree closed by machenbach@chromium.org: closed - crbug.com/648358 - please reopen when resolved
21:10:28  <trungl-bot`>Tree opened by tandrii@google.com: open (thanks, dnj@)
21:18:33  <trungl-bot`>Tree closed by buildbot@chromium.org: Tree is closed (Automatic: "Check" on http://build.chromium.org/p/client.v8/builders/V8%20Linux%20-%20arm64%20-%20sim%20-%20MSAN/builds/10910 "V8 Linux - arm64 - sim - MSAN" from a4737793cb86e37eb101aa175282ffb2bda39194: alph@chromium.org,bmeurer@chromium.org,bradnelson@chromium.org,littledan@chromium.org,lpy@chromium.org,mtrofin@chromium.org,verwaest@chromium.org)
21:34:17  * buu quit (Ping timeout: 240 seconds)
21:51:11  * Tweth-V-PDS quit (*.net *.split)
21:52:15  * Tweth-U-PDS joined
21:53:48  <trungl-bot`>Tree opened by machenbach@chromium.org: Open
22:03:23  * Cube quit (Quit: Leaving)
22:10:42  * austincheney joined
22:11:15  <austincheney>could this be evidence of an endless loop? https://gist.github.com/prettydiff/e60c9236e16a478808a4108e50b7118d
22:32:27  * RT|Chatzilla joined
23:03:39  * buu joined
23:30:54  <austincheney>disregard earlier comment... I have verified an endless loop in my own code
23:52:04  * plutoniix joined