02:03:21  * Cube quit (Remote host closed the connection)
02:08:02  * zv joined
02:51:01  * bradleymeck joined
03:21:54  * mikolalysenko_ joined
03:25:14  * Garbee_ joined
03:25:57  * mikolalysenko quit (Ping timeout: 258 seconds)
03:25:57  * jochen__ quit (Ping timeout: 258 seconds)
03:25:57  * scottmg quit (Ping timeout: 258 seconds)
03:25:58  * Garbee quit (Ping timeout: 258 seconds)
03:25:59  * seththompson quit (Ping timeout: 258 seconds)
03:25:59  * Martijnc quit (Ping timeout: 258 seconds)
03:26:00  * s1w quit (Ping timeout: 258 seconds)
03:27:53  * s1w joined
03:28:04  * mikolalysenko_ changed nick to mikolalysenko
03:28:17  * s1w changed nick to Guest22353
03:30:28  * Garbee_ changed nick to Garbee
03:31:56  * seththompson joined
03:32:08  * jochen__ joined
03:32:44  * scottmg joined
03:34:36  * Martijnc joined
03:35:23  * bradleymeck quit (Quit: bradleymeck)
03:53:34  * xaxxon joined
03:53:55  <xaxxon>any issues with using FindInstanceInPrototypeChain when changing the prototype of an object after creation?
03:54:10  <xaxxon>I'm getting no matches when I'm expecting one
05:17:42  * dostoyevsky joined
05:18:40  <dostoyevsky> d->result = Array::New(d->isolate, 0); // is that going to free d->result? I get an ``Allocation failed'' error after a while
05:20:28  <dostoyevsky>(always creating new d->result arrays, which are not that large individually)
05:26:57  <dostoyevsky>Should I also generate a new isolate to get the GC going?
05:27:51  <xaxxon>dostoyevsky, that will reduce the ref count on whatever d->result pointed to before, if anything
05:28:13  <xaxxon>the GC won't run immediately because of it, most likely, though
05:28:33  <xaxxon>GC is per-isolate... so I don't know what you mean by creating a new isolate to get GC going
05:28:51  <xaxxon>an isolate is a completely independent interpreter and shares (virtually) nothing with other isolates
05:29:02  <xaxxon>except for certain startup parameters, basically
05:29:28  <xaxxon>not sure what your allocation failed error is.. are you making "a lot" of them?
05:30:00  <dostoyevsky>xaxxon: Well, I create many tiny arrays and then call a callback... after calling the callback thousands of times, v8 crashes with "Allocation error"... the old arrays were never cleaned up... but nobody should be using them (reference counter should be 0)
05:30:40  <xaxxon>you can run the GC manually. I forget how but it's easy to find
05:31:39  <xaxxon>dostoyevsky, ah, found it: while(!v8::Isolate::IdleNotificationDeadline([time])) {};
05:32:15  <xaxxon>I dunno what time is, but you can find it in the doxygen docs presumably
05:32:23  <xaxxon>it's like floating point seconds or millis or something
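A minimal sketch of the manual-GC nudge being described here, assuming V8's public C++ API of roughly this era: IdleNotificationDeadline is a member of v8::Isolate and takes a deadline in seconds on the same clock as v8::Platform::MonotonicallyIncreasingTime(). The function name ForceGarbageCollection and the platform pointer are illustrative.

    #include <v8.h>
    #include <v8-platform.h>

    // Sketch only: give the isolate idle time until it reports no more GC work,
    // then send a low-memory notification, which triggers a full collection.
    void ForceGarbageCollection(v8::Isolate* isolate, v8::Platform* platform) {
      while (!isolate->IdleNotificationDeadline(
                 platform->MonotonicallyIncreasingTime() + 0.1)) {
        // keep offering idle time in small slices
      }
      isolate->LowMemoryNotification();
    }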
05:33:28  <xaxxon>maybe post the code for your callback somewhere and link it here?
05:33:44  <xaxxon>also, I'm not the most knowledgeable about v8...
05:35:17  <xaxxon>but as to one of your statements, you can see that the GC is called on the isolate object.. so a different isolate wouldn't have a related GC
05:35:27  <dostoyevsky>xaxxon: yeah, I do not think running the gc is the problem...
05:35:54  <xaxxon>is the "allocation failed" a v8 error or a system one?
05:36:12  <dostoyevsky>Because you get this "<--- Last few GCs --->" error message
05:36:27  <xaxxon>I have no idea what that is
05:36:46  <dostoyevsky>226507 ms: Scavenge 1400.6 (1457.0) -> 1400.6 (1457.0) // 227789 ms: Mark-sweep 1400.6 (1457.0) -> 1398.5 (1457.0) MB // 229063 ms: Mark-sweep 1398.5 (1457.0) -> 1398.1 (1457.0) MB
05:37:01  <dostoyevsky>So it ran but couldn't free anything
05:37:31  <dostoyevsky>And then you get: FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
05:37:44  <xaxxon>are you using promises?
05:38:01  <xaxxon>if so: https://bugs.chromium.org/p/v8/issues/detail?id=4858
05:38:32  <xaxxon>that link may be of interest (just skimmed it)
05:42:52  <xaxxon>dostoyevsky, I'm curious what you find, though. this place is usually pretty dead and I'm always trying to learn more about v8
05:43:27  <dostoyevsky>And then there is this: https://github.com/katlogic/lv8/blob/344353dac702901c917a4c05438252121c527ab3/lv8.cpp#L755
05:43:43  <dostoyevsky>> V8 keeps one instance stale for fast reuse. Allocate dummy one to force flush.
05:46:00  <dostoyevsky>So I assume I should change contexts regularly, because the GC might ignore the current context and it is not meant to be used for as long as I am using it
05:46:47  <xaxxon>contexts are what have your functions in them, so you can't just "switch" them arbitrarily.
05:47:09  <xaxxon>I mean, they get built with the templates you've set up
05:47:14  <xaxxon>but be careful.
05:47:30  <xaxxon>I'd be surprised if the problem isn't something specific to your code, rather than you not playing the right games with v8
05:48:09  <xaxxon>can you post the code you're running?
05:49:35  <dostoyevsky>xaxxon: it's about 50k of C++ code... not sure if posting it would be helpful
05:51:58  <dostoyevsky>xaxxon: but if you're saying it's one isolate per function, that's interesting... because I have a function with a callback... so that's actually two functions... and I prepare the parameters for another function in v8 and then call the javascript callback... I could create a new isolate for each function call
05:52:29  <xaxxon>I don't think I said what you said I said
05:52:41  <xaxxon>and that sounds like a VERY slow way to do things
05:54:03  <xaxxon>if you could isolate the memory issue to a smaller section of code, it could help -- both for your own sanity and for someone else to try to help
05:54:08  <dostoyevsky>xaxxon: Isolate* isolate = Isolate::GetCurrent(); HandleScope scope(isolate); // that's quite cheap imho... I do it for each call into my v8 module
05:54:26  <xaxxon>that's not creating a new isolate
05:54:33  <dostoyevsky>oh...
05:54:40  <xaxxon>that's getting the "current" one
05:55:20  <xaxxon>v8::Isolate::New() creates a new one
05:56:12  <xaxxon>v8 has a lot of state information in it. You're requesting the isolate it is storing as the "current" one to be given back to you
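A short sketch of the distinction being drawn, assuming V8 itself (platform, ICU, snapshot) is already initialized; the exact Isolate::CreateParams fields, and whether ArrayBuffer::Allocator::NewDefaultAllocator() exists, depend on the V8 version.

    #include <v8.h>

    void IllustrateIsolates() {
      // GetCurrent() only hands back the isolate this thread has already entered;
      // it creates nothing and does not touch any GC state.
      v8::Isolate* current = v8::Isolate::GetCurrent();
      (void)current;

      // New() builds a completely independent VM instance with its own heap and GC.
      v8::Isolate::CreateParams params;
      params.array_buffer_allocator =
          v8::ArrayBuffer::Allocator::NewDefaultAllocator();  // assumption: available in this V8 version
      v8::Isolate* fresh = v8::Isolate::New(params);
      fresh->Dispose();
    }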
06:03:15  * Guest22353 quit (Changing host)
06:03:16  * Guest22353 joined
06:03:17  * Guest22353 changed nick to s1w
06:17:35  <dostoyevsky>xaxxon: I got it down to a few lines of code
06:19:09  <xaxxon>well, share share :)
06:20:48  <dostoyevsky>http://ideone.com/clWSy1
06:21:00  <dostoyevsky>That will just blow up... because result is never freed
06:22:20  <xaxxon>and what if you put a GC call in there?
06:22:29  <xaxxon>V8 isn't threaded behind the scenes
06:22:41  <xaxxon>so it won't GC in the middle of your calls
06:23:02  <xaxxon>(I don't think)
06:23:14  <dostoyevsky>xaxxon: Well, result is used for a javascript callback... so the GC should run there, no?
06:23:57  <xaxxon>well, that's a lot of objects. not sure it's supposed to work
06:24:14  <dostoyevsky>http://ideone.com/clWSy1 <- I added the error message... you can see that the GC ran... with a heap size of (1457.0) MB
06:25:24  <xaxxon>I see. "The limit for 64 bit V8 is around 1.9Gbytes now. Start V8 with the ..."
06:25:31  <xaxxon>https://bugs.chromium.org/p/v8/issues/detail?id=847
06:25:50  <xaxxon>that's a bit old, but maybe try that flag
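One way an embedder can try such a flag, sketched under the assumption of a V8 build whose V8::SetFlagsFromString takes an explicit length; the 4096 MB value is illustrative, and the flag must be set before the isolate is created.

    #include <cstring>
    #include <v8.h>

    void RaiseHeapLimit() {
      // Illustrative value; --max_old_space_size is given in megabytes.
      const char flags[] = "--max_old_space_size=4096";
      v8::V8::SetFlagsFromString(flags, static_cast<int>(std::strlen(flags)));
    }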
06:26:48  <dostoyevsky>xaxxon: I know that I could use more RAM but I just want an array with 100 elements in ram... not the whole dataset
06:27:34  <xaxxon>dostoyevsky, please update your example with where you call the callback
06:27:51  <xaxxon>otherwise I can't know what data you need when you try to call it
06:28:33  <xaxxon>hrm, try this.. move the handlescope inside the first for loop
06:28:46  <xaxxon>maybe it doesn't do reference counting when you overwrite a local
06:28:52  <xaxxon>maybe it only does it when the handlescope goes away
06:29:22  <xaxxon>something like this: http://ideone.com/WnZDCC
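The gist of the suggested fix, sketched rather than copied from the paste: put the HandleScope inside the loop so each batch's Locals stop pinning their arrays as soon as the iteration ends. isolate, context, callback, and kBatches are assumed to come from the surrounding embedder code.

    #include <v8.h>

    void RunBatches(v8::Isolate* isolate, v8::Local<v8::Context> context,
                    v8::Local<v8::Function> callback, int kBatches) {
      for (int i = 0; i < kBatches; ++i) {
        // Per-iteration scope: everything created below dies with it.
        v8::HandleScope scope(isolate);
        v8::Local<v8::Array> result = v8::Array::New(isolate, 100);
        v8::Local<v8::Value> args[] = {result};
        v8::MaybeLocal<v8::Value> rc =
            callback->Call(context, context->Global(), 1, args);
        (void)rc;  // error handling omitted in this sketch
        // `scope` is destroyed here, so `result` no longer keeps the array alive.
      }
    }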
06:31:20  <dostoyevsky>xaxxon: yeah!
06:31:28  <xaxxon>:-)
06:31:32  <xaxxon>did it work?
06:31:43  <dostoyevsky>xaxxon: yeah... no more memory leaks...
06:31:58  <xaxxon>yay. it wasn't actually leaking it, though. it was just crashing before it had a chance to clean up
06:32:06  <xaxxon>well, I learned something
06:32:30  <xaxxon>I thought Local worked more like Global... but alas they are much lighter weight
06:32:33  <dostoyevsky>now I just need to figure out if I can call handlescope manually... as I am not using a C++ scope for handling the callbaks
06:33:06  <xaxxon>you can't, but that shouldn't matter. overwriting a variable in JS is different
06:33:12  <xaxxon>that DOES affect its GCability
06:34:03  <xaxxon>I'm afraid you may have created a similar, but different, problem in the code you posted vs your actual use case
06:43:45  * xaxxon quit (Quit: xaxxon)
06:44:00  * xaxxon joined
06:44:04  <xaxxon>wrong button
07:46:25  * wingo quit (Quit: ZNC 1.6.1+deb1 - http://znc.in)
07:48:21  * wingo joined
08:19:56  * xiinotulp joined
08:22:55  * plutoniix quit (Ping timeout: 244 seconds)
08:33:43  * davi joined
08:42:59  <dostoyevsky>xaxxon: I tried for a while to see if I could decompose HandleScope into manual mode but it doesn't work... So I will change my internal API to be able to use a true C++ scope and then I should be set... So what we did was still helpful
08:44:05  <xaxxon>dostoyevsky, hrmmm hang on
08:44:22  <xaxxon>so the "ref count" is the combination of the active javascript variables AND the handlescope
08:44:45  <xaxxon>so in your real code, if you have a handlescope around the loop, it doesn't matter what goes on in javascript
08:44:49  <xaxxon>*typing is hard
08:45:44  <xaxxon>v8::Local, v8::Global, and plain javascript variables all contribute to the lifetime of the actual memory and inhibit GC
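A compact sketch of those three lifetimes, with illustrative names; v8::Global here stands for a persistent handle, which outlives any HandleScope until the embedder resets it.

    #include <v8.h>

    v8::Global<v8::Array> keep;  // persistent handle owned by the embedder

    void LifetimeSketch(v8::Isolate* isolate) {
      v8::HandleScope scope(isolate);
      v8::Local<v8::Array> arr = v8::Array::New(isolate, 100);  // dies with `scope`
      keep.Reset(isolate, arr);   // now pinned independently of the HandleScope
      // ... later ...
      keep.Reset();               // release so the GC can collect the array
      // JS-side variables referencing the same object keep it alive on their own.
    }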
08:46:08  <dostoyevsky>xaxxon: The HandleScope in my example that you moved into another scope (of the for loop) cannot be moved in my current code... but there seems to be no other way
08:46:23  <xaxxon>You can create sub-handle scopes
08:46:27  <xaxxon>(I think)
08:46:36  <xaxxon>locals are associated with the most local handlescope
08:46:41  <xaxxon>(I think)
08:46:50  <xaxxon>(I'm making this up, but there's a chance it's true)
08:47:29  <xaxxon>a v8::Local is associated with the tightest enclosing handlescope
08:47:39  <xaxxon>am I making sense? I'm not sure
08:48:15  <xaxxon>so you can do: handlescope; some_local_that_lives_a_while; for_loop { handlescope; some_short_lifespan_local; }
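Spelled out as real code, that nesting might look like the sketch below: a Local is registered with the innermost HandleScope open at the moment it is created, so the inner scope releases its handles every iteration while the outer one keeps the long-lived handle around. Names are illustrative.

    #include <v8.h>

    void NestedScopes(v8::Isolate* isolate) {
      v8::HandleScope outer(isolate);
      v8::Local<v8::String> lives_a_while =
          v8::String::NewFromUtf8(isolate, "kept until the outer scope closes");
      for (int i = 0; i < 1000; ++i) {
        v8::HandleScope inner(isolate);  // sub-scope
        v8::Local<v8::String> short_lived =
            v8::String::NewFromUtf8(isolate, "collectable once this iteration ends");
        (void)short_lived;
      }
      (void)lives_a_while;
    }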
08:48:19  <dostoyevsky>xaxxon: Well you have different kinds of references...
08:49:35  <dostoyevsky>xaxxon: But Scopes are either HandleScope, EscapableHandleScope (return value from a function), or SealHandleScope ... I couldn't find any other use cases in v8's source code
08:49:44  <xaxxon>hang on
08:49:54  * davi quit (Ping timeout: 265 seconds)
08:50:48  <xaxxon>dostoyevsky, http://melpon.org/wandbox/permlink/dRtjGHrbPu3LaVM9
08:52:03  <dostoyevsky>xaxxon: Yeah, but the for-scope doesn't exist in my code...
08:52:34  <xaxxon>what are you doing that results in so many variables being created that you run out of space?
08:52:44  <dostoyevsky>Newer versions of v8 have CloseAndEscape for a scope, which might do what I want
08:55:04  <dostoyevsky>http://melpon.org/wandbox/permlink/okjgXfENBxOfAdTB (c&p from v8 source)
08:55:49  <dostoyevsky>slow_storage = loop_scope.CloseAndEscape(new_storage); // this might be just what I would need ...
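CloseAndEscape is part of V8's internal handle machinery; the closest public-API equivalent appears to be v8::EscapableHandleScope, sketched here: every handle allocated inside the scope becomes collectable when it closes, except the single one escaped into the caller's scope. The helper name and batch size are illustrative.

    #include <v8.h>

    // Sketch: build one chunk of results inside its own scope and hand back
    // only the final array handle to the caller.
    v8::Local<v8::Array> BuildBatch(v8::Isolate* isolate) {
      v8::EscapableHandleScope scope(isolate);
      v8::Local<v8::Array> rows = v8::Array::New(isolate, 100);
      // ... fill `rows` with the current chunk ...
      return scope.Escape(rows);  // promoted into the caller's HandleScope
    }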
08:56:49  <dostoyevsky>xaxxon: I keep some GB of data in sqlite databases and want to process these GB of data... in JavaScript... so the result sets become too large
08:57:19  <xaxxon>right, I was figuring you'd load a row, process it, then throw away the row
08:57:26  <xaxxon>load the next row
08:57:50  <xaxxon>for(auto & row : results) {
08:58:16  <xaxxon>handlescope hs; ...create JS objects...; call_callback(...js objects...); }
08:58:35  <dostoyevsky>xaxxon: I call the callback for every 10000 rows, to keep the overhead minimal
08:59:38  <xaxxon>have you benchmarked this? I'm guessing the cost of creating all these JS vars is massive compared to the per-call cost of calling your callback
08:59:53  <xaxxon>also, did you try bumping up the heap size limit?
09:00:10  <xaxxon>there's no memory leak. You're simply creating more objects than it's set to allow
09:00:10  <dostoyevsky>xaxxon: I need to be websql compatible too...
09:01:17  <dostoyevsky>xaxxon: the callback expects the full dataset in results.rows ... I just had the idea of calling the cb multiple times to get it to work with my GBs of data
09:01:28  * thefourtheye joined
09:01:49  <xaxxon>dostoyevsky, you don't have to actually put all the data in there, though. You can pretend
09:01:53  <xaxxon>and load lazily behind the scenes
09:02:35  <xaxxon>no good database driver dumps all the data on you in one big chunk
09:02:58  <xaxxon>and with javascript it's extra super easy to intercept the calls
09:03:02  <dostoyevsky>xaxxon: they don't? Well, websql has a 20M limit or so :)
09:03:42  <xaxxon>but does it actually load up all the data right away?
09:03:59  <xaxxon>I'm guessing no, it just makes it look like it
09:04:20  <dostoyevsky>But still, I can just return EMOREDATAREADY in my sql_retrieve function and then have a loop in the v8 module... for the scope
09:04:23  <xaxxon>but anyhow, you can't return more objects than your JS environment will let you -- doesn't matter if it's embedded V8 or running in chrome
09:04:31  * xiinotulp changed nick to plutoniix
09:04:48  <xaxxon>so either you increase the memory limit or you lower the max rows returned at once
09:05:00  <xaxxon>the point is that V8 is acting appropriately and you're calling it correctly
09:05:06  <xaxxon>you're simply asking it to do something it won't
09:05:21  <dostoyevsky>xaxxon: websql will load all the data right away... but the other node.js sql apis use streaming apis to cope with large data sets
09:05:39  <xaxxon>you don't need a "streaming api" to not load all the data at once
09:05:55  <xaxxon>you can hook into array and attribute lookups
09:06:32  <xaxxon>so when someone says database_results[2].some_value -- you can intercept the array index call and load the result then
09:06:40  <xaxxon>or take it out of a c++ cache
09:07:26  <xaxxon>from a coding perspective, you don't know if the data is being loaded all at once or not
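One way that interception can be done in embedder code is with an indexed-property interceptor on an ObjectTemplate, sketched below; FetchRowFromSqlite and the template wiring are hypothetical stand-ins, and the sketch assumes a HandleScope is already open in the caller.

    #include <v8.h>

    // Hypothetical helper: in real code this would read row `index` from sqlite
    // (or a C++ cache) and build a JS object for just that row.
    v8::Local<v8::Object> FetchRowFromSqlite(v8::Isolate* isolate, uint32_t index) {
      (void)index;
      v8::Local<v8::Object> row = v8::Object::New(isolate);
      // ... populate `row` from the database at `index` ...
      return row;
    }

    // Called whenever JS evaluates results[i]; nothing is materialized up front.
    void RowGetter(uint32_t index, const v8::PropertyCallbackInfo<v8::Value>& info) {
      info.GetReturnValue().Set(FetchRowFromSqlite(info.GetIsolate(), index));
    }

    v8::Local<v8::ObjectTemplate> MakeLazyRowsTemplate(v8::Isolate* isolate) {
      v8::Local<v8::ObjectTemplate> tmpl = v8::ObjectTemplate::New(isolate);
      tmpl->SetHandler(v8::IndexedPropertyHandlerConfiguration(RowGetter));
      return tmpl;
    }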
09:08:37  <dostoyevsky>xaxxon: Well, suppose you had such a cache in a v8 module, how would you scope it? When would you load it? In what scope, and when you read it from JS, is it still there? What kind of scope would that be? And you'd end up with the same problem
09:09:13  <xaxxon>the handlescope would be inside the callback associated with the array lookup
09:09:37  <xaxxon>which would leave the only references in the user code and allow the GC to run on the data as soon as the user code didn't have a reference to it
09:10:05  <xaxxon>so if the user handled it row by row, then only a row's worth of data at a time would be unGCable
09:10:29  <xaxxon>but that's what you want -- the memory usage to scale with the user-code's requirements
09:10:46  <xaxxon>not with the full data set's
09:10:58  <dostoyevsky>xaxxon: And how would you not end up with all the data in memory? What are you going to do if someone periodically asks for results.item(0)? Just re-execute the SQL for the data you already purged from RAM?
09:11:52  <xaxxon>dostoyevsky, there are all sorts of caching options.
09:11:57  <xaxxon>but I don't understand what else you think you can do
09:11:57  <dostoyevsky>You'd be implementing virtual memory management based on SQL result sets...
09:12:20  <xaxxon>you're trying to put more stuff in memory than V8 will allow for.
09:12:30  <xaxxon>you HAVE to do something different
09:13:06  <dostoyevsky>xaxxon: Yeah, just operate in chunks... same as when you do read() in c++... your idea seems to be to invent mmap() for sql :)
09:13:30  <xaxxon>this is what most database drivers do
09:13:51  <xaxxon>you can request massive datasets and access them as if it's all there, but it's being loaded from the DB behind the scenes
09:14:42  <xaxxon>optimizations can be done to help certain access patterns have lower latency
09:15:29  <dostoyevsky>xaxxon: I've never seen any db driver like this... and never seen a low level SQL api offer that kind of caching
09:17:31  <xaxxon>well, anyhow, you have to figure out something else, and you know your use cases best..
09:19:06  <dostoyevsky>xaxxon: Yeah, but thanks for helping me figure out how to use handle scope properly :)
09:21:44  <xaxxon>dostoyevsky, happy to help
09:21:54  <xaxxon>I learned something too. also, usually this channel is completely dead
09:22:06  <xaxxon>v8-users mailing list or stackoverflow are better places to find help
09:25:02  <dostoyevsky>yeah :-/ reading the v8 source code is quite educational... but I still do not really like that kind of c++ very much; it's very dogmatic, not really versatile to use in your own c++ code
09:29:00  <xaxxon>dostoyevsky, well, much of v8 is dictated by requirements for javascript and for performance.
09:29:49  <xaxxon>it's very important for it to be fast, so that requires certain usability tradeoffs. but the options are always there.
09:34:26  <dostoyevsky>xaxxon: e->Set(v8::String::NewFromUtf8(isolate, "message"), v8::String::NewFromUtf8(isolate, s->error)) // why not just write e->Set("message", s->error) ? It's not like boost hasn't taught us for decades how to write great APIs...
09:35:45  <xaxxon>https://google.github.io/styleguide/cppguide.html#Function_Overloading
09:35:58  <xaxxon>the number of combinations you'd need to provide would be quite large
09:36:15  <xaxxon>the API is consistent in wanting the JS types, not C++ types
09:37:01  <xaxxon>because you'd have to accept the JS types, too. and would you take JS/C++ and C++/JS as well as C++/C++ and JS/JS?
09:37:07  <xaxxon>what about std::string?
09:37:32  <xaxxon>the point is that the type is going to be a JS type, so if it needs to be converted, let the user take care of deciding how to do it
09:37:42  <xaxxon>at least that's my interpretation
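Nothing stops an embedder from layering its own convenience helper over the JS-typed calls; a hypothetical sketch, using the same Local-returning NewFromUtf8/Set overloads the snippet above uses:

    #include <v8.h>

    // Hypothetical helper so call sites can pass plain C strings.
    inline void SetUtf8(v8::Isolate* isolate, v8::Local<v8::Object> obj,
                        const char* key, const char* value) {
      obj->Set(v8::String::NewFromUtf8(isolate, key),
               v8::String::NewFromUtf8(isolate, value));
    }

    // e->Set(String::NewFromUtf8(isolate, "message"), String::NewFromUtf8(isolate, s->error))
    // would then read: SetUtf8(isolate, e, "message", s->error);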
09:40:43  <xaxxon>also, boost isn't what I'd call great APIs
09:57:10  * wingo quit (Quit: ZNC 1.6.1+deb1 - http://znc.in)
09:58:03  * xaxxon quit (Ping timeout: 250 seconds)
10:11:54  * xaxxon joined
10:16:23  * wingo joined
11:28:15  * xaxxon quit (Quit: xaxxon)
11:31:17  * saper_ changed nick to saper
11:36:27  * davi joined
11:56:30  * davi quit (Remote host closed the connection)
12:23:15  * rmcilroy quit (Ping timeout: 264 seconds)
12:35:33  * rmcilroy joined
13:13:03  * rmcilroy quit (Ping timeout: 264 seconds)
13:25:05  * rmcilroy joined
13:46:03  * rmcilroy quit (Ping timeout: 264 seconds)
13:57:59  * bradleymeck joined
14:45:13  * rmcilroy joined
15:38:09  * davi joined
16:19:14  * bradleymeck quit (Quit: bradleymeck)
16:20:54  * RT|Chatzilla quit (Quit: ChatZilla 0.9.86.1 [Firefox 2.0.0.22pre/2010030309])
16:24:33  * bradleymeck joined
16:27:55  * bradleymeck quit (Client Quit)
16:44:51  * rmcilroy quit (Ping timeout: 264 seconds)
17:20:26  * rmcilroy joined
17:29:54  * rmcilroy quit (Ping timeout: 276 seconds)
17:37:06  * bradleymeck joined
17:41:27  * rmcilroy joined
17:50:51  * bradleymeck quit (Quit: bradleymeck)
18:34:59  * davi quit (Ping timeout: 260 seconds)
18:55:39  * davi joined
19:01:17  * thefourtheye quit (Quit: Connection closed for inactivity)
19:01:58  * davi quit (Remote host closed the connection)
19:18:46  * Vbitz quit (Ping timeout: 250 seconds)
19:21:54  * Vbitz joined
19:36:12  * zv quit (Ping timeout: 240 seconds)
20:16:15  * bradleymeck joined
20:42:26  * zv joined
20:52:32  * bradleymeck quit (Quit: bradleymeck)
22:20:36  * RT|Chatzilla joined
22:25:28  * seththompson_ joined
22:25:40  * Garbee quit (Ping timeout: 264 seconds)
22:26:16  * scottmg quit (Ping timeout: 264 seconds)
22:26:52  * seththompson quit (Ping timeout: 264 seconds)
22:26:52  * gsathya quit (Ping timeout: 264 seconds)
22:26:53  * dagobert________ quit (Ping timeout: 264 seconds)
22:27:07  * seththompson_ changed nick to seththompson
22:27:28  * mathiasbynens quit (Ping timeout: 264 seconds)
22:29:03  * mathiasbynens joined
22:30:49  * dagobert________ joined
22:32:07  * Garbee joined
22:32:58  * scottmg joined
22:40:22  * gsathya joined
22:57:19  * ofrobots quit (Ping timeout: 255 seconds)
22:59:15  * NewNewbie quit (Read error: Connection reset by peer)
23:18:40  * _Getty joined
23:33:52  * ofrobots joined
23:35:31  * NewNewbie joined
23:50:15  * gravitation joined