00:35:52  * evangeni_ quit (Remote host closed the connection)
00:37:06  * majek_ joined
00:37:47  * avital_ joined
00:42:23  * andrewreedy joined
00:44:18  * sveisvei quit (*.net *.split)
00:44:18  * majek quit (*.net *.split)
00:44:18  * avital quit (*.net *.split)
00:44:18  * avital_ changed nick to avital
00:46:33  * majek_ changed nick to majek
00:50:29  * evangenieur quit (Remote host closed the connection)
01:20:34  * G________ joined
01:22:10  * G________ part
01:28:40  * droach joined
02:05:17  * droach quit (Quit: Textual IRC Client: www.textualapp.com)
02:37:11  * sveisvei joined
06:09:40  * lluad quit (Quit: lluad)
08:16:02  * andrewreedy quit (Quit: andrewreedy)
08:27:45  * vguerra joined
08:59:25  * evangenieur joined
09:03:13  * colinsullivan quit (Quit: Leaving.)
09:09:28  * evangeni_ joined
11:29:43  * dennismartensson joined
11:47:41  * evangenieur quit (Remote host closed the connection)
12:11:43  * evangenieur joined
12:45:36  * dennismartensson quit (Remote host closed the connection)
13:09:53  * ArxPoetica quit (Quit: Leaving.)
14:12:23  * dennismartensson joined
15:07:50  * dennisma_ joined
15:07:50  * dennismartensson quit (Read error: Connection reset by peer)
15:18:22  * lluad joined
16:17:16  * andrewreedy joined
16:44:32  * vguerra quit (Remote host closed the connection)
17:04:03  * paulbjensen joined
18:00:39  * ArxPoetica1 joined
18:00:42  * ArxPoetica1 quit (Client Quit)
18:02:00  * ArxPoetica1 joined
18:02:29  * ArxPoetica1 quit (Client Quit)
18:18:31  * andrewreedy quit (Quit: andrewreedy)
18:19:36  * andrewreedy joined
18:53:23  * paulbjensen quit (Quit: paulbjensen)
19:11:35  * erobit2 joined
19:14:37  * paulbjensen joined
19:23:16  * andrewreedy quit (Quit: andrewreedy)
19:26:20  * paulbjensen quit (Quit: paulbjensen)
19:26:40  * ArxPoetica joined
19:26:51  <ArxPoetica>lo
19:27:08  <ArxPoetica>hey owenb --> I assume you're fully aware of the release: http://blog.nodejs.org/2013/03/11/node-v0-10-0-stable/
19:28:32  <ArxPoetica>I'm most interested in the streams bit -- how will that affect SS
19:28:48  <ArxPoetica>i.e., how has your experimentation gone w/ streams?
19:31:53  * k1i joined
19:32:01  <k1i>anyone in here using ss-angular?
20:01:59  * colinsullivan joined
20:23:37  * andrewreedy joined
20:40:43  * k1i_ joined
20:45:12  * ArxPoetica part
20:45:56  * ArxPoetica joined
20:47:14  * k1i quit (Ping timeout: 246 seconds)
20:47:15  * k1i_ changed nick to k1i
20:48:10  <ArxPoetica>owenb — did you see my Q from earlier?
20:49:38  * erobit2 quit (Ping timeout: 245 seconds)
20:49:53  <owenb>hey
20:50:11  <owenb>SS 0.3 should work fine with 0.10
20:50:20  <owenb>let me know if you find any bugs
20:50:27  <owenb>but from initial testing, all looks well
20:50:54  <owenb>k1i if you want to use ss-angular you'll need to use the latest socketstream on github master
20:51:07  <owenb>until i push a new release to npm shortly
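(For context: pulling the latest SocketStream from GitHub master usually just means pointing the dependency at the git URL. A minimal package.json sketch, assuming npm's standard git-URL dependency syntax; the project name and the ss-angular entry are placeholders:)

    {
      "name": "my-app",
      "dependencies": {
        "socketstream": "git://github.com/socketstream/socketstream.git",
        "ss-angular": "*"
      }
    }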
20:51:22  <k1i>there are a few decisions in ss-angular I am having trouble wrapping my head around
20:51:32  <owenb>still doing a lot of experimentation around streams right now
20:51:48  <k1i>- polling
20:51:58  <k1i>- serverside rendering
20:52:19  <owenb>initial gut feeling is we'll use them extensively on the server, but I've yet to decide if they are a good fit for the browser. plan on writing benchmarks shortly
20:52:41  <owenb>k1i - i agree, but Ben is very much open to improvements and pull requests
20:52:58  <k1i>well
20:53:05  <owenb>or you can make your own module :)
20:53:07  <k1i>with the way the routing occurs in SS-angular and the lack of tight integration with express
20:53:14  <k1i>is serverside rendering going to be doable?
20:53:31  <owenb>do you want server-side rendering?
20:53:41  <k1i>yes
20:53:46  <k1i>but with the use of angular "clientside routes"
20:53:54  <k1i>i dont know how viable that will be
20:54:25  <owenb>yeah. i'm going through a similar issue myself at the moment
20:54:34  <owenb>i favour client-side rendering
20:54:38  <owenb>and data over the wire
20:54:41  <k1i>I prefer both.
20:54:50  <k1i>I want CS rendering and SS rendering for SEO
20:54:53  <owenb>but when making a new docs site for 0.4 i really need it to be indexed in google
20:54:56  <owenb>exactly
20:55:14  <owenb>i'm looking to make it easier to combine both approaches in SS 0.4
20:55:32  <owenb>mostly through tighter express integration (though you won't be forced to use Express)
21:02:42  <k1i>yeah
21:02:43  <k1i>also
21:03:03  <k1i>I don't like the model update system in the ss-angular package - polling is wholly unnecessary if you use redis to sync servers
21:03:09  <k1i>you also would avoid the need for sticky sessions
21:03:22  <k1i>if redis was used to maintain session/state rather than in-memory session management
21:03:53  <owenb>indeed
21:04:02  <ArxPoetica>how would streams be used on the browser?
21:04:07  <ArxPoetica>I find that confusing. :P
21:04:21  <ArxPoetica>Are you talking about actually chunking data that way?
21:04:37  <owenb>i would make it depend on redis, but it would be nice to support that and do away with polling altogether
21:04:59  <owenb>if you can improve ss-angular I will do the work needed to make sure it works well with the new 0.4 service API
21:05:03  <owenb>planning to do that anyway
21:05:12  <owenb>as it's one of the main modules I want to launch 0.4 with
21:05:28  <owenb>Arx: by sending the streams shim shipped with browserify
21:05:32  <ArxPoetica>k1i — there's another methodology someone employed for angular routing
21:05:39  <owenb>but it's only streams1 :(
21:05:39  <ArxPoetica>other than polidore's
21:05:48  <ArxPoetica>lemme see if I can find
21:05:50  <owenb>and streams2 is VERY big
21:06:09  <k1i>owenb: one sec
21:06:17  <owenb>so I'm 90% decided that we won't ship this by default (streams1 is an additional 5kb when minified)
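(For context: the "streams shim" here is browserify's browser implementation of Node's stream module, which lets bundled client code require('stream') as if it were running on the server. A rough streams1-style sketch of what that looks like in browser code, not SocketStream's actual client internals:)

    // client code bundled with browserify; require('stream') resolves to the browser shim
    var Stream = require('stream');

    var s = new Stream();          // classic streams1: essentially an EventEmitter with pipe()
    s.readable = true;
    s.on('data', function (chunk) { console.log('received', chunk); });
    s.emit('data', 'hello from the browser');
    s.emit('end');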
21:07:18  <ArxPoetica>I'll have to look @ the browserify streams shim
21:07:25  <ArxPoetica>so did you ultimately decide on browserify?
21:07:29  <k1i>ok so, owenb
21:07:35  <k1i>I don't like SS-angular's filestructure, either
21:07:43  <k1i>I don't like the massive split between server/clientside
21:08:05  <owenb>not 100% decided yet
21:08:11  <owenb>but using for now
21:08:25  <k1i>the way I want to do it is, use the same files for each environment, and then, on packAssets(), do some clever file manipulation to change local model calls to RPC calls in model files etc.
21:08:30  <owenb>everything is in flux atm.... i really want 0.4 to be awesome in time for April (realtime conf eu)
21:08:41  <owenb>and out and working, at least in a preview state
21:09:17  <k1i>Meteor has some major problems, IMO, and I want to use something like socketstream+ angular + an ORM to make the equivalent of a well-written meteor
21:09:17  <ArxPoetica>Here's an alternate way: https://github.com/americanyak/ss-angular-demo
21:09:32  <ArxPoetica>I don't think this uses ss-angular, but wraps it another way
21:09:51  <k1i>I believe that all of the routes should be the exact same, the serverside files and clientside files should be the same, and the ORM should be divorced from the system
21:09:52  <ArxPoetica>not 100% sure (that's actually my repository, but Davis Ford made most of the commits)
21:10:09  <k1i>I also believe that redis should be used as a state storing machine instead of memory (so sticky sessions arent necessary, and horizontal scaling is cake)
21:10:20  <owenb>k1i you should read the comment i posted today: https://github.com/socketstream/socketstream/pull/358
21:10:23  <k1i>I also believe that ZeroMQ should be used to propagate PUSHes instead of polling
21:10:29  <k1i>that kind of a system would be the cat's meow
21:10:53  <k1i>I hate dependencies, but everyone is using redis anyway
21:11:02  <k1i>ZeroMQ is small enough that - who cares
21:11:04  <ArxPoetica>not EVERYONE
21:11:06  <ArxPoetica>:P
21:11:07  <k1i>Everyone.
21:11:11  <ArxPoetica>lol
21:11:13  <owenb>k1i. and also potentially very complicated to setup which will put people off. believe me, i've been there :)
21:11:16  <k1i>well
21:11:17  <ArxPoetica>I'm actually trying to get away from it.
21:11:25  <k1i>ArxPoetica: and move to what kind of a distributed in-memory cache?
21:11:33  <ArxPoetica>local storage ha ha
21:11:36  <k1i>because in-memory session handling in an environment like meteor,derby,socketstream isnt going to work
21:11:41  <k1i>because horizontal scaling becomes impossible
21:11:49  <k1i>without stupid shit like sticky sessions
21:12:02  <owenb>i think you will always need sticky sessions
21:12:03  <ArxPoetica>SSD ftw
21:12:04  <owenb>it just makes sense
21:12:06  <ArxPoetica>:P
21:12:09  <k1i>why?
21:12:15  <ArxPoetica>Okay, I'll stop trolling now on that.
21:12:22  <owenb>when you write you want to persist to redis or something like that
21:12:25  <owenb>but most of the time you're just reading
21:12:27  <ArxPoetica>I am actually curious about ZeroMQ
21:12:32  <k1i>well, here's the issue
21:12:43  <k1i>I want my clients to be server-independent, like in a stateless system
21:12:45  <owenb>and you're constantly connected to the same piece of hardware anyway
21:12:49  <k1i>by moving that state to redis - something everything can connect to
21:12:52  <owenb>so why not keep all the session data in ram
21:12:55  <k1i>you can remove the dependency on sticky sessions
21:13:04  <k1i>because you are keeping it in RAM, in Redis, just in a distributed RAM cache
21:13:17  <k1i>the distributed portion of it is already done for you
21:13:18  <owenb>but a ws connection may be open for hours
21:13:25  <k1i>yes, and that doesnt require sticky sessions to deal with
21:13:27  <k1i>long polling does
21:13:31  <owenb>yes
21:13:46  <owenb>i think all reads should be done from ram where possible
21:13:53  <k1i>why would a WS require sticky sessions to remain open?
21:13:56  <owenb>but i agree, you should be able to connect to any other service
21:13:59  <k1i>long polling/comet yes
21:13:59  <owenb>server*
21:14:03  <owenb>and load from redis
21:14:06  <k1i>WS is a TCP socket, no?
21:14:10  <owenb>which is kinda what we have already anyway
21:14:12  <owenb>yes
21:14:35  <k1i>so why would you need sticky sessions to persist the machine it happens to be connecting to if it is an open socket
21:15:13  <k1i>I can see this with a stateless protocol (longpolling), but, not with an open socket
21:16:04  <owenb>i find myself agreeing. i think what you describe is exactly what happens now tbh if you use redis to store session data
21:16:19  <k1i>but can you arbitrarily connect to any node and have the same persisted session?
21:16:24  <owenb>yes
21:16:40  <owenb>if you use the connect redis session store
21:16:44  <owenb>not the in memory one
21:16:51  <k1i>so
21:16:51  <owenb>the key is in the writes
21:16:54  <k1i>ok
21:17:01  <owenb>they are all pushed to redis
21:17:01  <k1i>how does that happen
21:17:04  <k1i>are they queued?
21:17:17  <k1i>a la mongo
21:17:48  <owenb>not sure of the exact implementation - it's just the default connect session driver. you can use any connect session store with SS
21:17:58  <k1i>ok
21:18:18  <k1i>I really like that
21:18:25  <k1i>so, sticky sessions would effectively be unnecessary?
21:18:31  <ArxPoetica>What's the diff between sticky and store?
21:18:31  <k1i>because you can technically make stateless requests?
21:18:49  <k1i>Sticky sessions are a loadbalancer-level setting that allows you to map specific HTTP requests to a specific server instance
21:18:54  <ArxPoetica>ah
21:19:07  <ArxPoetica>right — hence the horizontal concern
21:19:09  <owenb>yes. there is no need for loadbalancers really
21:19:12  <k1i>it is a shitty way (IMO) to achieve a "stateless-like" system in a multi-server environment
21:19:27  <owenb>read this: http://www.rabbitmq.com/blog/2011/09/13/sockjs-websocket-emulation/
21:19:29  <k1i>if you use redis as your session store, instead of relying on individual nodes to maintain state
21:19:33  <owenb>about load balancing
21:19:41  <k1i>you dont need that
21:20:14  <k1i>my environment will always have native websocket support (browser enforcement)
21:20:20  <owenb>right guys i need to get back to working on 0.4. got a small talk to give on wednesday and right now a lot of stuff is in a broken state
21:20:24  <k1i>gotcha
21:20:29  <k1i>so
21:20:32  <k1i>ss-redis is the session store module?
21:20:43  <owenb>no just connect.redis
21:20:52  <owenb>whatever it's called :)
21:20:57  <k1i>gotcha
21:21:24  <owenb>think we will continue using connect session stores in 0.4 as they are the most mature and there are adapters for every db on earth lol
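(For context: the Redis-backed setup being discussed maps onto SocketStream 0.3's config hooks for sessions and internal pub/sub. A minimal sketch based on the 0.3 README; the host/port options are assumptions, so verify the exact signature against the docs:)

    // app.js (SocketStream 0.3)
    var ss = require('socketstream');

    // store sessions in Redis via the Connect Redis session store,
    // so any app server can pick up any client's session
    ss.session.store.use('redis', { host: '127.0.0.1', port: 6379 });

    // route internal pub/sub through Redis as well, instead of in-process memory
    ss.publish.transport.use('redis', { host: '127.0.0.1', port: 6379 });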
21:21:27  <ArxPoetica>k1i — did you get my ss-angular-routing link?
21:21:31  <k1i>yes
21:21:33  <k1i>I saw it
21:21:36  <ArxPoetica>ok
21:21:51  <k1i>I believe that querying mydomain.com/route should be the same on the client as it is on the server
21:22:01  <k1i>for SEO purposes
21:22:09  <k1i>automatically that is
21:22:19  <ArxPoetica>doesn't derby do that?
21:22:37  <k1i>yes, but derby fails in many other areas
21:22:40  <owenb>need to go for a bit. will check back in later. chat soon guys
21:22:41  <ArxPoetica>agreed
21:22:45  <ArxPoetica>cya
21:22:45  <k1i>SS + angular = the best.
21:22:59  <ArxPoetica>love angular but haven't had a reason to dive in fully yet
21:23:03  <k1i>ss + angular + redis + zeroMQ + an ORM is going to be the best "real" system
21:23:33  <ArxPoetica>and which ORM?
21:23:38  <k1i>should be ORM agnostic.
21:23:45  <k1i>you don't even technically need an ORM
21:23:56  <k1i>you can execute direct RPC calls to a compliant database connection over the wire
21:24:02  <k1i>mongo, et al.
21:24:43  <k1i>ZeroMQ will take the place of DB polling
21:25:17  <ArxPoetica>ss is flexible that way, yes
21:25:18  <ArxPoetica>but you're sold on redis as the *only* offering? :P
21:25:21  <ArxPoetica>sure
21:25:28  <k1i>redis for session storage, honestly, that is a minor portion of it
21:25:37  <k1i>any kind of distributed store that can store a key/value will do
21:26:26  <ArxPoetica>Could one just use Mongo that way? MubSub?
21:26:30  <k1i>redis is just nice because it is tested and scales well, memcache would do the exact same thing (but you need to tweak the expiry settings, etc)
21:26:39  <k1i>Mongo shouldnt be used for session persistence, IMO
21:26:53  <k1i>massive overkill for what amounts to a KV store
21:27:12  <ArxPoetica>I've been wondering over that recently — like — I love Mongo for many things
21:27:15  <k1i>and the real issue is the global write lock
21:27:17  <ArxPoetica>But could it
21:27:19  <k1i>yes
21:27:22  <ArxPoetica>lol
21:27:30  <ArxPoetica>https://github.com/scttnlsn/mubsub
21:27:34  <k1i>redis is just a better store though for massive writespeed
21:27:39  <ArxPoetica>right
21:27:48  <k1i>we are just talking sessions here
21:27:50  <k1i>not model persistence
21:28:00  <ArxPoetica>sure
21:28:07  <ArxPoetica>I get the diff
21:28:42  <ArxPoetica>The only reason I even started asking the question is because of costs associated w/ running different dbs.
21:28:48  <k1i>yep
21:29:03  <ArxPoetica>not really a db guy, tbh
21:29:05  <k1i>basically, in my ideal architecture (for realtime)
21:29:21  <k1i>you have tons of socketstream servers running express/your app's serverside code/models
21:29:39  <k1i>you have a redis cluster on top of that
21:29:56  <k1i>all of the socketstream servers communicate model updates to each other via ZeroMQ (agnostic of DB)
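(A rough sketch of the ZeroMQ push idea k1i is describing, using the node zmq bindings; nothing here is an existing SocketStream feature, and the peer addresses and channel name are made up:)

    var zmq = require('zmq');

    // each app server publishes its own model changes...
    var pub = zmq.socket('pub');
    pub.bindSync('tcp://0.0.0.0:5555');

    function broadcastModelUpdate(model, id, changes) {
      pub.send(['model.update', JSON.stringify({ model: model, id: id, changes: changes })]);
    }

    // ...and subscribes to every peer, so updates arrive as pushes rather than DB polls
    var sub = zmq.socket('sub');
    ['tcp://10.0.0.2:5555', 'tcp://10.0.0.3:5555'].forEach(function (addr) {
      sub.connect(addr);
    });
    sub.subscribe('model.update');

    sub.on('message', function (topic, msg) {
      var update = JSON.parse(msg.toString());
      // forward to any connected browsers here, e.g. over a SocketStream pub/sub channel
    });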
21:30:27  * evangenieur quit (Remote host closed the connection)
21:30:27  <ArxPoetica>do you have an ideal db?
21:30:37  <k1i>you mean for model persistence?
21:30:40  <ArxPoetica>yes
21:30:41  <k1i>mongo or PGSQL ;)
21:30:46  <ArxPoetica>right
21:31:25  <k1i>but the idea is to have a system that can run completely agnostic to the model datastore
21:31:33  <k1i>technically you could have all models in memory-only
21:31:41  <k1i>being very transient
21:31:50  <ArxPoetica>yeah, I'm on a bit of a shoestring trying to set something up and not choke if it has a sudden dig or viral hit
21:31:54  <k1i>but basically, this is the way meteor should have been written
21:31:54  <ArxPoetica>spike
21:31:59  <k1i>also
21:32:14  <k1i>very easy to scale using any kind of heroku, ec2 (opsworks), etc.
21:32:26  <ArxPoetica>Been using NodeJitsu
21:32:32  <k1i>so basically you have 3 places to scale
21:32:34  <ArxPoetica>but I'll probably switch to ec2 for cost
21:32:43  <k1i>DB servers, app servers, and session persistence server
21:32:59  <k1i>the session persistence server is honestly the cheapest part of the whole thing
21:33:07  <ArxPoetica>That's interesting.
21:33:11  <k1i>redis costs nothing to run
21:33:17  <ArxPoetica>yeah?
21:33:31  <ArxPoetica>Mind me asking what's your background/where you're coming from?
21:33:43  <ArxPoetica>(And I don't think I've noticed you in here before?)
21:33:47  <k1i>ruby on rails
21:33:51  <ArxPoetica>gotcha
21:33:55  <k1i>ive been architecting and planning this for quite a while
21:34:03  <k1i>meteor and derby are unscalable and unacceptable for a lot of reasons
21:34:06  <k1i>socketstream is perfect IMO
21:34:13  <ArxPoetica>It's been great so far
21:34:21  <k1i>meteor falls short of derby in some ways, and derby falls short of meteor in others
21:34:28  <ArxPoetica>Even 0.3 <— which isn't "production ready" is pretty awesome.
21:34:44  <k1i>anyway, a session identifier and some arbitrary data (subs, etc)
21:34:46  <k1i>will be less than 125kb
21:34:47  <k1i>but
21:34:54  <k1i>8192 125kb sessions can fit in 1gb of RAM
21:34:57  <k1i>cheap
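(Rough arithmetic behind that figure: 8192 sessions x 125 KB each = 1,024,000 KB, which is just about 1 GB.)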
21:35:23  <ArxPoetica>interesting
21:35:33  <k1i>concurrents, mind you
21:35:52  <k1i>your sessions are likely going to be less than a few KB
21:36:01  <ArxPoetica>gotcha
21:36:32  <ArxPoetica>I've been trying to price this out for my client
21:36:37  <k1i>the session will store the unique client identifier, any and all subscriptions, and any other lightweight non-datastore-meritorious data
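(A hypothetical example of a session record of that shape, keyed by session ID in Redis; every field name here is illustrative, not SocketStream's actual schema:)

    // hypothetical session record stored under the session ID
    var session = {
      sessionId: 'a1b2c3d4',              // unique client identifier
      userId: 42,
      channels: ['news', 'chat:lobby'],   // active pub/sub subscriptions
      lastSeen: 1363035600000             // other lightweight, non-persistent state
    };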
21:37:12  <k1i>i was talking to charuru (meteor guy) - he runs 700 concurrents before getting massive slowdown on a medium EC2 instance
21:37:25  <k1i>(outsources DB hosting)
21:37:34  <k1i>and meteor is super-inefficient for a lot of reasons
21:37:54  <k1i>that is a single node instance
21:40:09  <ArxPoetica>that's epic
21:40:19  <ArxPoetica>I actually love Meteor for a lot of reasons — but love SS way more
21:40:31  <k1i>Meteor is going to succeed due to 11.2m in funding
21:40:39  <k1i>they have a shitty model with a few nice pieces
21:40:49  <k1i>with no horizontal scaling in mind, etc.
21:40:50  <ArxPoetica>yeah
21:41:03  <ArxPoetica>I know someone who tried to scale Derby horizontal
21:41:06  <ArxPoetica>crashed
21:41:08  <k1i>Derby has a lot of problems
21:41:13  <k1i>it has good intentions
21:41:16  <ArxPoetica>he said he'd never do it that way again
21:41:41  <ArxPoetica>I knew they weren't going to work for me when I went into IRC and they responded like know-it-alls to problems I presented
21:41:42  <k1i>I believe the method I described to you is the best, simplest way to scale
21:41:42  <ArxPoetica>:P
21:41:54  <ArxPoetica>yeah, that's awesome
21:41:56  <k1i>zeroMQ for PUSH (rather than polling)
21:42:07  <k1i>redis (or memcached or if you must - mongo) for session storage
21:42:11  <ArxPoetica>Gonna go look into that
21:42:18  <k1i>socketstream servers for app hosting
21:42:20  <ArxPoetica>no — no mongo ha ha
21:42:27  <ArxPoetica>using mongo for models and stuff though
21:42:32  <k1i>yes I agree with that
21:42:43  <k1i>SocketStream is 80% of what I need
21:42:47  <k1i>Angular is another 10%
21:42:54  <k1i>the rest of it I am probably going to have to write
21:43:16  <ArxPoetica>FYI this is a really crude app I and another guy built using SS: https://github.com/engagementgamelab/CivicSeed
21:43:27  <ArxPoetica>has some big bugs, and it's not done, but it works
21:43:37  <ArxPoetica>also has major need for tests
21:43:40  <k1i>also
21:43:53  <k1i>I want the ability to define mongodb connections on the fly
21:43:56  <k1i>for a multi-tenant environment.
21:44:02  <k1i>this is sort of a personal requirement
21:44:14  <ArxPoetica>what does that mean?
21:44:16  <ArxPoetica>on the fly
21:44:24  <ArxPoetica>like from the client?
21:44:29  <k1i>well on the serverside
21:44:29  <k1i>but
21:44:38  <k1i>the client could potentially determine which DB to connect to
21:44:47  <k1i>it would connect for whatever information it would need to pull
21:44:49  <k1i>and then close the connection
21:45:10  <k1i>im just saying at runtime
21:45:21  <ArxPoetica>i c
21:45:54  <ArxPoetica>well, thx for the chat
21:45:57  <ArxPoetica>gotta run soon
21:46:05  <ArxPoetica>are you in here a lot? I haven't noticed you before...
21:46:27  <ArxPoetica>though there are a lot of peeps who sorta lie dormant
21:46:42  <k1i>no, I just found this channel
21:46:46  <k1i>I am in the derby/meteor channels
21:46:54  <k1i>I am probably going to start in the next few days on my own ss-angular
21:47:02  <ArxPoetica>awesome
21:47:26  <ArxPoetica>if you see a guy in here — I think his handle is zenocon — he worked on the ss-angular-demo i sent
21:48:23  <k1i>gotcha
21:48:26  <k1i>I will talk to him about it
21:48:35  <k1i>but odds are my implementation is going to be entirely different than anything else
21:48:39  <k1i>I firmly believe in DRY
21:49:15  <ArxPoetica>very
21:49:17  <k1i>in my strong opinion, I believe that something like grunt should sift through a model file
21:49:28  <k1i>and you can run the model on both client and server
21:49:38  <ArxPoetica>yeah — that's the goal of ss
21:49:41  <k1i>at "compile time," grunt could sift through a model file and replace any local function def
21:49:46  <k1i>with an RPC call
21:49:47  <ArxPoetica>but implementation could be better
21:49:48  <k1i>seamlessly
21:49:50  <k1i>take the same args, etc.
21:50:05  <k1i>so the serverside calling convention of a model function is the same as the clientside calling convention
21:50:12  <k1i>one would just seamlessly be replaced with RPC code, though
21:50:20  <k1i>for the packaged client JS
21:50:28  <k1i>if that makes any sense
21:50:32  <k1i>so User.js
21:50:36  <k1i>defines App.Model.User as a class
21:50:37  <ArxPoetica>So I'm curious how you maintain convention (with grunt, for example), but be open enough for the different models.
21:50:57  <k1i>say there is a function updateTwitter() in there that obviously requires serverside RPC (for credentials, etc, purposes)
21:51:02  <ArxPoetica>I've created my own model convention to not repeat myself, but I'm interested in where you go with this
21:51:10  <ArxPoetica>grunt is a good idea, btw
21:51:13  <k1i>updateTwitter() in the client JS gets rewritten with an SS RPC call
21:51:22  <k1i>updateTwitter() in the serverside version of the code stays the same
21:51:29  <ArxPoetica>keep me posted
21:51:30  <k1i>they take the same args
21:51:37  <ArxPoetica>yeah i get it
21:51:40  <k1i>get called via the same calling convention, etc.
21:52:08  <k1i>to maintain convention across "different models," you would need some kind of metadata
21:52:17  <k1i>to define which functions (or all) get rewritten to "RPC-mode"
21:52:21  * evangenieur joined
21:52:22  <k1i>and a conventional calling method
21:52:34  <k1i>or method of passing arguments
21:52:58  <ArxPoetica>ah
21:52:58  <k1i>so, to call updateTwitter, you might have to do User.call("updateTwitter", {args}, function callback() {cb});
21:53:06  <ArxPoetica>so it's the metadata that gets standardized
21:53:07  <k1i>but on the serverside that is a seamless, local function execution
21:53:41  <k1i>yeah
21:53:45  <ArxPoetica>hmm…so is this all precompile? (ala grunt)
21:53:54  <k1i>the clientside would be precompiled, yes, though, technically
21:54:05  <k1i>i was thinking in the wrong track
21:54:17  <ArxPoetica>Well, even on the back end
21:54:43  <ArxPoetica>Just thinking — for perf reasons don't want to build in a layer that only works at runtime.
21:55:05  <k1i>you would be able to do something like updateTwitter(args, cb) on the client and the server
21:55:12  <k1i>just like that
21:55:22  <k1i>provided you standardized the calling convention and argument order
21:55:25  <k1i>well
21:55:31  <k1i>grunt would only be needed on the clientside
21:55:36  <ArxPoetica>right
21:55:46  <k1i>to rewrite functions that need to be made into rpc calls (seamlessly)
21:55:51  <k1i>this results in a shitload of less boilerplate code
21:56:13  <k1i>so you arent defining a server/model/user.js and a client/model/user.js which are basically the same thing, but one contains RPC code and one contains the real, needed-to-be-executed code
21:56:24  <ArxPoetica>ah....
21:56:26  <ArxPoetica>okay I get it now
21:56:37  <k1i>but the metadata could be as simple as
21:56:39  <ArxPoetica>so its basically XORing the server
21:56:45  <k1i>to create clientcode, yes
21:56:52  <ArxPoetica>right right
21:56:54  <ArxPoetica>that's smart
21:57:07  <k1i>// remoteFuncs: [updateTwitter, updateFacebook, deleteUser, createUser]
21:57:08  <ArxPoetica>cool
21:57:17  <ArxPoetica>well i gotta run
21:57:20  <k1i>grunt would find those function definitions (standard js), and rewrite them
21:57:22  <ArxPoetica>catch'ya later
21:57:23  <k1i>alright
21:57:24  <k1i>later
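(A rough illustration of the build-step rewrite k1i sketches above; the file layout, the remoteFuncs annotation, and the generated wrapper are all hypothetical, with ss.rpc being SocketStream's standard client-to-server RPC call:)

    // models/user.js -- one shared source file, annotated for the build step
    // remoteFuncs: [updateTwitter]
    var User = {
      fullName: function (user) {           // pure logic: shipped to the browser unchanged
        return user.first + ' ' + user.last;
      },
      updateTwitter: function (args, cb) {  // needs server-side credentials, so the body stays on the server
        // ...call the Twitter API, persist the result, then cb(err, result)
      }
    };
    module.exports = User;

    // what the grunt task might emit for the browser bundle:
    // same name, same (args, cb) signature, but the body becomes an RPC call
    User.updateTwitter = function (args, cb) {
      ss.rpc('user.updateTwitter', args, cb);
    };

(On the server the untouched updateTwitter would still need to be exposed under a matching RPC action name; how that wiring happens is left open in the discussion above.)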
21:57:32  * ArxPoetica part
22:14:50  * colinsullivan quit (Quit: Leaving.)
22:29:20  * colinsullivan joined
22:49:11  * colinsullivan quit (Quit: Leaving.)
22:58:56  * paulbjensen joined
23:21:31  * paulbjensen quit (Quit: paulbjensen)
23:38:28  * ArxPoetica1 joined
23:58:31  * ArxPoetica1 changed nick to ArxPoetica