01:36:27  <ArxPoetica1>anyone around?
01:36:34  * ArxPoetica1 changed nick to ArxPoetica
01:36:39  <ArxPoetica>owenb?
01:36:48  <k1i>yo
01:37:09  <ArxPoetica>hey k1i — have you ever hosted SS in production?
01:37:12  <k1i>nope
01:37:15  <k1i>why, what are you thinking?
01:37:29  <ArxPoetica>Oh, just trying to pack the assets correctly — having some trouble
01:37:39  <ArxPoetica>It's technically only a staging site
01:37:52  <k1i>gotcha
01:37:53  <ArxPoetica>using S3
01:37:56  <k1i>most of my work has been in .4
01:38:00  <ArxPoetica>gotcha
01:40:08  <k1i>I need to write a mongoose proxy
01:40:13  <k1i>a good one
01:40:14  <k1i>for S4
05:46:36  <k1i>owenb: you here
19:48:34  <owenb>hey k1i. I'm here if you want to chat
19:49:14  <k1i>hey owenb
19:49:27  <k1i>so I have been working with SS .4 for a while, trying to work on a nice, clean model implementation
19:49:36  <k1i>as a realtime service
19:50:04  <k1i>the way I have been structuring it is /services/model/modelname.js
19:50:11  <owenb>great
19:50:12  <owenb>that's the idea
19:50:20  <k1i>the issue I am running into
19:50:23  <k1i>is dynamically generated clientside code
19:50:38  <owenb>hmm yeah
19:50:39  <k1i>I want the ability to have one model file, and have it dynamically generate the clientside proxy code for remote calls
19:51:03  <owenb>yeah i know what you mean
19:51:04  <k1i>so basically, you can use the same calling convention on the serverside as the clientside when calling model funcs / instance methods
19:51:08  <owenb>we did this in 0.1 and 0.2
19:51:13  <owenb>yup
19:51:24  <k1i>I was writing a proxy for mongoose when it got too messy
19:51:42  <owenb>ultimately i decided it was faster all round if you just specified the method you wanted to run as a string
19:51:54  <owenb>otherwise you end up generating a tonne of objects on the client
19:52:00  <owenb>some which may never be called
19:52:11  <owenb>and whilst that's going on, the client can't do anything
19:52:11  <k1i>the thing is
19:52:19  <k1i>I wanted to implement some kind of model metadata
19:52:24  <k1i>for functions that CAN be called on the clientside
19:52:27  <k1i>I wanted real OO models on the client
19:52:33  <owenb>ah right
19:52:35  <k1i>some functions, such as convenience methods/dressings
19:52:44  <k1i>can absolutely be called on the clientside
19:52:44  <owenb>yup
19:52:49  <k1i>exports.local
19:52:50  <k1i>exports.remote
19:52:54  <k1i>exports.schema
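The exports.local / exports.remote / exports.schema split k1i lists could drive exactly what owenb describes: ship one generic builder to the client and generate string-based RPC proxies from method names, instead of sending every function body. A minimal sketch in plain JS (the model file layout, the `rpc` stub, and the `book` model are all hypothetical):

```javascript
// Hypothetical model file, e.g. /services/model/book.js
// local:  safe to run in the browser; remote: proxied to the server
var book = {
  schema: { title: String, pages: Number },
  local:  { isLong: function () { return this.pages > 500; } },
  remote: { find: true, save: true }  // names only -- bodies stay server-side
};

// Client side: build a proxy from the metadata. `rpc` stands in for the
// websocket RPC transport and is a plain stub here.
function buildClientModel(name, def, rpc) {
  var proxy = {};
  Object.keys(def.local).forEach(function (k) {
    proxy[k] = def.local[k];          // local methods run in place
  });
  Object.keys(def.remote).forEach(function (k) {
    proxy[k] = function () {          // remote methods become string-based calls
      var args = Array.prototype.slice.call(arguments);
      return rpc(name + '.' + k, args);
    };
  });
  return proxy;
}

var calls = [];
var Book = buildClientModel('book', book, function (method, args) {
  calls.push(method);                 // record what would go over the wire
  return { method: method, args: args };
});

Book.find('some-id');                 // sends 'book.find' as a string
```

Only the names in `remote` cross the wire, so the client bundle stays small and the server keeps the real implementations — the same calling convention on both sides, without generating "a tonne of objects" up front.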
19:53:27  <k1i>this is my biggest holdup with using SS right now
19:53:33  <owenb>so you can't send code dynamically to the client, but you could send code which generates the functions and objects you need
19:53:35  <k1i>and eventually you'd write an OT service that could handle that smoothly
19:53:44  <k1i>that's what I was doing
19:54:05  <k1i>I felt also, like I was duplicating a shitload of code
19:54:07  <owenb>send a Model prototype to the client, then create an instance for each model on the server
19:54:15  <k1i>between the RPC service
19:54:16  <k1i>and my own
19:54:40  <owenb>even with the callbacks and json on?
19:54:48  <k1i>?
19:54:50  <owenb>what else is rpc doing that you need?
19:55:05  <k1i>the model function remote calls themselves
19:55:07  <owenb>{use: {json: true, callbacks: true}}
19:55:18  <k1i>I have my own realtime service "model"
19:55:22  <owenb>sure
19:55:32  <k1i>which basically duplicates the RPC functions on a model/instance level
19:55:45  <owenb>so i've started working on something called realtime models
19:55:56  <owenb>it will be implemented as a service
19:55:59  <k1i>I'd be happy to dedicate tons of time/code in this
19:56:06  <owenb>thanks man
19:56:13  <k1i>particularly OT code
19:56:18  <k1i>which I believe needs to be implemented at a model level
19:56:20  <owenb>that would be awesome
19:56:22  <owenb>yes
19:56:27  <k1i>also, I really like/agree with/believe in the .3 method of scaling (redis)
19:56:32  <k1i>sessions/state needs to be maintained in redis
19:56:36  <owenb>true
19:56:39  <k1i>for stupid-easy scaling
19:56:43  <owenb>i've been working on sessions today
19:57:04  <k1i>If you can implement redis in a similar way to .3
19:57:06  <k1i>for scalability
19:57:08  <owenb>needs a bit more work but hoping to push a big new update in the next few days
19:57:08  <k1i>SS will be the best, by far
19:57:18  <owenb>it works the same way
19:57:20  <k1i>ok
19:57:23  <owenb>we use connect session store
19:57:28  <k1i>so pubsub will be maintained in redis?
19:58:18  <owenb>pubsub and sessions both use redis in 0.3. in 0.4 you will def be able to pass the server a Redis connect session store object and use that for sessions. i've not thought about pubsub, but Redis should also be an option there
19:58:25  <owenb>here's the biggest problem I have.....
19:58:27  <owenb>with services
19:58:35  <owenb>i want each one to be independent
19:58:45  <owenb>so you don't have to have pubsub and rpc to use realtime models
19:58:48  <k1i>well
19:58:52  <k1i>it's kind of inherent
19:58:56  <owenb>otherwise we get into dependencies and versioning
19:58:57  <k1i>(pubsub)
19:59:24  <k1i>also
19:59:31  <k1i>If I specify redis as an rts-pubsub provider
19:59:36  <k1i>or zmq as an rts-pubsub provider
19:59:39  <k1i>realtime models should take advantage of it
19:59:43  <owenb>hmmm
19:59:54  <owenb>then services need dependencies
19:59:58  <owenb>there's no way around it
20:00:03  <owenb>i've been trying to avoid this
20:00:29  <k1i>I think there are huge advantages to dependencies
20:00:32  <owenb>but i agree, it would make things easier
20:00:44  <k1i>I should only have to specify redis once
20:00:46  <k1i>for pubsub
20:00:48  <k1i>for rts-model to use it
20:01:23  <k1i>also
20:01:27  <k1i>there needs to be an OT service
20:01:31  <k1i>IMO
20:01:35  <k1i>I have really been trying to avoid this, myself
20:01:36  <owenb>yeah
20:01:41  <k1i>but eventually, OT will need to be abstracted into a service
20:01:52  <owenb>i kinda agree
20:01:56  <owenb>hmmmm
20:02:00  <k1i>also, I believe models should be datastore agnostic
20:02:05  <k1i>rts-model-mongoose
20:02:17  <owenb>so the realtime models implementation i've started doesn't use the pubsub service, but can use redis
20:02:18  <k1i>could just be instance methods/bindings
20:02:23  <owenb>and should in order to scale
20:02:32  <k1i>that was my issue
20:02:38  <k1i>allowing dependent services
20:02:47  <owenb>i was thinking you'd start one redis connection in your app and pass the same connection to both rts-models and rts-pubsub
20:02:48  <k1i>because there is a LOT of RPC within model code obviously
20:02:59  <k1i>well, the issue is, the API abstraction
20:03:04  <k1i>that's twice the abstraction code
20:03:10  <k1i>for different providers that can do pubsub
20:03:16  <k1i>zmq, eventemitter, redis
20:03:20  <owenb>well a lot of it is already built
20:03:27  <k1i>I mean code duplication
20:03:27  <owenb>into the service layer
20:03:44  <k1i>i feel like the pubsub service itself is a pretty good abstraction
20:03:47  <owenb>e.g. all the broadcasting, callbacks etc
20:04:04  <owenb>it doesn't really add any lines of code
20:04:12  <owenb>just creates an event emitter in the client
20:04:21  <owenb>rts-models will do the same
20:04:38  <owenb>if you use both rts-models and rts-pubsub browserify will still only send you one eventemitter lib
20:04:57  <k1i>yeah
20:05:02  <k1i>but the actual bindings to that eventemitter lib
20:05:07  <owenb>here's where it gets more difficult
20:05:08  <k1i>if you have a common pubsub api
20:05:20  <k1i>you can have any backend and only have to write the abstraction code once (rts-pubsub)
20:06:14  <k1i>id really like to work on this rts-model implementation
20:06:23  <k1i>as that, and sessions, are naturally my major sticking points
20:06:30  <owenb>sessions are coming very soon
20:06:36  <owenb>along with big changes to the modules
20:06:48  <owenb>most exciting thing is you can now connect to the server via a node process
20:06:48  <k1i>if everything is written right
20:06:50  <owenb>not just a browser
20:06:59  <k1i>you can avoid using sticky sessions on enterprise-class horizontal scaling
20:07:09  <owenb>so you can query a server over a node repl
20:07:15  <owenb>using the same api
20:07:22  <owenb>i'm hoping so too
20:07:24  <owenb>but it's hard
20:07:28  <k1i>yes, it is very hard
20:07:29  <owenb>the problem is multiple tabs
20:07:33  <k1i>all state has to be stored in redis
20:07:37  <owenb>not just that
20:07:37  <k1i>or mapped to redis
20:07:44  <owenb>you don't want to query redis on EVERY message
20:08:06  <owenb>only when the user connects for the first time
20:08:06  <k1i>no, but websocket connections arent transient
20:08:08  <k1i>so yeah
20:08:11  <k1i>you can keep some things stored in memory
20:08:14  <k1i>so long as state is backed up to redis
20:08:19  <owenb>then only when the user changes and saves the session do you write to redis
20:08:32  <owenb>but this only works if all incoming clients (browser tabs) are routed to the same backend server
20:08:41  <owenb>this is how it works in 0.3
20:08:48  <owenb>and i think it's how it will work in 0.4 too
20:08:49  <k1i>why do they need to be routed to the same server?
20:08:56  <owenb>because the session is stored in ram
20:08:56  <k1i>ohh
20:08:58  <k1i>you mean actual multiple tabs
20:09:00  <k1i>with the same session
20:09:03  <owenb>yes
20:09:13  <k1i>interesting
20:09:26  <k1i>you are just trying to avoid having duplicate sessions
20:09:31  <k1i>opened across multiple app servers?
20:09:41  <owenb>if you don't care about sharing sessions between multiple tabs, then each tab could have a different sessionId and you could route each incoming connection to any server you wanted
20:09:58  <k1i>technically you could have the same session opened on multiple tabs
20:10:02  <k1i>errr, on multiple appservers
20:10:06  <k1i>it's just really inefficient
20:10:11  <owenb>exactly
20:10:40  <owenb>so sticky sessions give us high performance so we don't have to hit redis on each request
20:11:06  <owenb>but if you decide you don't care about supporting multiple tabs, it should be possible to pick any random back end server and establish a new session
20:11:12  <owenb>ideally we should give the choice to the app developer
20:11:16  <k1i>well
20:11:19  <k1i>so long as that is a possibility
20:11:20  <k1i>is the thing
20:11:30  <k1i>it has to be CAPABLE of running the same session concurrently on multiple appserver nodes
20:11:37  <k1i>the sticky sessions thing is just a huge efficiency boost
20:11:57  <owenb>but think about the amount of calls to redis on each incoming message
20:12:00  <owenb>going to be crazy
20:12:10  <k1i>redis is designed to handle hundreds of thousands of QPS
20:12:22  <k1i>you dont want it to hit redis on EVERY message
20:12:37  <owenb>yeah, but why drag the session object data over the network each time
20:12:38  <k1i>so you just cache the session in memory
20:12:45  <k1i>you don't drag the whole thing
20:12:47  <k1i>only the changes
20:13:03  <k1i>you pubsub the changes to a channel "session:sid"
20:13:13  <k1i>and any client subbed to that channel can update when it sees session changes
20:13:40  <k1i>er any appserver subbed to that channel
20:13:42  <owenb>what you describe could be done, but nothing does this at the moment. we'd need a new connect redis session store driver written from scratch
20:13:58  <k1i>i would think that you need to write the session driver from scratch anyway
20:14:01  <owenb>if it works flawlessly it would be great
20:14:03  <k1i>to support this kind of scaling/behavior
20:14:16  <owenb>that's why i'm just not doing it at the moment - or 0.4 will never get released
20:14:20  <k1i>yeah
20:14:20  <k1i>good point
20:14:24  <owenb>but it's totally possible to write this in the future
20:14:37  <owenb>for now sticky sessions is just way easier
20:14:42  <owenb>and high performing
20:14:44  <owenb>but i agree with you
20:14:50  <owenb>if this existed, it would be perfect
20:15:07  <owenb>then a client could connect to any backend app server and they would all stay in sync
20:15:13  <k1i>yeah
20:15:14  <k1i>that's what I had in mind
20:15:16  <k1i>when I thought of the whole system
20:15:20  <owenb>yeah
20:15:22  <owenb>it needs to happen
20:15:26  <owenb>i totally agree
20:16:01  <owenb>well i can confirm we'll be using connect session store drivers in 0.4, so if you fancy writing one that would be great :)
20:16:22  <k1i>the thing is
20:16:24  <owenb>and i can put an option in socketstream 0.4 to not cache sessions in ram
20:16:29  <owenb>that would force a lookup in the store
20:16:35  <owenb>that's when you could do your magic :)
20:16:42  <k1i>sessions need to be stored in subscription yea?
20:16:48  <k1i>so a session is subscribed to this particular data
20:16:57  <k1i>errrr
20:16:59  <k1i>sorry, subscriptions
20:17:01  <k1i>need to be stored in the session
20:17:10  <owenb>in 0.3 we stored which channels a user was subscribed to
20:17:14  <k1i>in the session?
20:17:18  <owenb>yes
20:17:26  <owenb>but i haven't got to all that yet in 0.4
20:17:28  <k1i>now, another thought
20:17:37  <k1i>session -> [websocket instance, websocket instance]
20:17:48  <owenb>socketId
20:17:49  <owenb>s
20:17:50  <k1i>yep
20:17:53  <owenb>yup
20:17:55  <k1i>not all sockets
20:17:57  <k1i>are subscribed to the same shit
20:18:03  <owenb>ah yes
20:18:10  <owenb>this has always been a big debate hehe
20:18:11  <k1i>which isnt that big of a deal, but is an efficiency thing
20:18:39  <owenb>not just that.... it's about reconstructing things when a user accidentally closes down a tab
20:18:54  <owenb>interestingly, with the realtime models stuff i've been working on, we don't care about sessions
20:19:02  <owenb>we just track which socketId (clientId) sees what
20:19:04  <k1i>ok
20:19:07  <owenb>then notifies them of the updates
20:19:08  <k1i>which is essentially a session variable
20:19:16  <owenb>well not really
20:19:21  <k1i>session -> socketIDs -> subscriptions
20:19:30  <owenb>sessionId is common across tabs, socketId changes all the time
20:19:30  <k1i>why not store that in the session?
20:19:35  <k1i>yes
20:19:36  <k1i>^
20:19:41  <k1i>use one common session store
20:19:43  <k1i>and one common pubsub
20:19:47  <k1i>so not everything is so skewed
20:20:07  <owenb>because a user on a different tab may never have asked for record 133, so why should we use bandwidth to tell them about something they are not interested in
20:20:14  <k1i>yes
20:20:37  <owenb>there will def only be one common session store
20:20:39  <owenb>that's for sure
20:20:48  <k1i>session -> { socket1: {subscriptions: [model1]}, socket2: {subs...
20:20:53  <owenb>but rts-pubsub is only there because we need compatibility with 0.3
20:21:25  <owenb>once 0.4 is out I'm going to totally redo rpc and maybe pubsub and make them much better (maybe under different names)
20:21:37  <k1i>I don't feel like you should have multiple pubsub implementations
20:21:42  <k1i>not across versions I mean
20:21:43  <k1i>but across modules
20:21:50  <k1i>for instance, rts-models shouldn't decide how to do its own pubsub, IMO
20:22:07  <k1i>and should rely on the other service to handle that for it, to abstract scaling, driver abstraction
20:22:20  <owenb>it's not so clear cut though
20:22:22  <k1i>and eventually, streaming
20:22:32  <owenb>rts models don't care a jot about sessions
20:22:38  <owenb>rts-pubsub only cares about sessions
20:22:43  <owenb>the use cases are different
20:22:54  <owenb>the only thing in common is the means of notifying each app server of changes
20:23:08  <owenb>so you could pass the same redis connection, or whatever to both services - no problem
20:24:30  <owenb>keeping them isolated like this will make it easier to test and means we won't get bogged down in having to check the correct version of everything is installed
20:24:43  <owenb>but there is a time when i think i'm going to need service dependencies
20:25:15  <owenb>the realtime models idea i want to talk about at the conference in a few weeks is totally agnostic to the type of persistent db store you want to use
20:25:30  <owenb>all it does is keep track of who sees what and notifies clients when something changes
20:25:52  <owenb>it does this using an event emitter in the client. but I have also been working on angular bindings
20:26:02  <k1i>where is the conference
20:26:08  <owenb>lyon
20:26:18  <owenb>in three weeks
20:26:23  <k1i>in france?
20:26:26  <owenb>yup
20:26:37  <owenb>hoping to have a relatively stable preview of 0.4 ready by then
20:26:44  <owenb>i.e. stable in terms of ideas
20:26:57  <owenb>lots still to go to make this all production ready
20:27:14  <owenb>but i feel sure i'm on the right track now
20:27:14  <k1i>there are reasons
20:27:18  <k1i>I believe rts-model needs to rely on sessions
20:27:24  <owenb>it will
20:27:27  <owenb>sessions are common
20:27:31  <owenb>to all services
20:27:33  <owenb>it has to be like that
20:27:51  <owenb>sessions will contain info about who you are
20:27:55  <owenb>i.e. what you're allowed to access
20:28:42  <k1i>for the redis pubsub
20:28:46  <k1i>are you actually using the redis-native pubsub?
20:28:52  <owenb>http://realtimeconf.eu/
20:28:52  <k1i>or is it some kind of set/get driver
20:28:53  <k1i>polling
20:29:23  <k1i>http://redis.io/topics/pubsub
20:29:24  <k1i>this
20:29:26  <owenb>redis native. we do that in 0.3 already
20:29:32  <owenb>yes
20:29:34  <k1i>gotcha
20:29:40  <k1i>how do you think OT is going to be best-handled for models?
20:30:30  <owenb>that's the tricky bit.... so in my idea of realtime models, it will be left for the app developer to figure out how to handle writes. we'll get the information from the client-side app down to the model file, but then the app dev can use any OT library they want to figure out how best to write the data
20:30:57  <k1i>how will a model look?
20:31:11  <owenb>yet to figure this all out, but similar to an rpc file today
20:31:11  <k1i>./services/models/model.js?
20:31:47  <owenb>well you name your services what you want. whatever name you give it determines the directory you store data to
20:31:59  <owenb>but models sounds good hehe
20:32:02  <k1i>i mean
20:32:04  <k1i>so
20:32:16  <k1i>how would you implement, say, a mongoose binding
20:32:23  <k1i>to rts-model
20:32:53  <k1i>but ideally, you are able to call these same functions in the client, yea?
20:34:08  <owenb>no idea about mongoose tbh yet. my ideas on realtime models are still at early stages, but i'm hoping to have something ready before the conference as it's the biggest problem to be solved by far
20:34:23  <k1i>that + OT are the biggest reasons, IMO
20:34:33  <k1i>that derby/meteor have up on SS
20:34:54  <k1i>yet they are unusable in my (and most people's) case for a variety of other reasons
20:35:03  <owenb>totally agree. i know with a great model solution SS is going to be way more attractive to most people
20:35:39  <owenb>biggest thing for me is that it should not depend on how or where you want to store your data
20:35:40  <k1i>that's a huge undertaking
20:35:41  <k1i>yeah
20:35:45  <owenb>exactly
20:35:49  <k1i>activerecord does a fairly good job of this, IMO
20:35:54  <owenb>yes
20:35:56  <k1i>but if I want to use mongoose, the default mongo node driver, SQL, etc.
20:36:00  <k1i>it should be doable
20:36:04  <owenb>totally
20:36:10  <owenb>you should be able to store your model data anywhere
20:36:11  <k1i>do you believe you should be able to call those kinds of functions
20:36:16  <k1i>.find for instance
20:36:17  <k1i>on the clientside?
20:36:19  <owenb>yes
20:36:26  <k1i>a la meteor's minimongo
20:36:26  <owenb>you will define each one in your model spec file
20:36:27  <k1i>except extensible
20:36:31  <owenb>exactly
20:36:45  <k1i>do you believe in an OO model representation
20:36:45  <owenb>what the find function does, and where it gets the data from, will be up to your model module
20:36:48  <k1i>where I can define a model
20:36:50  <k1i>and play with the object itself
20:36:58  <k1i>var book = Book.find("idididididid")
20:37:01  <k1i>book.read
20:37:12  <owenb>well that's much more tricky as it depends on what you use on your front end
20:37:25  <owenb>that's a very backbone way of doing things
20:37:31  <owenb>but would never work with Angular
20:37:34  <k1i>I mean
20:37:47  <k1i>you could actually represent models as JS objects
20:37:50  <k1i>and have instance methods, et.c
20:38:07  <owenb>yeah i know. and have functions which take two variables and perform a calculation
20:38:10  <k1i>yes
20:38:16  <k1i>but that wouldn't need to be executed on the serverside
20:38:17  <k1i>so
20:38:22  <k1i>some way of marking that as a clientside func
20:38:54  <k1i>not sure how you would pull that distinction off
20:39:38  <owenb>yeah.... i'm trying to avoid doing all this. it's such a massive amount of work. the important thing is to lay the foundations so someone else can implement models in this way. that will be possible
20:39:53  <owenb>ultimately there are many ways to do realtime models
20:40:14  <owenb>what they all have in common is that the server needs to keep track of which client has seen what data so it can notify them of updates
20:40:40  <owenb>i'm trying to get that bit right, then let people build backbone, angular and whatever else bindings on top if they want
20:40:56  <k1i>yeah
20:40:59  <owenb>though I may have to start the process off just to show how it can be done. i'm picking angular as I really like that
20:41:05  <k1i>I really like angular too
20:41:07  <k1i>BB has way too much boilerplate
20:41:19  <k1i>ive got a current production rails + bb app, shitloads of boilerplate files (50+)
20:41:24  <k1i>with maybe 5-10 models
20:42:02  <owenb>yeah. no one solution is right for everyone in the world of web dev. so unlike meteor and derby, there is likely to be more than one way to do realtime models in SocketStream and i'm just fine with that.
20:42:13  <owenb>i just want to make sure whatever people write is high performing and easy to test
20:42:40  <k1i>here's a use case
20:42:46  <k1i>how would you handle an app with multiple databases for instance
20:42:52  <k1i>that can be defined on the fly
20:43:10  <k1i>so this model instance uses DB1, this other one (based on session, maybe) uses DB2
20:43:40  <owenb>answer is I won't
20:44:03  <owenb>it will be up to the app dev to figure out where the data comes from and where it's saved
20:44:16  <owenb>most of the time web apps these days are only writing to a back end REST service
20:44:17  <k1i>but that theoretically should be possible
20:44:22  <owenb>not even directly to the DB
20:44:57  <k1i>the way that meteor handles datastores/dbs
20:45:02  <k1i>everything is defined before runtime
20:45:06  <owenb>the realtime-model idea I have is very very simple and leaves the app developer free to store anything anyway they want
20:45:07  <k1i>(serverside-services)
20:45:51  <owenb>yeah, well in SS you just use whatever npm db drivers you want
20:46:23  <owenb>i'm the first to admit this is not models in the active-record sense
20:46:39  <owenb>but then how you could implement that if all your data is stored in a REST service, I don't know
20:46:57  <owenb>anyway, I'm going to go back to working on sessions
20:47:37  <owenb>but watch out for the next update - and please do experiment with your own ideas around models. my ideas and assumptions may be wrong. I don't know yet
20:48:10  <k1i>are you developing
20:48:12  <k1i>on a live branch somewhere?
20:48:28  <owenb>no, it's all local at the mo. i'll push when i'm happy and the example-app doesn't break
20:48:50  <k1i>is there any way to avoid using the shell scripts
20:48:52  <k1i>with the example app
20:49:00  <owenb>how do you mean?
20:49:09  <owenb>the npm_link thing?
20:59:10  <k1i>yes
20:59:12  <k1i>sorry
20:59:21  <k1i>is there any way to avoid manually linking those packages
21:02:22  <owenb>well the script does the linking for you
21:02:27  <owenb>but it's all temporary
21:02:33  <k1i>also, would it be possible to get a development branch perhaps?
21:02:38  <k1i>for experimental PRs, etc.
21:02:55  <owenb>until i figure out what modules are what and what they should be called
21:03:12  <owenb>the whole repo is experimental so feel free :)
21:03:15  <owenb>nothing is set in stone yet
21:03:54  <owenb>i'm hoping things will settle down to make it the new master on the main socketstream repo in 3 weeks time, but a huge amount of work needs to happen before now and then
21:04:22  <owenb>i will prob just delete the socketstream-0.4 repo then
21:05:44  <k1i>gotcha
21:06:02  <k1i>so the sessions you are currently working on, will take advantage of the connect-redis driver
21:06:04  <k1i>and should be in the next push?
21:06:09  <k1i>id be happy to write the new session driver
21:06:10  <k1i>if so
21:06:56  <owenb>yup. well we're no longer having connect-redis as a dependency, but you'll be able to add that to your app and pass the socketstream server an instance of it, and it will just use that
21:07:03  <owenb>and thanks - would be great
21:07:32  <owenb>i'll make sure it's possible to turn off in-memory caching and always ask the session store for the latest session object on each incoming request
21:07:46  <owenb>that way you can decide when to query redis, or return from ram
21:07:56  <k1i>I don't understand how it will "just work"
21:08:04  <k1i>(sessions) that is
21:08:22  <k1i>this sits on express?
21:08:27  <owenb>var store = connect.session.RedisStore({host: 'localhost'}); // or whatever
21:08:47  <owenb>var server = new SocketStream({sessionStore: store});
21:08:49  <owenb>something like that
21:08:58  <k1i>gotcha
21:09:06  <k1i>and the api for communicating with store is consistent?
21:09:08  <owenb>and if you don't pass it anything, it will just use the in memory session store
21:09:15  <owenb>yup it's been out for years
21:09:17  <owenb>and not changed
21:09:28  <owenb>already there are drivers for mongo and every other db you can think of
21:09:41  <k1i>connect- something
21:09:47  <owenb>yup
21:10:08  <k1i>and we were saying the current issue with it
21:10:16  <k1i>is that it doesn't pubsub upon writes?
21:10:52  <owenb>you could take the existing redis one and modify it to cache recent sessions in memory using a LRU cache then query redis for new incoming session ids, and listen out for any updates using redis.subscribe()
21:11:07  <owenb>yup
21:11:16  <owenb>but maybe someone has written this already
21:11:21  <owenb>it seems like a sensible thing to do
21:11:24  <k1i>and this can all be done within the context of a connect-redis replacement?
21:11:28  <owenb>yes
21:11:43  <owenb>so long as I tell SS not to bother with its own cache
21:11:49  <owenb>which I will do, having had this conversation :)
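owenb's suggested modification — an LRU cache in front of the store, falling through on a miss — might look roughly like this (the hand-rolled LRU is a stand-in for the lru-cache npm module he mentions, `backing` for a real connect-redis store, and the invalidation-via-redis.subscribe() part is omitted):

```javascript
// Tiny LRU: evicts the least-recently-used session once `max` is reached.
function LRU(max) { this.max = max; this.keys = []; this.map = {}; }
LRU.prototype.get = function (k) {
  if (!(k in this.map)) return undefined;
  this.keys.splice(this.keys.indexOf(k), 1);
  this.keys.push(k);                            // mark as most recently used
  return this.map[k];
};
LRU.prototype.set = function (k, v) {
  if (k in this.map) this.keys.splice(this.keys.indexOf(k), 1);
  this.keys.push(k);
  this.map[k] = v;
  if (this.keys.length > this.max) delete this.map[this.keys.shift()];
};

// Connect-style session store API (get/set with callbacks), LRU in front.
function CachedStore(backing, max) {
  this.backing = backing;
  this.cache = new LRU(max);
}
CachedStore.prototype.get = function (sid, cb) {
  var hit = this.cache.get(sid);
  if (hit) return cb(null, hit);                // served from RAM
  var self = this;
  this.backing.get(sid, function (err, sess) { // fall through to the store
    if (!err && sess) self.cache.set(sid, sess);
    cb(err, sess);
  });
};
CachedStore.prototype.set = function (sid, sess, cb) {
  this.cache.set(sid, sess);                    // keep the RAM copy fresh
  this.backing.set(sid, sess, cb);              // write through to the store
};
```

A production version would also subscribe to a per-session channel and drop or patch cached entries when another app server writes, along the lines discussed earlier, and cap both entry count and object size.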
21:12:19  <owenb>right I must go. lots to do and I don't have much more time today. thanks for the interest anyway. I'm feeling really excited about 0.4 now - the ideas are really coming together
21:12:22  <k1i>yep
21:12:23  <k1i>me too
21:12:32  <k1i>is there any concern, also, with large session objects and bloated appserver memory caches for sessions?
21:13:21  <owenb>yeah - well ram is cheap, but you'd want to set limits - both on object size and how many are stored in memory at once. there is an LRU module on npm - could be worth a look
21:13:29  <owenb>must go now. speak soon