00:31:14  * ryan_stevens quit (Quit: Leaving.)
00:36:57  * saijanai_ quit (Quit: saijanai_)
00:44:56  * mikeal quit (Quit: Leaving.)
00:57:54  * mikeal joined
00:58:50  <rowbit>Hourly usage stats: []
01:01:59  * mikeal quit (Client Quit)
01:04:16  <SubStack>sandwich time
01:28:05  * ryan_stevens joined
01:34:28  <Raynos>SubStack: https://github.com/Raynos/iterators How do I split this "utility library" into multiple sensible modules?
01:38:33  * mikeal joined
01:46:12  * simcop2387 quit (Excess Flood)
01:49:14  * simcop2387 joined
01:52:55  * mikeal quit (Quit: Leaving.)
02:06:41  * antix joined
02:12:00  * mikeal joined
02:21:20  <guybrush>Raynos: i would put every file you have in a separate module
02:22:14  <Raynos>guybrush: I was thinking that, but then I feel it's too modular and too much noise on the npm
02:22:20  <guybrush>right
02:22:21  <Raynos>it would be like 14 different npm modules
02:22:24  <guybrush>thats what i think too
02:22:38  <Raynos>This code used to be in this library called after
02:22:39  <guybrush>but noise isnt bad maybe
02:22:51  <guybrush>its like good noise :D
02:22:57  <Raynos>and after was a kitchen sink of iterator code, composite code and after itself. So i've already split that library in three different modules
02:23:02  <guybrush>the only problem i see really is the namespace
02:23:14  <guybrush>theres only so much good module-names :D
02:23:28  <guybrush>but thats the only problem actually
02:24:10  <guybrush>but you can use tarball/git-dependencies anyways
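npm can install dependencies straight from a git URL, as guybrush notes, so a module that never lands in the registry is still easy to depend on. A hypothetical package.json fragment (the URL just points at the iterators repo linked above):

```json
{
  "dependencies": {
    "iterators": "git://github.com/Raynos/iterators.git"
  }
}
```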
02:24:34  <Nexxy>you just have to use... imagination!
02:24:34  <Raynos>well
02:24:43  <Raynos>I dont know whether to call these modules map / reduce / etc
02:25:09  <guybrush>the good thing with npm is you can load all the modules with hoarders at once
02:25:51  <Raynos>guybrush: for example map & filter are already taken
02:26:18  <guybrush>maybe they are useable :D
02:26:46  <Raynos>I could add 14 modules to npm as iterators-mapSync and iterators-map
02:26:50  <guybrush>but i understand what you mean
02:26:51  <Raynos>or I could add one module as iterators
02:27:23  <Raynos>I'm actually taking this route with routil ( https://github.com/Raynos/routil/tree/master/lib ) and splitting out all of those into separate modules
02:27:30  <SubStack>mapper
02:28:09  <Raynos>the mapper name is taken too, by some weird ODM
02:28:33  <guybrush>map2 :D
02:28:45  <guybrush>moarBetterMap
02:29:12  <Raynos>this is silly
02:31:30  * dominictarr joined
02:32:56  <SubStack>maple
02:33:15  <Raynos>all the good names are taken
02:33:19  <SubStack>false
02:33:46  <Raynos>the module false is not taken yet
02:33:56  <Raynos>should I upload `module.exports = false` as the false module?
02:33:57  * mikeal quit (Quit: Leaving.)
02:34:09  <guybrush>a funny story regarding module-names is mu2 which is a newer version of mu, now you wonder why there are 2 mu-modules from the same author :D
02:35:11  <guybrush>mu is like a npm-zombie
02:35:29  <SubStack>ยต
02:41:05  <jesusabdullah>maxogden: Tell mikeal I said we don't have localhost redises, just cloud ones
03:07:54  <dominictarr>hey SubStack, just discovered that I had a bug in JSONStream
03:08:01  <SubStack>!
03:08:16  <dominictarr>compared to the docs
03:08:42  <dominictarr>I wrote docs that say "if there is no match, emit the root"
03:08:53  <dominictarr>just got a PR that fixes it,
03:09:04  <SubStack>didn't I send a pull request for that?
03:09:17  <SubStack>emitting the root is a really surprising thing to do
03:09:31  <dominictarr>https://github.com/dominictarr/JSONStream/pull/15
03:09:41  <dominictarr>that is what I'm kinda thinking now
03:10:14  <SubStack>that causes bugs and special handling to work around
03:10:44  <SubStack>much better to update the docs I think
03:11:46  <dominictarr>oh, yeah. we made that patch, but never updated the docs.
03:13:18  <dominictarr>it probably makes more sense to emit an error or something if there are no matches.
03:15:20  <SubStack>I don't think so
03:15:22  <SubStack>just don't do anything
03:15:25  <SubStack>exactly like it is now
03:15:32  <SubStack>sometimes there are just no matches
03:16:23  <SubStack>if a dataset is empty that doesn't mean that it is an error, just that there is no data
03:17:06  <SubStack>if people want to throw an error on empty data sets they can listen on 'end' and check a counter
03:19:26  <SubStack>http://www.faqs.org/docs/artu/ch01s06.html#id2878450
03:38:53  * jesusabdullah changed nick to jhizzle
03:47:33  <isaacs>hey, streamy folks.
03:47:38  <isaacs>i got an idea, wanna hear it?
03:47:48  <dominictarr>yes please
03:48:05  <isaacs>you do something like foo = new ReadableWhateverStream(), right?
03:48:22  <isaacs>then foo.read() --> either a buffer, or `null` if there's nothing available
03:48:45  <isaacs>then, there's a foo.on('readable') that tells you there's something to be read.
03:48:57  <isaacs>instead of foo.on('data') with the buffer
03:49:12  <isaacs>this solves the "must listen for data on the first tick" problem
03:49:38  <isaacs>if you don't foo.read() it, it doesn't get lost, it just sits there, and the stream is just paused
03:50:03  <SubStack>seems sort of terrible
03:50:06  <SubStack>like polling
03:50:13  <isaacs>SubStack: right, but there's no need to poll
03:50:18  <isaacs>you get "woken up" by the event
03:50:26  <SubStack>maybe call that "flush"
03:50:26  <isaacs>it's basically like kqueue or epoll
03:50:30  <SubStack>or something that's not "read"
03:50:46  <dominictarr>isaacs, I think there is a simpler solution
03:50:48  <isaacs>but! it makes backpressure *really* easy
03:50:52  <isaacs>dominictarr: which is?
03:50:55  <dominictarr>if you want some thing buffered,
03:51:13  <dominictarr>just pipe it into something like this: github.com/dominictarr/pause-stream
03:51:24  <SubStack>agree
03:51:30  <SubStack>core probably shouldn't be buffering
03:51:33  <isaacs>well, the other thing about it, that makes this solution simple, is that file descriptors and sockets already behave this way
03:52:10  <isaacs>have some thing internally effectively doing foo.on('readable', function () { foo.emit('data', foo.read()) })
03:52:18  <isaacs>(but like, in C, not in JS)
03:52:48  <isaacs>and the foo.read() clears out some internal buffer, by actually calling read(2)
03:53:11  <Raynos>i like the pause-stream thats what I used in some code of mine
03:53:41  <isaacs>dominictarr: "test": "echo \"Error: no test specified\" && exit 1"
03:53:47  <isaacs>dominictarr: i totally trolled you
03:53:49  <isaacs>;P
03:55:11  <isaacs>if we did something like that, it'd be a "lower level" interface, and we'd have to keep the current interface working forever, of course.
03:55:23  <dominictarr>isaacs, fixed in 0.0.3
03:55:54  <dominictarr>I thought stream was being refactored?
03:56:14  <dominictarr>I have some ideas for a few small changes.
03:56:19  <isaacs>but like, createServer(function (req, res) { checkRedis(req.headers.cookie, function (blerg) { if (blerg == blergeyBlerg) { req.pipe(itsATrap) } }) })
03:56:44  <isaacs>dominictarr: wanna get the discussion going? send to node-dev
03:56:54  <dominictarr>will do
03:57:03  <isaacs>if it gets too noisy, we can take it offline. don't engage with replies that are obvious nitwitery
03:57:13  <dominictarr>sure.
03:57:20  <isaacs>there will be plenty of STREAMS ARE FINE SHUTUP, and JUST USE STREAMLINE
03:57:46  <Raynos>"just use streamline" ?
03:57:53  <isaacs>:D
03:57:57  <isaacs>Raynos: have fun with that.
03:57:58  <Raynos>who would say that
03:58:01  <isaacs>hahah
03:58:03  <dominictarr>oh, I have only really small very anal changes to propose.
03:58:10  <isaacs>yeah
03:58:17  <Raynos>oh the node-dev mailing list
03:58:22  <isaacs>so, i have a bunch of beefs wiht the current streams.
03:58:28  <Raynos>I was like "whos in #node-dev" and turns out it is no-one
03:58:57  <isaacs>my biggest beef is that error handling is very hard, and you MUST pipe on the first tick.
03:59:06  <Raynos>error handling is a pain
03:59:07  <dominictarr>isaacs means the mailing list
03:59:11  <Raynos>I wanted to have a process global stream
03:59:23  <Raynos>and have each req/res pipe a new chunk of data into the stream
03:59:25  <isaacs>also, it's not clear from the Stream class which bits are for the readable interface and which bits are for writable.
03:59:34  <Raynos>but there is no way to return an error from a stream to a single req/res pair
03:59:57  <Raynos>so I need a new stream for each req/res pair to be able to do error handling
04:00:07  <isaacs>dominictarr: actually, lemme start the thread.
04:00:31  <Raynos>The node.js room is quiet. repost o/
04:00:32  <Raynos>Need to write an article about node. Have a few ideas ( https://gist.github.com/ceb49738b2a26018829b ) anyone want to tell me which sounds the most interesting?
04:00:33  <isaacs>i'll write something tonight. your complaints will be a good addition, but i want to set it going in a productive direction, hopefully
04:00:37  <dominictarr>isaacs, have you seen https://github.com/dominictarr/stream-spec/blob/master/stream_spec.md
04:00:46  <isaacs>dominictarr: looking
04:00:47  <dominictarr>https://github.com/dominictarr/stream-spec/blob/master/states.markdown
04:02:17  <SubStack>check this https://github.com/substack/mountie
04:02:17  <dominictarr>on some streams ("through" aka "filter" streams) automatic pausing makes sense
04:02:29  <SubStack>module that lets you compose web servers
04:03:22  <dominictarr>but duplex streams are different.
04:04:27  <dominictarr>adding pause semantics to the stream adds a burden to the stream author
04:05:26  <dominictarr>but piping to a middleware stream that handles that when desired is much more flexible
04:05:29  * saijanai_ joined
04:05:53  <Raynos>"middleware stream"
04:06:29  <Raynos>can we replace connect's middleware concept with streams yet
04:07:27  <dominictarr>Raynos, maybe next week.
04:07:57  <Raynos>the only thing that is missing
04:08:01  <dominictarr>github.com/dominictarr/mw-pipes
04:08:03  <Raynos>is accessing meta data of the req object
04:08:07  <Raynos>like req.url
04:08:40  <dominictarr>same as connect but allows you to pass a new stream to next and it becomes the new req or res
04:08:43  <SubStack>there's no need to replace connect middleware
04:08:50  <dominictarr>and the metadata is copied to it.
04:09:02  <SubStack>just build big apps by composing lots of tiny apps together in separate processes
04:09:07  <Raynos>dominictarr: I dont like the req, res, next thing
04:09:09  <SubStack>instead of throwing all the functionality into a single process
04:09:23  <Raynos>SubStack: I agree that you just dont need middleware
04:09:25  <SubStack>connect's issues become a non-problem
04:09:51  <SubStack>use middleware if it makes sense for your tiny pieces but mostly just split things out into lots of federated subcomponents
04:09:57  <SubStack>also you can scale like it's nothing
04:10:04  <SubStack>spin up as many of each type of thing as you want
04:10:17  <SubStack>run the pieces wherever you like
04:10:48  <SubStack>with this approach you can actually achieve that idea of the cloud where you have the slider that goes to "webscale"
04:11:07  <dominictarr>ohohoh
04:11:13  <dominictarr>need a speedometer
04:11:17  <SubStack>haha yes
04:11:25  <SubStack>one of the pieces that remain is the service registry replication
04:11:28  <dominictarr>like you get on ride-on lawnmowers
04:11:47  <dominictarr>but instead of saying [tortoise] ... [rabbit]
04:11:55  <SubStack>I've got pier working but I need to update the seaport clients to be able to accept multiple fallback hosts
04:12:00  <SubStack>ideally registered in seaport itself
04:12:05  <dominictarr>it says "helloworld" ... "WEBSCALE"
04:12:24  <SubStack>so then seaport pier peers can register themselves and the fallbacks will be noted automatically
04:12:29  <SubStack>robust as fuck
04:12:42  <SubStack>especially when paired up with a system like zygote
04:12:49  <SubStack>although that particular implementation requires refinement
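The compose-lots-of-tiny-servers idea boils down to a routing table mapping mount prefixes to backend hosts, roughly what mountie derives from seaport; the entries and ports below are made up for illustration:

```javascript
// First matching prefix wins, so more specific prefixes come first.
var routes = [
  { prefix: '/blog', backend: 'localhost:7001' },
  { prefix: '/',     backend: 'localhost:7000' }
];

// pick which backend process a request URL should be proxied to
function pickBackend(url) {
  for (var i = 0; i < routes.length; i++) {
    if (url.indexOf(routes[i].prefix) === 0) return routes[i].backend;
  }
  return null;
}

console.log(pickBackend('/blog/post/1')); // 'localhost:7001'
console.log(pickBackend('/about'));       // 'localhost:7000'
```

In the seaport model, services register their own entries in this table as they spin up, which is why startup order and process count stop mattering.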
04:12:54  <dominictarr>connect is good at stuff like the cookie parser, etc
04:13:03  <SubStack>yeah connect is fine
04:13:07  <SubStack>just don't build big apps with it
04:13:12  <SubStack>but you shouldn't build big apps PERIOD
04:13:12  <dominictarr>little stuff that needs to happen first, but is reusable.
04:13:16  <SubStack>so it's not a problem
04:13:27  <dominictarr>I think the problem is that it's like (req, res)
04:13:42  <SubStack>if you need streamier pieces just write separate processes to handle those parts and compose them into your application with something like mountie
04:13:43  <dominictarr>it should be one duplex stream, like with tcp.
04:13:59  <dominictarr>and websockets should be the same, as they are just http.
04:14:18  <dominictarr>but like stream.websocket = true or something.
04:14:44  <dominictarr>there is a need in some cases, for stuff like connect on streams.
04:15:20  <Raynos>dominictarr: use libraries for cookie parsing like routil-cookie
04:15:51  <Raynos>if you need a cookie. then get the cookie in your route
04:16:07  <Raynos>dont decide that "all your routes need a cookie so always centrally get the cookie"
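Per-route cookie handling as Raynos suggests might look like this; the parser is illustrative, and routil-cookie's real API may differ:

```javascript
// Parse the Cookie header only in the route that needs it, instead of a
// global middleware pass over every request.
function parseCookies(header) {
  var out = {};
  (header || '').split(';').forEach(function (pair) {
    var idx = pair.indexOf('=');
    if (idx > 0) out[pair.slice(0, idx).trim()] = pair.slice(idx + 1).trim();
  });
  return out;
}

var cookies = parseCookies('session=abc123; theme=dark');
console.log(cookies.session); // 'abc123'
console.log(cookies.theme);   // 'dark'
```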
04:16:12  <dominictarr>isaacs, I don't have complaints really. just a few small improvements that might simplify stuff.
04:17:03  <dominictarr>but so easy to just drop the cookie middle ware in.
04:17:09  <SubStack>isaacs: also mountie elaborates more on why I think cluster is a harmful approach
04:17:16  <SubStack>it makes load balancing too inflexible and system-specific
04:17:22  <SubStack>just spin up more processes
04:17:28  <SubStack>you should be doing that anyways and it's easy
04:17:42  <SubStack>and it makes version management and redundancy simpler
04:18:41  <dominictarr>SubStack, do you have some instrumentation you can apply to that, to measure performance?
04:19:05  <SubStack>that could be valuable information
04:19:26  <SubStack>I'm more thinking in terms of when you scale out based on cluster, you're limited to the resources of the system
04:19:49  <SubStack>whereas if you start scaling onto the network early it's a much more fluid transition
04:19:59  <dominictarr>the logical machine
04:20:32  <dominictarr>I've been thinking about abstracting away the machine.
04:20:59  <dominictarr>THE 'CLOUD' NEEDS TO BE MORE FLUFFY
04:24:50  <SubStack>agreed
04:27:16  * blakmatrix joined
04:29:48  * pikpik quit (Changing host)
04:29:48  * pikpik joined
04:29:48  * pikpik quit (Changing host)
04:29:48  * pikpik joined
04:51:45  <dominictarr>isaacs, what should I call the stream api you just proposed?
04:52:07  <dominictarr>I am writing a current-stream -> your-stream adapter, just now.
04:54:37  <isaacs>dominictarr: call it "dart-streams"
04:54:44  <isaacs>dominictarr: since it's a rip-off of their design ;)
04:55:14  <isaacs>SubStack: does mountie do http proxying between processes?
04:59:18  <Raynos>dominictarr: where is this stream API ?
05:01:55  <dominictarr>the one that isaacs suggest above
05:01:59  <dominictarr>more like in dart.
05:02:02  <SubStack>isaacs: not between, it just sets up an http proxy and then delegates to other servers over the network
05:02:51  * AvianFlu quit (Quit: Leaving)
05:05:12  <dominictarr>SubStack, did you make a repo of the QoS stuff you were experimenting with?
05:08:32  <SubStack>nah, didn't get that stuff working
05:08:46  <SubStack>building some other things at the moment
05:16:20  <isaacs>SubStack: oh, k
05:16:41  <isaacs>SubStack: so each route mount point is pointed at a different port?
05:17:54  <isaacs>it'd be nice if there was an easier way to separate an http server up into independent pieces that didn't rely on http proxying
05:18:15  <isaacs>i think what i really want is just a hot-swappable router
05:18:25  <SubStack>um that's pretty much what mountie does
05:18:35  <SubStack>by way of seaport and bouncy
05:18:48  <SubStack>you spin up services and they attach themselves into the routing tables
05:19:06  <isaacs>SubStack: yeah, but at the cost of http routing
05:19:06  <SubStack>but the servers that you spin up define how they fit into the routing system
05:19:11  <isaacs>that's unacceptable.
05:19:21  <isaacs>er, http proxying
05:19:39  <SubStack>http proxying lets you split up the requests across multiple systems
05:19:56  <isaacs>yeah, but it's unacceptably slow.
05:19:56  <SubStack>and it prevents shared state
05:20:02  <isaacs>nad complicated.
05:20:14  <SubStack>well then fix node's http parser :p
05:20:26  <isaacs>SubStack: it's the TCP layer that's unacceptable
05:20:44  <SubStack>why?
05:20:47  <SubStack>which part is too slow
05:21:06  <isaacs>SubStack: you're doubling the number of tcp connections
05:21:08  <SubStack>and for what purposes?
05:21:20  <isaacs>i guess it's not so terrible if you have keepalives
05:21:56  <isaacs>i can't put my stamp of approval on any server that is not as fast as inhumanly possible.
05:22:10  <SubStack>that's just silly
05:22:29  <isaacs>faster = cheaper
05:22:32  <SubStack>there are lots of good reasons to trade latency or throughput for horizontal scalability
05:22:37  <isaacs>sure.
05:23:03  <isaacs>but there are also ways to get horizontal scalability that don't reduce latency or throughput
05:23:09  <isaacs>or make websockets trickier.
05:23:22  <SubStack>we're out of ipv4 addresses
05:23:29  <Raynos>i like the idea of having a single application with a single public facing http server
05:23:34  <SubStack>maybe when ipv6 works
05:23:35  <Raynos>that proxies requests to other internal http servers
05:23:50  <Raynos>that way you can build your application out of re-usable components that expose their interface as a http server
05:23:52  <SubStack>likewise!
05:23:54  <isaacs>SubStack: i mean, you can have many servers running on one machine with one ipv4 just fine
05:24:04  <isaacs>SubStack: sharing a single server handle
05:24:18  <isaacs>and then only resort to proxying when you exceed the capacity of a single machine
05:24:50  <isaacs>so, each server is complete, and can stand in for any other
05:24:52  <SubStack>sharing state like that introduces a tier of complexity that can just be skipped most of the time
05:24:59  <isaacs>how is it sharing state?
05:25:05  <SubStack>the server handle
05:25:06  <isaacs>the servers are separate processes
05:25:17  <isaacs>that server handle is hardly "state". it's just a fd
05:25:33  <SubStack>I really dislike everything to do with sharing file descriptors that way
05:25:34  <isaacs>with 8 processes all calling accept() on it
05:25:59  <dominictarr>isaacs, https://github.com/dominictarr/dart-stream/blob/master/index.js
05:26:34  <dominictarr>turning a readable writable stream into a dart stream like you described.
05:26:41  <isaacs>SubStack: why? it's significantly less "state sharing" than having 8 different processes that all talk to seaport
05:27:04  <SubStack>the service registry approach is also way more flexible
05:27:18  <SubStack>and you can very easily expand out into multiple servers
05:27:39  <SubStack>and you get redundancy by just spinning up extras
05:27:45  <isaacs>when talking about completely different services, sure. or, if you need to expand past the bounds of a single machine.
05:27:59  <SubStack>and you can just spin up more servers that attach themselves to the whole system
05:28:13  <SubStack>you don't need to worry about the bounds of single machines
05:28:15  <isaacs>SubStack: and you can also have each website be a cluster sharing a fd
05:28:19  <SubStack>just spin up more machines
05:28:31  <dominictarr>one particular service could still run as a single machine cluster
05:28:32  <SubStack>isaacs: sure but that optimization seems premature
05:28:34  <isaacs>SubStack: if each machine has more than one cpu, why not use them all?
05:28:38  <dominictarr>what is wrong with that?
05:29:10  <SubStack>I don't think we should be steering people towards the shared fd model at first, that should be a backfill approach
05:29:22  <SubStack>I care about scaling system complexity first
05:29:30  <isaacs>well, since most web sites are just one website, it makes a lot of sense.
05:29:42  <SubStack>performance is a thing you can get some minions to hack on once the overall architecture is in place
05:29:44  <isaacs>people make their choice based on the best performance/resource ratio.
05:29:52  <SubStack>but that is a bad design
05:29:57  <Raynos>I have to say the seaport thing is surprisingly complex
05:29:58  <SubStack>one big nasty gigantic webapp
05:29:59  <isaacs>SubStack: performance is a property of architectures, though, not of code!
05:30:01  <SubStack>let's not do that
05:30:04  <isaacs>you can't have minions make things performant.
05:30:24  <isaacs>that's like expecting to have minions hack on security once the overall architecture is in place.
05:30:31  <SubStack>Raynos: which part of it?
05:30:49  <SubStack>yes seaport needs to solve a tricky problem to work well for my purposes
05:30:54  <Raynos>I dont know what the web. prefix is for
05:30:56  <isaacs>the overall architecture has a huge effect on performance.
05:31:07  <SubStack>Raynos: it's so you can have other types of services on your network
05:31:18  <SubStack>registered in your seaport registry
05:31:22  <Raynos>so web.localhost is actually web.<domain>
05:31:41  <SubStack>sure, where <domain> is just the req.header.host
05:31:43  <isaacs>SubStack: of course, we're somewhat arguing past one another - you are building a much more complicated multi-headed thing than most websites.
05:31:45  <Raynos>so adding web.beepboop.com and redirecting beepboop.com in hosts to localhost would work ?
05:32:02  <SubStack>Raynos: yep!
05:32:13  <Raynos>i like this personally
05:32:19  <SubStack>I have something like that set up for beep.boop on my localhost
05:32:59  <SubStack>using dnsmasq though so I can also experiment with subdomains
05:33:11  <SubStack>dnsmasq is the best thing ever for experimenting with subdomains locally
05:33:29  <SubStack>because you can set up wildcard records
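The dnsmasq wildcard setup SubStack mentions can be a single config line (assuming a made-up local `boop` TLD):

```
# dnsmasq.conf: resolve boop and every *.boop subdomain to localhost
address=/boop/127.0.0.1
```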
05:33:58  <SubStack>isaacs: I'm also arguing that most websites are doing it wrong by building big apps that are mostly just a single process
05:34:23  <SubStack>and so I offer libraries to help realize that vision
05:34:32  <Raynos>SubStack: https://gist.github.com/3115169
05:34:49  <Raynos>when do you want separate processes and when should you just forward a URI to another httpServer component in the same process?
05:35:28  <SubStack>that part is up to you
05:36:00  <SubStack>but it should be easy so that people can make good choices about how to best split up their apps
05:36:03  <Raynos>SubStack: I mean, I don't know what the value of the forward-in-one-process.js is. I just came up with that idea in the last couple of days
05:37:03  <SubStack>you could totally have multiple services registered in a single process like that
05:37:20  <dominictarr>isaacs, is this anything like what you were thinking? https://github.com/dominictarr/dart-stream/blob/master/index.js
05:37:21  <SubStack>I'm unsure about when that would be a good idea though
05:37:26  <SubStack>this is all pretty new territory
05:38:31  <SubStack>isaacs: anyways it's fine and rather expected if node itself just builds the things that people clamor for like scaling servers by sharing fds across multiple copies of processes
05:38:51  <SubStack>that's even a useful approach to improving performance in the design that I'm advocating
05:39:28  <SubStack>I'm just advocating that we split up the apps before they get big into lots of processes
05:39:54  <SubStack>because big apps get completely out of control and unmaintainable very quickly
05:40:07  <SubStack>so we should start the process of splitting them up really early
05:40:35  <SubStack>and err on the side of making overly small components for the same reason why it's a good idea to do that with modules
05:41:25  <Raynos>i agree
05:41:54  <SubStack>smallify all the things
05:42:00  <Raynos>how do you spawn all these processes?
05:42:25  <SubStack>you can just spin them up however
05:43:10  <SubStack>there are some implicit design decisions in seaport that should mean the startup order doesn't matter
05:43:25  <Raynos>well yes
05:43:31  <Raynos>I mean, I currently use nodemon
05:43:39  <Raynos>to restart my single process app when I make a change to a file
05:43:43  <SubStack>I haven't messed with that one
05:43:55  <Raynos>are there tools like that for restarting this multi-process app when a file is changed
05:44:02  <Raynos>nodemon is very similar to forever / supervisor
05:44:23  <SubStack>I don't usually do stuff when files change
05:44:37  <Raynos>and as an aside, what do you use to restart these processes when they crash?
05:44:49  <SubStack>where?
05:44:55  <Raynos>in production
05:44:59  <Raynos>because of an unknown error
05:45:02  <SubStack>trying to get everything onto fleet
05:49:35  <dominictarr>I gotta go eat. catch you dudes later
05:49:56  * dominictarr quit (Quit: Leaving)
05:52:54  <devaholic>mountie looks similar to what ive used haproxy for, with seaport
05:54:59  <devaholic>having it in node would be handy, but if it's the main entry point to x number of services, it might create a bottleneck? especially if a lot of the services are simple
05:58:42  <SubStack>having the same process doing the routing and the seaport hosting?
05:59:03  <SubStack>yep that could turn into a bottleneck, but check out http://github.com/substack/node-pier for getting around that
05:59:27  <devaholic>no it's like, having a pipe of size x routing to multiple other pipes which are all also size x
05:59:30  <SubStack>then you could just replicate with a dedicated seaport server in your mountie handler
05:59:32  <devaholic>i.e. http servers in node
06:00:05  <devaholic>well yeah, then am i running 9 mounties to handle routing for 9 services?
06:00:19  <SubStack>no you just have 1 mountie proc and 1 seaport proc
06:00:31  <SubStack>and mountie replicates from seaport to get the routing data
06:03:02  <devaholic>it works for scaling out in terms of separation of concerns, but im not sure it works to scale up throughput is all
06:03:34  <SubStack>correct
06:04:17  <devaholic>i hacked up a little thing for sort of doing both a few months ago, that reloads haproxy with seaproxy config anytime there is a change
06:07:11  <Raynos>SubStack: https://github.com/Raynos/after.js/commit/0948a4936b980ce22684638c543744f3ba9c6309
06:07:17  * mikeal joined
06:07:24  <Raynos>Your conversation on small modules made me split up after into 3 different modules o/
06:08:24  <rowbit>SubStack, pkrumins: Encoders down:
06:11:30  <SubStack>awesome! \o
06:12:31  <Raynos>now I need to split up routil >_<
06:12:45  <Raynos>and then ncore too ;_;
06:12:49  <Raynos>so much code to split up
06:14:04  <SubStack>no rush
06:25:00  <Raynos>isaacs: https://github.com/isaacs/error-page/pull/2
06:33:00  * dominictarr joined
06:36:17  <SubStack>!
06:36:22  <SubStack>that was a lot of them
06:42:20  * ryan_stevens quit (Quit: Leaving.)
06:54:02  * dominictarr quit (Ping timeout: 244 seconds)
06:56:21  * mikeal quit (Quit: Leaving.)
06:57:12  * mikeal joined
06:57:58  * mikeal quit (Client Quit)
07:06:36  * mikeal joined
07:15:53  * dominictarr joined
07:22:33  * dominictarr quit (Ping timeout: 246 seconds)
07:54:06  <Raynos>isaacs: https://github.com/isaacs/npm-www/pull/51
08:03:18  <Raynos>isaacs: every time I learn a bit more about the npm CLI I feel cleverer
08:18:30  * dominictarr joined
08:22:56  * dominictarr quit (Ping timeout: 250 seconds)
08:27:54  * dominictarr joined
08:41:16  <devaholic>Raynos o/
08:41:41  <Raynos>devaholic: \o
08:42:07  <devaholic>hows it goin??
08:42:34  <Raynos>pretty good
08:42:42  <Raynos>writing some node code \o/
08:42:47  <devaholic>you are droppin down a lot of repos
08:42:52  <devaholic>hehe
08:42:52  <Raynos>yeah
08:43:10  <Raynos>so much code to write ._.
08:43:14  <Raynos>test-server is bad ass :D
08:43:21  <devaholic>not enough time
08:43:25  <devaholic>whats test-server about?
08:44:52  <Raynos>devaholic: https://github.com/Raynos/routil-static/blob/master/test/test.js#L11
08:45:07  <Raynos>devaholic: https://github.com/Raynos/test-server#example
08:45:17  <Raynos>A quick and dirty way to generate an HTTP server to run your integration tests
08:45:25  <Raynos>I should update the example to include explanation
08:47:03  <devaholic>whats different from just using request + tap or something
08:48:27  <Raynos>its not
08:48:33  <Raynos>it's agnostic to test library
08:48:46  <Raynos>it just removes the boilerplate of "http://localhost:port" from your test code
08:48:57  <Raynos>and it handles creation of server and destruction of server cleanly for you
08:50:58  <Raynos>devaholic: https://github.com/Raynos/test-server#example should be more obvious now
08:57:02  <Raynos>devaholic: how did you get 35 followers in a day
09:03:12  * dominictarr quit (Quit: Leaving)
09:22:20  <devaholic>Raynos: HN
09:22:30  <Raynos>yeah I saw :)
11:52:29  * dominictarr joined
12:12:04  * devaholic quit (Ping timeout: 252 seconds)
12:22:42  * dominictarr quit (Ping timeout: 252 seconds)
12:34:18  * dominictarr joined
13:07:34  * LOUDBOT joined
13:20:16  * dominictarr quit (Ping timeout: 252 seconds)
14:00:22  * AvianFlu joined
14:04:02  <guybrush>SubStack: what do you think about that approach: mount all client-side used modules like with wreq and expose the mount-prefix (e.g. /<prefix>/<module>) in some way to those modules
14:04:27  <guybrush>so they can use static assets
14:05:28  <guybrush>like... $('#myImg').attr('src',prefix+'/some/module/asset.jpeg')
14:07:24  <guybrush>or maybe a browserify-plugin which adds some mountpoint? require('foo').mountPoint
14:09:47  <guybrush>or should images which are used for styling-purposes just be put into css and go the yarnify way
14:33:14  * dominictarr joined
14:50:47  <chapel>SubStack: you might enjoy this http://www.slideshare.net/stonse/netflix-cloud-platform-building-blocks
14:51:13  <chapel>SubStack: it looks like they touch on the whole many parts of the whole architecture,
15:01:31  * dominictarr quit (Ping timeout: 255 seconds)
15:08:56  * devaholic joined
16:07:36  * devaholic quit (Ping timeout: 248 seconds)
16:22:39  * dominictarr joined
16:24:49  <SubStack>guybrush: not sure, requires experimentation!
16:24:58  <SubStack>chapel: awesomeness
17:42:55  * AvianFlu quit (Quit: Leaving)
18:12:56  * dominictarr quit (Ping timeout: 246 seconds)
18:18:18  * ryan_stevens joined
18:22:56  <rowbit>SubStack, pkrumins: Developers waiting in the queue for ie8 (Queue length: 1 on 1 servers. Total servers: 3)
18:28:24  <rowbit>SubStack, pkrumins: Developers waiting in the queue for ie8 (Queue length: 1 on 1 servers. Total servers: 3)
18:47:52  * jhizzle changed nick to jesusabdullah
19:14:16  * dominictarr joined
19:58:50  <rowbit>Daily usage stats: []
20:08:06  * jesusabdullah changed nick to mr302
20:10:33  * mr302 changed nick to jesusabdullah
20:22:22  * ryan_stevens quit (Quit: Leaving.)
20:42:33  * ryan_stevens joined
21:00:29  * dominictarr quit (Ping timeout: 240 seconds)
22:21:08  * _sorensen joined
22:42:59  * _sorensen quit (Quit: Bye!)
22:50:54  * isaacs_mobile joined
23:19:38  * isaacs_mobile quit (Remote host closed the connection)
23:22:10  * ryan_stevens quit (Quit: Leaving.)
23:37:20  * ryan_stevens joined
23:48:51  * isaacs_mobile joined
23:50:10  * ryan_stevens quit (Quit: Leaving.)