00:00:01  * ircretary quit (Remote host closed the connection)
00:00:08  * ircretary joined
00:05:12  * contrahax joined
00:05:12  * peutetre quit (Quit: peutetre)
00:07:41  * peutetre joined
00:09:04  * peutetre quit (Client Quit)
00:12:13  * ceejbot quit (Remote host closed the connection)
00:15:36  * xero0_0 quit (Ping timeout: 252 seconds)
00:16:14  * AvianFlu quit (Remote host closed the connection)
00:18:25  * mikolalysenko quit (Ping timeout: 253 seconds)
00:19:39  * mikolalysenko joined
00:23:29  * sorensen_ joined
00:31:42  * pfraze joined
00:47:10  * sorensen_ quit (Quit: sorensen_)
00:52:26  <rowbit>Hourly usage stats: [free: 10]
00:56:54  * xero314 joined
01:05:25  * jlord quit (Ping timeout: 245 seconds)
01:06:03  * ogd quit (Ping timeout: 276 seconds)
01:06:06  * ceejbot joined
01:06:22  * jlord joined
01:06:28  * ogd joined
01:17:58  * ceejbot quit (Remote host closed the connection)
01:18:36  * ceejbot joined
01:19:51  * feross_ joined
01:20:06  <feross_>substack: i can't figure out why this ci.testling test is failing: https://ci.testling.com/feross/readable-stream
01:20:17  <feross_>substack: it has correct tap output and all tests pass
01:20:41  <feross_>substack: look at chrome
01:20:52  <feross_>chrome 29, specifically
01:23:03  <feross_>rvagg: https://ci.testling.com/feross/readable-stream
01:23:09  <feross_>look at chrome 29
01:23:29  * xero314 quit (Ping timeout: 240 seconds)
01:23:33  <feross_>browser tests pass, though testling doesn't register that atm
01:27:40  <feross_>substack: oh, it looks like the "ok" messages that node core tests are printing out are confusing tap-finished :/
01:28:00  <feross_>oh man, getting these readable-stream tests to pass in the browser has been quite challenging
01:43:01  * eugeneware joined
01:45:35  * ceejbot quit (Remote host closed the connection)
01:46:38  * thealphanerd quit (Quit: thealphanerd)
01:48:58  * funkytek quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
01:50:43  * ceejbot joined
01:52:26  <rowbit>Hourly usage stats: [free: 9]
02:01:33  * phated quit (Remote host closed the connection)
02:10:52  * thlorenz joined
02:20:14  <feross_>substack: https://github.com/substack/watchify/pull/20
02:27:34  * feross_ quit (Quit: feross_)
02:29:41  * feross_ joined
02:30:00  * eugeneware quit (Remote host closed the connection)
02:30:28  * eugeneware joined
02:34:59  * eugeneware quit (Ping timeout: 260 seconds)
02:40:56  * ednapiranha joined
02:52:26  <rowbit>Hourly usage stats: [free: 8]
02:52:58  <rowbit>substack, pkrumins: These encoders are STILL down:
02:54:58  * sorensen_ joined
03:02:02  * Kessler joined
03:05:38  * eugeneware joined
03:11:56  * kenperkins_ quit (Remote host closed the connection)
03:12:48  * kenperkins joined
03:13:40  * kenperkins_ joined
03:14:09  * thealphanerd joined
03:15:00  * ogd quit (Ping timeout: 245 seconds)
03:15:41  * eugeneware quit (Ping timeout: 272 seconds)
03:15:46  * ogd joined
03:17:06  * kenperkins quit (Ping timeout: 252 seconds)
03:27:19  * Maciek416 joined
03:30:28  <rowbit>substack, pkrumins: Encoders down: (dev-ie6-2)
03:33:32  * phated joined
03:47:04  <feross_>rvagg: ping
03:52:26  <rowbit>Hourly usage stats: [free: 13]
03:56:06  <defunctzombie>isaacs: I don't think I had "fs"
03:56:14  <defunctzombie>isaacs: I have some of the other builtins tho
03:59:03  * contrahax quit (Quit: Sleeping)
04:02:13  * sorensen_ quit (Quit: sorensen_)
04:05:18  * defunctzombie changed nick to defunctzombie_zz
04:06:35  <grncdr>mikolalysenko: you may want to watch this issue https://github.com/npm/npm/issues/4587
04:08:46  <rvagg>feross_: pong
04:09:02  <rvagg>feross_: sorry, not really here, I looked at your testling badge and it looked very red
04:09:16  <feross_>yeah, it's very very red
04:09:47  <mikolalysenko>grncdr: yeah, I opened an issue earlier on npm init
04:09:51  <mikolalysenko>but not much attention
04:10:02  <feross_>rvagg: i've mostly found browser equivalents for the things in the tests, got it outputting tap, etc
04:10:19  <feross_>rvagg: it passes locally
04:10:34  <rvagg>noice
04:10:34  <mikolalysenko>agree though that "^" by default makes the most sense
04:10:38  <feross_>rvagg: but fails sporadically on remote machines
04:10:54  <mikolalysenko>or if not "^" then lock to specific version. but "~" is just silly
04:10:59  <feross_>rvagg: i'm using a very dirty hack to shim process.on('exit', fn)
04:11:03  <rvagg>feross_: I'm thinking that if it's doable we should move the forEach and indexOf into the browserify transform, can that be done for the tests too?
04:11:06  <feross_>rvagg: basically just turning that into a setTimeout
04:11:18  <rvagg>gotta run for now tho
04:11:19  <grncdr>mikolalysenko: say it on the issue. Right now it's just +1's without a lot of substance
04:11:27  <feross_>rvagg: okay im putting this on hold for now
04:11:28  <substack>feross_: how long do the tests take to run?
04:11:40  <feross_>substack: too long, testling keeps killing them
04:11:51  <feross_>substack: locally, i can do setTimeout 1000 and it works fine
04:12:06  <feross_>but i need more like 10000 per test to get it passing on testling
04:12:11  <feross_>but then the tests get killed
04:12:20  <grncdr>does `npm run` fail to kill child processes for anybody else?
04:12:23  <feross_>well, some of the tests can get by with less time
04:13:14  <feross_>substack: i resorted to putting tests into "fast", "medium", "slow" categories
04:13:15  <feross_>substack: https://github.com/feross/readable-stream/blob/v0.10/build/browser-test-replacements.js#L92
04:13:26  <feross_>but i'm disgusted with myself for even trying this approach
04:13:28  <mikolalysenko>grncdr: check it out now
04:13:30  <feross_>hehe
04:14:20  <feross_>a proper process.on('exit', fn) shim would really help here, but it's hard to figure out when all timers are done, etc. in the browser
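The "very dirty hack" feross describes can be sketched like this (a hypothetical minimal shim for illustration, not readable-stream's actual code; the `createProcessShim` name and delay parameter are made up):

```javascript
// Hypothetical sketch of the shim discussed above: browsers have no real
// process "exit" event, so "exit" handlers are just deferred with a timer,
// *hoping* all other timers have finished by then -- exactly the fragility
// being complained about.
function createProcessShim(delay) {
  var exitHandlers = [];
  return {
    handlers: exitHandlers, // exposed so callers can inspect what was registered
    on: function (event, fn) {
      if (event === 'exit') {
        exitHandlers.push(fn);
        setTimeout(fn, delay); // crude stand-in for "the process is done"
      }
    }
  };
}
```

Per the discussion above, a ~1000ms delay passes locally but slow remote browsers need ~10000ms per test, which pushes total runtime past testling's kill timeout.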
04:14:29  <grncdr>mikolalysenko cool
04:24:00  <mikolalysenko>grncdr: also sent two pull requests to fix the issue should the decision come down that it is worth doing
04:36:11  * pfraze quit (Ping timeout: 245 seconds)
04:40:51  * ec_ joined
04:42:19  <chapel>grncdr mikolalysenko how does ~ cause you trouble?
04:42:24  <chapel>just curious
04:43:00  <grncdr>it doesn't really cause me trouble :)
04:44:14  <grncdr>but mikolalysenko makes a very good point that it doesn't fit with any meaningful semantics of semver
04:44:51  <mikolalysenko>chapel: yes
04:45:01  <mikolalysenko>chapel: err, not does but how
04:45:22  <mikolalysenko>chapel: here is what happens: I upgrade ndarray's minor version with a new feature, everything that returns an ndarray now breaks
04:45:33  <mikolalysenko>or more precisely you get crazy cascading compatibility problems
04:46:16  <chapel>well doesn't ^ make the range even larger?
04:46:32  <chapel>^1.2.3 == >=1.2.3-0 < 2.0.0-0
04:46:38  <mikolalysenko>I need the range larger though
04:46:38  <mikolalysenko>that is the point
04:46:57  <guybrush>mikolalysenko: i really dont understand why one would use ^
04:47:22  <mikolalysenko>guybrush: it is the whole point of semver
04:47:43  <guybrush>when you put a ^ in your package.json its like saying i dont care about api-breaking-stuff
04:48:04  <chapel>thats what confuses me about wanting to use it as well guybrush
04:48:05  <grncdr>guybrush: you might want to review semver...
04:48:09  <guybrush>you can use ~ if you trust the author to use semver properly
04:48:12  <mikolalysenko>^ is for proper semver
04:48:14  <mikolalysenko>~ is stupid, it means nothing
04:48:29  <mikolalysenko>or at least nothing important to do with semver
04:48:30  * thlorenz quit (Remote host closed the connection)
04:48:30  <chapel>maybe you should explain what you mean by proper semver
04:48:34  <grncdr>semver.org
04:48:35  <grncdr>explains it very well
04:48:37  <guybrush>mikolalysenko: just take a look at basically all of substacks modules
04:48:37  <mikolalysenko>^ means compatible with respect to semver
04:48:38  <chapel>I've read it, but its been a while
04:48:44  <guybrush>~ is not stupid
04:48:46  <mikolalysenko>~ means 1.2.x
04:48:46  <chapel>it has nothing about ^ or ~
04:48:47  <grncdr>minor point releases are supposed to be backwards compatible
04:48:53  * thlorenz joined
04:48:56  <grncdr>e.g. 1.4.x is compatible with 1.3.x
04:49:20  <grncdr>so if you are doing semver "properly" ^ is the correct operator
04:49:31  <mikolalysenko>basically ^ matches all compatible versions with respect to semver
04:49:36  <grncdr>because you can safely upgrade from 1.3 to 1.4
04:49:40  <guybrush>consider module A, which i depend on with A@~0.1.2 - now the author breaks the api and publishes A@0.2.0
04:49:46  <guybrush>i cant see how ^ helps here?!
04:49:46  <chapel>but should the default be that any module you install can have a larger range, but the module authors might not honor semver
04:49:47  <mikolalysenko>guybrush: then the author isn't playing nice with semver
04:50:02  <mikolalysenko>guybrush: it is a bug if a minor release breaks the api
04:50:12  <grncdr>chapel that's the debatable point
04:50:15  <chapel>mikolalysenko: sure, but we are talking the default
04:50:15  <mikolalysenko>guybrush: the author should have incremented the major version
04:50:34  <chapel>^ is there to be used
04:50:38  <guybrush>mikolalysenko: oh i see :D i didnt understand semver
04:50:50  <mikolalysenko>guybrush: ok, glad we are on the same page
04:50:58  <chapel>there is no requirement to use semver properly with npm
04:51:05  <grncdr>chapel: the idea behind making ^ the default would be to encourage appropriate use of semver
04:51:12  <mikolalysenko>major version is for breaking changes, minor for upgrades that are backwards compatible, patch for bugfixes that have no semantic effect
04:51:25  <chapel>I don't see how it improves anything
04:51:29  <grncdr>mikolalysenko, guybrush: IIRC there is a caveat in semver that basically says < 1.0.0 means no rules
04:51:34  <chapel>it will just cause pain for those users that don't understand
04:51:36  <guybrush>yeah the important thing is MINOR == "backwards-compatible"
04:51:41  <mikolalysenko>grncdr: node kind of enforces this already
04:51:48  <mikolalysenko>for < 0.1.0 npm install --save fixes to a specific version
04:51:50  <grncdr>mikolalysenko yeah I noticed the check for 0.1.0 in there
04:51:55  <mikolalysenko>above 0.1.0, it uses ~
04:51:58  <guybrush>funny how i read semver.org multiple times and still was wrong in my head
04:52:13  <grncdr>guybrush: not as many times as you've looked at a package.json I'd wager
04:52:18  <chapel>I don't really care either way, but I don't see how using ~ vs ^ would actually break something for someone
04:52:22  <grncdr>(or Gemfile for that matter)
04:52:39  <mikolalysenko>chapel: here is how using ~ breaks stuff
04:52:40  <guybrush>grncdr: yeah haha, in some way its all about what _the author_ thinks how semver works :p
04:52:47  <mikolalysenko>I have a bunch of modules that use ndarray internally
04:52:47  <guybrush>not what semver.org says
04:52:50  <mikolalysenko>some of them return ndarrays
04:52:57  <mikolalysenko>I want to add a small upgrade to ndarray
04:53:00  <mikolalysenko>I should bump minor version
04:53:01  <grncdr>guybrush: that is both true and wrong ;)
04:53:01  <guybrush>so for every module you are using, you have to check how the author uses semver anyway...
04:53:03  <mikolalysenko>BUT all those modules use ~
04:53:04  * thlorenz quit (Ping timeout: 264 seconds)
04:53:08  <mikolalysenko>now the modules that return ndarrays will return older ndarrays
04:53:17  <mikolalysenko>and modules that want to use the new backwards compatible method won't have it
04:53:23  <chapel>so its an ecosystem issue
04:53:26  <chapel>not individual user issue
04:53:27  <grncdr>guybrush: the idea behind making it default is to set the expectation that authors *don't* do stupid shit
04:53:38  <grncdr>chapel: yes
04:53:41  <chapel>your example makes sense btw
04:53:42  <mikolalysenko>chapel: second problem, you want latest version with all fixes
04:53:47  <mikolalysenko>author adds backwards compatible feature bumps minor version
04:53:47  <guybrush>haha gld we talked about it again :)
04:53:51  <mikolalysenko>fixes critical bug
04:53:56  <rowbit>Hourly usage stats: [free: 15]
04:53:57  <mikolalysenko>now your module locked to ~ doesn't get critical bug fix
04:54:22  <mikolalysenko>there are more examples too, but I am also kind of skyping so I need to pause here for a bit
04:54:26  <chapel>well, that is somewhat pandering to stupid users as well
04:54:34  <chapel>well not stupid
04:55:05  <chapel>if I want to use the latest version of something, I install the latest version
04:55:34  <mikolalysenko>the problem is if you publish a module, your module uses some other module
04:55:41  <mikolalysenko>that module gets a minor bump then a bug fix
04:55:50  <mikolalysenko>that bug also affects users of your module
04:56:04  <mikolalysenko>then they have to suffer from the fact that you didn't upgrade properly
04:56:09  <chapel>thats your fault though
04:56:22  <mikolalysenko>it is the problem with using ~
04:56:23  <chapel>a module author shouldn't use npm install --save
04:56:38  <mikolalysenko>then why have npm install --save?
04:56:38  <chapel>you should have hard set semver
04:56:46  <chapel>it is a convenience thing
04:56:47  <guybrush>no
04:56:48  <chapel>more for end users
04:56:49  <guybrush>thats not true
04:56:50  <chapel>not module authors
04:57:10  <guybrush>when you have _a lot_ of tiny modules
04:57:13  <mikolalysenko>semver is most important when authoring modules
04:57:18  <mikolalysenko>you need automatic upgrading
04:57:19  <guybrush>updating all the dependencies is a pita
04:57:21  <guybrush>you have to use semver
04:57:24  <mikolalysenko>exactly
04:57:27  <chapel>sure, but you can do ^ explicitly
04:57:34  <chapel>the option is there
04:57:34  <mikolalysenko>yeah, and you always should
04:57:45  <mikolalysenko>which is why it should be a default
04:57:58  <mikolalysenko>the only reason not to use it is if you don't trust the module author
04:58:01  <chapel>I just don't think you should use --save if you publish modules, specially since it uses ~
04:58:17  <chapel>if it was switched, then sure
04:58:24  <guybrush>you only pin versions in the app (i.e. endpoint)
04:58:24  <mikolalysenko>yeah, for sure
04:59:38  <chapel>I guess I err on the side of not trusting an author
04:59:45  <chapel>even if they are trustworthy
04:59:58  <chapel>I want to know what they changed before I use their changes
05:01:04  <mikolalysenko>well, you can opt out of semver with fixed versions
05:01:06  <chapel>sure
05:01:06  <mikolalysenko>but if you are using ~ you are stuck with the same problem
05:01:14  <mikolalysenko>the real issue here is not what to do when you don't trust something
05:01:15  <guybrush>hm i think you cant opt-out
05:01:17  <guybrush>as soon as you use some module that uses semver
05:01:19  <mikolalysenko>there is already an established and simple solution, just lock the version
05:01:26  <chapel>well I don't see it as a problem for myself, I don't author modules really (nothing that people use) :P
05:01:26  <mikolalysenko>the problem is that ~ is a horrible halfway bastard child
05:01:33  <mikolalysenko>it doesn't do semver and it doesn't lock the version protecting you from the author's mistakes
05:01:33  <guybrush>haha
05:01:49  <mikolalysenko>there is literally no practical reason to ever use it
05:01:51  <mikolalysenko>and having it as a default is crazy
05:02:03  <chapel>yeah mikolalysenko, ~ can still let malicious or bad module authors cause trouble
05:02:10  <mikolalysenko>exactly
05:02:17  <guybrush>well same with ^
05:02:22  <chapel>its the same issue
05:02:31  <guybrush>^ doesnt prevent you from bad module authors either
05:02:32  <chapel>but ^ lets you benefit from proper semver
05:02:36  <rowbit>substack, pkrumins: Encoders down: (dev-ie6-1)
05:02:40  <mikolalysenko>if that is your issue, don't specify a range. lock the version and you are safe
05:02:41  <chapel>so in that respect it makes sense
05:02:43  * feross_ quit (Quit: feross_)
05:02:56  <guybrush>~ is less aggressive :p
05:03:02  * contrahax joined
05:03:11  <chapel>the crux here, is --save currently isn't good for module authors, and not really protective to users
05:03:23  <guybrush>--save should pin the version imho
05:03:27  <chapel>I agree
05:04:02  <guybrush>if you want to do crazy semver-stuff you need to open your editor!
05:04:52  <mikolalysenko>I guess I could live with that, but if that is really how you want to do it then why bother with semver at all?
05:06:17  * ec quit (Write error: Connection reset by peer)
05:08:31  * Kessler quit (Write error: Connection timed out)
05:08:46  * ceejbot quit (Write error: Broken pipe)
05:09:39  <mikolalysenko>it kinda discourages people from taking versioning seriously
05:09:41  <mikolalysenko>I lean toward making ^ the default
05:09:45  <chapel>when I am installing something as a user, I am usually installing the latest version
05:09:52  <chapel>and want that version going forward
05:09:57  <chapel>explicitly upgrading when something changes
05:10:14  <chapel>its more work, but when you are running servers, its important to know things work as you expect
05:10:24  <chapel>its bad enough someone can overwrite a version on npm
05:10:24  <guybrush>its all about trust in the authors, you have to look for changes anyway
05:10:31  <guybrush>github makes it easy to look up the commit-history
05:10:32  <chapel>yeah
05:10:47  <guybrush>thats why i like the history.md files in tj's repos
05:11:00  <guybrush>though, small repos dont need it as long as the commit-messages are not terrible
05:12:00  <chapel>mikolalysenko: I do think proper semver is great, and wouldn't be against ^ as the default
05:12:02  <guybrush>woah it must have been hell on earth before there was svn or even cvs
05:12:06  <chapel>guybrush: I do wonder what they did
05:12:12  <mikolalysenko>yeah, the main thing is that ~ just has to go
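For reference, the two operators argued about above can be sketched like this (a simplified illustration, not npm's actual `semver` module, and ignoring the pre-1.0.0 special cases mentioned in the discussion):

```javascript
// Simplified sketch of the range operators discussed above, for versions >= 1.0.0:
//   ~1.2.3 matches >=1.2.3 <1.3.0  (patch updates only)
//   ^1.2.3 matches >=1.2.3 <2.0.0  (any backwards-compatible update per semver)
function parse(v) {
  return v.split('.').map(Number); // [major, minor, patch]
}

function satisfies(version, range) {
  var op = range[0];            // '~' or '^'
  var base = parse(range.slice(1));
  var v = parse(version);
  // anything below the base version never matches
  if (v[0] < base[0]) return false;
  if (v[0] === base[0] && v[1] < base[1]) return false;
  if (v[0] === base[0] && v[1] === base[1] && v[2] < base[2]) return false;
  // '~' pins major.minor; '^' pins only the major version
  if (op === '~') return v[0] === base[0] && v[1] === base[1];
  if (op === '^') return v[0] === base[0];
  return false;
}
```

This is mikolalysenko's ndarray example in miniature: a backwards-compatible minor bump from 1.2.x to 1.3.0 is picked up by `^1.2.3` but silently excluded by `~1.2.3`.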
05:12:13  <chapel>and imagine what they would think if we showed them git/github today without everything that happened in between
05:12:19  <guybrush>well they had punched cards :D
05:12:20  * funkytek joined
05:12:27  <chapel>well the people writing the code, and the people running it were separate
05:13:23  * funkytek quit (Client Quit)
05:13:26  * dominictarr joined
05:13:56  <chapel>actually, I should rephrase
05:13:57  <mikolalysenko>programs were a lot smaller in the old days
05:14:07  <mikolalysenko>and didn't do very much
05:14:18  <mikolalysenko>people got really excited about things like a quick sort that actually worked
05:14:44  <chapel>there were the people who wrote the logic, then people translated it to punch cards, and then ran it, was very laborious and time consuming
05:14:49  <mikolalysenko>you can look back at the literature and find tons of papers on trivial stuff
05:14:53  <chapel>sure
05:15:04  <chapel>have you been to the computer history museum?
05:15:12  <mikolalysenko>yeah
05:15:24  <mikolalysenko>though one of the funniest old papers has to be ivan sutherland's thesis
05:15:41  <mikolalysenko>he was the guy who did sketchpad, which was in many ways way ahead of its time
05:15:54  <mikolalysenko>it was a cad system that did basic constraint solving sketching, etc.
05:16:00  <mikolalysenko>but you know what his thesis was about?
05:16:14  <mikolalysenko>linked lists.
05:16:25  <mikolalysenko>he reinvented the linked list, and that was his thesis topic
05:16:29  <chapel>when was that?
05:16:46  <mikolalysenko>1963 or something around that time
05:17:26  <mikolalysenko>here is a movie of sketchpad btw: http://www.youtube.com/watch?v=USyoT_Ha_bA
05:18:48  * funkytek joined
05:21:41  * sorensen_ joined
05:22:09  <mikolalysenko>if you look up his thesis, go to the section called "ring structure" or something like that
05:31:54  <chapel>fun watching the sketchpad video
05:32:09  <chapel>and how it was revolutionary to be able to interact with a computer that way
05:32:22  <chapel>hard to imagine now
05:32:33  <chapel>can only wonder what it will be like in 40+ years
05:34:26  * ceejbot joined
05:35:37  <guybrush>direct brain-computer interface via implanted chip :D
05:36:56  <guybrush>and augmented reality becomes a default, we dont even need our eyes, nose, ears anymore
05:37:52  <guybrush>though this might take a little longer than 40 years haha
05:38:11  <guybrush>and its hard to imagine what comes after that
05:39:34  <guybrush>and i wonder if html is still around when that is reality hahaha
05:39:59  <guybrush>javascript will be for sure!
05:42:30  * eugeneware joined
05:43:37  <dominictarr>guybrush, you'll probably still have to hit imaginary buttons though
05:45:46  * ednapiranha quit (Quit: Leaving...)
05:49:15  * sorensen_ quit (Quit: sorensen_)
05:52:26  <rowbit>Hourly usage stats: [free: 9]
05:55:23  * guybrush quit (Excess Flood)
05:55:35  * guybrush joined
06:03:07  * nrw joined
06:11:40  * joates joined
06:12:56  <joates>dominictarr: pm?
06:13:24  <nrw>this seems like it might be a place to talk about level-js. anyone know why we're turning 'del' into 'remove'? https://github.com/maxogden/level.js/blob/master/index.js#L66-L72
06:13:42  <nrw>it seems to break batch operations.
06:15:31  <guybrush>nrw: /j ##leveldb (not that i want to force you not to ask here, just there might be more people that can help you)
06:15:39  * hoobdeebla quit
06:16:14  <guybrush>oh you are in that channel already, nvm haha
06:16:22  * feross_ joined
06:16:25  <nrw>guybrush: i'm mid copy->paste. :)
06:26:51  * ceejbot quit (Ping timeout: 252 seconds)
06:34:23  * Maciek416 quit (Remote host closed the connection)
06:35:55  <nrw>answer to my question: indexedDB needs 'remove' operations instead of 'del'. level-js just doesn't copy the array of batch ops. it modifies the array directly.
06:42:51  <dominictarr>nrw, it's always been del in levelup
06:43:43  <nrw>dominictarr: oh good. for a minute there, i thought i was losing my mind.
06:45:56  <nrw>i'm still hunting for a way to make level-js play nice with level-scuttlebutt.
06:47:11  <nrw>ah. i got it. copying the array does pass tests.
06:48:25  <dominictarr>nrw, aha, so looks like this could be fixed with a pull request to level-js
06:48:48  <nrw>dominictarr: yep. just about to send that pr!
06:52:07  <dominictarr>sweet!
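The fix nrw lands on can be sketched generically (hypothetical code illustrating the mutate-vs-copy bug pattern, not level-js's actual source):

```javascript
// Sketch of the bug pattern discussed above: the adapter needs 'remove' ops
// internally (indexedDB), but rewriting the caller's array in place leaks
// the rename back into the caller's batch.
function badBatch(ops) {
  ops.forEach(function (op) {
    if (op.type === 'del') op.type = 'remove'; // mutates the caller's objects!
  });
  return ops;
}

// the fix: map to fresh op objects, leaving the input untouched
function goodBatch(ops) {
  return ops.map(function (op) {
    return {
      type: op.type === 'del' ? 'remove' : op.type,
      key: op.key,
      value: op.value
    };
  });
}
```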
06:52:26  <rowbit>Hourly usage stats: [developer: 2, free: 12]
06:53:35  <substack>cap
06:54:07  * captain_morgan joined
06:55:48  * ceejbot joined
06:57:26  <nrw>dominictarr: is this still true? https://github.com/dominictarr/level-scuttlebutt/blob/master/index.js#L141-L146
06:57:26  * ceejbot quit (Read error: Connection reset by peer)
06:57:45  * ceejbot joined
06:57:53  <nrw>level-live-stream uses pull-level, right?
06:58:47  <dominictarr>yeah, I will delete that comment
06:59:26  * phated quit (Remote host closed the connection)
06:59:41  <nrw>it's all good news today. :)
07:00:05  <dominictarr>oh, I had already deleted it, but just hadn't pushed it.
07:03:04  * ceejbot quit (Ping timeout: 272 seconds)
07:03:19  * jcrugzz quit (Ping timeout: 272 seconds)
07:03:24  * ceejbot joined
07:08:01  * ceejbot quit (Ping timeout: 252 seconds)
07:08:02  <dominictarr>fotoverite, you are writing a nodebook?
07:16:53  * mikolalysenko quit (Ping timeout: 248 seconds)
07:21:14  * ceejbot joined
07:25:39  * contrahax quit (Quit: Sleeping)
07:26:14  * ceejbot quit (Ping timeout: 265 seconds)
07:29:35  * jcrugzz joined
07:30:03  * marcello3d_zzZ changed nick to marcello3d
07:30:57  <jjjohnny>dominictarr: was scuttlebutt always a series of immutable transactions?
07:31:53  <jjjohnny>""
07:37:21  * jcrugzz quit (Ping timeout: 252 seconds)
07:39:29  * fotoverite quit (Quit: fotoverite)
07:46:24  <dominictarr>yes
07:46:35  <dominictarr>it wouldn't work at all otherwise
07:47:25  * coderzach quit (Remote host closed the connection)
07:52:26  <rowbit>Hourly usage stats: [developer: 0, free: 12]
07:56:58  <dominictarr>chrisdickinson, hey I really like your git workshop idea
07:57:31  <dominictarr>though, I think it would be at least a days worth of intensive deep dive
07:57:57  <dominictarr>I really think that content stores are greatly under utilized, they have some great properties!
07:59:17  <guybrush>its so annoying that querySelectorAll().forEach() doesnt work :/
08:09:07  * ralphtheninja quit (Ping timeout: 260 seconds)
08:09:11  * marcello3d changed nick to marcello3d_zzZ
08:09:16  * marcello3d_zzZ changed nick to marcello3d
08:12:51  * dominictarr quit (Quit: Leaving)
08:13:21  * dominictarr joined
08:13:50  * nrw part
08:17:56  * coderzach joined
08:19:14  * marcello3d changed nick to marcello3d_zzZ
08:21:57  * ceejbot joined
08:22:00  <eugeneware>guybrush: it is annoying. Though you can arrayify it with Array.prototype.slice.call(document.querySelectorAll(selector)) I believe.
08:22:39  <guybrush>not so sexy as $(selector).hide()
08:22:39  * coderzach quit (Ping timeout: 260 seconds)
08:23:36  * mikolalysenko joined
08:23:39  <guybrush>but hey, at least jquery is now properly on npm :)
08:26:23  * ceejbot quit (Ping timeout: 245 seconds)
08:28:26  * mikolalysenko quit (Ping timeout: 264 seconds)
08:39:37  * marcello3d_zzZ changed nick to marcello3d
08:44:02  * captain_morgan quit (Ping timeout: 264 seconds)
08:45:08  * anvaka quit (Remote host closed the connection)
08:52:26  <rowbit>Hourly usage stats: [developer: 0, free: 18]
08:53:22  * eugeneware quit (Remote host closed the connection)
08:53:50  * eugeneware joined
08:58:57  * eugeneware quit (Ping timeout: 272 seconds)
08:59:17  * marcello3d changed nick to marcello3d_zzZ
09:01:49  * funkytek quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
09:10:01  * dguttman quit (Quit: dguttman)
09:10:28  <gildean>guybrush: you can also use Array.prototype.forEach.call(document.querySelectorAll('.selector'), function (element) { ... });
09:10:56  <gildean>or if you don't mind the overhead, then [].forEach.call(document.querySelectorAll('.selector'), function (element) { ... });
09:12:35  <guybrush>i can write all sorts of helper-functions, but i dont want to - i just want to put something up real quick without looking for all sorts of modules and stuff. now the boilerplate for all the stuff is growing too much, just npm i jquery will do for now
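Both borrowed-method tricks mentioned above work on any array-like value (anything with a `length` and indexed keys); a runnable sketch with a plain object standing in for a NodeList, since there is no DOM here:

```javascript
// A NodeList is array-like (indexed keys + length) but not an Array, so it
// lacks Array methods like forEach in older browsers. A plain object stands
// in here for a document.querySelectorAll(...) result.
var fakeNodeList = { 0: 'a', 1: 'b', 2: 'c', length: 3 };

// 1. arrayify first, then use normal array methods
var arr = Array.prototype.slice.call(fakeNodeList);

// 2. borrow forEach directly without making a copy
var seen = [];
Array.prototype.forEach.call(fakeNodeList, function (el) {
  seen.push(el);
});
```

(Modern browsers have since added `NodeList.prototype.forEach`, but at the time of this log it didn't exist.)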
09:13:30  <guybrush>also i think this discussion is kind of old in here haha, didnt want to warm it up again :p
09:17:49  * dguttman joined
09:19:08  * coderzach joined
09:22:16  * dguttman quit (Client Quit)
09:23:38  * coderzach quit (Ping timeout: 252 seconds)
09:37:09  * collypops quit (Ping timeout: 252 seconds)
09:48:47  * marcello3d_zzZ changed nick to marcello3d
09:52:26  <rowbit>Hourly usage stats: [developer: 0, free: 10]
09:59:20  * marcello3d changed nick to marcello3d_zzZ
09:59:50  * thealphanerd quit (Quit: thealphanerd)
10:00:37  * eugeneware joined
10:04:17  <dominictarr>feross, by the way, if sha1 perf is the main reason for the bottleneck mentioned in the webtorent readme, there are sha1 implementations with much better perf than that.
10:04:38  * Kessler joined
10:04:39  <dominictarr>this one: https://github.com/srijs/rusha
10:04:42  * rsole joined
10:05:01  <dominictarr>can do 10mb in 230 ms, that is 50 mb per second
10:05:50  <dominictarr>(this is still 10 times slower than openssl, but better than 2mb/s)
10:05:59  * eugeneware quit (Ping timeout: 252 seconds)
10:06:40  <feross_>dominictarr: awesome, didn't realize that there were others working on this problem besides you
10:09:56  <dominictarr>creationix also has quite a fast sha1
10:10:33  <dominictarr>oh well there are a few crypto implementations
10:10:44  <feross_>yeah, i knew there were several
10:10:55  <feross_>looked at pretty much every one of them when i was doing peercdn
10:11:09  <feross_>but they were all in a pretty sad state, i think
10:11:18  <dominictarr>yeah, and mostly quite old
10:11:20  <feross_>the fastest i found did 2mb/s
10:11:34  <feross_>and most didn't even support typed arrays
10:11:49  <feross_>looks like a lot has changed in the last year on that front
10:11:51  <dominictarr>yeah, no most of them do binary weirdly
10:11:59  <dominictarr>yeah.
10:12:46  <dominictarr>If you were building a new system from scratch (i.e. compatibility not an issue) I'd use this hash: https://github.com/dominictarr/blake2s
10:14:35  <feross_>reason?
10:16:48  <feross_>found https://blake2.net/
10:18:08  <dominictarr>it's the fastest, and is designed to not have the known weaknesses from sha*
10:18:35  <dominictarr>sha1 has a known weakness which reduces the strength to 2^62 or something like that
10:18:36  <feross_>that's interesting. if a weakness is later discovered won't its speed actually be a downside?
10:19:05  <feross_>less ram and more speed makes it a lot easier for an attacker
10:19:38  <feross_>but if there's no weaknesses found a really long time, then it's awesome to be able to hash super fast :)
10:19:52  * coderzach joined
10:21:28  * coderzac_ joined
10:21:28  * coderzach quit (Read error: Connection reset by peer)
10:21:38  <dominictarr>I think it's really difficult to model the degradation of security in the face of future weaknesses
10:22:40  <dominictarr>I mean, a hash that is 2x faster affects usability but that hardly affects security
10:23:00  <dominictarr>where we have to consider 10000x speed differences
10:23:55  <rvagg>ogd: http://jamescarl.us/blog/learn-you-the-node-js/ "This challenge is a part of a series of challenges called...The Art Of Node by Max Ogden"
10:24:04  <rvagg>ogd: plus this made the HN front page
10:24:06  <rvagg>wut wut?
10:24:39  * ceejbot joined
10:25:16  <dominictarr>like, the sha1 weakness changes the number of possibilities from 2^80 to 2^60
10:25:58  * coderzac_ quit (Ping timeout: 245 seconds)
10:27:04  <feross_>yeah, that's a huge change
10:27:25  <feross_>rvagg: if you comment/tell a mod, they'll often change the link to the original
10:27:50  <rvagg>feross_: nah, I'm just amused by it all
10:27:53  <rvagg>not bothered
10:28:08  <rvagg>plus, correct HN??? who does that?
10:28:29  <dominictarr>haha
10:29:05  <dominictarr>feross, correction: a perfect 160 bit hash should take 2^80 attempts to break on average
10:29:25  * ceejbot quit (Ping timeout: 248 seconds)
10:29:35  <dominictarr>but sha1 collisions can be found in 2^52
10:31:19  <dominictarr>that is 268 million times easier
10:33:18  * feross__ joined
10:33:34  * feross___ joined
10:33:43  <dominictarr>https://www.schneier.com/blog/archives/2012/10/when_will_we_se.html
10:33:44  * dools_ joined
10:33:49  <substack>rvagg: really weird writeup of learnyounode
10:34:01  <substack>I don't even get what it's trying to communicate
10:34:17  <dominictarr>^ so, schneier estimates that it will cost 47k to generate a sha1 collision in 2021
10:34:29  <rvagg>substack: ditto, I guess I shouldn't be surprised that it was upvoted on HN but it came through on the feed
10:35:11  * paul_irish_ joined
10:36:19  * philipn joined
10:38:10  <substack>philipn: there were some folks passing through from davis at sudoroom tonight
10:38:35  <substack>avid daviswiki users
10:39:07  <dominictarr>however, if sha1 wasn't weaker it would still cost 260M*47k = about 12 trillion dollars to generate a collision
10:39:32  <dominictarr>which is 1/5 of the world economy.
10:40:30  <dominictarr>which is the kind of scale that would maybe be justified if we needed to find a collision to protect ourselves from an aliens attacking earth.
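Checking the arithmetic in the last few messages (JS used as a calculator; the dollar figure is Schneier's estimate quoted above):

```javascript
// A perfect 160-bit hash needs ~2^80 attempts to find a collision (birthday
// bound); the known sha1 attack needs ~2^52.
var ratio = Math.pow(2, 80 - 52); // 2^28 = 268435456, i.e. ~268 million times easier

// Schneier's estimate: ~$47k to generate one sha1 collision in 2021.
// Without the weakness, scale the cost up by the ratio:
var costWithoutWeakness = ratio * 47000; // ~1.26e13: on the order of $12 trillion
```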
10:40:39  * marcello3d_zzZ changed nick to marcello3d
10:40:55  * feross_ quit (*.net *.split)
10:40:55  * joates quit (*.net *.split)
10:40:57  * paul_irish quit (*.net *.split)
10:40:59  * feross quit (*.net *.split)
10:41:03  * dools quit (*.net *.split)
10:41:04  * philipn_ quit (*.net *.split)
10:41:05  * feross__ changed nick to feross
10:47:45  * joates joined
10:52:07  * marcello3d changed nick to marcello3d_zzZ
10:52:26  <rowbit>Hourly usage stats: [developer: 0, free: 15]
10:57:00  <rowbit>substack, pkrumins: These encoders are STILL down:
11:00:02  * phated joined
11:05:00  * phated quit (Ping timeout: 245 seconds)
11:22:15  * coderzach joined
11:22:19  * dominictarr quit (Ping timeout: 260 seconds)
11:25:24  * ceejbot joined
11:26:52  * coderzach quit (Ping timeout: 252 seconds)
11:30:14  * ceejbot quit (Ping timeout: 264 seconds)
11:41:28  * marcello3d_zzZ changed nick to marcello3d
11:46:08  * guybrush quit (Excess Flood)
11:46:37  * guybrush joined
11:51:20  * marcello3d changed nick to marcello3d_zzZ
11:52:26  <rowbit>Hourly usage stats: [developer: 0, free: 8]
12:01:47  <Altreus>I love how PAUSE still says "If your browser can handle file uploads"
12:03:10  <rsole>dominictarr: that was funny haha
12:03:18  <Altreus>doh wrong chan
12:08:24  * marcello3d_zzZchanged nick to marcello3d
12:18:11  * marcello3dchanged nick to marcello3d_zzZ
12:21:13  * rsolequit (Quit: rsole)
12:21:33  * rsolejoined
12:22:57  * coderzachjoined
12:25:55  * rsolequit (Ping timeout: 252 seconds)
12:26:05  * ceejbotjoined
12:27:39  * coderzachquit (Ping timeout: 260 seconds)
12:30:19  * ceejbotquit (Ping timeout: 252 seconds)
12:52:26  <rowbit>Hourly usage stats: [developer: 0, free: 7]
12:52:44  * rsolejoined
13:04:20  * ralphtheninjajoined
13:09:09  * marcello3d_zzZchanged nick to marcello3d
13:18:21  * rsolequit (Ping timeout: 252 seconds)
13:19:00  * marcello3dchanged nick to marcello3d_zzZ
13:23:42  * coderzachjoined
13:26:52  * ceejbotjoined
13:28:26  * coderzachquit (Ping timeout: 264 seconds)
13:31:17  * ceejbotquit (Ping timeout: 248 seconds)
13:52:26  <rowbit>Hourly usage stats: [developer: 0, free: 16]
14:09:59  * marcello3d_zzZchanged nick to marcello3d
14:19:37  * marcello3dchanged nick to marcello3d_zzZ
14:24:24  * coderzachjoined
14:27:33  * ceejbotjoined
14:28:45  * coderzachquit (Ping timeout: 245 seconds)
14:31:48  * ceejbotquit (Ping timeout: 245 seconds)
14:43:50  * AvianFlujoined
14:44:51  * fotoveritejoined
14:48:56  * rsolejoined
14:49:10  * thlorenzjoined
14:52:26  <rowbit>Hourly usage stats: [developer: 0, free: 19]
14:53:19  * rsolequit (Ping timeout: 252 seconds)
15:10:58  * marcello3d_zzZchanged nick to marcello3d
15:20:31  * marcello3dchanged nick to marcello3d_zzZ
15:25:10  * coderzachjoined
15:28:22  * ceejbotjoined
15:28:38  * kanzurequit (Ping timeout: 246 seconds)
15:28:46  * kanzurejoined
15:30:02  * coderzachquit (Ping timeout: 265 seconds)
15:32:59  * ceejbotquit (Ping timeout: 240 seconds)
15:37:25  * AvianFluquit (Remote host closed the connection)
15:41:05  * AvianFlujoined
15:49:20  * dguttmanjoined
15:52:26  <rowbit>Hourly usage stats: [developer: 0, free: 5]
15:55:13  * joatesquit (Quit: Leaving)
15:56:47  * ceejbotjoined
16:00:33  * AvianFluquit (Remote host closed the connection)
16:01:03  * ceejbotquit (Ping timeout: 252 seconds)
16:03:15  * rsolejoined
16:03:21  * mikolalysenkojoined
16:03:45  * pfrazejoined
16:06:46  * eugenewarejoined
16:11:23  * marcello3d_zzZchanged nick to marcello3d
16:12:14  * eugenewarequit (Ping timeout: 264 seconds)
16:17:23  * defunctzombie_zzchanged nick to defunctzombie
16:21:19  * marcello3dchanged nick to marcello3d_zzZ
16:24:10  * rsolequit (Ping timeout: 265 seconds)
16:25:55  * coderzachjoined
16:30:33  * coderzachquit (Ping timeout: 252 seconds)
16:31:05  * AvianFlujoined
16:32:09  * coderzachjoined
16:33:35  * AvianFluquit (Remote host closed the connection)
16:33:56  * AvianFlujoined
16:42:13  * AvianFluquit (Remote host closed the connection)
16:49:34  * coderzachquit (Remote host closed the connection)
16:49:47  * coderzachjoined
16:50:20  * coderzachquit (Remote host closed the connection)
16:52:27  <rowbit>Hourly usage stats: [developer: 0, free: 20]
16:57:01  <rowbit>substack, pkrumins: These encoders are STILL down:
16:57:28  * ceejbotjoined
17:02:03  * ceejbotquit (Ping timeout: 260 seconds)
17:12:08  * marcello3d_zzZchanged nick to marcello3d
17:21:42  * marcello3dchanged nick to marcello3d_zzZ
17:30:39  * thlorenzquit (Remote host closed the connection)
17:39:15  * ceejbotjoined
17:46:56  * hoobdeeblajoined
17:51:25  * coderzachjoined
17:52:27  <rowbit>Hourly usage stats: [developer: 1, free: 23]
17:52:33  <Kessler>hi folks, was wondering if anyone has done a benchmark comparison between axon and node-zmq... my tests show that axon outperforms node-zmq in the request/reply scenario, maybe someone could corroborate from their own experience?
17:55:49  * coderzachquit (Ping timeout: 248 seconds)
18:05:34  * ednapiranhajoined
18:07:53  * thealphanerdjoined
18:12:53  * marcello3d_zzZchanged nick to marcello3d
18:19:18  * Kesslerquit (Ping timeout: 245 seconds)
18:22:29  * marcello3dchanged nick to marcello3d_zzZ
18:37:39  * thlorenzjoined
18:37:40  * jibayjoined
18:40:39  * thlorenzquit (Remote host closed the connection)
18:42:30  * thealphanerdquit (Quit: thealphanerd)
18:43:08  * coderzachjoined
18:47:36  * jcrugzzjoined
18:50:20  * thealphanerdjoined
18:51:27  * i_m_cajoined
18:52:26  <rowbit>Hourly usage stats: [developer: 0, free: 29]
18:57:05  * i_m_caquit (Ping timeout: 272 seconds)
19:02:45  * marcello3d_zzZchanged nick to marcello3d
19:04:06  * ceejbotquit (Remote host closed the connection)
19:06:00  * occamshatchetquit (*.net *.split)
19:11:39  * AvianFlujoined
19:19:23  * cpupquit (Quit: Leaving)
19:22:18  * cpupjoined
19:22:45  * occamshatchetjoined
19:26:56  * thlorenzjoined
19:31:15  * thlorenzquit (Ping timeout: 245 seconds)
19:31:21  * captain_morganjoined
19:39:31  * ceejbotjoined
19:43:05  * ceejbotquit (Read error: Connection reset by peer)
19:43:28  * ceejbotjoined
19:44:00  * thealphanerdquit (Quit: thealphanerd)
19:50:35  * rsolejoined
19:51:41  * sorensen_joined
19:52:26  <rowbit>Daily usage stats: [developer: 3, free: 347]
19:52:26  <rowbit>Hourly usage stats: [developer: 0, free: 23]
19:55:48  * i_m_cajoined
19:58:20  * phatedjoined
20:12:39  * contrahaxjoined
20:26:51  * ednapiranhaquit (Quit: Leaving...)
20:32:21  * Kesslerjoined
20:38:08  * thealphanerdjoined
20:39:23  * thlorenzjoined
20:41:18  * i_m_caquit (Ping timeout: 265 seconds)
20:44:11  * Kesslerquit (Ping timeout: 252 seconds)
20:50:23  * thlorenzquit (Remote host closed the connection)
20:52:26  <rowbit>Hourly usage stats: [developer: 0, free: 17]
20:55:08  * AvianFluquit (Remote host closed the connection)
20:57:33  * AvianFlujoined
21:02:22  * phatedquit (Remote host closed the connection)
21:03:03  * sorensen_quit (Quit: sorensen_)
21:04:27  * ferossquit (Quit: feross)
21:04:27  * feross___changed nick to feross
21:08:27  * yorickjoined
21:12:34  * eugenewarejoined
21:12:53  * thlorenzjoined
21:14:44  * marcello3dchanged nick to marcello3d_zzZ
21:17:12  * eugenewarequit (Ping timeout: 252 seconds)
21:20:36  * ceejbotquit (Remote host closed the connection)
21:22:34  * feross_joined
21:25:03  <feross_>wow, github's handling of the badge issue sucks
21:25:08  <feross_>I just got this email from support:
21:25:14  * sorensen_joined
21:25:14  <feross_>"Unfortunately we had to disable svg proxying for the time being due to security concerns. I'm currently investigating when svg images will return to the READMEs, but I don't have a solid date at this point."
21:25:35  <feross_>Now all my badges are broken for real... https://github.com/feross/webtorrent
21:26:07  <feross_>instead of letting the svg images load normally without their proxy, they decided it was better to 404 every image
21:26:23  * cianomaidinjoined
21:31:30  * cpupquit (Ping timeout: 252 seconds)
21:35:47  * cianomaidin_joined
21:35:47  * cianomaidinquit (Read error: Connection reset by peer)
21:35:48  * cianomaidin_changed nick to cianomaidin
21:35:54  <chapel>feross_: shields.io supports png
21:35:57  <ogd>feross_: bah
21:36:32  * cpupjoined
21:36:33  <chapel>so your badges are broken because of github, but you can fix them if you wanted to
21:37:32  * thealphanerdquit (Quit: thealphanerd)
21:38:32  <feross_>chapel: i changed all my badges to shields.io a few days ago because i noticed they were one of the few services whose badges weren't getting cached for days by github's silly https proxy
21:38:43  <feross_>chapel: so now i have to do that again
21:38:49  * rsolequit (Ping timeout: 265 seconds)
21:39:09  <feross_>their handling of this whole thing has been super unprofesh
21:39:10  <mikolalysenko>substack: is there a generic method to create a brfs-like transform that matches all occurrences of calls to some function in a browserify transform?
21:39:17  <chapel>well, githubs proxy handles https links as well, so everything is proxied
21:39:30  <feross_>chapel: yep, i knew that
21:39:42  <chapel>feross_: as end users (e.g. people browsing) it makes sense
21:39:46  <chapel>one, it's faster
21:39:58  <feross_>chapel: that's not a valid reason
21:39:59  <chapel>two it is more private
21:40:15  <chapel>just because it affects you doesn't mean it's an issue for everyone
21:40:26  <mikolalysenko>substack: ie instead of say finding all calls to fs.readFileSync, I want to match all calls to mymodule.foo
21:40:35  <mikolalysenko>or something even just mymodule(...)
21:40:38  <chapel>github is in a precarious position in that they host user generated content
21:41:00  <chapel>images can be privacy issues
21:41:06  <feross_>chapel: a huge number of people use images in readme to report CI status, and having LIVE feedback on that is 1000x more useful than saving 50ms on image load time
21:41:15  <chapel>is it?
21:41:24  <feross_>chapel: yes
21:41:46  <feross_>most of the badge services have their own cdns already
21:42:13  * cpupquit (Ping timeout: 245 seconds)
21:42:13  <chapel>I believe badges were a side effect
21:42:31  <chapel>I think they were trying to block analytic tracking
21:42:54  <feross_>chapel: the privacy argument is silly too, and too convenient in timing with the launch of their own analytics service (https://github.com/blog/1672-introducing-github-traffic-analytics)
21:43:01  * sorensen_quit (Ping timeout: 272 seconds)
21:43:18  <chapel>the original reason for the proxy was to make sure https was secure and not loading insecure resources
21:43:37  <chapel>feross_: if you don't like it, use something else
21:43:41  <feross_>chapel: :(
21:43:52  <chapel>like seriously, it's a badge
21:43:57  <chapel>there are more important things to worry about
21:44:16  <feross_>chapel: you're so helpful
21:44:33  <chapel>actually, I'm sorry, I was a dick there
21:44:41  <chapel>I understand the frustration, but don't share it
21:44:59  <chapel>I tend to ignore issues I have no control over
21:45:13  <chapel>so something like this, I don't get upset at github, or X company/person
21:45:41  <feross_>yeah, i'm not going to worry about it. it's just disappointing since github is usually so good at doing what users want
21:45:45  <chapel>I also don't like attributing malice to actions that I don't like
21:46:05  <chapel>since in most cases, it isn't malicious
21:46:09  * cpupjoined
21:47:17  <feross_>yeah, i get that. like if you come home and your sandwich that you kept in the fridge is eaten, your roommate probably didn't do it *to make you mad* but because he was just hungry and he thought no one was planning to eat it
21:47:44  <feross_>^ hah, i realize that's oddly specific... not saying i did any such thing
21:47:59  <chapel>thats a good example though
21:48:04  <chapel>since it can feel malicious
21:48:21  <chapel>especially if you had intention to eat that sandwich when you got home
21:49:27  <substack>mikolalysenko: I don't know of any modules for that off-hand
21:49:44  <substack>but you could easily make the brfs source do that
21:51:32  <mikolalysenko>substack: hmm
21:52:03  <mikolalysenko>substack: so the basic problem is "match all occurrences of require('foo'), replace with some custom string"
21:52:26  <rowbit>Hourly usage stats: [developer: 0, free: 24]
21:52:37  <mikolalysenko>substack: var x = require("foo"); foo(somestaticparams); --becomes---> customfoo(staticparams)
21:53:43  * ceejbotjoined
21:58:26  * ceejbotquit (Ping timeout: 264 seconds)
21:59:27  * thealphanerdjoined
22:03:53  <substack>yep
22:04:05  <substack>mikolalysenko: falafel makes that stuff relatively easy
22:04:35  <mikolalysenko>well, you need to track variable references and do some work
22:04:41  <mikolalysenko>I mean if you want to do it right...
22:05:03  <mikolalysenko>like you could have: var x = inBrowser ? require("a") : require("b")
22:05:13  <mikolalysenko>so some constant folding/control flow might be good...
22:06:36  * thealphanerdquit (Quit: thealphanerd)
22:08:05  <substack>mikolalysenko: it becomes undecidable pretty fast if you go too far down that route
22:08:52  <mikolalysenko>substack: undecidable is fine, you can make a best-effort attempt
22:09:09  <mikolalysenko>plenty of compile time operations are undecidable
22:09:20  <mikolalysenko>eg templates in c++, macros in lisp, haskell types, etc.
22:10:12  <mikolalysenko>and constant propagation is a pretty well studied problem in programming languages...
22:10:22  <mikolalysenko>though the tools for doing it in js are not really there at the moment
22:10:50  <mikolalysenko>http://en.wikipedia.org/wiki/Constant_folding
22:11:09  <mikolalysenko>constant folding + prune control flow graph and you could cover 99% of all the important cases I bet
22:13:36  * phatedjoined
22:16:03  <jesusabdullah>https://github.com/joyent/node/issues/7030#issuecomment-33914310
22:16:05  <jesusabdullah>much frustrate
22:17:03  <ogd>feross_: does the badge in https://github.com/maxogden/csv2html show up for you? i get a broken image, but its just a png
22:17:21  <feross_>ogd: it looks broken for me too
22:17:24  <ogd>hmm dang
22:17:59  * phatedquit (Ping timeout: 240 seconds)
22:18:36  <ogd>changing it to http:// from https:// didnt do anything
22:18:42  <feross_>ogd: github has particular trouble proxying nodei.co for some reason
22:18:51  <feross_>ever since the https proxy change
22:19:01  <feross_>i think rvagg is aware, but it's unclear why it's happening
22:19:01  <ogd>i wonder why
22:21:14  * ceejbotjoined
22:26:18  <ogd>feross_: ok i emailed github support
22:28:04  * funkytekjoined
22:29:27  * mikolalysenkoquit (Ping timeout: 252 seconds)
22:33:29  * mikolalysenkojoined
22:34:07  * phatedjoined
22:34:23  <feross_>i think github will eventually support badging through their interface, making badges in readmes go away
22:34:46  <feross_>and then all these services will use the github api to report status
22:35:00  <feross_>sort of like how travis reports status on PRs
22:36:27  <substack>I like that less.
22:36:34  <substack>more overhead and coordination required
22:41:02  <feross_>yeah, not saying it's better. more lock in to github. becomes harder to just git clone your repo and go somewhere else
22:41:32  <feross_>but i'm betting it'll happen. they have to do something with that 100mm of funding
22:48:28  * ceejbotquit (Ping timeout: 245 seconds)
22:52:26  <rowbit>Hourly usage stats: [developer: 0, free: 38]
22:57:02  <rowbit>substack, pkrumins: These encoders are STILL down:
22:58:23  * jcrugzzquit (Ping timeout: 272 seconds)
22:58:25  * hoobdeeblaquit (Remote host closed the connection)
22:58:45  * hoobdeeblajoined
23:05:43  <isaacs>defunctzombie: thanks, was just checking. *somebody* had it, because it's {deleted:true} rather than just not found
23:07:32  <defunctzombie>https://npmjs.org/package/gulp-grunt https://npmjs.org/package/grunt-gulp
23:07:50  <defunctzombie>I think we can all safely go home now
23:07:52  <defunctzombie>our job is done
23:08:09  <isaacs>rvagg: Here's a crazy idea... what if you *don't* replicate a "full" registry replica, but instead, do something like what skimdb does, and put the tarballs elsewhere?
23:08:22  <isaacs>rvagg: like, in a S3 location that's close to AU or something
23:08:30  <rvagg>isaacs: sure, but that'
23:08:42  <rvagg>that's probably going to require time and effort and understanding which i can't afford atm
23:08:46  <isaacs>i hear ya
23:09:01  <isaacs>anyway, yeah, for a way to host the tarballs AND metadata? we outgrew couchdb in 2012
23:09:18  <isaacs>as a host of just metadata, it's still actually great.
23:09:26  <isaacs>and the replication story is not trivial
23:09:47  <rvagg>yeah, I imagine it'd probably be relatively pleasant with just a bunch of json
23:09:52  <isaacs>part of the reason i'm excited about LMDB is that it has a lot in common with Couch's strong points
23:10:20  <phated>isaacs: probably off topic but why is it a requirement for NPM to host tarballs?
23:10:35  <defunctzombie>isaacs: you should look at what apt does and how apt-mirror replicates
23:10:58  <rvagg>LMDB could be a bit of a risk, has limitations with storage size and there are more tradeoffs than Chu would like to admit
23:11:04  <defunctzombie>isaacs: I think there are things to be learned there. the apt registries have been running a long long time
23:11:07  <rvagg>and it's a tuning nightmare compared to leveldb
23:11:17  <defunctzombie>isaacs: and apt-mirror is a well understood thing iirc
23:11:19  <rvagg>but anyway, ymmv, worth giving a go at least
23:17:25  * ralphtheninjaquit (Ping timeout: 248 seconds)
23:18:33  * ceejbotjoined
23:24:24  * eugenewarejoined
23:42:22  * cianomaidinquit (Quit: cianomaidin)
23:52:27  <rowbit>Hourly usage stats: [developer: 0, free: 24]
23:56:18  * thlorenzquit (Remote host closed the connection)