00:07:24  * chorrell joined
00:09:39  * chorrell quit (Client Quit)
00:11:10  * chorrell joined
00:15:05  * chorrell quit (Client Quit)
01:06:39  * node-gh joined
01:06:39  * node-gh part
01:10:05  <joaocgreis>Trott, mhdawson: I went ahead and disabled it in node-test-commit
01:14:36  <joaocgreis>mhdawson: hope I did the right thing, let me know if not
01:21:02  <jbergstroem>here now
01:21:37  <jbergstroem>Trott: the addition was intentional but perhaps a bit premature
01:34:42  * node-gh joined
01:34:42  * node-gh part
03:49:55  <Trott>joaocgreis jbergstroem Thanks!
03:51:37  * node-gh joined
03:51:37  * node-gh part
04:28:05  <Trott>On the FreeBSD hosts, I get processes dating from July when I do `ps -auwx | grep node`. I don't know enough about how the hosts work to know if that's a problem or if that's normal operation. I'm guessing it's a problem but not one that actually impacts anything in a significant way.
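For reference, a minimal sketch of that kind of check and cleanup, assuming shell access to the FreeBSD test host; whether a process is safe to kill depends on what it actually is, so the kill commands below are illustrative only:

    # List node processes, excluding the grep itself; stale ones show an
    # old start date in the STARTED column of the ps output.
    ps -auwx | grep '[n]ode'

    # If a process is a leftover test runner from an old job, clean it up
    # by PID (confirm it is stale first):
    #   kill <PID>       # ask it to exit
    #   kill -9 <PID>    # force it, only if it ignores the first signal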
06:57:14  <ljharb>what's the exact command used to generate the SHASUMS256 values?
06:57:19  <ljharb>like in http://nodejs.org/dist/v0.10.20/SHASUMS256.txt
08:01:06  * Trott quit (Ping timeout: 276 seconds)
08:01:26  * Trott joined
10:28:03  * thealphanerd quit (Quit: farewell for now)
10:28:36  * thealphanerd joined
10:42:20  <jbergstroem>Trott: it's probably because i haven't logged in for a bit and killed any processes :/
10:42:39  <jbergstroem>Trott: i remember us having issues with stalled processes on freebsd for a while
10:43:02  <jbergstroem>Trott: thanks for helping out (assumed you killed it)
10:47:41  <jbergstroem>ljharb: check https://github.com/nodejs/node/blob/master/tools/release.sh
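For context, a rough sketch of how a SHASUMS256.txt is typically produced for a release directory; the real commands live in the release.sh script linked above, and the directory path here is just an example:

    # Hash every artifact for the release into SHASUMS256.txt
    # (illustrative path; release.sh drives the real promotion flow).
    cd dist/v0.10.20
    shasum -a 256 node-v0.10.20* > SHASUMS256.txt

    # The release tooling also GPG-signs the checksum file, roughly:
    #   gpg --clearsign SHASUMS256.txt   # produces SHASUMS256.txt.asc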
12:33:28  * evanlucas quit (Remote host closed the connection)
12:46:38  * evanlucas joined
13:25:39  * node-gh joined
13:25:39  * node-gh part
13:36:28  * chorrell joined
14:12:59  * node-gh joined
14:12:59  * node-gh part
14:13:00  * lance|afk changed nick to lanceball
14:18:05  <Trott>Is this an example of the "have to clear out the workspace" issue on Windows or is this more of a thing that happens from time to time that goes away on the next run? https://ci.nodejs.org/job/node-test-binary-windows/3238/RUN_SUBSET=2,VS_VERSION=vcbt2015,label=win10/console "stderr: warning: failed to remove test/addons/load-long-path/build/"
14:18:16  <Trott>joaocgreis ^^^^^ I suppose
14:19:17  * Fishrock123 joined
14:38:05  * node-gh joined
14:38:05  * node-gh part
14:48:27  * chorrell quit (Quit: Textual IRC Client: www.textualapp.com)
15:33:17  <jbergstroem>ouch, we had three node processes on freebsd-2 and two on freebsd-1
15:35:19  <jbergstroem>the good news is that no new processes have been spawned since the end of july
16:48:26  <Trott>504 is making me sad on Jenkins. :-(
16:48:43  <Trott>(Trying to onboard someone right now, so it's making me especially sad. I want them to kick off a job.)
16:48:52  <Trott>(Main page is working, but node-test-pull-request is timing out.)
16:48:58  <Trott>AND, it's back!
16:53:19  * ofrobots quit (Ping timeout: 252 seconds)
16:53:39  * ofrobots joined
16:55:16  <jbergstroem>yeah test-pr will be the slowest one until we figure out tap stuff
16:55:27  * joaocgreis quit (Ping timeout: 265 seconds)
16:55:47  * joaocgreis joined
16:55:54  <jbergstroem>i'm kind of changing direction though; since we will have even more tap producers shortly, we either need a universal tap2junit tool or need to improve the tap plugin. I'd prefer the latter.
17:50:39  * jenkins-monitor quit (Remote host closed the connection)
17:50:52  * jenkins-monitor joined
17:53:23  * chorrell joined
17:57:57  * chorrell quit (Client Quit)
17:58:52  * chorrell joined
18:13:39  * node-gh joined
18:13:39  * node-gh part
18:13:50  <Trott>And as we add more collaborators, odds are that our velocity (and therefore our use of the CI infrastructure) will go up and up....
18:13:58  * node-gh joined
18:13:58  * node-gh part
18:25:41  * chorrell quit (Quit: Textual IRC Client: www.textualapp.com)
18:35:51  * node-gh joined
18:35:51  * node-gh part
18:47:40  * node-gh joined
18:47:40  * node-gh part
19:26:15  * Fishrock123 quit (Remote host closed the connection)
19:27:42  * Fishrock123 joined
21:00:56  * lanceball changed nick to lance|afk
21:03:39  <jbergstroem>Trott: yeah
21:04:24  <jbergstroem>based on the 'onboarding' i did with phillipj earlier today, i drafted an outline of what we should mention -- if you guys have any suggestions before I start formalising, feel free to share! https://hackmd.io/s/B1mNsWYF
21:16:22  * node-gh joined
21:16:22  * node-gh part
21:29:08  * node-gh joined
21:29:08  * node-gh part
21:31:28  * node-gh joined
21:31:28  * node-gh part
21:32:24  * node-gh joined
21:32:24  * node-gh part
21:32:41  * node-gh joined
21:32:41  * node-gh part
21:51:40  * Fishrock123 quit (Remote host closed the connection)
22:05:13  * node-gh joined
22:05:13  * node-gh part
22:10:50  <ljharb>what's the exact command used to generate the SHASUMS256 values? like in http://nodejs.org/dist/v0.10.20/SHASUMS256.txt
22:11:09  <ljharb>when i run `shasum -a 256` on my machine, i don't get the same hash value
22:21:45  <ljharb>^ rvagg since i think you might know
22:31:18  <jbergstroem>ljharb: it's called dist-sign, which invokes this: https://github.com/nodejs/build/blob/12db79f166656da301138539fa2c5dac73992eb8/setup/www/tools/promote/_resha.sh
22:32:52  <ljharb>hm
22:33:26  <ljharb>ok so then why would i download the file and get different output?
22:33:38  <ljharb>like, literally curling it, and running the same command on my mac
22:34:57  <ljharb>hm, this happens with the sha1 also
22:39:11  <joaocgreis>ljharb: sha256 seems good to me. What exact file is giving you a different hash?
22:39:22  <ljharb>node-v0.10.20-darwin-x64.tar.gz
22:40:02  <ljharb>i get 27bd120bdcee6aa85dac0a2602ee32ca3240f546 and 4492c89e55b431d39f4f47b2f04232d8aa6f08f3004b8f41655e864697852ff5 - and the lists have 6f827b5bb1184160a58e0aac711791b610c30afd and f059b3d9dfd42fa9d7d8542e51aea6c92d87aff1b9023fc1c7c12acb7f3d28e5
22:42:25  <joaocgreis>I get f059b3d9dfd42fa9d7d8542e51aea6c92d87aff1b9023fc1c7c12acb7f3d28e5 , no problem here
22:43:11  <ljharb>weird
22:43:19  <ljharb>what command are you using to download the binary?
22:43:26  <ljharb>i'm wondering if it's something about the curl command i'm using
22:45:08  <ljharb>s/binary/tarball
22:45:55  <ljharb>ok nvm, i redownloaded it and they match
22:45:59  <ljharb>so it's something on my computer, clearly
22:46:38  <joaocgreis>used chrome, let me try curl
22:48:11  <joaocgreis>curl -LO http://nodejs.org/dist/v0.10.20/node-v0.10.20-darwin-x64.tar.gz then sha256sum.exe node-v0.10.20-darwin-x64.tar.gz gives the correct sum
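Putting joaocgreis's check together, a quick way to verify a downloaded tarball against the published list, using the same release discussed above (`shasum -a 256` on macOS and `sha256sum` on Linux/Windows produce the same digest):

    # Download the tarball and hash it.
    curl -LO http://nodejs.org/dist/v0.10.20/node-v0.10.20-darwin-x64.tar.gz
    shasum -a 256 node-v0.10.20-darwin-x64.tar.gz
    # Expected, per http://nodejs.org/dist/v0.10.20/SHASUMS256.txt:
    # f059b3d9dfd42fa9d7d8542e51aea6c92d87aff1b9023fc1c7c12acb7f3d28e5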
22:52:56  * Fishrock123 joined
22:57:20  * Fishrock123 quit (Ping timeout: 258 seconds)
23:05:27  <ljharb>k yeah thanks joaocgreis and jbergstroem, i think i must have just modified the file locally somehow
23:05:32  <ljharb>(doing some nvm experimentation)
23:29:29  <joaocgreis>Trott: I investigated that Windows failure you mentioned earlier, and I'm just clueless. It's not the workspace issue. Before, I blamed most of these failures on a stray node process, but this time I can confirm there is none on the machine.
23:29:46  <jbergstroem>joaocgreis: :(
23:31:41  <joaocgreis>I'll add a git clean to the end of the job, let me know if you see it again
23:32:18  <Trott>OK, will do! Thanks for looking into it!
23:32:35  <jbergstroem>joaocgreis: how long does a clean take?
23:33:20  <joaocgreis>a few seconds at most, but it saves time on the next run
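A sketch of what that end-of-job cleanup step might look like; the actual Jenkins configuration isn't shown in the log, so the pathspec here is illustrative:

    # Remove untracked and ignored files left behind by the addon builds,
    # e.g. test/addons/load-long-path/build/ from the failure above.
    git clean -fdx test/addons
    # Or, to reset the whole workspace:
    #   git clean -fdx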
23:50:29  <joaocgreis>I've deleted the second daily master job, no point keeping it around
23:53:53  <jbergstroem>ok