00:43:05  * not-an-aardvark joined
01:08:14  * node-gh joined
01:08:15  * node-gh part
03:44:44  * mscdex joined
03:45:13  <mscdex>it looks like the freebsd11-x64 node lost connection with jenkins again?
04:55:40  <Trott>jbergstroem ^^^^^ (although, yeah, it's 2AM where they are, so it might be a while before they see this...)
04:56:03  <Trott>https://ci.nodejs.org/job/node-test-commit-freebsd/4879/nodes=freebsd11-x64/console
05:00:29  <Trott>Meh. Ending that hung job seems to have unstuck test-digitalocean-freebsd11-x64-2.
05:07:55  <Trott>Also just terminated a few hung test processes on test-joyent-freebsd10-x64-1 so hopefully that will clear things up on that box (which has been red red red for a while)...
07:07:20  <phillipj>mhdawson: Recommended For Most Users & Latest Features, right? maybe we should just remove that temporarily
09:33:06  * not-an-aardvark quit (Quit: Connection closed for inactivity)
10:23:30  <jbergstroem>Trott: yeah, just cancel the job
10:42:45  * thealphanerd quit (Quit: farewell for now)
10:43:15  * thealphanerd joined
13:13:44  <mhdawson>phillipj: that might make sense, just remove the text under both until we switch back to LTS/Current being displayed
13:28:17  * captainplanet joined
13:28:18  * captainplanet changed nick to Guest98900
13:29:17  <Guest98900>jbergstroem it's thealphanerd ... can't connect to my bouncer from the hotel for some reason atm
13:29:24  <jbergstroem>ohi :)
13:29:29  <Guest98900>seems like ci-release is down :S
13:29:32  <Guest98900>getting nginx page
13:29:32  <jbergstroem>checking
13:29:47  <jbergstroem>worksforme? :|
13:29:51  <Guest98900>lol
13:30:02  <Guest98900>ci-release.nodejs.org?
13:30:20  <Guest98900>ugh this internet is acting weird... I'm going to eat and come back
13:30:33  <Guest98900>but I may need you to start a release job in a bit if I can't get it working later
13:30:34  <Guest98900>ttfn
13:30:39  * Guest98900 quit (Client Quit)
13:31:05  <jbergstroem>ok
14:06:06  <mhdawson>release ci seemed ok to me as well
14:11:24  <jbergstroem>i restarted/updated it just in case
14:15:31  * node-gh joined
14:15:31  * node-gh part
14:35:35  * node-gh joined
14:35:36  * node-gh part
14:36:05  * node-gh joined
14:36:05  * node-gh part
14:36:30  * node-gh joined
14:36:30  * node-gh part
14:53:31  * chorrell joined
15:10:49  * mscdex part ("Leaving")
15:19:40  * not-an-aardvark joined
15:52:01  * Guest78006 joined
15:52:17  * Guest78006 changed nick to mylesborins
15:52:57  <mylesborins>hey jbergstroem I'm still getting the nginx page from my connection
15:52:59  <mylesborins>for ci-release
15:53:11  <jbergstroem>what nginx page more specifically?
15:53:34  <jbergstroem>is it a 503?
15:54:12  <mylesborins>generic "Welcome to nginx!"
15:54:23  <mylesborins>I've also ssh'd into another box which also gets the same page
15:54:41  <mylesborins>when I curl'd "ci-release.nodejs.org"
15:54:42  <jbergstroem>can you has privmsg me your ip?
15:55:18  <jbergstroem> [19/Oct/2016:10:51:01 -0500] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0"
15:55:19  <jbergstroem>that's prob you
15:55:50  <mylesborins>sent
15:56:01  <mylesborins>I'm having some really odd stuff going on with this network
15:56:06  <mylesborins>although I don't think this is one of them
15:56:08  <mylesborins>brb
16:02:54  <jbergstroem>it's like you're not sending the correct host header
16:05:00  <mylesborins>weird
16:05:03  <mylesborins>how would that happen?
16:05:28  <jbergstroem>not sure
16:05:53  <jbergstroem>do you get redirected to gh for login?
16:06:03  <mylesborins>check the logs again
16:06:07  <mylesborins>I don't get any redirect
16:06:24  <jbergstroem>ah
16:06:29  <jbergstroem>http://ci-release.nodejs.org
16:06:38  <jbergstroem>doesn't seem to redirect at least with curl
16:06:45  <mylesborins>sigh
16:06:47  <mylesborins>lol
16:06:51  <mylesborins>works now
16:07:01  <jbergstroem>strange:
16:07:13  <jbergstroem>https://gist.github.com/jbergstroem/0c3255b6d1943d7cbaab738f187a8449
16:07:25  <jbergstroem>is it one of those things where *: suddenly starts to matter?
16:07:30  <jbergstroem>ah, server name :[
16:07:55  <jbergstroem>mylesborins: fixed.
16:08:10  <mylesborins>:D
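A minimal sketch of the check that separates the two behaviours, assuming the stock nginx default vhost is what serves the "Welcome to nginx!" page when no server_name matches (SERVER_IP is a placeholder, not an address from the log):

    # routed by hostname: should hit the ci-release vhost once server_name is set
    curl -sI http://ci-release.nodejs.org/ | head -n 1

    # same box, but with a Host header no vhost claims: falls back to the default page
    curl -sI -H 'Host: nonexistent.example' http://SERVER_IP/ | head -n 1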
16:32:15  <mylesborins>jbergstroem I'm also completely unable to look at test results on osx or bsd
16:32:17  <mylesborins>constant 504s
16:32:17  <mylesborins>:)
16:32:19  <mylesborins>:(
16:33:31  <mylesborins>actually not able to load the individual test jobs either
16:34:43  <jbergstroem>this is ci. and not ci-release?
16:34:52  <jbergstroem>probably because we have a million jobs waiting
16:36:19  <jbergstroem>jenkins can't really handle these scenarios i feel like :(
16:36:44  <mylesborins>:(
16:36:48  <mylesborins>just ci yeah
16:36:52  <mylesborins>trying to test the release
16:40:37  <mylesborins>there are only 5 jobs rn
16:40:37  <mylesborins>afaict
16:41:14  <mylesborins>none of which are even running on bsd
16:42:28  <mylesborins>and now it is working
16:42:29  <mylesborins>ಠ_ಠ
16:51:27  <jbergstroem>this is cray cray
16:51:31  <jbergstroem>i'm restarting jenkins
16:52:01  <jbergstroem>lel
16:52:07  <jbergstroem>above slave had ~120 node processes
16:53:04  <jbergstroem>64 bit slaves too
16:53:12  <jbergstroem> load average: 165.06, 169.07, 170.69
16:55:28  <jbergstroem>no_logfile_per_isolate something
16:55:32  <jbergstroem>seems to be the biggest contender
16:57:31  <jbergstroem>thealphanerd: https://gist.github.com/jbergstroem/689ca4bffa3f43f6b635a3725353a8ee
16:57:42  <jbergstroem>snip out of the job which seems to take 100% consistently on the freebsd and smartos machines
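For reference, a rough sketch of the per-worker checks behind numbers like those, run over ssh on the affected slave (USER is a placeholder for whatever account the jenkins agent runs as; not taken from the log):

    # how many node processes are still alive on this worker
    ps ax | grep -c '[n]ode'

    # current load averages, as pasted above
    uptime

    # once the run is confirmed dead, clear its leftover test processes (destructive)
    pkill -u USER node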
17:03:31  <Trott>Hmmm, that may have been me. Almost certainly.
17:05:13  <Trott>V8 tick-processor tests. I'm trying to improve reliability. Some changes totally worked locally for me, but I guess did terrible things on some machines in CI. :-|
17:05:14  <Trott>Sorry.
17:06:10  <mylesborins>when did that land? is it in v6?
17:06:15  <Trott>(Wondering now whether we should run those tests at all, as they are basically V8 tests and not really Node.js tests.)
17:06:28  <Trott>mylesborins: No, didn't land. I was doing a stress test to see if it fixed the flakiness on some platforms.
17:06:30  <mylesborins>I'm getting lots of flakes on bsd right now :(
17:06:34  <mylesborins>ahhhhh
17:07:20  <jbergstroem>wtf
17:07:26  <jbergstroem>i can't even remove the dead threads
17:07:28  <jbergstroem>i am so upset right now
17:07:47  <jbergstroem>pretty please don't start any more jobs
17:08:05  <Trott>Yeah, I'm not touching anything...
17:09:56  <jbergstroem>when restarting stuff doesn't even work you know you're in it to win it...
17:10:21  <Trott>Like, you rebooted a machine and it's still got process cruft hanging around?!?!?!
17:10:36  <jbergstroem>no i restarted jenkins to reset pending states
17:10:43  <jbergstroem>but nope threads won't die
17:11:27  <jbergstroem>it looks like multiple workers have gotten the same job many times
17:11:31  <jbergstroem>and it just dies at that stage
17:11:44  <jbergstroem>when were we moving to buildbot again? :'(
17:11:58  <Trott>Restart Jenkins again? For luck?
17:12:09  <jbergstroem>not going to help
17:12:14  <jbergstroem>these are serialized
17:17:49  <jbergstroem>ok almost done
17:19:30  <jbergstroem>back to zero
17:19:40  * mylesborins quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
17:19:42  <jbergstroem>i have to leave for an hour but hopefully it'll play nice while i'm gone
17:21:27  <Trott>Don't I already owe you a bottle of wine for something? This makes two, doesn't it?
18:03:06  * not-an-aardvark quit (Quit: Connection closed for inactivity)
18:15:01  * imyller joined
18:15:29  * captainplanet joined
18:15:53  * captainplanet changed nick to Guest79256
18:16:30  * Guest79256 changed nick to mylesborins
18:19:04  <mylesborins>everything back to normal?
18:29:35  * mylesborins quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
18:44:07  * evanlucas quit (Remote host closed the connection)
18:44:40  * gener1c joined
18:44:46  <gener1c>wonk
18:46:32  <Trott>I'm tempted to trigger a node-daily-master as a low-risk way to see if it's all working again or not, but I also feel like I've done enough damage for one day.
18:46:50  <Trott>thealphanerd: ^^^^^
18:47:22  * evanlucas joined
19:05:20  <jbergstroem>back; looks ok no?
19:12:57  * evanlucas quit (Remote host closed the connection)
19:16:03  * evanlucas joined
19:43:24  * chorrell quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
19:43:53  * chorrell joined
19:48:33  * chorrell quit (Ping timeout: 252 seconds)
20:58:12  <ofrobots>Is https://nodejs.org/dist available over rsync (to facilitate mirroring)?
21:24:50  <jbergstroem>ofrobots: yes
21:25:17  <jbergstroem>http://unencrypted.nodejs.org -- see https://github.com/nodejs/build/issues/55
21:27:42  <ofrobots>thanks!
21:29:23  <jbergstroem>strangely enough the cronjob isn't running
21:29:24  <jbergstroem>checking
21:31:42  <jbergstroem>missing key
21:33:47  <jbergstroem>yep, bringing up to date now
21:34:14  <jbergstroem>ofrobots: fyi, we've chosen to set an EOL at 2022 to encourage everyone to look for secure alternatives
21:37:31  * mylesborins joined
21:51:55  <ofrobots>jbergstroem: what are the recommended alternatives? rsync is convenient
21:51:58  * mylesborins quit (Remote host closed the connection)
21:52:09  <jbergstroem>ofrobots: don't know just yet
21:52:21  <jbergstroem>we just set a time for 2022 so we could re-evaluate.
21:52:26  <jbergstroem>ipfs is interesting
21:54:11  * mylesborins joined
22:01:19  * mylesborins quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
22:04:30  <jbergstroem>ofrobots: we're not official just yet, so if you get some kind of temporary error that's probably me finalizing the setup
22:05:33  <ofrobots>ack
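A minimal sketch of the kind of pull a mirror could run against that host; the rsync module and path layout here are assumptions, the linked build issue is the authoritative reference:

    # one-way sync of release artifacts into a local mirror directory
    rsync -avz --delete \
      rsync://unencrypted.nodejs.org/download/release/ \
      /srv/mirror/nodejs/release/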
22:10:22  * mylesborins joined
22:35:49  <rvagg>thealphanerd: having trouble signing shasums? what's the error you're getting?
22:38:11  <mylesborins>I got it working
22:38:14  <mylesborins>generating blog post right now
22:38:14  <mylesborins>the tool was failing, saying there was no signed release
22:38:16  <mylesborins>but on github it is showing as verified
22:38:17  <mylesborins>it was then failing and saying it wasn't signed with my key
22:38:26  <mylesborins>but when I skipped those checks it generated the correct shas
22:38:46  <mylesborins>git secure-tag was being very odd
22:38:46  <Trott>parallel/test-dgram-send-callback-buffer-length is failing a bunch on freebsd10-64 on joyent but not on digitalocean.
22:38:54  <rvagg>oh, not signed with your key
22:38:55  <rvagg>eeek
22:38:57  <mylesborins>and didn't request my passphrase
22:39:01  <mylesborins>it is signed with my key
22:39:03  <mylesborins>I verified
22:39:10  <mylesborins>and confirmed against previous releases that it is the same key
22:39:32  <mylesborins>before pushing anything I compared the signature with my past signatures
22:40:06  <mylesborins>and the github UI verified me as the signer
22:40:12  <rvagg>hm, yeah, tag looks good
22:40:20  <mylesborins>yeah
22:40:26  <mylesborins>everything is fine with the release
22:40:29  <rvagg>hopefully just a one-time glitch? you have a v4.x to test on soon eh?
22:40:33  <mylesborins>yup
22:40:46  <mylesborins>I'll do a dry run later this week on a personal repo
22:40:48  <rvagg>mylesborins: if you have the exact output from release.sh I'd like to see it
22:41:07  <mylesborins>the first time I got "Could not find signed tag for v6.9.1"
22:41:27  <mylesborins>then when I got that error I got "GPG key for v6.9.1 tag is not yours, cannot sign"
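For context, checking a signed release tag locally looks roughly like this (v6.9.1 is the tag named above; the exact checks release.sh performs aren't reproduced here):

    # verify the GPG signature on the tag and report the signing key
    git verify-tag v6.9.1

    # same check, with the tag object printed as well
    git tag -v v6.9.1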
22:41:34  <jbergstroem>Trott: can be trailing processes
22:41:42  <jbergstroem>Trott: perhaps have a look?
22:41:50  <rvagg>mylesborins: oh, I think I know what this is
22:42:01  <jbergstroem>a few dgram-related tests have been failing for a bit :/
22:42:19  <mylesborins>??
22:42:19  <Trott>I logged on and killed a leftover dgram test from a while ago, but it didn't solve the issue, it seems.
22:42:53  <jbergstroem>they come back
22:43:00  <jbergstroem>pretty much always
22:43:18  <Trott>Hmmm...although a stress test is now coming up clean, so maybe it did? Hmmmm....
22:43:28  <rvagg>mylesborins: it's this! https://github.com/nodejs/node/pull/8824
22:43:41  <rvagg>mylesborins: I released with Linux yesterday so it didn't bother me, we should have got that merged
22:44:05  <mylesborins>I'm on 10.10
22:44:08  <mylesborins>¯\_(ツ)_/¯
22:44:10  <mylesborins>lol
22:45:35  <mylesborins>ok blog post is up and release is done
22:46:05  <mylesborins>thanks for being around @rvagg
22:47:40  * mylesborins quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
22:49:53  * node-gh joined
22:49:53  * node-gh part
22:50:40  <jbergstroem>works too: http://unencrypted.nodejs.org/download/release/latest-boron/
22:53:27  <jbergstroem>mhdawson: ping
22:53:31  * imyller quit (Quit: My iMac has gone to sleep. ZZZzzz…)
22:54:59  * node-gh joined
22:54:59  * node-gh part
23:04:56  <Trott>When the console web page freezes like this, it usually means the host lost its connection to Jenkins? And stopping the test will "fix" it? /cc jbergstroem https://ci.nodejs.org/job/node-test-commit-freebsd/4908/nodes=freebsd10-64/console
23:05:22  <jbergstroem>Trott: i'm not completely sure what that issue is but i think it's related to connectivity during high load
23:05:29  <jbergstroem>you need to cancel the job :/
23:08:26  <jbergstroem>i've seen it happen on freebsd hosts, arm hosts, smartos hosts and linux hosts
23:08:43  <jbergstroem>would have preferred seeing it isolated to one os/host only :(
23:35:33  * not-an-aardvark joined
23:45:55  * mylesborins joined