01:35:06  * jenkins-monitor quit (Remote host closed the connection)
01:35:06  * jenkins-monitor1 quit (Remote host closed the connection)
01:35:15  * jenkins-monitor1 joined
01:35:16  * jenkins-monitor joined
03:57:42  <Trott>FreeBSD acting up again. Pondering most efficient way to find the culprit test (or if we just want to keep eliminating common.PORT from everything we can until the issue goes away). Anyway, can someone kill stray processes or reboot or whatever? cc jbergstroem
03:58:13  <jbergstroem>Trott: two running processes:
03:58:21  <Trott>Ooooh, what are they?
03:58:23  <jbergstroem>test/fixtures/clustered-server/app.js
03:58:33  <jbergstroem>test/parallel/test-debug-port-numbers.js
03:59:00  <Trott>Awesome, thanks!
03:59:00  <jbergstroem>..and this on the other machine: test/parallel/test-debug-port-numbers.js
03:59:44  <Trott>Well, that was certainly efficient. Thanks.
04:00:08  <jbergstroem>np
04:01:17  <jbergstroem>smartos; lets check
04:01:43  <jbergstroem>fixtures/clustered-server/app.js
04:01:58  <jbergstroem>test-debug-port-numbers.js
04:13:30  <Trott>Might as well check OS X since it just started acting up?
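The common.PORT remark above refers to the usual fix for these EADDRINUSE collisions: have each test bind to an ephemeral port instead of a shared constant, so parallel runs cannot fight over the same number. A minimal sketch in plain Node.js, illustrative only and not taken from any particular test:

    'use strict';
    const http = require('http');

    // Listening on port 0 asks the OS for any free port, so two tests running
    // at the same time can never collide on a hard-coded common.PORT value.
    const server = http.createServer((req, res) => res.end('ok'));
    server.listen(0, () => {
      const { port } = server.address();
      console.log(`listening on ${port}`);
      server.close();
    });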
04:26:41  <Trott>In CI, TEST_THREAD_ID is always 1, 2, 3, numbers like that? It's never, say, 10, 20, 30, 40?
04:28:07  <jbergstroem>rvagg: can you check ^
04:28:23  <jbergstroem>Trott: no, it's a number incremented from the number of jobs
04:28:35  <jbergstroem>oh, wait. thread might actually be different
04:28:37  <jbergstroem>let me check
04:29:35  <jbergstroem>no, it's low; but i recall the port being thread_id * 100 or something at some stage
04:30:25  <Trott>Still is, which is why I'm asking. Although now that I'm thinking more about it, I'm realizing my line of thinking may not be plausible. Harumph. As you were.
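Written out, the "thread_id * 100" scheme discussed above would look roughly like the following. The base port and variable names are assumptions for illustration only; this is not the actual test-runner logic:

    // Illustrative only: spacing port ranges per test thread so parallel
    // runners stay out of each other's way.
    const threadId = +process.env.TEST_THREAD_ID || 1; // set by the test runner
    const basePort = 12346;                            // assumed base, not verified
    const port = basePort + threadId * 100;
    console.log(`thread ${threadId} -> port ${port}`);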
04:34:11  <Trott>Looks like node-nodesource-raspbian-wheezy-pi1p-8-mininodes is also having an EADDRINUSE party...
04:34:55  <jbergstroem>well hasn't the last week been regression-land
04:35:42  <jbergstroem>no running processes
04:38:12  <Trott>Oh, thank heavens, though, it looks like it really is test-debug-port-numbers. Which landed two days ago.
04:39:32  <jbergstroem>sha?
04:47:16  <Trott>18fb4f9a912e775dde49e31bf749a9060b2c59e6
04:51:43  <Trott>I stress tested it on test-digitalocean-freebsd10-x64-1 to see if it was the problem, and it failed after 5 runs. So, uh, yeah, you might need to kill the process on that host.
04:52:36  <Trott>Also: iojs-voxer-osx1010-2
05:29:17  <jbergstroem>ok
05:41:23  <rvagg>checking osx
05:43:58  <rvagg>been too long since I last restarted the osx machines so they are all slow and basically locked up on the UI
05:44:03  <rvagg>it's like running Windows of old
06:14:29  <jbergstroem>Trott: will you create an issue?
06:19:37  * node-gh joined
06:19:38  * node-gh part
06:21:03  * node-gh joined
06:21:03  * node-gh part
06:22:23  <rvagg>Trott: osx machines don't have any lingering processes
06:35:32  <jbergstroem>Trott: The lies, summoning us like this! Don't anger the old gods.
07:35:56  * node-gh joined
07:35:56  * node-gh part
07:36:26  * node-gh joined
07:36:26  * node-gh part
10:07:07  * rvagg quit (Quit: Updating details, brb)
10:07:17  * rvagg joined
10:31:09  * node-gh joined
10:31:09  * node-gh part
10:54:53  * thealphanerd quit (Quit: farewell for now)
10:55:23  * thealphanerd joined
15:29:44  * node-gh joined
15:29:45  * node-gh part
18:53:51  * bnoordhuis joined
18:54:16  <bnoordhuis>hey all, getting 504 Gateway Time-out again. is someone rebooting the CI or is it down?
19:18:03  <jbergstroem>nothing intentional
19:18:03  <jbergstroem>front page or sub page?
19:39:55  <bnoordhuis>ah, it seems to be working again
19:40:08  <bnoordhuis>i was at https://ci.nodejs.org/job/node-test-commit/ btw
19:43:18  <bnoordhuis>i'm trying to be clever and run a change where `make run-ci` does `killall -9 node` in order to clean up stuck processes
19:44:02  <bnoordhuis>it remains to be seen how effective that is but... https://ci.nodejs.org/job/node-test-commit/3554/
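For context on why a blunt `killall -9 node` helps: a test that spawns a server child and then crashes (or is killed by the runner) before its cleanup runs leaves that child alive, holding its port until someone kills it by hand on the CI host. A hedged sketch of the failure mode, not actual Node.js test code:

    'use strict';
    const { spawn } = require('child_process');

    // Spawn a child that starts an HTTP server on a fixed port.
    const child = spawn(process.execPath, [
      '-e', 'require("http").createServer().listen(12346)'
    ]);

    // If an assertion throws here, or the parent is killed before this timer
    // fires, child.kill() never runs and the server keeps the port; that is
    // exactly the kind of stray process the run-ci cleanup is meant to sweep up.
    setTimeout(() => child.kill(), 1000);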
20:59:08  * bnoordhuis quit (Quit: leaving)
23:08:20  * rmg quit (Remote host closed the connection)