01:06:17  * chorrell joined
01:10:49  * Fishrock123 quit (Remote host closed the connection)
01:52:01  * chorrell quit (Quit: My Mac has gone to sleep. ZZZzzz…)
01:57:33  * Fishrock123 joined
02:40:28  * Fishrock123 quit (Remote host closed the connection)
02:50:02  * Fishrock123 joined
02:52:27  * Fishrock123 quit (Remote host closed the connection)
03:00:19  * thealphanerd quit (Ping timeout: 250 seconds)
03:07:08  * thealphanerd joined
03:08:52  <Trott>I'm dealing with a small N here because clicking through the Jenkins interface can be wearing, but it seems like node-nodesource-raspbian-wheezy-pi2-11-svincent and node-nodesource-raspbian-wheezy-pi2-10-svincent are failing catastrophically in CI, resulting in pi2-raspbian-wheezy being almost never green today. /cc rvagg
03:09:30  <rvagg>Trott: k, got a link showing an example?
03:14:53  * thealphanerd quit (Ping timeout: 268 seconds)
03:18:22  <Trott>https://ci.nodejs.org/job/node-test-binary-arm/1602/RUN_SUBSET=0,nodes=pi2-raspbian-wheezy/console
03:18:42  <Trott>Clearer one, maybe: https://ci.nodejs.org/job/node-test-binary-arm/1602/RUN_SUBSET=1,nodes=pi2-raspbian-wheezy/console
03:19:05  <Trott>And this: https://ci.nodejs.org/job/node-test-binary-arm/1602/RUN_SUBSET=3,nodes=pi2-raspbian-wheezy/console
03:19:09  * thealphanerd joined
03:20:04  <Trott>It's not just those two hosts, lots of stuff seems to have exploded today on pi2, but those two seem to be blowing up more than the others.
03:21:55  <rvagg>ah, git related, I'll update their git caches
03:25:33  <rvagg>I'll also clean workspaces when the current runs are done
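
For context on what "update their git caches" and "clean workspaces" involve on a worker, a rough shell sketch follows; the cache and workspace paths are hypothetical, and the real locations differ per host:

    # Hypothetical locations; the actual paths on the Pi hosts differ.
    GIT_CACHE=/home/iojs/git-cache/node.git
    WORKSPACE=/home/iojs/build/workspace/node-test-binary-arm

    git -C "$GIT_CACHE" fetch --all --prune   # refresh the local reference repo
    rm -rf "$WORKSPACE"                       # wipe the stale workspace once the current run finishes
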
03:26:18  <jbergstroem>we should start doing that on success through jenkins
03:26:28  <jbergstroem>add a post job that runs make clean
03:26:39  <jbergstroem>(on success)
03:33:03  <Trott>Can you post something here when you've cleared the caches? I'm trying to get one particular CI to green and I won't try again until at least that happens....
03:53:40  * Fishrock123 joined
03:58:22  * Fishrock123 quit (Ping timeout: 248 seconds)
04:21:27  * Fishrock123 joined
04:22:10  * Fishrock123 quit (Remote host closed the connection)
04:45:37  <rvagg>Trott: should be all cleared and good to go .. cross fingers
04:50:09  <Trott>Here we go: https://ci.nodejs.org/job/node-test-binary-arm/1629/
04:50:31  <Trott>(Meaning: "OK, let's see how this goes..." Not meaning: "Here we go again, problem is still there.")
04:55:28  <thealphanerd>Trott are you programming at wafflejs again
04:55:28  <thealphanerd>:D
04:55:50  <Trott>I *was*. Now I'm home in bed running CI.
04:56:12  <Trott>How's the Great White North?
04:59:20  <thealphanerd>:P
05:16:19  * rmg quit (Remote host closed the connection)
05:22:30  <Trott>rvagg: Oof, well now all the Pi 2 devices failed... https://ci.nodejs.org/job/node-test-binary-arm/1629/RUN_SUBSET=0,nodes=pi2-raspbian-wheezy/console
05:43:27  <joaocgreis>jbergstroem: the Windows jobs have been running git clean on success for some months now, AFAIK working perfectly
05:43:39  <jbergstroem>what does git clean do?
05:44:22  <joaocgreis>removes all generated files
05:45:43  <joaocgreis>the compile jobs created huge lib files, we had to clean the workspaces too often
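
For reference, the sort of invocation this refers to (the exact flags the Windows jobs use aren't shown in this log, so treat these as the common choice):

    git clean -xdf   # -x: also remove ignored files (i.e. build output),
                     # -d: recurse into untracked directories,
                     # -f: git refuses to delete anything without --force
    git clean -xdn   # same selection as a dry run, to preview what would go
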
07:58:25  * targos joined
08:10:20  * sgimeno joined
09:54:26  <jbergstroem>jbergstroem: ok
09:54:29  <jbergstroem>joaocgreis: even
10:18:55  * rmg joined
10:24:09  * rmg quit (Ping timeout: 276 seconds)
10:27:57  * thealphanerd quit (Quit: farewell for now)
10:28:28  * thealphanerd joined
11:42:10  * chorrell joined
12:17:58  * chorrell quit (Quit: My Mac has gone to sleep. ZZZzzz…)
12:34:18  * chorrell joined
14:00:51  * chorrell quit (Quit: My Mac has gone to sleep. ZZZzzz…)
14:09:55  * Fishrock123 joined
14:11:50  <Trott>Still having CI problems galore with pi2-raspbian-wheezy. https://ci.nodejs.org/job/node-test-binary-arm/1632/RUN_SUBSET=addons,nodes=pi2-raspbian-wheezy/ https://ci.nodejs.org/job/node-test-binary-arm/1632/RUN_SUBSET=4,nodes=pi2-raspbian-wheezy/console https://ci.nodejs.org/job/node-test-binary-arm/1632/RUN_SUBSET=3,nodes=pi2-raspbian-wheezy/console
14:12:15  * chorrell joined
14:19:39  * rmg joined
14:24:57  <Trott>Did we add a bunch of pi2 devices relatively recently? This may be me imagining things, but it *seems* that we increased the number of devices and the tests all started getting flaky again.
14:26:08  <Trott>Which doesn't make sense unless they're running some place where two dozen machines could result in ISP throttling or something. So, I'm probably wasting everyone's time even thinking about this. Which is good because I have to run off.
14:28:26  <Trott>(Looking at Jenkins, the pi2 problems are very recent. Jobs were passing reliably until 9:35AM April 5. Last successful job before problems: https://ci.nodejs.org/job/node-test-binary-arm/1598/
14:30:54  <Trott>Actually a job or so after that is still fine. The real trouble seems to start around 11:40AM April 5. https://ci.nodejs.org/job/node-test-binary-arm/1602/ seems to be the start of "OMG, why are the pi2 devices failing all over the place??!!"
14:35:51  <Trott>OK, actually running off now for an hour or two. Guess I should throw in a gratuitous rvagg here so he gets notified and sees the above in the interim. Oh, except that I think it's the middle of the night in Australia right now. Oh well, maybe he's an insomniac.
14:39:47  * chorrell quit (Quit: Textual IRC Client: www.textualapp.com)
16:17:19  * node-gh joined
16:17:19  * node-gh part
16:40:40  * sgimeno quit (Quit: Leaving)
16:58:45  <joaocgreis>Trott: "they're running some place where two dozen machines could result in ISP throttling or something" that's probably it. They're in Rod's office, all connected to the same not too fast (adsl?) connection
16:59:29  <Trott>Interesting. It might be worth seeing what happens to reliability if we take half or more of them offline for an hour.
17:00:24  <joaocgreis>rvagg: the reference repo of the test-binary-arm job was wrong, I corrected it, let's see if it helps: https://ci.nodejs.org/job/node-test-binary-arm/1646/
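
"Reference repo" here is git's --reference mechanism: the binary-arm job clones against a local cache so that only missing objects have to cross the Pis' slow uplink. Roughly (the cache path below is a placeholder):

    # Borrow objects from a local cache repo instead of fetching the
    # full history over the slow link; the cache path is a placeholder.
    git clone --reference /path/to/git-cache/node.git \
        https://github.com/nodejs/node.git node
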
17:12:05  * Fishrock123 quit (Remote host closed the connection)
17:13:32  <joaocgreis>Trott: EADDRINUSE failures... have those also been happening, or is this an isolated, unrelated thing?
17:14:39  <Trott>Those seemed to come up at the same time.
17:14:48  <Trott>I don't *know* that, but they *seem* to have.
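
EADDRINUSE means a test tried to listen on a port still held by a leftover process from an earlier run; a quick way to check on a host looks like this (the port number is only a placeholder):

    sudo lsof -i :12345               # which process, if any, still owns the port
    sudo netstat -tlnp | grep 12345   # alternative if lsof isn't installed
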
17:15:53  <rvagg>urgh
17:16:45  <rvagg>they might all need a reboot
17:17:29  <rvagg>will have to look in the morning, too hard right now, but Trott, if you come to any more conclusions wrt what might be wrong, please drop thoughts in here and I'll try to address what I can when I wake up
18:53:31  * Fishrock123 joined
20:57:19  * jenkins-monitor joined
20:58:17  * jenkins-monitor1 joined
20:58:23  * jenkins-monitor quit (Remote host closed the connection)
20:58:32  * jenkins-monitor joined
22:20:54  <Trott>Pi builds are going better.
22:43:01  * node-gh joined
22:43:02  * node-gh part
22:44:13  * node-gh joined
22:44:13  * node-gh part
22:44:37  * node-gh joined
22:44:38  * node-gh part
22:52:59  * Fishrock123 quit (Read error: Connection reset by peer)
22:53:47  * Fishrock123 joined
22:59:25  * node-gh joined
22:59:25  * node-gh part
23:00:00  * node-gh joined
23:00:00  * node-gh part
23:04:58  * node-gh joined
23:04:59  * node-gh part
23:09:41  * node-gh joined
23:09:42  * node-gh part
23:38:28  * node-gh joined
23:38:28  * node-gh part