Cluster Status

This page is maintained manually. It gets updated as soon as we learn new information.

Clusters

Please click on the name of a cluster in the table below to jump to the corresponding section of this page. The Outage schedule section lists all scheduled outages in one place.

Cluster    Status  Planned Outage           Notes
Mahone     Online  No outages
Placentia  Online  Thu Nov 23 - Mon Nov 27
Fundy      Online  No outages
Glooscap   Online  No outages

Services

Service                           Status  Planned Outage  Notes
WebMO                             Online  Date to come    We are experiencing problems submitting WebMO jobs
Account creation                  Online  No outages
PGI and Intel licenses            Online  No outages
Videoconferencing (IOCOM Server)  Online  No outages
Legend:
  • Online - the cluster or service is up and running
  • Offline - no users can log in or submit jobs, or the service is not working
  • Online (with issues) - only some users can log in and/or there are problems affecting your work

Outage schedule

Grid Engine will not schedule any job with a run time (h_rt) that extends into the beginning of a planned outage period. This is so the job will not be terminated prematurely when the system goes down.
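
For example, assuming the usual Grid Engine submission syntax (the script name and run-time values below are illustrative only), a request that would overlap a planned outage simply waits in the queue until the outage is over:

  # 48-hour request: will not be started if those 48 hours would run
  # into the beginning of a planned outage period.
  qsub -l h_rt=48:00:00 myjob.sh

  # A shorter request that finishes before the outage can still be scheduled.
  qsub -l h_rt=06:00:00 myjob.sh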

  • After November 15, groups that have not registered their "Transition Ready" status will be blocked from submitting new jobs on Mahone. See New Systems Migration for more information.
  • Placentia will be offline from 12h00 (NST) Thursday November 23 until Monday November 27 due to a planned power outage at Memorial University.

Mahone

  • Mahone has been returned to service.
15:47, November 8, 2017 (AST)
  • An unplanned overnight power outage at Saint Mary's University (SMU) has caused all nodes, including the storage system, to crash. The sysadmins are in the process of powering everything up again and assessing any damage.
08:56, November 7, 2017 (AST)

Placentia

  • The cooling problems in the Placentia machine room have been resolved and the associated outage ended without loss of jobs.
12:01, October 23, 2017 (ADT)

Fundy

  • No recent issues

Glooscap

  • The metadata server hung overnight on March 7-8. It was rebooted this morning and Glooscap is operating once again, although technical staff continue to be cautious about its future behaviour. To try to alleviate the load on the metadata server, we are withdrawing compute nodes cl002 through cl058 from service. This represents a reduction of 188 cores in the cluster's capacity.
11:24, March 8, 2017 (AST)