Siku
Siku is a high-performance computer cluster installed in 2019 at Memorial University in St. John's, Newfoundland.
It is funded in large part by the Atlantic Canada Opportunities Agency (ACOA) with the intention of generating regional economic benefits through industry engagement, while recognizing the important work that ACENET does for academic research in the region.
Siku is only accessible to selected clients. Industrial researchers should write to info@ace-net.ca. Principal Investigators of academic research groups may use this access request form.
Addresses
- Login node (ssh): siku.ace-net.ca
- Globus endpoint: computecanada#dtn.siku.ace-net.ca
- Data transfer node (rsync, scp, sftp, ...): dtn.siku.ace-net.ca
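For example, connecting and transferring files from a Linux or macOS workstation might look like the following; the username jsmith and the file names are placeholders, not real values:

    # Log in to Siku (replace jsmith with your own username)
    ssh jsmith@siku.ace-net.ca

    # Move data through the data transfer node rather than the login node
    rsync -avP results/ jsmith@dtn.siku.ace-net.ca:results/
    scp input.dat jsmith@dtn.siku.ace-net.ca: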
Authentication and authorization
If you have been granted access to Siku and you have a Compute Canada account, you can log in using your Compute Canada username and password.
We encourage you to use a passphrase-protected SSH key pair for regular access to Siku.
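A minimal sketch of setting up such a key from a Linux or macOS workstation follows; the key file name and the username jsmith are placeholders:

    # Generate an ED25519 key pair; enter a passphrase when prompted
    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_siku

    # Install the public key on Siku (uses your password once)
    ssh-copy-id -i ~/.ssh/id_ed25519_siku.pub jsmith@siku.ace-net.ca

    # Subsequent logins can then use the key
    ssh -i ~/.ssh/id_ed25519_siku jsmith@siku.ace-net.ca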
Known issues
- Multi-processing using libverbs is not working as expected. MPI implementations, however, should work.
- Directories are automatically created at first logon. This may produce a race condition that results in errors like the following:
    Could not chdir to home directory /home/username: No such file or directory
    /usr/bin/xauth: error in locking authority file /home/username/.Xauthority
    Lmod has detected the following error: Unable to load module because of error when evaluating modulefile: ...
Should this occur on first login, simply log out, wait a minute, and log back in again.
Similarities and differences with national GP clusters
Siku's design draws on experience gained with the Compute Canada systems Béluga, Cedar, Graham, and Niagara. Users familiar with those systems will find much that is familiar here.
- The filesystem is similarly structured. See Storage and file management.
- There is no "Nearline" archival filesystem.
- The same scheduler is used, Slurm, although with simpler policies. See "Job Scheduling", below.
- The same modules system provides access to the same list of available software.
Job scheduling
Tasks taking more than 10 CPU-minutes or 4 GB of RAM should not be run directly on a login node, but should instead be submitted to the job scheduler, Slurm.
Scheduling policies on Siku are simpler than those on Compute Canada general-purpose systems.
- Maximum run-time limit is 24 hours for unpaid accounts.
- Paid clients have higher priority than academic (free) clients, but with quarterly usage limited by contract. See Tracking paid accounts.
- Academic groups with equipment hosted at Siku typically run at the same priority as paid clients, but arrangements for individual groups may vary. Your Principal Investigator should have received specific information from our staff about how to submit jobs.
- GPUs should be requested following this example (a complete job-script sketch follows this list):
    #SBATCH --gres=gpu:v100:2
    #SBATCH --partition=all_gpus
- See "Node characteristics" below for the numbers of GPUs installed.
- Your Slurm account name is not necessarily the same as your account name on Compute Canada clusters. Most users on Siku have only one valid Slurm account and need not specify an --account parameter. If you see the message "Invalid account or account/partition combination specified", try submitting without --account or -A. If that fails to resolve the issue, you can check what accounts are valid for you with
    sacctmgr show associations where user=$USER format=account%20
  or Ask Support for help.
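Putting these pieces together, a minimal GPU job script might look like the sketch below. The job name, run time, CPU and memory amounts, module, and program name are placeholders rather than Siku-specific requirements; adjust them to your workload.

    #!/bin/bash
    #SBATCH --job-name=example            # placeholder name
    #SBATCH --time=0-12:00                # wall time; 24 h maximum for unpaid accounts
    #SBATCH --cpus-per-task=8             # placeholder CPU request
    #SBATCH --mem=32G                     # placeholder memory request
    #SBATCH --gres=gpu:v100:2             # two V100 GPUs, as in the example above
    #SBATCH --partition=all_gpus
    # Most Siku users do not need an --account line; see the note above.

    module load cuda                      # assumed module name; check "module avail"
    ./my_program                          # placeholder executable

Submit the script with sbatch, e.g. "sbatch gpu_job.sh".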
Storage quotas and filesystem characteristics
Filesystem | Default Quota | Backed up? | Purged? | Mounted on Compute Nodes? |
---|---|---|---|---|
Home Space | 52 GB and 512K files per user | Yes | No | Yes |
Scratch Space | 20 TB and 1M files per user | No | Not yet implemented | Yes |
Project Space | 1 TB and 512K files per group | Yes | No | Yes |
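On the national general-purpose clusters, current usage against such quotas can be summarized with the diskusage_report command. Since Siku's filesystem is structured like those systems (see above), the same command may be available here as well; this is an assumption, so Ask Support if it is not found.

    # Summarize home, scratch, and project usage (assumes the national-cluster
    # diskusage_report utility is also installed on Siku)
    diskusage_report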
Node characteristics
Nodes | Cores | Available memory | CPU | Storage | GPU |
---|---|---|---|---|---|
40 | 40 | 186G or 191000M | 2 x Intel Xeon Gold 6248 @ 2.5GHz | ~720G | - |
6 | 40 | 376G or 385024M | 2 x Intel Xeon Gold 6248 @ 2.5GHz | ~720G | - |
9 | 40 | 275G or 282000M | 2 x Intel Xeon Gold 6248 @ 2.5GHz | ~720G | - |
1 | 40 | 186G or 191000M | 2 x Intel Xeon Gold 6148 @ 2.4GHz | ~720G | 3 x NVIDIA Tesla V100 (32GB memory) |
1 | 40 | 186G or 191000M | 2 x Intel Xeon Gold 6148 @ 2.4GHz | ~720G | 2 x NVIDIA Tesla V100 (32GB memory) |
- "Available memory" is the amount of memory configured for use by Slurm jobs. Actual memory is slightly larger to allow for operating system overhead.
- "Storage" is node-local storage. Access it via the $SLURM_TMPDIR environment variable.
- Hyperthreading is turned off.
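A sketch of using node-local storage inside a job script; the file names and the executable are placeholders:

    # Stage input onto fast node-local storage
    cp $HOME/input.dat $SLURM_TMPDIR/
    cd $SLURM_TMPDIR

    ./my_program input.dat                # placeholder executable

    # Copy results back to a network filesystem before the job ends;
    # $SLURM_TMPDIR is cleaned up when the job finishes
    cp output.dat $HOME/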
Operating system: CentOS 7
SSH host keys
- ED25519 (256b)
SHA256:F9GcueU8cbB0PXnCG1hc4URmYYy/8JbnZTGo4xKflWU
MD5:44:2b:1d:40:31:60:1a:83:ae:1d:1a:20:eb:12:79:93
- RSA (2048b)
SHA256:cpx0+k52NUJOf8ucEGP3QnycnVkUxYeqJQMp9KOIFrQ
MD5:eb:44:dc:42:70:32:f7:61:c5:db:3a:5c:39:04:0e:91