Siku

Siku is a high-performance computer cluster installed in 2019 at Memorial University in St. John's, Newfoundland.

It is funded in large part by the Atlantic Canada Opportunities Agency (ACOA) with the intention of generating regional economic benefits through industry engagement, while recognizing the important work that ACENET does for academic research in the region.

Siku is only accessible to selected clients. Industrial researchers should write to info@ace-net.ca. Principal Investigators of academic research groups may use this access request form.

Addresses

  • SSH: siku.ace-net.ca
  • Globus endpoint, data transfer node: computecanada#dtn.siku.ace-net.ca
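
To log in from a terminal, connect to the SSH address above; a minimal example (with username standing in for your own Siku user name) is:

ssh username@siku.ace-net.ca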

Known issues

  • Multi-Processing using libverbs is not working as expected. MPI implementations, however, should work.
  • Directories are automatically created at first logon. This may produce a race condition that results in errors like the following:
Could not chdir to home directory /home/username: No such file or directory
/usr/bin/xauth:  error in locking authority file /home/username/.Xauthority
Lmod has detected the following error:  Unable to load module because of error when  evaluating modulefile: ...

Should this occur on first login, simply log out, wait a minute, and log back in again.

Similarities and differences with national GP clusters

Siku is designed from experience gained with the Compute Canada systems Béluga, Cedar, Graham, and Niagara. Users of those systems will find much that is familiar here.

  • The filesystem is similarly structured. See Storage and file management.
    • There is no "Nearline" archival filesystem.
  • The same scheduler is used, Slurm, although with simpler policies. See "Job Scheduling", below.
  • The same modules system (Lmod) provides access to the same list of available software; see the example after this list.
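
Software is therefore loaded with environment-module commands just as on the national clusters. A minimal sketch follows; the package name and version (gcc/9.3.0) are assumptions, so check what is actually installed first:

module avail              # list modules available with the currently loaded compiler/MPI stack
module spider gcc         # search all available versions of a package
module load gcc/9.3.0     # load an assumed version; adjust to what "module spider" reports
module list               # show which modules are currently loaded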

Job scheduling

Tasks requiring more than 10 CPU-minutes or 4 GB of RAM should not be run directly on a login node, but should instead be submitted to the job scheduler, Slurm.

Scheduling policies on Siku are simpler than those on Compute Canada general-purpose systems.

  • The maximum run time is 24 hours for unpaid accounts and 72 hours for paid accounts.
  • There is only one partition.
  • Paid clients have higher priority than academic (free) clients, but with usage limited by contract. See Tracking paid accounts.
  • GPUs should be requested like so (the count may be 1 or 2; a complete job-script sketch follows this list):
#SBATCH --gres=gpu:v100:2
#SBATCH --partition=all_gpus
  • Your Slurm account name is not necessarily the same as your account name on Compute Canada clusters. If you see the message "Invalid account or account/partition combination specified", try submitting without the --account or -A parameter.
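
Putting those pieces together, a minimal GPU batch script might look like the following sketch. The job name, CPU and memory amounts, and the program name my_gpu_program are placeholders; the 24-hour limit is the unpaid-account maximum noted above.

#!/bin/bash
#SBATCH --job-name=gpu-test        # placeholder job name
#SBATCH --time=24:00:00            # maximum run time for unpaid accounts
#SBATCH --gres=gpu:v100:1          # request one V100 GPU
#SBATCH --partition=all_gpus       # the GPU partition named above
#SBATCH --cpus-per-task=10         # assumed CPU count; adjust for your program
#SBATCH --mem=40G                  # assumed memory request

./my_gpu_program                   # placeholder for the actual application

Submit the script with "sbatch jobscript.sh" and check its state with "squeue -u $USER".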

Storage quotas and filesystem characteristics

Filesystem    | Default quota                 | Backed up? | Purged?             | Mounted on compute nodes?
Home space    | 52 GB and 512K files per user | Yes        | No                  | Yes
Scratch space | 20 TB and 1M files per user   | No         | Not yet implemented | Yes
Project space | 1 TB and 512K files per group | Yes        | No                  | Yes

Node Characteristics

Nodes | Cores | Available memory | CPU                                | Storage | GPU
40    | 40    | 186G or 191000M  | 2 x Intel Xeon Gold 6248 @ 2.5GHz  | ~720G   | -
6     | 40    | 376G or 385024M  | 2 x Intel Xeon Gold 6248 @ 2.5GHz  | ~720G   | -
2     | 40    | 186G or 191000M  | 2 x Intel Xeon Gold 6148 @ 2.4GHz  | ~720G   | 2 x NVIDIA Tesla V100 (32 GB memory)
  • "Available memory" is the amount of memory configured for use by Slurm jobs. Actual memory is slightly larger to allow for operating system overhead.
  • "Storage" is node-local storage. Access it via the $SLURM_TMPDIR environment variable.
  • Hyperthreading is turned off.
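
Node-local storage is useful as fast scratch space for I/O-intensive jobs. A minimal sketch, in which input.dat, results.out, and my_program are placeholders:

#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH --mem=4G

cp $SLURM_SUBMIT_DIR/input.dat $SLURM_TMPDIR/    # stage input onto node-local storage
cd $SLURM_TMPDIR
$SLURM_SUBMIT_DIR/my_program input.dat           # run against the fast local copy
cp results.out $SLURM_SUBMIT_DIR/                # copy results back before the job ends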

Operating system: CentOS Linux release 7

SSH host keys

ED25519 (256b)
SHA256:F9GcueU8cbB0PXnCG1hc4URmYYy/8JbnZTGo4xKflWU
MD5:44:2b:1d:40:31:60:1a:83:ae:1d:1a:20:eb:12:79:93
RSA (2048b)
SHA256:cpx0+k52NUJOf8ucEGP3QnycnVkUxYeqJQMp9KOIFrQ
MD5:eb:44:dc:42:70:32:f7:61:c5:db:3a:5c:39:04:0e:91
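
To compare these fingerprints against what the server actually offers before trusting a first connection, you can query the host keys from your own machine; one way, using standard OpenSSH tools, is:

ssh-keyscan siku.ace-net.ca 2>/dev/null | ssh-keygen -lf -           # SHA256 fingerprints of the offered keys
ssh-keyscan siku.ace-net.ca 2>/dev/null | ssh-keygen -E md5 -lf -    # the same keys as MD5 fingerprints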