Installed in late 2019, Siku (meaning “sea ice” in Inuktitut) was funded in large part by the Atlantic Canada Opportunities Agency (ACOA) with the intention of generating regional economic benefits through industry engagement, while recognizing the important work that ACENET does for academic research in the region. The system is intended to operate in a financially self-sustaining manner.
The priority on Siku is paid industry usage, with unpaid academic use welcome on unused capacity. (Academic researchers may also access the pay-for-use model if they require the same priority as industry on Siku.)
Siku is not intended to replace academic researchers' use of the national infrastructure, but rather to accommodate needs and priorities that cannot be well met by those systems. Academic researchers can request access to Siku via a lightweight application form used to determine whether the system is the right fit for their needs.
Access will be reviewed annually in April.
Located at Memorial University, Siku is a 2,000-core computing cluster with a focus on industry engagement and regional research priorities. It incorporates Intel Cascade Lake CPUs, a high-throughput, low-latency EDR InfiniBand interconnect, AI-capable NVIDIA Tesla V100 GPUs, a 1.5 PB parallel filesystem, and tape backup, and it offers both batch and cloud-computing interfaces.
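Since Siku offers a batch interface, and ACENET/Compute Canada clusters schedule batch jobs with Slurm, a minimal job script might look like the sketch below. The account name, resource requests, and module names are placeholders for illustration, not actual Siku values; consult the cluster documentation for the real ones.

```shell
#!/bin/bash
# Minimal Slurm batch script sketch; account and module names are hypothetical.
#SBATCH --account=def-someuser     # replace with your allocation account
#SBATCH --job-name=example
#SBATCH --time=01:00:00            # walltime limit (HH:MM:SS)
#SBATCH --ntasks=4                 # number of MPI tasks
#SBATCH --mem-per-cpu=2G           # memory per CPU core

module load gcc openmpi            # hypothetical module names
srun ./my_program                  # launch the program under the scheduler
```

The script would be submitted with `sbatch job.sh` and monitored with `squeue -u $USER`.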
These national systems, installed between 2016 and 2018, use cutting-edge technology and are available at no charge to researchers at post-secondary institutions.
Located at the University of Victoria, Arbutus is an OpenStack cloud, with emphasis on hosting virtual machines and other cloud workloads. The system, provided by Lenovo, has 6,944 CPU cores across 248 nodes, each with on-node storage and 10 Gb networking. It accesses 1.6 PB of persistent storage, primarily via Ceph in a triple-redundant configuration.
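As an OpenStack cloud, Arbutus is typically driven through the standard OpenStack command-line client. A hedged sketch of launching a virtual machine follows; the flavor, image, network, key, and IP values are placeholders, not actual Arbutus resources.

```shell
# Sketch of launching a VM with the OpenStack CLI; all names are hypothetical.
# Credentials come from an OpenStack RC file downloaded from the dashboard.
source openrc.sh

# Boot a server from an image onto a tenant network.
openstack server create \
  --flavor p2-3gb \
  --image Ubuntu-22.04 \
  --network my-tenant-net \
  --key-name my-key \
  my-instance

# Attach a floating IP so the VM is reachable from outside the cloud.
openstack server add floating ip my-instance 203.0.113.10
```

The same workflow is available through the OpenStack Horizon web dashboard for users who prefer not to use the CLI.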
Located at Simon Fraser University, the Cedar system is a heterogeneous cluster, suitable for a variety of workloads. With over 3.6 petaFLOPS of computing power, Cedar has greater computational power than the entire fleet of Compute Canada’s aging legacy systems combined.
Located at the University of Waterloo, Graham is a heterogeneous cluster, suitable for a variety of workloads. It has a small OpenStack partition and includes local storage on its nodes. Specifications include over 20,000 CPU cores across a diverse set of node types, including GPU nodes. The Graham system is entirely liquid-cooled, using rear-door heat exchangers.
Located at the University of Toronto, Niagara is an end-to-end Lenovo solution with 1,500 ultra-dense ThinkSystem SD530 compute nodes, providing more than three petaflops of processing power, supported by 12 petabytes of storage. Mellanox EDR InfiniBand is used to create an industry-first Dragonfly+ network topology featuring adaptive routing to provide the high-speed, low-latency communications necessary for large-scale full-system simulations. Burst-buffer technology from Excelero helps improve performance for data-intensive workloads. The system leverages Lenovo Ethernet for cluster management.