Parallel computing is the business of breaking a large problem into tens, hundreds, or even thousands of smaller problems that can then be solved at the same time on a cluster of computers or a supercomputer. It can reduce processing time to a fraction of what it would be on a desktop or workstation, or enable you to tackle larger, more complex problems. It’s widely used in big data mining, AI, time-critical simulations, and advanced graphics such as augmented or virtual reality, in fields as diverse as genetics, biotech, GIS, computational fluid dynamics, medical imaging, drug discovery, and agriculture.
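As a minimal illustration of the idea (a sketch, not course material), this Python snippet splits one large sum into independent chunks and hands them to a pool of worker processes; the chunk size and worker count here are arbitrary choices for the example:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in [start, stop) -- one small piece of the big problem."""
    start, stop = bounds
    return sum(range(start, stop))

if __name__ == "__main__":
    n = 10_000_000
    # Break the range [0, n) into chunks of one million integers each.
    chunks = [(i, min(i + 1_000_000, n)) for i in range(0, n, 1_000_000)]
    # Solve the chunks at the same time, then combine the partial results.
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total == sum(range(n)))  # prints True: same answer, computed in parallel
```

The same divide-combine pattern scales from a few cores on a laptop to thousands of cores on a cluster, which is what tools like Dask and MPI (covered in this course) are built for.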
The format is one day per week with two 2-hour sessions over five weeks. Over the course of 10 sessions we cover general parallel computing, Dask, OpenMP, GPU accelerator programming, and Message Passing Interface (MPI).
- Tuesday, May 16, 9:30 - 11:30 am & 1:00 - 3:00 pm
- Tuesday, May 23, 9:30 - 11:30 am & 1:00 - 3:00 pm
- Tuesday, May 30, 9:30 - 11:30 am & 1:00 - 3:00 pm
- Tuesday, June 6, 9:30 - 11:30 am & 1:00 - 3:00 pm
- Tuesday, June 13, 9:30 - 11:30 am & 1:00 - 3:00 pm
All times above are in Atlantic time (UTC-3:00).
Each two-hour session includes lectures and learning exercises. Online office hours are available each week so participants can get extra help.
The course is aimed at researchers and innovators, both academic and industrial. It is designed for participants who are familiar with the Linux command line and have some programming experience. Completion of the ACENET Basics Series (introduction to HPC, Linux, shell scripting, and the Slurm job scheduler) or equivalent experience is strongly recommended.
Participants must have a computer running a Mac, Linux, or Windows operating system (not a tablet, Chromebook, etc.). As with any online course, a headset and a second monitor will be of benefit.
Participants must register using their institutional/organizational email address (not a personal email address, e.g. Gmail).
Instructions on how to connect will be provided prior to the event.