CS 494
Parallel and Distributed Processing
Spring 2021
Meets: MW 4:30pm - 5:45pm
Room: online sync
Call number: 42280 (ugrad) and 42281 (grad)
Instructor:
Ajay Kshemkalyani
Office hours: W 5:45-6:30pm, via BB Collaborate
Email: ajay@uic.edu
Course Description:
This class explores the foundations of parallel and distributed computing. Emphasis is placed on parallel and distributed algorithms, covering topics ranging from graph algorithms to mutual exclusion, deadlock detection, and predicate detection, as well as the design of communication algorithms for elementary primitives such as multicast, broadcast, scatter, gather, and all-to-all personalized communication on distributed systems and clusters based on specific topologies. Basic concepts such as the scalability of a system (i.e., the combination of architecture and algorithm) will also be explored. As a case study, the Message-Passing Interface (MPI) will be introduced, and students will program using MPI on UIC's Extreme cluster. Ultimately, the goal is to foster an understanding of the various ways in which concurrent actions and the lack of global knowledge in a system affect the design of parallel and distributed algorithms.
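For a concrete (and purely illustrative) sense of the MPI programming involved, the following minimal C sketch shows the basic structure of an MPI program; it is not a course assignment and assumes only the standard MPI library.

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal MPI sketch: each process reports its rank.
       Typically compiled with mpicc and launched with mpiexec -n <p>. */
    int main(int argc, char *argv[]) {
        int rank, size;
        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* id of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                        /* shut down MPI */
        return 0;
    }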
List of topics:
- Basics of cluster design (1 week)
- Algorithms for implementing basic communication primitives on topologies (2 weeks); a short MPI collective-communication sketch appears after this list
- Introduction to MPI (1 week)
- Scalability analysis of parallel systems (1 week)
- Distributed graph algorithms (for traversals, collaborative exploration, routing, MIS, synchronizers, etc.) (3 weeks)
- Mutual exclusion algorithms, deadlock detection algorithms, termination
detection algorithms, leader election algorithms (3 weeks)
- Predicate detection algorithms (1 week)
- Total order and causal order multicast (1 week)
- Search algorithms and dynamic load balancing for discrete optimization (1.5 weeks)
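As a taste of the communication-primitives topic above, here is a hedged sketch (the root value 100 and the sum pattern are illustrative assumptions, not course material) of a one-to-all broadcast followed by an all-to-one reduction using standard MPI collectives.

    #include <mpi.h>
    #include <stdio.h>

    /* Illustrative collectives: the root broadcasts a value, each process
       adds its rank, and the results are summed back at the root. */
    int main(int argc, char *argv[]) {
        int rank, size, value = 0, local, sum = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) value = 100;                        /* arbitrary example value */
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);  /* one-to-all broadcast */

        local = value + rank;                              /* per-process work */
        MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);  /* all-to-one reduction */

        if (rank == 0) printf("sum across %d processes = %d\n", size, sum);
        MPI_Finalize();
        return 0;
    }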
Prerequisites:
Programming skills sufficient to program the UIC Extreme cluster, in a language such as C, are required. CS 401 and CS 450 are recommended but not required.
Grading:
Grading will be on a curve.
The following is a tentative breakdown of the evaluation scheme; it is subject to (reasonable) changes as the course progresses and may also depend on the class size.
- Midterm 1 (25%)
- Midterm 2 (25%)
- Programming assignments (30%)
- Term paper (20%)
Required Textbooks/Sources:
- Introduction to Parallel Computing, by V. Kumar, A. Grama, A. Gupta, G. Karypis, Benjamin Cummings, 2nd edition, 2003.
- Distributed Computing: Principles, Algorithms, and Systems, by A. Kshemkalyani and M. Singhal, Cambridge University Press, March 2011 edition.
Auxiliary resources:
- An Overview Chart
- Winners of the Dijkstra Award for Most Influential Paper in Distributed Computing, 2000-.
- The best and fastest supercomputers (TOP500); study this site, including the report on HPC benchmarks.
- UIC's EXTREME cluster, with a knowledge base (how to log in, run jobs, FAQs, etc.) and slides from ACER
- MPI tutorial at Argonne National Lab
- Interconnection Networks
- Amdahl's Law (a statement of the law appears after this list)
- The Travelling Salesman Problem
- Overhead slides used in class.
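For reference, Amdahl's Law (linked above) bounds the speedup achievable when only a fraction p of a program's work can be parallelized across N processors:

    S(N) = \frac{1}{(1 - p) + p/N}

For example, with p = 0.9 the speedup can never exceed 1/(1 - 0.9) = 10, no matter how many processors are used.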
Course progress chart (tentative; PP = the Parallel Computing text, DP = the Distributed Computing text):
- Week 1: Introduction; processors; classification; platforms (Ch 1 + TOP500 + extra notes); Chapter 1 (DP)
- Week 2: Introduction
- Week 3: Models: (Ch 2-PP)
- Week 4: Basic Communication operations: (Ch 4-PP)
- Week 5: Basic Communication operations: (Ch 4-PP),
Lab 1 due March 8 at 11:59pm
- Week 6: Principles of Designing Parallel Algorithms: use Professor George Karypis's PDF lecture slides ("GK lecture slides") for Chapter 3, Principles of Parallel Algorithms, along with the slides for the Gaussian Elimination and LU Decomposition examples
- Week 7: Performance and Scalability: Chapter 5 (revised slides)
- Week 8: Terminology and Basic Algorithms (Chapter 5-DP)
- Week 9: Chapter 5-DP, Lab 2 due April 5 at 11:59pm
- Week 10: Chapter 5-DP
- Week 11: Chapter 5-DP
- Week 12: Chapter 2-DP, Lab 3 due May 4 at 11:59pm
- Week 13: Chapter 3-DP
- Week 14: Chapter 4-DP
- Week 15: Test on Wed, Apr 28, 4:30-6:30pm, on Gradescope
Syllabus: Chapter 1, Chapter 2, Chapter 3.1-3.5.1, Chapter 4.1-4.3, and Chapter 5, all from the Distributed Processing book.