last modification: Tuesday, 02-Mar-2021 09:30:18 CET

# lecture NMNV532 - exercises

## Exercises

#### Lecture 1 - introduction to python

```shell
ssh r3d3.karlin.mff.cuni.cz
python
```

- vector combination (axpy) and matrix-vector multiplication (gemv) operations
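
The two kernels above can be sketched in plain Python as follows (the names `axpy`, `gemv`, and the sample data are illustrative, not taken from the course code):

```python
def axpy(alpha, x, y):
    """Vector combination: returns alpha*x + y for lists of equal length."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def gemv(A, x):
    """Matrix-vector product A @ x for a matrix stored as a list of rows."""
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

x = [1.0, 2.0]
y = [3.0, 4.0]
A = [[1.0, 0.0],
     [0.0, 2.0]]
print(axpy(2.0, x, y))  # [5.0, 8.0]
print(gemv(A, x))       # [1.0, 4.0]
```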

#### Lecture 2 - introduction to parallel programming

- OpenMP example in C (in ~hron/GIT/nmnv532/lecture2/omp/)
- MPI example in C (in ~hron/GIT/nmnv532/lecture2/mpi/c/)
- MPI in Python using MPI4Py (in ~hron/GIT/nmnv532/lecture2/mpi/python/)

#### Lecture 3 - introduction to MPI4Py

- MPI4Py documentation
- MPI operations: send/recv, broadcast/reduce, scatter/gather
- dot product in parallel
- matrix-vector multiplication in parallel

#### Lecture 4 - parallel matrix operations in MPI4Py

- matrix distribution by rows/columns and by blocks (Cartesian communicator)
- Jacobi method in parallel
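
For reference, a serial sketch of the Jacobi sweep x_{k+1} = D^{-1}(b - (A - D) x_k); the parallel version distributes the rows of A and the matching entries of x, exchanging the iterate between ranks before each sweep (the small diagonally dominant test system below is an illustrative assumption):

```python
def jacobi(A, b, iters=100):
    """Jacobi iteration for A x = b, A given as a list of rows."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # Each component uses only the previous iterate -> trivially parallel.
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Diagonally dominant system with exact solution [1, 2].
A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [6.0, 7.0]
x = jacobi(A, b)
print(x)  # close to [1.0, 2.0]
```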

#### Lecture 5 - using PETSc4py

- distributed vector and matrix objects in PETSc
- KSP objects: linear solvers in PETSc

#### Lecture 6 - using Global Arrays library GA4py

- Global Arrays: an implementation of the Partitioned Global Address Space (PGAS) programming model
- vector and matrix objects using GA4py (in ~hron/GIT/nmnv532/lecture5)

## Final test tasks

- explore weak (i.e. fixed problem size per processor) and strong (i.e. fixed global problem size) scaling for dense matrix-vector multiplication in parallel using MPI4Py (lectures 3, 4) or GA4py (lecture 6)
- make a table or plot of scaling efficiency vs. the number of processors
- instead of the matrix-vector operation you may choose any other operation (vector-vector dot product, matrix-matrix product, etc.)
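
For the table or plot, the usual efficiency definitions can be computed as below (the timings `t1`, `tp` are illustrative placeholders for your measured wall times, not course data):

```python
def strong_efficiency(t1, tp, p):
    """Strong scaling: fixed global size; ideal run time on p processes is t1/p."""
    return t1 / (p * tp)

def weak_efficiency(t1, tp):
    """Weak scaling: fixed size per process; ideal run time stays equal to t1."""
    return t1 / tp

# E.g. 10 s on 1 process, 2.5 s on 5 processes (strong),
# or 12.5 s at 5x the global size on 5 processes (weak):
print(strong_efficiency(10.0, 2.5, 5))  # 0.8
print(weak_efficiency(10.0, 12.5))      # 0.8
```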