My courses, my interests



Most of my courses target efficient computation in computational linear algebra. Solving large systems of linear equations is often a focal point of modern computational science. The mathematical formulation of this task is inseparably connected not only to numerical analysis and numerical algorithms, but also to the theoretical background and algorithms of computer science. While the central object in this field is the matrix, in order to solve problems efficiently one needs to consider matrices not as black boxes, but to look into their structure. Roughly speaking, we need to exploit matrix sparsity, as indicated in the following figures.
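As a minimal sketch of why exploiting sparsity matters, the following Python snippet (the matrix size and density are illustrative assumptions, not taken from the courses) compares dense storage of a matrix with the compressed sparse row (CSR) format, which stores only the nonzero entries plus index arrays:

```python
import numpy as np
from scipy.sparse import random as sparse_random

# A hypothetical 10000 x 10000 matrix with roughly 0.1% nonzero entries.
n = 10_000
A = sparse_random(n, n, density=0.001, format="csr", random_state=0)

# Dense storage would hold every entry in double precision.
dense_bytes = n * n * 8

# CSR stores only the nonzero values plus column indices and row pointers.
sparse_bytes = A.data.nbytes + A.indices.nbytes + A.indptr.nbytes

print(f"nonzeros: {A.nnz}")
print(f"dense storage:      {dense_bytes / 1e6:.1f} MB")
print(f"sparse CSR storage: {sparse_bytes / 1e6:.2f} MB")
```

For this density the sparse format is smaller by several orders of magnitude, and sparse algorithms can likewise skip the arithmetic on zero entries.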

Once we try to do this, we enter a completely new world. In order to understand it, we need not only concepts of graph theory and other tools of computer science, but also at least a rough understanding of developments and trends in computer architectures. The two courses I am involved in are about this.

Overall, significantly more emphasis is devoted to algorithms. In order to construct new computational algorithms, we need understanding, and the main stress is put exactly on gaining that understanding, in the belief that students have obtained a sufficient theoretical background in other courses.

The first of my courses is devoted to direct methods for solving large and sparse systems of linear algebraic equations. It can be considered an introduction showing the role of classical, graph-based sparsity. The second course targets parallel matrix-based computations. This course is shared with my great colleague Jaroslav Hron. While Jaroslav shows how to do real parallel computations (based on coding in Python in a Unix-based environment), my focus is to show that parallel computations are inevitable and should be taken into account, either in order to develop new algorithms running on modern computer architectures, or by taking computer features into account when modifying implementations. My position here is to present a general overview and not to get stuck in the details of particular computers, programming patterns, and programming tools, since these are in a process of ongoing change.

   

   

   





Sparse matrices in direct methods (2023/2024)




The goal of this course is to discuss sparse matrices in numerical linear algebra tasks such as
  • Solving systems of linear algebraic equations by direct methods (variants of elimination)
  • Solving systems of linear algebraic equations by preconditioned iterative methods (discussing preconditioner constructions)
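The two approaches above can be sketched side by side in Python with SciPy; this is only an illustration on a hypothetical one-dimensional Poisson model problem (the matrix, the incomplete-factorization drop tolerance, and all other parameters are my own assumptions, not material from the course):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu, spilu, cg, LinearOperator

# Model problem: tridiagonal 1D Poisson matrix (symmetric positive definite).
n = 1000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Direct method: sparse LU factorization followed by triangular solves.
x_direct = splu(A).solve(b)

# Iterative method: conjugate gradients preconditioned by an
# incomplete LU factorization (entries below drop_tol are discarded).
ilu = spilu(A, drop_tol=1e-4)
M = LinearOperator((n, n), matvec=ilu.solve)
x_iter, info = cg(A, b, M=M)

print("direct residual:   ", np.linalg.norm(A @ x_direct - b))
print("iterative residual:", np.linalg.norm(A @ x_iter - b))
```

For this tridiagonal matrix the incomplete factorization produces essentially no fill-in, so the preconditioned iteration converges very quickly; on genuinely sparse 2D or 3D problems the trade-off between fill-in in the direct factorization and preconditioner quality becomes the central question.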


Exercises are prepared to demonstrate the algebraic approaches discussed. They are based on Matlab but, even inside Matlab, the data structures considered are often those on which production codes are based.

    Parallel matrix computations (2023/2024)




The goal of this course is to present parallel computations in numerical mathematics, in particular in numerical linear algebra, as an indispensable tool in contemporary computations. We will discuss basic features of computer architectures and the basic principles behind parallel computations.


The course is shared with Jaroslav Hron, who presents the exercises: real parallel coding using Python.

Older