Convex optimization Winter 2019/20
Final exam part 1 question list
- 01) Define convex cone, proper cone, dual cone
- 02) Define convex set, show that a given set is convex from the definition
- 03) Operations that preserve convexity (list at least 3)
- 04) Definition of conical hull, convex hull
- 05) Supporting hyperplane theorem for convex sets -- what does it say, sketch the
situation
- 06) Define convex function, show that a given function is convex using the
definition
- 07) Properties of first and second derivatives of smooth convex functions
- 08) Properties of the minimum of a convex function
- 09) Operations on functions that preserve convexity
- 10) Define concave function, log-convex function, quasiconvex function
- 11) Define convex optimization problem
- 12) Define feasible and optimal solution
- 13) What is the feasibility problem? How does this problem relate to
optimization problems?
- 14) Define LP, QP, SOCP
- 15) Define SDP
- 16) Define the cone of positive semidefinite matrices, give at least 1
application
- 17) Define a cone program (program with generalized inequalities as constraints)
- 18) Define a vector optimization problem, explain what a Pareto optimal solution
is
- 19) Explain what scalarization is, why it is useful
- 20) Explain the main idea of how to approximate the optimal solution of a quasiconvex
program by solving a series of convex programs
- 21) Define dual program (including Lagrangian and Lagrange dual function)
- 22) Weak duality theorem -- statement and applications
- 23) State the Slater criterion and strong duality theorem
- 24) Give at least 2 applications of strong duality theorem
- 25) Sensitivity of optimal value to initial conditions: State the 2 results we
had and explain what they mean in practice
- 26) State the Farkas lemma, give 1 example of application
- 27) Explain what complementarity is, give an example of how to use it to compute
the primal optimal solution if we know the dual optimal solution
- 28) State the KKT conditions, explain why they are important
- 29) Norm minimization problem -- what it is, 1 example
- 30) Robust norm minimization -- what it is, 1 example
- 31) Support Vector Machine -- what it is, what is the “support”
- 32) Maximum likelihood method -- what it is, how to use it
- 33) Maximum a posteriori probability method -- what it is, how to use it
- 34) Löwner-John ellipsoid -- what it is, what is it for, how to calculate it
- 35) Define what a “descent method” means, how it is used for unconstrained
minimization
- 36) Explain what sparse matrices are, why they are useful
- 37) Explain what gradient descent is, what its advantages and disadvantages are
- 38) Explain what Newton's method is, what its advantages and disadvantages are
- 39) Give at least two ways to minimize a function subject to linear
equality constraints
- 40) Give a sketch of interior point methods
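As a study aid for question 21, the standard definitions can be written compactly (this follows the usual textbook notation and is not part of the official question list):

```latex
% Primal: minimize f_0(x) s.t. f_i(x) \le 0,\ i=1,\dots,m,\quad h_j(x)=0,\ j=1,\dots,p.
% Lagrangian:
L(x,\lambda,\nu) = f_0(x) + \sum_{i=1}^{m} \lambda_i f_i(x) + \sum_{j=1}^{p} \nu_j h_j(x)
% Lagrange dual function:
g(\lambda,\nu) = \inf_{x}\, L(x,\lambda,\nu)
% Dual program:
\text{maximize } g(\lambda,\nu) \quad \text{subject to } \lambda \succeq 0
```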
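Questions 35 and 37 concern descent methods and gradient descent in particular. A minimal sketch with backtracking line search, on an illustrative strongly convex quadratic of my own choosing (the objective, step parameters, and names here are not from the course):

```python
import numpy as np

# Illustrative objective: f(x) = 1/2 x^T A x - b^T x, with A positive definite.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])

def f(x):
    return 0.5 * x @ A @ x - b @ x

def grad(x):
    return A @ x - b

def gradient_descent(x, alpha=0.5, beta=0.8, tol=1e-8, max_iter=1000):
    """Gradient descent with backtracking (Armijo) line search."""
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:       # stop when the gradient is small
            break
        t = 1.0
        # Shrink the step until sufficient decrease holds.
        while f(x - t * g) > f(x) - alpha * t * (g @ g):
            t *= beta
        x = x - t * g
    return x

x_star = gradient_descent(np.zeros(2))
# For this quadratic the optimum satisfies A x* = b.
```

The backtracking loop is what makes this a descent method in the sense of question 35: every iteration provably decreases f.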
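For question 38, Newton's method sketched in one dimension, again on an illustrative smooth convex function of my own choosing (not a course example):

```python
import math

def newton_minimize(df, d2f, x, tol=1e-10, max_iter=50):
    """Newton's method for unconstrained 1-D minimization:
    iterate x <- x - f'(x) / f''(x)."""
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = exp(x) - 2x is smooth and convex; f'(x) = exp(x) - 2 vanishes
# at x = ln 2, so the minimizer is ln 2.
x_star = newton_minimize(lambda x: math.exp(x) - 2, math.exp, 0.0)
```

The quadratic local convergence near the optimum (versus gradient descent's linear rate) is the usual advantage to mention; the cost of forming and inverting the Hessian is the usual disadvantage.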