STOCHASTIC OPTIMAL CONTROL
Goal of the course.
Basic problems, methods and results in the theory of optimal control for dynamical systems perturbed by noise will be presented. Both discrete-time and continuous-time models will be considered, over finite and infinite horizon; continuous-time models will be described by stochastic differential equations. The main solution methods will be dynamic programming, the study of the Hamilton-Jacobi-Bellman equation for the value function (including cases of solutions with low regularity), backward stochastic differential equations, and the stochastic maximum principle (in the sense of Pontryagin). The main applications presented throughout the course will concern economic and financial models, as well as the linear-quadratic stochastic optimal control problem.
1) Discrete-time stochastic optimal control.
Controlled dynamical systems perturbed by noise, admissible control processes, payoff functionals over finite horizon. Value function, dynamic programming and the Hamilton-Jacobi-Bellman (HJB) equation. Extensions to discounted functionals over infinite horizon. Application to Samuelson's model of optimal portfolio choice.
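The backward recursion of dynamic programming described above can be sketched as follows for a finite-horizon controlled Markov chain. All data below (state and action spaces, transition matrices, rewards) are purely illustrative, not taken from the course material.

```python
import numpy as np

# Illustrative finite controlled Markov chain:
# states s in {0,...,S-1}, actions a in {0,...,A-1}, horizon T.
rng = np.random.default_rng(0)
S, A, T = 4, 2, 5

# P[a] is the transition matrix under action a (rows normalized);
# r[s, a] is the stage reward, here random values in [0, 1).
P = rng.random((A, S, S))
P /= P.sum(axis=2, keepdims=True)
r = rng.random((S, A))

V = np.zeros(S)                       # terminal value V_T = 0
policy = np.zeros((T, S), dtype=int)  # optimal action at each (t, s)

for t in reversed(range(T)):
    # Q[s, a] = r(s, a) + E[V_{t+1}(next state) | current state s, action a];
    # P @ V has shape (A, S), so transpose to get (S, A).
    Q = r + (P @ V).T
    V = Q.max(axis=1)                 # Bellman optimality step
    policy[t] = Q.argmax(axis=1)
```

After the loop, V[s] is the optimal expected total reward over T steps started from state s, and policy gives a Markov optimal control.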
2) Optimal control of stochastic differential equations.
Stochastic differential equations with control parameters, admissible control processes, payoff functionals over finite and infinite horizon. Value function and the dynamic programming principle. Hamilton-Jacobi-Bellman (HJB) equations of parabolic and elliptic type. Verification theorems for regular solutions of the HJB equation. Linear-quadratic stochastic optimal control. Introduction to generalized solutions of the HJB equation in the viscosity sense. Application to optimal portfolio problems, in particular Merton's model.
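To fix notation (the symbols below are standard choices, not prescribed by this description), for a controlled diffusion dX_s = b(X_s, a_s) ds + sigma(X_s, a_s) dW_s the finite-horizon value function and the associated parabolic HJB equation take the form:

```latex
v(t,x) = \sup_{a(\cdot)\ \text{admissible}}
  \mathbb{E}\Big[\int_t^T f(X_s, a_s)\,ds + g(X_T)\Big],
\qquad
\begin{cases}
\partial_t v(t,x) + \sup_{a \in A}\Big\{ b(x,a)\cdot D_x v(t,x)
  + \tfrac{1}{2}\,\mathrm{Tr}\big(\sigma\sigma^{\top}(x,a)\,D_x^2 v(t,x)\big)
  + f(x,a) \Big\} = 0, \\[4pt]
v(T,x) = g(x).
\end{cases}
```

Verification theorems then state that a sufficiently regular solution of this equation coincides with the value function, with the maximizing a providing an optimal feedback control.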
3) Backward stochastic differential equations.
Formulation, existence and uniqueness results. Applications to hedging strategies in financial market models and to option pricing. Probabilistic representation of the value function of an optimal control problem and of the solution to HJB equations.
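In the standard formulation (notation illustrative), a backward stochastic differential equation on [0,T] asks for a pair of adapted processes (Y, Z) satisfying

```latex
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s,
\qquad 0 \le t \le T,
```

where the terminal condition xi and the generator f are given. The basic well-posedness result (due to Pardoux and Peng) gives existence and uniqueness when f is Lipschitz in (y, z) and xi is square-integrable.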
4) Stochastic maximum principle.
First variation of a payoff functional associated with controlled stochastic differential equations and necessary optimality conditions. Duality arguments and the stochastic maximum principle in the sense of Pontryagin. Sufficient optimality conditions under concavity or convexity assumptions.
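In the standard formulation for a maximization problem (symbols are illustrative choices, with the same controlled dynamics and payoff as above), the Hamiltonian and the adjoint (costate) equation read

```latex
H(t,x,a,p,q) = b(x,a)\cdot p + \mathrm{Tr}\big(\sigma^{\top}(x,a)\,q\big) + f(x,a),
\qquad
dp_t = -\partial_x H(t, X_t, a_t, p_t, q_t)\,dt + q_t\,dW_t,
\quad p_T = \partial_x g(X_T),
```

where the adjoint equation is itself a backward SDE for the pair (p, q). The necessary condition then states that along an optimal pair (X*, a*) the control a*_t maximizes a \mapsto H(t, X*_t, a, p_t, q_t) for a.e. t, almost surely.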
5) Overview on other problems and methods.
Time permitting, various other topics may also be presented, for instance control under partial observation, ergodic control, optimal stopping problems (and their application to the pricing of American options), optimal switching, and impulse control.
Prerequisites and other information.
Students are expected to know the contents of a course in measure-theoretic probability. Other prerequisites, for which only short reminders will be given, are stochastic integration with respect to Brownian motion, the related stochastic calculus, and stochastic differential equations driven by Brownian motion. Attendance at lectures and exercise classes is not compulsory, but highly recommended.