Mathematical Sciences at the University of Liverpool

Sample PhD Projects in Stochastics

General Interests

Markov processes; stochastic analysis; stochastic epidemic models; optimal control theory; Bayesian statistics; non-parametric statistics


Dr Jinglai Li

My current research interests are in scientific computing, computational statistics, uncertainty quantification, data science and their applications in various scientific and engineering problems. Specific topics include: Bayesian inference, inverse problems, Monte Carlo simulation, risk analysis and failure probability estimation, data assimilation and filtering methods, optimisation/decision-making under uncertainty, and machine learning.

Possible projects are:

1. Data-driven prior modelling for Bayesian image processing

2. Uncertainty quantification and Bayesian inference for deep learning

3. Risk-aware localisation and control of autonomous vehicles

4. High-dimensional Hamiltonian Monte Carlo methods for Bayesian inference (a small illustrative sketch follows this list)
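
To give a flavour of the last topic, here is a minimal Hamiltonian Monte Carlo sketch in Python for a toy Gaussian target. The target density, step size and trajectory length are illustrative choices only and are not tied to any particular project.

    # Minimal Hamiltonian Monte Carlo (HMC) sketch for a toy Gaussian target.
    # All choices (target, step size, trajectory length) are for illustration only.
    import numpy as np

    def log_density(x):
        # Standard multivariate normal log-density, up to an additive constant.
        return -0.5 * np.dot(x, x)

    def grad_log_density(x):
        return -x

    def hmc_step(x, rng, step_size=0.1, n_leapfrog=20):
        """One HMC update: draw a momentum, run leapfrog integration of the
        Hamiltonian dynamics, then accept or reject with a Metropolis test."""
        p = rng.standard_normal(x.shape)                     # auxiliary momentum
        x_new, p_new = x.copy(), p.copy()
        p_new += 0.5 * step_size * grad_log_density(x_new)   # half momentum step
        for _ in range(n_leapfrog - 1):
            x_new += step_size * p_new                       # full position step
            p_new += step_size * grad_log_density(x_new)     # full momentum step
        x_new += step_size * p_new
        p_new += 0.5 * step_size * grad_log_density(x_new)   # final half step
        h_old = -log_density(x) + 0.5 * np.dot(p, p)         # total "energy" before
        h_new = -log_density(x_new) + 0.5 * np.dot(p_new, p_new)
        return x_new if np.log(rng.uniform()) < h_old - h_new else x

    rng = np.random.default_rng(0)
    x = np.zeros(50)                                         # 50-dimensional chain
    first_coord = []
    for _ in range(1000):
        x = hmc_step(x, rng)
        first_coord.append(x[0])
    print("mean of first coordinate:", np.mean(first_coord))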


Dr Kai Liu

1. Ruin probabilities for Lévy processes

This project is concerned with using Lévy processes for risk reserve modelling.
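
As a very simple point of reference (not the general Lévy models the project would study), the classical Cramér-Lundberg reserve R(t) = u + ct - (sum of claims up to time t), with Poisson claim arrivals, is itself a Lévy risk process, and its finite-horizon ruin probability can be estimated by crude Monte Carlo as in the Python sketch below; all parameter values are invented for illustration.

    # Crude Monte Carlo estimate of the finite-horizon ruin probability in the
    # classical Cramer-Lundberg model: reserve R(t) = u + c*t - total claims,
    # with Poisson claim arrivals and exponential claim sizes.  Parameter
    # values are illustrative only.
    import numpy as np

    rng = np.random.default_rng(1)
    u, c = 10.0, 1.2                     # initial reserve and premium rate
    claim_rate, claim_mean = 1.0, 1.0    # claim arrival rate and mean claim size
    horizon, n_paths = 50.0, 20_000

    ruined = 0
    for _ in range(n_paths):
        t, total_claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / claim_rate)      # time of next claim
            if t > horizon:
                break
            total_claims += rng.exponential(claim_mean)  # claim size
            # Between claims the reserve only grows, so ruin can occur
            # only at claim epochs.
            if u + c * t - total_claims < 0:
                ruined += 1
                break
    print("estimated ruin probability before time", horizon, ":", ruined / n_paths)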

2. Optimal consumption under partial observations for a stochastic system with delay

In this project we introduce a time delay into the stochastic system, which allows us to take into account the fact that it may take some time before new market information affects the value of our investment; the decisions we make regarding consumption may then be based on both present and past values of the wealth. The partial-observation setting captures, in addition, the fact that we do not always have complete information about all the parameters in a mathematical model in finance.
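
One illustrative formulation (purely as an example, not necessarily the exact system studied in the project) is a wealth process with a discrete delay tau > 0,

    \[
    dX(t) = \bigl[\mu X(t) + \alpha X(t-\tau) - c(t)\bigr]\,dt + \sigma X(t)\,dW(t), \qquad t \ge 0,
    \]

with a prescribed initial path X(s) for s in [-tau, 0], where c(t) >= 0 is the consumption rate chosen on the basis of the observation filtration rather than the full filtration (reflecting partial observations), and the aim is to maximise an expected discounted utility of consumption such as

    \[
    \mathbb{E}\int_0^T e^{-\rho t}\, U(c(t))\,dt .
    \]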

3. Robust techniques for pricing and hedging options

The classical approach to the pricing and hedging of financial options, pioneered by Black and Scholes, is to postulate a model for the underlying asset and to use the notion of risk-neutral pricing to find the arbitrage-free price of derivative contracts written on that underlying. The recent financial crisis demonstrated the frailty of such model-based techniques and highlighted the need for more robust methods of hedging and pricing derivative contracts. A starting point for an alternative approach is to ask: given a set of quoted prices, when are these prices consistent with some model? This proves to be the starting point for a number of interesting questions, and a project in this area would look to investigate some of these.
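
As a small illustration of the "consistency with some model" question (simplified, and not a complete characterisation), call quotes at a single maturity can only be consistent with an arbitrage-free model if, as a function of strike, they are non-increasing and convex. The Python sketch below checks just these two conditions for hypothetical quotes; a full treatment needs further conditions (price bounds, calendar spreads across maturities, and so on).

    # Rough sketch: test whether quoted call prices at one maturity satisfy two
    # basic static no-arbitrage conditions: non-increasing and convex in strike.
    import numpy as np

    def basic_consistency_check(strikes, call_prices):
        k = np.asarray(strikes, dtype=float)
        c = np.asarray(call_prices, dtype=float)
        order = np.argsort(k)
        k, c = k[order], c[order]
        slopes = np.diff(c) / np.diff(k)              # call-spread slopes
        non_increasing = np.all(slopes <= 1e-12)
        convex = np.all(np.diff(slopes) >= -1e-12)    # butterfly spreads >= 0
        return bool(non_increasing and convex)

    # Hypothetical quotes: in the second set the middle quote is too high
    # relative to its neighbours, violating convexity.
    print(basic_consistency_check([90, 100, 110], [14.0, 8.0, 4.5]))   # True
    print(basic_consistency_check([90, 100, 110], [14.0, 12.0, 4.0]))  # False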

4. Stability of infinite dimensional stochastic systems

This project is devoted to the investigation of various criteria, theoretical and practical, for stochastic stability and their possible applications to popular models such as stochastic reaction-diffusion equations and stochastic volatility models in option pricing with a very large number of incremental market activities.
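
One standard notion studied in this area, stated here only for orientation, is mean-square exponential stability: the trivial solution of a stochastic evolution equation on a Hilbert space H is exponentially stable in mean square if there exist constants C, lambda > 0 such that

    \[
    \mathbb{E}\,\|X(t)\|_H^2 \;\le\; C\,e^{-\lambda t}\,\mathbb{E}\,\|X(0)\|_H^2, \qquad t \ge 0 .
    \]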

5. Large deviation of stochastic systems with memory

Over the last three decades, the large deviation problem for stochastic evolution equations, especially stochastic partial differential equations, has been extensively investigated by many researchers. In this project, we will deal with the large deviation principle for families of probability measures associated with stochastic retarded (functional) evolution equations.
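
For orientation, a family of probability measures (mu_epsilon) on a Polish space E is said to satisfy a large deviation principle with good rate function I if, for every Borel set A contained in E,

    \[
    -\inf_{x \in A^{\circ}} I(x) \;\le\; \liminf_{\varepsilon \to 0} \varepsilon \log \mu_{\varepsilon}(A)
    \;\le\; \limsup_{\varepsilon \to 0} \varepsilon \log \mu_{\varepsilon}(A) \;\le\; -\inf_{x \in \overline{A}} I(x).
    \]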


Dr Alexey Piunovskiy

Optimal Control: theory and applications.

Many real-life phenomena are described by ordinary differential equations. Others can be approximated by Markov chains or more complicated random processes. If we can dynamically adjust one or more parameters, then we are dealing with a controlled dynamical system. Generally speaking, the problem is to find the best, i.e. optimal, control; note that many problems involve multiple objectives.
The aims of such a project can include: justifying the problem statement, studying optimisation methods such as dynamic programming and Pontryagin's maximum principle, investigating analytically (or numerically) a particular version of the problem formulated, undertaking computer simulations, and comparing different mathematical models describing the same real-life phenomenon.
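
As a first taste of dynamic programming, the Python sketch below runs value iteration on a tiny discounted controlled Markov chain; the states, actions, transition probabilities and costs are made up purely for illustration.

    # Minimal value-iteration sketch for a small discounted controlled Markov chain.
    # All states, actions, transition probabilities and costs are invented.
    import numpy as np

    n_states, discount = 3, 0.9

    # P[a, s, s'] = probability of moving from state s to s' under action a.
    P = np.array([
        [[0.8, 0.2, 0.0], [0.1, 0.7, 0.2], [0.0, 0.3, 0.7]],   # action 0 ("do nothing")
        [[0.5, 0.4, 0.1], [0.3, 0.5, 0.2], [0.2, 0.3, 0.5]],   # action 1 ("intervene")
    ])
    # cost[a, s] = one-step cost of taking action a in state s.
    cost = np.array([[0.0, 1.0, 4.0],
                     [2.0, 2.0, 3.0]])

    V = np.zeros(n_states)
    for _ in range(500):                       # value iteration
        Q = cost + discount * P @ V            # Q[a, s] = cost + discounted future value
        V_new = Q.min(axis=0)                  # minimise over actions
        if np.max(np.abs(V_new - V)) < 1e-10:
            break
        V = V_new
    policy = Q.argmin(axis=0)
    print("optimal values:", np.round(V, 3))
    print("optimal policy (action per state):", policy)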

Projects:

1. Convex analytic approach to certain controlled Markov chains and jump processes

2. Analysis of communication networks

3. Approximations of controlled birth-and-death processes


Dr Yi Zhang

Continuous-time Markov decision processes are, roughly speaking, pure jump processes whose transition (jump) rates are controlled according to a policy, which is, generally speaking, a transition function that is predictable with respect to the underlying filtration, such as the internal history of the associated marked point process. The dynamics of a continuous-time Markov decision process are as follows: starting from the initial state, after a (non-stationary) exponentially distributed sojourn time the process jumps to a new state, stays there for another exponential sojourn time, and so on, where both the intensities of the exponential sojourn times and the distribution of the new state after each jump are controlled. Such processes have many applications in telecommunications, reliability, production and inventory control, and so on.

Under a fixed performance criterion, the policies that give the best performance are called optimal. We are interested in the solvability of the associated optimisation problems.
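
The dynamics described above are straightforward to simulate once a stationary policy is fixed: in each state the chosen action determines the jump rates, the sojourn time is exponential with parameter equal to the total exit rate, and the next state is drawn with probabilities proportional to those rates. The Python sketch below does exactly this for an invented three-state example; the rates and the policy are illustrative only.

    # Simulate one trajectory of a small continuous-time Markov decision process
    # under a fixed stationary policy.  Rates and policy are invented.
    import numpy as np

    rng = np.random.default_rng(2)

    # rates[a, s, s'] = jump rate from state s to state s' (s' != s) under action a.
    rates = np.array([
        [[0.0, 1.0, 0.5], [0.2, 0.0, 0.8], [0.4, 0.6, 0.0]],   # action 0
        [[0.0, 2.0, 0.1], [1.5, 0.0, 0.3], [0.1, 0.2, 0.0]],   # action 1
    ])
    policy = [0, 1, 0]          # stationary policy: action chosen in each state

    state, t, horizon = 0, 0.0, 20.0
    history = [(t, state)]
    while True:
        a = policy[state]
        exit_rates = rates[a, state]                   # rates to the other states
        total_rate = exit_rates.sum()
        sojourn = rng.exponential(1.0 / total_rate)    # exponential holding time
        if t + sojourn > horizon:
            break
        t += sojourn
        state = rng.choice(len(exit_rates), p=exit_rates / total_rate)  # next state
        history.append((t, state))
    print(history[:5])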

We can mention the following three projects:

1. Continuous-time Markov decision processes with multiple players

This includes developing the dynamic programming and linear programming formulations of the underlying problems, providing mild conditions for the existence of randomised stationary policies that form saddle points within the class of history-dependent policies, solving numerical examples, and so on.

2. Continuous-time Markov decision processes with total reward criteria

The total reward criterion is often studied under an absorbing condition, under which the class of stationary policies is sufficient. This project is about the non-absorbing case, where we expect the class of stationary policies not to be sufficient even for unconstrained single-player problems.

3. Risk sensitive criterion for continuous-time Markov decision processes

This involves the investigation of various formulations of the problem that take risk into account. In more detail, one can consider the associated mean-variance optimisation problem or problems with an appropriately selected utility function. Models both with and without constraints will be investigated.
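
One typical way (among several) to formalise such a risk-aware problem is the constrained mean-variance formulation

    \[
    \min_{\pi}\ \operatorname{Var}^{\pi}(W) \quad \text{subject to} \quad \mathbb{E}^{\pi}[W] \ge d,
    \]

where W denotes the (discounted) total reward under the policy pi and d is a prescribed target level; an alternative is to maximise an expected utility of the total reward, for instance E^pi[-exp(-gamma W)] with risk-sensitivity parameter gamma > 0.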


Dr Kamilla Zychaluk

1. Consolidating and expanding knowledge of statistical analysis for complex data

The exact project depends on the student's specific interest. The aim is to consolidate and expand knowledge of statistical analysis for complex data, starting from identifying problems and building possible models, through checking model assumptions, to interpretation of the results. This will be complemented by a simulation study.

2. Exploring different types of bootstrap and investigating their performance in various applications

The bootstrap is a resampling-based method widely used in many different statistical applications. The project would concentrate on exploring different types of bootstrap and investigating their performance in various applications.
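
As a minimal example, the Python sketch below uses the non-parametric bootstrap to estimate the standard error and a percentile confidence interval for a sample median; the data are simulated purely for illustration.

    # Minimal non-parametric bootstrap: estimate the standard error of the sample
    # median by resampling the data with replacement.
    import numpy as np

    rng = np.random.default_rng(3)
    data = rng.exponential(scale=2.0, size=200)     # a skewed sample

    n_boot = 2000
    medians = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(data, size=data.size, replace=True)  # bootstrap sample
        medians[b] = np.median(resample)

    print("sample median:", np.median(data))
    print("bootstrap standard error:", medians.std(ddof=1))
    print("95% percentile interval:", np.percentile(medians, [2.5, 97.5]))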

3. Kernel, wavelet and local polynomial estimation
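
As a small, self-contained illustration of the first of these estimators, the Python sketch below implements Nadaraya-Watson kernel regression with a Gaussian kernel on simulated data; the data and the bandwidth are chosen purely for illustration.

    # Nadaraya-Watson kernel regression with a Gaussian kernel, as a small example
    # of a non-parametric estimator.  Data and bandwidth are illustrative only.
    import numpy as np

    rng = np.random.default_rng(4)
    x = np.sort(rng.uniform(0, 1, 200))
    y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.size)   # noisy signal

    def kernel_regression(x0, x, y, bandwidth=0.05):
        """Estimate E[Y | X = x0] by a locally weighted average of the y values."""
        weights = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)         # Gaussian kernel
        return np.sum(weights * y) / np.sum(weights)

    grid = np.linspace(0, 1, 11)
    fit = [kernel_regression(g, x, y) for g in grid]
    print(np.round(fit, 2))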