The analysis of time series data is important in many fields, ranging from meteorology and engineering to finance. Gaussian processes (GPs) are a simple and general class of probability distributions over functions, widely used for regression and classification in machine learning and well suited to modelling and forecasting time series. However, standard GPs are computationally expensive for large data sets and lack the expressiveness needed for non-stationary time series. This thesis proposes a state space GP method focused on learning non-stationary time series.
First, from the modelling perspective, we present a GP-based deep state space model (DeepSSM): a multi-layer state space model in which each layer is modelled as a GP and the output of the previous layer serves as the lengthscale of the next. This construction allows the prior to vary over time, which is particularly useful when the time series data is non-stationary: the model can learn a time-varying lengthscale and capture richer information than a standard GP. Based on the state space model equations, this thesis derives a variance correction which accounts for undesirable effects that occur when the lengthscale changes over time.
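The layered construction can be illustrated with a minimal sketch. Here each layer is an Ornstein-Uhlenbeck (Matérn-1/2) process written in state space form, and the sampled output of one layer is passed through a softplus to become the (positive) time-varying lengthscale of the next. The function names, the choice of kernel, and the softplus link are illustrative assumptions, not the thesis's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ou_layer(lengthscales, dt=0.1, sigma=1.0):
    """Sample an OU (Matern-1/2) process in state space form with a
    time-varying lengthscale. Illustrative sketch only."""
    x = np.zeros(len(lengthscales))
    for t in range(1, len(lengthscales)):
        a = np.exp(-dt / lengthscales[t])   # transition coefficient
        q = sigma**2 * (1.0 - a**2)         # process-noise variance
        x[t] = a * x[t - 1] + np.sqrt(q) * rng.standard_normal()
    return x

T = 500
ell1 = np.full(T, 2.0)                 # top layer: constant lengthscale
hidden = sample_ou_layer(ell1)         # layer 1 sample path
# Layer 1 output becomes (a positive transform of) layer 2's lengthscale:
ell2 = np.log1p(np.exp(hidden)) + 0.1  # softplus + offset keeps it positive
observed = sample_ou_layer(ell2)       # layer 2: a non-stationary sample path
```

Because `ell2` fluctuates with the hidden layer, the bottom layer's correlation structure drifts over time, which is exactly the kind of non-stationarity a single fixed-lengthscale GP cannot represent.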
Second, this thesis compares the properties of the multi-layer state space model with traditional, kernel-based GP constructions, showing that it exhibits several desirable features. Third, it derives an inference method based on the Extended Kalman filter, which allows us to predict non-stationary behaviour whilst quantifying our uncertainty about those predictions.
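For readers unfamiliar with the inference machinery, a single generic Extended Kalman filter predict/update step can be sketched as follows. The transition function `f`, measurement function `h`, their Jacobians, and the demo matrices are all hypothetical placeholders; the thesis's specific model equations are not reproduced here.

```python
import numpy as np

def ekf_step(m, P, y, f, F_jac, h, H_jac, Q, R):
    """One generic EKF predict/update step (illustrative sketch)."""
    # Predict: propagate the mean through the nonlinear transition f,
    # and the covariance through its Jacobian F.
    m_pred = f(m)
    F = F_jac(m)
    P_pred = F @ P @ F.T + Q
    # Update: linearise the measurement model h around the predicted mean.
    v = y - h(m_pred)                    # innovation
    H = H_jac(m_pred)
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    m_new = m_pred + K @ v
    P_new = P_pred - K @ S @ K.T
    return m_new, P_new

# Minimal demo: 2-D linear state, 1-D observation of the first component.
m, P = np.zeros(2), np.eye(2)
Q, R = 0.01 * np.eye(2), np.array([[0.1]])
m, P = ekf_step(m, P, np.array([0.5]),
                f=lambda x: x, F_jac=lambda x: np.eye(2),
                h=lambda x: x[:1], H_jac=lambda x: np.array([[1.0, 0.0]]),
                Q=Q, R=R)
```

The filtered covariance `P` is what supplies the uncertainty quantification mentioned above: each update shrinks the variance of the observed state components while leaving predictive uncertainty explicit.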
One result shows that, compared with a standard GP, DeepSSM is capable of uncovering time-varying information. The model is demonstrated on various time series, but it is applicable to any one-dimensional data.