Dou, Baojun
(2015)
Three essays on time series: spatiotemporal modelling,
dimension reduction and changepoint detection.
PhD thesis, The London School of Economics and Political Science (LSE).
Abstract
Modelling high-dimensional time series and modelling nonstationary time series are two important aspects of modern time series analysis, and the main objective of this thesis is to address both. The first two parts deal with high dimensionality; the third part considers a change-point detection problem.

In the first part, we consider a class of spatiotemporal models which extend popular econometric spatial autoregressive panel data models by allowing the scalar coefficients to differ across locations (or panels). The model is of the form

    y_t = D(λ_0) W y_t + D(λ_1) y_{t−1} + D(λ_2) W y_{t−1} + ε_t,   (1)

where y_t = (y_{1,t}, …, y_{p,t})^T collects the observations from p locations at time t, D(λ_k) = diag(λ_{k1}, …, λ_{kp}) with λ_{kj} the unknown coefficient for the jth location, and W is the p × p spatial weight matrix which measures the dependence among the locations; all elements on the main diagonal of W are zero. It is common practice in spatial econometrics to assume W known. For example, we may let w_{ij} = 1/(1 + d_{ij}) for i ≠ j, where d_{ij} ≥ 0 is an appropriate distance between the ith and jth locations; it can simply be the geographical distance, or a distance reflecting the correlation or association between the variables at the two locations. In model (1), D(λ_0) captures the pure spatial effect, D(λ_1) the pure dynamic effect, and D(λ_2) the time-lagged spatial effect. We also assume that the error term ε_t = (ε_{1,t}, ε_{2,t}, …, ε_{p,t})^T in (1) satisfies the condition Cov(y_{t−1}, ε_t) = 0. When λ_{k1} = · · · = λ_{kp} for each k = 0, 1, 2, model (1) reduces to the model of Yu et al. (2008), which involves only three unknown regression coefficients; in general, the regression function in (1) contains 3p unknown parameters. To overcome the innate endogeneity, we propose a generalized Yule-Walker estimation method which applies least squares estimation to a Yule-Walker equation.
The asymptotic theory is developed as both the sample size and the number of locations (or panels) tend to infinity, under a general framework for stationary and α-mixing processes which includes spatial autoregressive panel data models driven by i.i.d. innovations as special cases. The proposed methods are illustrated using both simulated and real data.
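As a minimal illustrative sketch (not the thesis's own code), model (1) can be simulated through its reduced form y_t = (I − D(λ_0)W)^{−1}(D(λ_1)y_{t−1} + D(λ_2)W y_{t−1} + ε_t); the dimensions, toy distances and coefficient ranges below are assumptions chosen only so that the process is stationary:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 5, 200  # illustrative: 5 locations, 200 time points

# Spatial weight matrix with w_ij = 1/(1 + d_ij) and zero diagonal,
# using |i - j| as a toy distance between locations i and j
d = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
W = 1.0 / (1.0 + d)
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)  # row-normalise, a common convention

# Location-specific coefficients (3p parameters in all), kept small
lam0 = rng.uniform(0.05, 0.15, p)  # pure spatial effect
lam1 = rng.uniform(0.10, 0.30, p)  # pure dynamic effect
lam2 = rng.uniform(0.05, 0.15, p)  # time-lagged spatial effect

# Reduced-form multiplier (I - D(lam0) W)^{-1}
A = np.linalg.inv(np.eye(p) - np.diag(lam0) @ W)

Y = np.zeros((n, p))
for t in range(1, n):
    eps = rng.standard_normal(p)
    Y[t] = A @ (np.diag(lam1) @ Y[t - 1] + np.diag(lam2) @ W @ Y[t - 1] + eps)
```

With row-normalised W and coefficients this small, the implied transition matrix has spectral radius below one, so the simulated panel is stable.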
In the second part, we consider a multivariate time series model which decomposes a vector process into a latent factor process and a white noise process. Let y_t = (y_{1,t}, …, y_{p,t})^T be an observable p × 1 vector time series. The factor model decomposes y_t as

    y_t = A x_t + ε_t,   (2)

where x_t = (x_{1,t}, …, x_{r,t})^T is an r × 1 latent factor time series with unknown r ≤ p, A = (a_1, a_2, …, a_r) is a p × r unknown constant matrix, and ε_t is a white noise process with mean 0 and covariance matrix Σ_ε. The first term in (2) is the dynamic part: the serial dependence of y_t is driven by x_t. Dimension reduction is achieved once r ≪ p, in the sense that the dynamics of y_t are driven by the much lower-dimensional process x_t. Motivated by practical needs and the characteristics of high-dimensional data, we impose a sparsity assumption on the factor loading matrix. Unlike the method of Lam, Yao and Bathia (2011), which is equivalent to an eigenanalysis of a non-negative definite matrix, we add a constraint controlling the number of nonzero elements in each column of the factor loading matrix. The proposed sparse estimator is then the solution of a constrained optimization problem. The asymptotic theory is developed under the setting that both the sample size and the dimensionality tend to infinity. When the common factors are weak, in the sense that δ > 1/2 in the notation of Lam, Yao and Bathia (2011), the new sparse estimator may achieve a faster convergence rate. Numerically, we employ the generalized deflation method (Mackey (2009)) and the GSLDA method (Moghaddam et al. (2006)) to approximate the estimator, with the tuning parameter chosen by cross-validation. The proposed method is illustrated with both simulated and real data examples.
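To make the baseline concrete, the unconstrained eigenanalysis of Lam, Yao and Bathia (2011), to which the thesis adds a sparsity constraint, can be sketched as follows; the dimensions, the AR(1) factor dynamics and the lag bound k0 are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
p, r, n = 20, 2, 500  # illustrative: 20 series, 2 factors, 500 time points

# Latent AR(1) factors x_t driving all serial dependence
x = np.zeros((n, r))
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal(r)

A = rng.standard_normal((p, r))           # unknown p x r loading matrix
Y = x @ A.T + rng.standard_normal((n, p))  # y_t = A x_t + eps_t

# Eigenanalysis of M = sum_{k=1}^{k0} Sigma_y(k) Sigma_y(k)^T, a
# non-negative definite matrix built from lagged autocovariances
k0 = 2
Yc = Y - Y.mean(axis=0)
M = np.zeros((p, p))
for k in range(1, k0 + 1):
    S = Yc[k:].T @ Yc[:-k] / (n - k)  # lag-k sample autocovariance
    M += S @ S.T

# Columns of A_hat: eigenvectors of the r largest eigenvalues of M
vals, vecs = np.linalg.eigh(M)
A_hat = vecs[:, ::-1][:, :r]
```

The estimated loading space is spanned by the leading eigenvectors of M; the sparse estimator in the thesis instead solves a constrained version of this problem, limiting the nonzero entries per column of the loadings.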
The third part addresses a change-point detection problem. We consider the following covariance structural break detection problem:

    Cov(y_t) = Σ_{t_{j−1}} for t_{j−1} ≤ t < t_j,   j = 1, …, m + 1,

where y_t is a p × 1 vector time series, Σ_{t_{j−1}} ≠ Σ_{t_j}, and {t_1, …, t_m} are the change points with 1 = t_0 < t_1 < · · · < t_{m+1} = n. In the literature, the number of change points m is usually assumed to be known and small, because a large m involves a huge computational burden in parameter estimation. By reformulating the problem in a variable selection context, we propose the group least absolute shrinkage and selection operator (LASSO) to estimate both m and the locations of the change points {t_1, …, t_m}. Our method is model-free and can be applied widely to multivariate time series, including GARCH and stochastic volatility models. It is shown that both m and the locations of the change points can be consistently estimated from the data, and that the computation can be performed efficiently. An improved practical version that combines the group LASSO with the stepwise regression variable selection technique is also discussed. Simulation studies are conducted to assess the finite-sample performance.
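The group-LASSO estimator itself is too involved for a short sketch, but the covariance-break problem it solves can be illustrated with a simple CUSUM-type scan on the vectorised second moments (this is not the proposed method, and the break location, dimensions and variance jump are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
p, n, tau = 3, 400, 200  # illustrative: one covariance break at t = 200

Y = rng.standard_normal((n, p))
Y[tau:, 0] *= 3.0  # variance of the first coordinate jumps from 1 to 9

# z_t = vech(y_t y_t^T): a covariance break becomes a mean break in z_t
idx = np.triu_indices(p)
Z = np.array([np.outer(y, y)[idx] for y in Y])

# Weighted CUSUM statistic over candidate split points (edges trimmed)
lo, hi = 20, n - 20
stat = np.array([
    (t * (n - t) / n**2) * np.linalg.norm(Z[:t].mean(0) - Z[t:].mean(0))
    for t in range(lo, hi)
])
t_hat = int(np.argmax(stat)) + lo  # estimated single break location
```

A single-break scan like this must be applied recursively (binary segmentation) to handle unknown m, which is exactly the multiplicity issue the group-LASSO reformulation sidesteps by treating all candidate break points as grouped variables in one regression.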