LSE Theses Online - London School of Economics and Political Science

Generative adversarial networks for sequential learning

Xu, Tianlin (2022) Generative adversarial networks for sequential learning. PhD thesis, London School of Economics and Political Science.

Text - Submitted Version - Download (18MB)
Identification Number: 10.21953/lse.00004425

Abstract

Generative modelling aims to learn the data generating mechanism from observations without supervision. It is a desirable and natural approach for learning from unlabelled data, which is easily accessible. Deep generative models are a class of generative models that incorporate deep learning techniques, combining the intuitive principles of generative modelling with the expressiveness and flexibility of neural networks. Applications of generative modelling include image, audio, and video synthesis, text summarisation and translation, and so on. The methods developed in this thesis focus in particular on domains involving data of a sequential nature, such as video generation and prediction, weather forecasting, and dynamic 3D reconstruction. Firstly, we introduce a new adversarial algorithm for training generative models suited to sequential data. This algorithm is built on the theory of Causal Optimal Transport (COT), which constrains the transport plans to respect the temporal dependencies exhibited in the data. Secondly, the algorithm is extended to learn conditional sequences, that is, how a sequence is likely to evolve given the observation of its past evolution. To guarantee convergence of the COT distance when the sequences do not overlap at any time step, we work with modified empirical measures. Thirdly, we show that state-of-the-art results in complex spatio-temporal modelling with GANs can be further improved by leveraging prior knowledge of spatio-temporal correlations, in the domain of weather forecasting. Finally, we demonstrate how deep generative models can be adopted to address a classical statistical problem: conditional independence testing. A classical family of approaches to this task requires computing a test statistic using samples drawn from two unknown conditional distributions. We therefore present a double GANs framework that learns two generative models approximating the two conditional distributions. The success of this approach sheds light on how certain challenging statistical problems can benefit from the accurate approximations as well as the efficient sampling procedures of deep generative models.
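The first contribution described above compares distributions over whole sequences with an optimal-transport-style discrepancy while restricting the admissible couplings to causal ones. The thesis's full COT construction and adversarial training objective are not reproduced here; the sketch below, under simplifying assumptions, only computes a standard entropically regularised optimal transport (Sinkhorn) cost between two small batches of sequences, i.e. the unconstrained building block on which a causality restriction or penalty would then be imposed. All function names, the AR(1) toy data, and the parameter values are illustrative, not taken from the thesis.

    import numpy as np
    from scipy.special import logsumexp

    def sinkhorn_cost(x, y, eps=1.0, n_iter=300):
        """Entropic OT cost between two batches of sequences.

        x: array of shape (m, T, d), m sequences of length T in R^d
        y: array of shape (n, T, d)
        Returns <pi, C> for the converged entropic transport plan pi.
        """
        m, n = x.shape[0], y.shape[0]
        # Ground cost between sequences: squared distance summed over time steps.
        diff = x[:, None, :, :] - y[None, :, :, :]            # (m, n, T, d)
        C = np.sum(diff ** 2, axis=(2, 3))                    # (m, n)
        # Uniform empirical measures on the two batches.
        log_a = np.full(m, -np.log(m))
        log_b = np.full(n, -np.log(n))
        # Log-domain Sinkhorn iterations (numerically stable).
        f, g = np.zeros(m), np.zeros(n)
        for _ in range(n_iter):
            f = -eps * logsumexp((g[None, :] - C) / eps + log_b[None, :], axis=1)
            g = -eps * logsumexp((f[:, None] - C) / eps + log_a[:, None], axis=0)
        log_pi = log_a[:, None] + log_b[None, :] + (f[:, None] + g[None, :] - C) / eps
        return float(np.sum(np.exp(log_pi) * C))

    # Toy usage: two batches of AR(1)-style sequences with different persistence.
    rng = np.random.default_rng(0)

    def ar1(batch, T, rho):
        x = np.zeros((batch, T, 1))
        for t in range(1, T):
            x[:, t] = rho * x[:, t - 1] + rng.normal(size=(batch, 1))
        return x

    print(sinkhorn_cost(ar1(64, 10, 0.9), ar1(64, 10, 0.1)))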
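The final contribution uses two generative models to approximate the conditional distributions of X given Z and of Y given Z, and then draws samples from them to compute a test statistic. The thesis's actual estimators and statistics are not reproduced here; the sketch below, a stand-in under stated assumptions, only illustrates the general Monte-Carlo testing recipe, with the toy model's known Gaussian conditionals playing the role of the two learned generators and a deliberately simple covariance statistic playing the role of the test statistic. All names and numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy data-generating process: Z drives both X and Y.
    # With ci=True, X and Y are conditionally independent given Z (the null H0);
    # with ci=False, a direct X -> Y effect breaks H0.
    def simulate(n, ci=True):
        z = rng.normal(size=n)
        x = z + 0.5 * rng.normal(size=n)
        y = z + (0.0 if ci else 0.8) * x + 0.5 * rng.normal(size=n)
        return x, y, z

    # Stand-ins for the two learned conditional samplers (the role the two GAN
    # generators play in the double-GAN framework). Here we simply plug in the
    # toy model's null conditionals, X | Z=z ~ N(z, 0.25) and Y | Z=z ~ N(z, 0.25);
    # in practice these would be learned from data.
    def sample_x_given_z(z):
        return z + 0.5 * rng.normal(size=z.shape)

    def sample_y_given_z(z):
        return z + 0.5 * rng.normal(size=z.shape)

    def statistic(x, y):
        # Deliberately simple dependence statistic: absolute sample covariance.
        return abs(np.mean((x - x.mean()) * (y - y.mean())))

    def ci_test(x, y, z, n_mc=500):
        t_obs = statistic(x, y)
        # Monte-Carlo reference: redraw X and Y independently from their
        # conditionals given the observed Z, which enforces H0 by construction.
        t_null = np.array([statistic(sample_x_given_z(z), sample_y_given_z(z))
                           for _ in range(n_mc)])
        return (1.0 + np.sum(t_null >= t_obs)) / (n_mc + 1.0)

    x, y, z = simulate(2000, ci=True)
    print("p-value under H0:", ci_test(x, y, z))
    x, y, z = simulate(2000, ci=False)
    print("p-value under H1:", ci_test(x, y, z))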

Item Type: Thesis (PhD)
Additional Information: © 2022 Tianlin Xu
Library of Congress subject classification: Q Science > QA Mathematics
Sets: Departments > Statistics
Supervisor: Bergsma, Wicher and Acciaio, Beatrice
URI: http://etheses.lse.ac.uk/id/eprint/4425

