LSE Theses Online

Uniform convergence and learnability.

Anthony, Martin Henry George (1991) Uniform convergence and learnability. PhD thesis, London School of Economics and Political Science.

Full text: PDF (3MB)

Abstract

This thesis analyses some of the more mathematical aspects of the Probably Approximately Correct (PAC) model of computational learning theory. The main concern is with the sample size required for valid learning in the PAC model. A sufficient sample size involving the Vapnik-Chervonenkis (VC) dimension of the hypothesis space is derived; this improves the best previously known bound of this nature. Learnability results and sufficient sample sizes can in many cases be derived from results of Vapnik on the uniform convergence (in probability) of relative frequencies of events to their probabilities when the collection of events has finite VC dimension. Simple new combinatorial proofs are given here of each of two of Vapnik's results, and the results are then applied to the theory of learning stochastic concepts, where again improved sample-size bounds are obtained. The PAC model of learning is a distribution-free model: the resulting sample sizes are not permitted to depend on the fixed but unknown probability distribution on the input space. Results of Ben-David, Benedek and Mansour, which present a theory of distribution-dependent learnability, are described. The conditions under which a feasible upper bound on sample size can be obtained are investigated, and the concept of polynomial X₀-finite dimension is introduced. The theory developed thus far is then applied to the learnability of formal concepts, as defined by Wille, and a learning algorithm is presented for this problem. The theory of learnability is then extended to functions taking values in an arbitrary set; learnability results and sample-size bounds, depending on a generalization of the VC dimension, are obtained, and these results are applied to artificial neural networks. Specifically, a sufficient sample size for valid generalization in multiple-output feedforward linear threshold networks is found.
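
For orientation only (these are classical results the abstract builds on, not the improved bounds obtained in the thesis itself): the kind of statement referred to above has the shape of the Vapnik-Chervonenkis uniform convergence inequality and of the sufficient sample-size bound of Blumer, Ehrenfeucht, Haussler and Warmuth for a hypothesis space of VC dimension d. In their standard forms, with m the sample size, ε the accuracy parameter and δ the confidence parameter, these read

\[
\Pr\!\left\{ \sup_{C \in \mathcal{C}} \left| \frac{1}{m}\sum_{i=1}^{m} \mathbf{1}_{C}(x_i) - P(C) \right| > \epsilon \right\}
\;\le\; 4\,\Pi_{\mathcal{C}}(2m)\, e^{-\epsilon^{2} m/8},
\qquad
\Pi_{\mathcal{C}}(2m) \le \left(\frac{2em}{d}\right)^{d},
\]

\[
m \;\ge\; \max\!\left( \frac{4}{\epsilon}\log_{2}\frac{2}{\delta},\;\; \frac{8d}{\epsilon}\log_{2}\frac{13}{\epsilon} \right),
\]

where \(\Pi_{\mathcal{C}}\) is the growth function of the class, bounded via Sauer's lemma. The improved bound mentioned in the abstract is presumably of this same general form with sharper constants; the precise statement is given in the thesis.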

Item Type: Thesis (PhD)
Uncontrolled Keywords: Statistics
Sets: Collections > ProQuest Etheses
URI: http://etheses.lse.ac.uk/id/eprint/2080

