
Overfitting Avoidance for Stochastic Modeling of Attribute-Value Grammars

We present a novel approach to the problem of overfitting when training stochastic models that select parses generated by attribute-value grammars. In this approach, statistical features are merged according to the frequency of the linguistic elements within them. The resulting models are more general than the originals and contain fewer parameters. Empirical results on parse selection suggest that, over repeated iterations of iterative scaling, performance improves more reliably with such generalized models than with ungeneralized ones.
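The core idea of frequency-based feature merging can be illustrated with a minimal sketch. Everything below (the feature naming scheme, the count threshold, and the bucket labels) is a hypothetical illustration, not the authors' implementation: features whose counts fall below a threshold are collapsed into one generalized bucket per feature type, so the model has fewer parameters.

```python
# Hypothetical sketch of frequency-based feature merging: rare features
# are collapsed into a shared generalized bucket keyed by feature type.
# Feature names, threshold, and bucket labels are illustrative assumptions.
from collections import Counter

def merge_features(feature_counts, min_count=5):
    """Map each feature to itself if frequent enough, else to a
    generalized bucket derived from its feature type prefix."""
    mapping = {}
    for feat, count in feature_counts.items():
        if count >= min_count:
            mapping[feat] = feat
        else:
            # e.g. "subj:aardvark" -> "MERGED_subj"
            mapping[feat] = "MERGED_" + feat.split(":")[0]
    return mapping

counts = Counter({"subj:dog": 12, "subj:aardvark": 1,
                  "obj:cat": 8, "obj:okapi": 2})
mapping = merge_features(counts)
# frequent features survive unchanged; rare ones share a parameter
```

Under this toy scheme, `subj:dog` and `obj:cat` keep their own parameters while `subj:aardvark` and `obj:okapi` each collapse into a per-type merged feature, reducing the parameter count from four to... however many distinct buckets remain.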


Tony Mullen and Miles Osborne, Overfitting Avoidance for Stochastic Modeling of Attribute-Value Grammars. In: Proceedings of CoNLL-2000 and LLL-2000, Lisbon, Portugal, 2000. [ps] [pdf] [bibtex]