Diverse Classifiers for NLP Disambiguation Tasks: Comparison, Optimization, Combination, and Evolution.

In this paper we report preliminary results from an ongoing study that investigates the performance of machine learning classifiers on a diverse set of Natural Language Processing (NLP) tasks. First, we compare a number of popular existing learning methods (neural networks, memory-based learning, rule induction, decision trees, maximum entropy, Winnow perceptrons, naive Bayes, and support vector machines) and discuss their properties vis-à-vis typical NLP data sets. Next, we turn to methods for optimizing the parameters of single learning methods through cross-validation and evolutionary algorithms. We then investigate whether combining the tested systems into classifier ensembles can outperform the best single method. Finally, we discuss new and more thorough methods for automatically constructing classifier ensembles, based on the techniques used for parameter optimization.

Zavrel, J., Degroeve, S., Kool, A., Daelemans, W., Jokinen, K. (2000) Diverse Classifiers for NLP Disambiguation Tasks: Comparison, Optimization, Combination, and Evolution. Proceedings of CEvoLE 2/TWLT 18: "Learning to Behave", 201-221. Ieper, Belgium.
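The classifier-combination step summarized in the abstract can be illustrated with a minimal sketch of majority voting over the outputs of several classifiers. The function name, toy labels, and the classifiers named in the comments are illustrative assumptions, not the paper's actual experimental setup.

```python
from collections import Counter

def majority_vote(per_classifier_predictions):
    """Combine the label predictions of several classifiers by plurality vote.

    per_classifier_predictions: a list of prediction lists, one per classifier,
    all of equal length. Ties are broken in favour of the label seen first.
    """
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*per_classifier_predictions)]

# Toy example: three classifiers tagging four ambiguous tokens.
preds = [
    ["NN", "VB", "NN", "JJ"],   # e.g. a memory-based learner (illustrative)
    ["NN", "VB", "VB", "JJ"],   # e.g. a decision tree (illustrative)
    ["VB", "VB", "NN", "NN"],   # e.g. a naive Bayes model (illustrative)
]
print(majority_vote(preds))  # -> ['NN', 'VB', 'NN', 'JJ']
```

More elaborate combination schemes (e.g. weighting each classifier's vote by its cross-validated accuracy) follow the same pattern, replacing the plain count with a weighted sum per label.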