The computation of word meaning

Date: Tuesday, May 14, 2013, 14:00–16:00
Location: Grote Kauwenberg 18, D building, room D424
Presenter: Tim Van de Cruys

Over the last two decades, significant progress has been made in the
automatic extraction of word meaning from large-scale text corpora using
unsupervised machine learning methods. The most successful models of word
meaning are based on distributional similarity, computing the meaning of a
word from the contexts in which it appears. The first part of this
tutorial provides a general overview of the algorithms and notions of
context used to calculate semantic similarity. We will look in some
detail at dimensionality reduction, an unsupervised machine learning
technique that is able to reduce a large number of contexts to a
limited number of meaningful dimensions. In the second part of this
tutorial, participants will gain hands-on experience with the computation
of semantic similarity: they will construct a number of distributional
models and perform dimensionality reduction using a dedicated Python
library for semantic similarity.
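
As a rough illustration of what the hands-on session covers, the sketch
below builds a small distributional model from a toy corpus, reduces it
with a truncated SVD, and compares two word vectors with cosine
similarity. The corpus, window size, and number of dimensions are
illustrative assumptions, and the code uses plain NumPy rather than the
tutorial's dedicated library.

    from collections import Counter, defaultdict
    import numpy as np

    # Toy corpus; the tutorial works with large-scale text corpora instead.
    corpus = [
        "the cat sat on the mat".split(),
        "the dog sat on the rug".split(),
        "a cat chased a dog".split(),
    ]

    # Count co-occurrences within a symmetric context window of two words.
    window = 2
    cooc = defaultdict(Counter)
    for sentence in corpus:
        for i, word in enumerate(sentence):
            for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
                if j != i:
                    cooc[word][sentence[j]] += 1

    # Arrange the counts in a word-by-context matrix.
    words = sorted(cooc)
    contexts = sorted({c for counts in cooc.values() for c in counts})
    M = np.array([[cooc[w][c] for c in contexts] for w in words], dtype=float)

    # Dimensionality reduction: a truncated SVD keeps only the k strongest
    # latent dimensions of the context space.
    k = 2
    U, S, _ = np.linalg.svd(M, full_matrices=False)
    vectors = U[:, :k] * S[:k]

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    # Words that appear in similar contexts end up with similar vectors.
    print(cosine(vectors[words.index("cat")], vectors[words.index("dog")]))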
