CoNLL 2014: Program

Thursday June 26 2014
9:00 AM - 10:30 AM    Session 1 (Chair: Alessandro Moschitti)
   Opening remarks
   What's in a p-value in NLP?
Anders Søgaard, Anders Johannsen, Barbara Plank, Dirk Hovy, Héctor Martínez Alonso
   Domain-Specific Image Captioning
Rebecca Mason and Eugene Charniak
   Reconstructing Native Language Typology from Foreign Language Usage
Yevgeni Berzak, Roi Reichart, Boris Katz
   Automatic Transliteration of Romanized Dialectal Arabic
Mohamed Al-Badrashiny, Ramy Eskander, Nizar Habash, Owen Rambow
10:30 AM - 11:00 AM    Coffee break
11:00 AM - 12:30 PM    Session 2: Shared Task (Chair: Hwee Tou Ng)
   The CoNLL-2014 Shared Task on Grammatical Error Correction
Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, Christopher Bryant
   Grammatical error correction using hybrid systems and type filtering
Mariano Felice, Zheng Yuan, Øistein E. Andersen, Helen Yannakoudakis, Ekaterina Kochmar
   The AMU System in the CoNLL-2014 Shared Task: Grammatical Error Correction by Data-Intensive and Feature-Rich Statistical Machine Translation
Marcin Junczys-Dowmunt and Roman Grundkiewicz
   The Illinois-Columbia System in the CoNLL-2014 Shared Task
Alla Rozovskaya, Kai-Wei Chang, Mark Sammons, Dan Roth, Nizar Habash
12:30 PM - 2:00 PM    Lunch break
2:00 PM - 3:30 PM    Session 3 (Chair: Trevor A. Cohn)
   Learning to Rank Answer Candidates for Automatic Resolution of Crossword Puzzles
Gianni Barlacchi, Massimo Nicosia, Alessandro Moschitti
   Inducing Neural Models of Script Knowledge
Ashutosh Modi and Ivan Titov
   Grounding Language with Points and Paths in Continuous Spaces
Jacob Andreas and Dan Klein
   Looking for hyponyms in vector space
Marek Rei and Ted Briscoe
   Lexicon Infused Phrase Embeddings for Named Entity Resolution
Alexandre Passos, Vineet Kumar, Andrew McCallum
3:30 PM - 5:00 PM    Poster session 1
5:00 PM - 6:00 PM    Keynote 1 (Chair: Scott Wen-tau Yih)
   Keynote 1: Morten H. Christiansen
Language Acquisition as Learning to Process
Language happens in the here-and-now. If the linguistic input is not processed immediately, nothing can be learned from it. To successfully deal with the continual deluge of linguistic information, the brain must compress and recode the input as rapidly as possible. As a consequence, incoming language is incrementally recoded into chunks of decreasing granularity, from sounds to constructions and beyond. Thus, units at different levels of linguistic analysis come for free as a consequence of the transient nature of language. The specific units change during development as the child learns to use language. To illustrate, I present results from a recent chunk-based computational model of early syntactic acquisition. I conclude that the immediacy of language processing provides a fundamental constraint on accounts of language acquisition, implying that acquisition involves learning to process, rather than inducing a grammar.
Friday June 27 2014
8:35 AM - 9:35 AM    Keynote 2 (Chair: Scott Wen-tau Yih)
   Keynote 2: Tom Mitchell
Never-Ending Language Learning
We will never really understand learning until we can build machines that learn many different things, over years, and become better learners over time.
We describe our research to build a Never-Ending Language Learner (NELL) that runs 24 hours per day, forever, learning to read the web. Each day NELL extracts (reads) more facts from the web, into its growing knowledge base of beliefs. Each day NELL also learns to read better than the day before. NELL has been running 24 hours/day for over four years now. The result so far is a collection of 70 million interconnected beliefs (e.g., servedWith(coffee, applePie)) that NELL is considering at different levels of confidence, along with millions of learned phrasings, morphological features, and web page structures that NELL uses to extract beliefs from the web. NELL is also learning to reason over its extracted knowledge, and to automatically extend its ontology. Track NELL's progress at http://rtw.ml.cmu.edu, or follow it on Twitter at @CMUNELL.
9:35 AM - 10:30 AM    Session 4 (Chair: Ido Dagan)
   Focused Entailment Graphs for Open IE Propositions
Omer Levy, Ido Dagan, Jacob Goldberger
   Improved Pattern Learning for Bootstrapped Entity Extraction
Sonal Gupta and Christopher Manning
   Towards Temporal Scoping of Relational Facts based on Wikipedia Data
Avirup Sil and Silviu-Petru Cucerzan
10:30 AM - 11:00 AM    Coffee break
11:00 AM - 12:30 PM    Session 5 (Chair: Silviu-Petru Cucerzan)
   Distributed Word Representation Learning for Cross-Lingual Dependency Parsing
Min Xiao and Yuhong Guo
   Treebank Translation for Cross-Lingual Parser Induction
Jörg Tiedemann, Željko Agić, Joakim Nivre
   Weakly-Supervised Bayesian Learning of a CCG Supertagger
Dan Garrette, Chris Dyer, Jason Baldridge, Noah A. Smith
   Factored Markov Translation with Robust Modeling
Yang Feng, Trevor Cohn, Xinkai Du
   Hallucinating Phrase Translations for Low Resource MT
Ann Irvine and Chris Callison-Burch
12:30 PM - 2:00 PM    Lunch break
2:00 PM - 3:30 PM    Session 6 (Chair: Chris Dyer)
   Linguistic Regularities in Sparse and Explicit Word Representations
Omer Levy and Yoav Goldberg
   Probabilistic Modeling of Joint-context in Distributional Similarity
Oren Melamud, Ido Dagan, Jacob Goldberger, Idan Szpektor, Deniz Yuret
   A Rudimentary Lexicon and Semantics Help Bootstrap Phoneme Acquisition
Abdellah Fourtassi and Emmanuel Dupoux
   Best Paper Award announcement
   Business meeting
3:30 PM - 5:00 PM    Poster session 2