We test whether distributional models can perform one-shot learning of
definitional properties from text alone. Using Bayesian models, we find that
first learning overarching structure in the known data (regularities in
textual contexts and in properties) helps one-shot learning, and that
individual context items can be highly informative. Our experiments show that
our model can learn properties from a single exposure when given an
informative utterance.