ABSTRACT
Approaches to modeling semantic memory fall into two main classes: those
that construct a model from a small and well-controlled artificial dataset (e.g.
Collins & Quillian, 1969; Elman, 1990; Rogers & McClelland, 2004) and
those that acquire semantic structure from a large text corpus of natural language
(e.g. Landauer & Dumais, 1997; Griffiths, Steyvers, & Tenenbaum, 2007; Jones
& Mewhort, 2007). We refer to the first class as the small-world approach1 and
the latter as the Big Data approach. The two approaches differ in how they study
semantic memory and exemplify a fundamental point of theoretical divergence
across subfields of the cognitive sciences.