Three MIT researchers have investigated the tendency for languages to place syntactically related words as close together as possible. This property, dubbed “dependency length minimization,” or DLM, held overwhelmingly across the 37 languages analyzed. Keeping related words close together places less strain on working memory and on the brain in general. This large-scale, quantitative study provides evidence for DLM as a universal property of language. “It was interesting because people had really only looked at it in one or two languages,” Edward Gibson, a professor of cognitive science and co-author of the paper, told MIT News. “We thought it was probably true [more widely], but that’s pretty important to show. We’re not showing perfect optimization, but [DLM] is a factor that’s involved.”
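To make the measure concrete, here is a minimal sketch of how a sentence’s total dependency length could be computed: sum the linear distances between each word and its syntactic head. The example sentence and its head assignments below are hand-made for illustration and are not the researchers’ actual code or data.

```python
# Illustrative sketch of dependency length: the sum of linear distances
# between each word and its syntactic head. Head assignments here are a
# hand-made example, not data from the MIT study.

def total_dependency_length(heads):
    """`heads` maps each word's position (1-indexed) to its head's position.
    The root's head is 0 and contributes no distance."""
    return sum(abs(dep - head) for dep, head in heads.items() if head != 0)

# Word order 1: "John threw out the trash"
#   John->threw, threw=root, out->threw, the->trash, trash->threw
short_order = {1: 2, 2: 0, 3: 2, 4: 5, 5: 2}

# Word order 2: "John threw the trash out"
#   John->threw, threw=root, the->trash, trash->threw, out->threw
long_order = {1: 2, 2: 0, 3: 4, 4: 2, 5: 2}

print(total_dependency_length(short_order))  # 6 -- shorter arcs, preferred
print(total_dependency_length(long_order))   # 7 -- "out" drifts from "threw"
```

Under DLM, speakers (and languages) tend to favor orderings like the first, where the total arc length is smaller; the gap grows much larger as the intervening phrase gets longer.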
The implications of the study bring attention to a long-standing division in linguistics: the extent to which language is genetically predetermined. It’s “an important source of evidence for a long-standing hypothesis about how word order is determined across the world’s languages,” Jennifer Culbertson, a researcher of evolutionary linguistics at the University of Edinburgh, told Ars Technica. “There are many proposed universal properties of language, but basically all of them are controversial,” she explained. Those in Noam Chomsky’s school of an underlying universal grammar argue that this finding provides further evidence for a specific brain architecture dedicated to language, while others suggest the study reveals only a common biological cognitive constraint. Either way, the broad reach of DLM points to a universal property (read as: overwhelming tendency) of natural language construction.
#linguistics #language #langchat