Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color
Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed
Standard
Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color. / Abdou, Mostafa; Kulmizev, Artur; Hershcovich, Daniel; Frank, Stella; Pavlick, Ellie; Søgaard, Anders.
Proceedings of the 25th Conference on Computational Natural Language Learning. Association for Computational Linguistics, 2021. pp. 109–132.
Bibtex
@inproceedings{abdou2021color,
  title     = {Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color},
  author    = {Abdou, Mostafa and Kulmizev, Artur and Hershcovich, Daniel and Frank, Stella and Pavlick, Ellie and S{\o}gaard, Anders},
  booktitle = {Proceedings of the 25th Conference on Computational Natural Language Learning},
  publisher = {Association for Computational Linguistics},
  year      = {2021},
  pages     = {109--132},
  doi       = {10.18653/v1/2021.conll-1.9}
}
RIS
TY - GEN
T1 - Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color
T2 - 25th Conference on Computational Natural Language Learning (CoNLL 2021)
AU - Abdou, Mostafa
AU - Kulmizev, Artur
AU - Hershcovich, Daniel
AU - Frank, Stella
AU - Pavlick, Ellie
AU - Søgaard, Anders
PY - 2021
Y1 - 2021
N2 - Pretrained language models have been shown to encode relational information, such as the relations between entities or concepts in knowledge bases, e.g. (Paris, Capital, France). However, simple relations of this type can often be recovered heuristically, and the extent to which models implicitly reflect topological structure that is grounded in the world, such as perceptual structure, is unknown. To explore this question, we conduct a thorough case study on color. Namely, we employ a dataset of monolexemic color terms and color chips represented in CIELAB, a color space with a perceptually meaningful distance metric. Using two methods of evaluating the structural alignment of colors in this space with text-derived color term representations, we find significant correspondence. Analyzing the differences in alignment across the color spectrum, we find that warmer colors are, on average, better aligned to the perceptual color space than cooler ones, suggesting an intriguing connection to findings from recent work on efficient communication in color naming. Further analysis suggests that differences in alignment are, in part, mediated by collocationality and differences in syntactic usage, raising questions about the relationship between color perception, usage, and context.
U2 - 10.18653/v1/2021.conll-1.9
DO - 10.18653/v1/2021.conll-1.9
M3 - Article in proceedings
SP - 109
EP - 132
BT - Proceedings of the 25th Conference on Computational Natural Language Learning
PB - Association for Computational Linguistics
Y2 - 10 November 2021 through 11 November 2021
ER -
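The abstract describes evaluating the structural alignment between CIELAB color space and text-derived color-term representations. The Python below is a minimal sketch of one way such an alignment evaluation can be set up (a representational-similarity-style comparison of pairwise distance matrices); it is not the authors' released code, and the color terms, CIELAB values, and random stand-in embeddings are purely illustrative.

# Hedged sketch (not the paper's code): correlate pairwise distances in
# CIELAB space with pairwise distances between text-derived embeddings.
# All data below is illustrative; real embeddings would come from a
# pretrained language model.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Toy CIELAB coordinates (L*, a*, b*) for a few monolexemic color terms.
cielab = {
    "red":    [53.2,  80.1,   67.2],
    "green":  [46.2, -51.7,   49.9],
    "blue":   [32.3,  79.2, -107.9],
    "yellow": [97.1, -21.6,   94.5],
}

# Stand-in for text-derived color-term vectors (hypothetical embeddings).
rng = np.random.default_rng(0)
embeddings = {term: rng.normal(size=768) for term in cielab}

terms = sorted(cielab)
# Pairwise distances in perceptual space (Euclidean in CIELAB) and in
# embedding space (cosine), each as a condensed distance vector.
perceptual = pdist(np.array([cielab[t] for t in terms]))
textual = pdist(np.array([embeddings[t] for t in terms]), "cosine")

# Structural alignment score: rank correlation between the two
# distance vectors (higher rho = closer alignment).
rho, p = spearmanr(perceptual, textual)
print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")

Rank correlation is used here because it compares only the relative ordering of distances, so the two spaces need not share a scale; with random stand-in embeddings the correlation should hover near zero, whereas the paper reports significant correspondence for actual language-model representations.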
ID: 299824244