Publication:
Lexical surprisal as a general predictor of reading time
I. F. Monsalve, S. Frank, G. Vigliocco • European Chapter of the Association for Computational Linguistics (EACL) • 23 April 2012
TLDR: The results show that lexicalized surprisal according to both models is a significant predictor of RT, outperforming its unlexicalized counterparts.
Citations: 88
Abstract: Probabilistic accounts of language processing can be psychologically tested by comparing word-reading times (RT) to the conditional word probabilities estimated by language models. Using surprisal as a linking function, a significant correlation between unlexicalized surprisal and RT has been reported (e.g., Demberg and Keller, 2008), but success using lexicalized models has been limited. In this study, phrase structure grammars and recurrent neural networks estimated both lexicalized and unlexicalized surprisal for words of independent sentences from narrative sources. These same sentences were used as stimuli in a self-paced reading experiment to obtain RTs. The results show that lexicalized surprisal according to both models is a significant predictor of RT, outperforming its unlexicalized counterparts.
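The abstract's linking hypothesis is that a word's surprisal, -log P(word | preceding context), predicts its reading time. The following is a minimal illustrative sketch, not the authors' code: the probabilities and RT values are placeholders, and a simple least-squares fit stands in for the mixed-effects regression typically used in such studies.

```python
# Hedged sketch of the surprisal linking function (not the paper's implementation).
# surprisal(w_t) = -log2 P(w_t | w_1 ... w_{t-1}); higher surprisal is expected
# to correspond to longer reading times (RT).
import math

# Hypothetical per-word conditional probabilities from some language model
# (the paper estimates these with phrase structure grammars and RNNs).
word_probs = [0.21, 0.05, 0.60, 0.02]
surprisal = [-math.log2(p) for p in word_probs]

# Hypothetical self-paced reading times in milliseconds for the same words.
rts = [310.0, 365.0, 290.0, 420.0]

# Toy test of the linking hypothesis: ordinary least-squares fit of RT on
# surprisal (the paper uses mixed-effects regression with additional covariates).
n = len(rts)
mean_s = sum(surprisal) / n
mean_rt = sum(rts) / n
slope = sum((s - mean_s) * (r - mean_rt) for s, r in zip(surprisal, rts)) / \
        sum((s - mean_s) ** 2 for s in surprisal)
intercept = mean_rt - slope * mean_s
print(f"RT ≈ {intercept:.1f} + {slope:.1f} × surprisal")
```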