Dr Anuenue Baker-Kukona

Job: Associate Professor in Quantitative Methods

Faculty: Health and Life Sciences

School/department: School of Applied Social Sciences

Address: De Montfort University, The Gateway, Leicester, LE1 9BH

T: +44 (0)116 250 6184

E: anuenue.baker-kukona@dmu.ac.uk

W: www.dmu.ac.uk/appliedsocialsciences


Personal profile

Dr Baker-Kukona is a cognitive psychologist. He joined De Montfort University in 2014, following a postdoc at the University of Dundee and a PhD at the University of Connecticut. His research focuses on the psychology of language.

Language and cognition

Dr Baker-Kukona’s research investigates the moment-by-moment cognitive processes that support real-time language comprehension. Much of his research uses eye tracking: where we look when we process (spoken) words and sentences can reveal both how we comprehend language in real time and how we relate this information to the world around us. One aspect of comprehension Dr Baker-Kukona is interested in concerns spatial language: how do we learn and make sense of words and sentences that refer to spatial locations and events (e.g., Kamide, Lindsay, Scheepers, & Kukona, 2016; Kukona, Altmann, & Kamide, 2014)? Another concerns prediction: how much thinking ahead do we do during language processing, and what do our predictions tell us about the language system (e.g., Kukona, Cho, Magnuson, & Tabor, 2014; Kukona, Fang, Aicher, Chen, & Magnuson, 2011)?

Individual differences

Language comprehension involves a complex array of processes and skills, and comprehenders show tremendous individual differences in language performance. Dr Baker-Kukona is interested in understanding how characteristics such as memory capacity, processing speed and vocabulary size relate to real-time comprehension processes (e.g., Magnuson et al., 2011; Van Dyke, Johns, & Kukona, 2014).

Google Scholar

ResearchGate

Twitter

Research group affiliations

  • Psychology

Publications and outputs 

  • Spatial narrative context modulates semantic (but not visual) competition during discourse processing
    Williams, Glenn P.; Kukona, Anuenue; Kamide, Yuki. Recent research highlights the influence of (e.g., task) context on conceptual retrieval. To assess whether conceptual representations are context-dependent rather than static, we investigated the influence of spatial narrative context on accessibility for lexical-semantic information by exploring competition effects. In two visual world experiments, participants listened to narratives describing semantically related (piano-trumpet; Experiment 1) or visually similar (bat-cigarette; Experiment 2) objects in the same or separate narrative locations while viewing arrays displaying these (‘target’ and ‘competitor’) objects and other distractors. Upon re-mention of the target, we analysed eye movements to the competitor. In Experiment 1, we observed semantic competition only when targets and competitors were described in the same location; in Experiment 2, we observed visual competition regardless of context. We interpret these results as consistent with context-dependent approaches, such that spatial narrative context dampens accessibility for semantic but not visual information in the visual world.
  • Individual differences in subphonemic sensitivity and phonological skills
    Li, Monica Y. C.; Braze, David; Kukona, Anuenue; Johns, Clinton L.; Tabor, Whitney; Van Dyke, Julie A.; Mencl, W. Einar; Shankweiler, Donald P.; Pugh, Kenneth R.; Magnuson, James S. Many studies have established a link between phonological abilities (indexed by phonological awareness and phonological memory tasks) and typical and atypical reading development. Individuals who perform poorly on phonological assessments have been mostly assumed to have underspecified (or “fuzzy”) phonological representations, with typical phonemic categories, but with greater category overlap due to imprecise encoding. An alternative posits that poor readers have overspecified phonological representations, with speech sounds perceived allophonically (phonetically distinct variants of a single phonemic category). On both accounts, mismatch between phonological categories and orthography leads to reading difficulty. Here, we consider the implications of these accounts for online speech processing. We used eye tracking and an individual differences approach to assess sensitivity to subphonemic detail in a community sample of young adults with a wide range of reading-related skills. Subphonemic sensitivity inversely correlated with meta-phonological task performance, consistent with overspecification. Open access article.
  • The influence of globally ungrammatical local syntactic constraints on real-time sentence comprehension: Evidence from the visual world paradigm and reading
    Kamide, Yuki; Kukona, Anuenue. We investigated the influence of globally ungrammatical local syntactic constraints on sentence comprehension, as well as the corresponding activation of global and local representations. In Experiment 1, participants viewed visual scenes with objects like a carousel and motorbike while hearing sentences with noun phrase (NP) or verb phrase (VP) modifiers like “The girl who likes the man (from London/very much) will ride the carousel.” In both cases, “girl” and “ride” predicted carousel as the direct object; however, the locally coherent combination “the man from London will ride…” in NP cases alternatively predicted motorbike. During “ride,” local constraints, although ruled out by the global constraints, influenced prediction as strongly as global constraints: While motorbike was fixated less than carousel in VP cases, it was fixated as much as carousel in NP cases. In Experiment 2, these local constraints likewise slowed reading times. We discuss implications for theories of sentence processing.
  • The real-time prediction and inhibition of linguistic outcomes: Effects of language and literacy skill
    Kukona, Anuenue; Braze, David; Johns, Clint L.; Mencl, W. Einar; Van Dyke, Julie A.; Magnuson, James S.; Pugh, Kenneth R.; Shankweiler, Donald P.; Tabor, Whitney. Recent studies have found considerable individual variation in language comprehenders’ predictive behaviors, as revealed by their anticipatory eye movements during language comprehension. The current study investigated the relationship between these predictive behaviors and the language and literacy skills of a diverse, community-based sample of young adults. We found that rapid automatized naming (RAN) was a key determinant of comprehenders’ prediction ability (e.g., as reflected in predictive eye movements to a WHITE CAKE on hearing “The boy will eat the white…”). Simultaneously, comprehension-based measures predicted participants’ ability to inhibit eye movements to objects that shared features with predictable referents but were implausible completions (e.g., as reflected in eye movements to a white but inedible WHITE CAR). These findings suggest that the excitatory and inhibitory mechanisms that support prediction during language processing are closely linked with specific cognitive abilities that support literacy. We show that a self-organizing cognitive architecture captures this pattern of results.
  • Event processing in the visual world: Projected motion paths during spoken sentence comprehension
    Kamide, Yuki; Lindsay, Shane; Scheepers, Christoph; Kukona, Anuenue. Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the movement of an agent to a goal while viewing visual scenes depicting the agent, goal, and empty space in between. Crucially, verbs suggested either upward (e.g., jump) or downward (e.g., crawl) paths. We found that in the rare event of fixating the empty space between the agent and goal, visual attention was biased upward or downward in line with the verb. In Experiment 2, visual scenes depicted a central obstruction, which imposed further constraints on the paths and increased the likelihood of fixating the empty space between the agent and goal. The results from this experiment corroborated and refined the previous findings. Specifically, eye-movement effects started immediately after hearing the verb and were in line with data from an additional mouse-tracking task that encouraged a more explicit spatial reenactment of the motion event. In revealing how event comprehension operates in the visual world, these findings suggest a mental simulation process whereby spatial details of motion events are mapped onto the world through visual attention. The strength and detectability of such effects in overt eye-movements is constrained by the visual world and the fact that perceivers rarely fixate regions of empty space.
  • Knowing what, where, and when: Event comprehension in language processing
    Kukona, Anuenue; Altmann, Gerry T. M.; Kamide, Yuki. We investigated the retrieval of location information, and the deployment of attention to these locations, following (described) event-related location changes. In two visual world experiments, listeners viewed arrays with containers like a bowl, jar, pan, and jug, while hearing sentences like “The boy will pour the sweetcorn from the bowl into the jar, and he will pour the gravy from the pan into the jug. And then, he will taste the sweetcorn”. At the discourse-final “sweetcorn”, listeners fixated context-relevant “Target” containers most (jar). Crucially, we also observed two forms of competition: listeners fixated containers that were not directly referred to but associated with “sweetcorn” (bowl), and containers that played the same role as Targets (goals of moving events; jug), more than distractors (pan). These results suggest that event-related location changes are encoded across representations that compete for comprehenders’ attention, such that listeners retrieve, and fixate, locations that are not referred to in the unfolding language, but related to them via object or role information.
  • Low working memory capacity is only spuriously related to poor reading comprehension
    Van Dyke, Julie A.; Johns, Clint L.; Kukona, Anuenue. Accounts of comprehension failure, whether in the case of readers with poor skill or when syntactic complexity is high, have overwhelmingly implicated working memory capacity as the key causal factor. However, extant research suggests that this position is not well supported by evidence on the span of active memory during online sentence processing, nor is it well motivated by models that make explicit claims about the memory mechanisms that support language processing. The current study suggests that sensitivity to interference from similar items in memory may provide a better explanation of comprehension failure. Through administration of a comprehensive skill battery, we found that the previously observed association of working memory with comprehension is likely due to the collinearity of working memory with many other reading-related skills, especially IQ. In analyses which removed variance shared with IQ, we found that receptive vocabulary knowledge was the only significant predictor of comprehension performance in our task out of a battery of 24 skill measures. In addition, receptive vocabulary and non-verbal memory for serial order—but not simple verbal memory or working memory—were the only predictors of reading times in the region where interference had its primary effect. We interpret these results in light of a model that emphasizes retrieval interference and the quality of lexical representations as key determinants of successful comprehension.
  • Lexical interference effects in sentence processing: Evidence from the visual world paradigm and self-organizing models
    Kukona, Anuenue; Cho, Pyeong Whan; Magnuson, James S.; Tabor, Whitney. Psycholinguistic research spanning a number of decades has produced diverging results with regard to the nature of constraint integration in online sentence processing. For example, evidence that language users anticipatorily fixate likely upcoming referents in advance of evidence in the speech signal supports rapid context integration. By contrast, evidence that language users activate representations that conflict with contextual constraints, or only indirectly satisfy them, supports nonintegration or late integration. Here we report on a self-organizing neural network framework that addresses 1 aspect of constraint integration: the integration of incoming lexical information (i.e., an incoming word) with sentence context information (i.e., from preceding words in an unfolding utterance). In 2 simulations, we show that the framework predicts both classic results concerned with lexical ambiguity resolution (Swinney, 1979; Tanenhaus, Leiman, & Seidenberg, 1979), which suggest late context integration, and results demonstrating anticipatory eye movements (e.g., Altmann & Kamide, 1999), which support rapid context integration. We also report 2 experiments using the visual world paradigm that confirm a new prediction of the framework. Listeners heard sentences like "The boy will eat the white" while viewing visual displays with objects like a white cake (i.e., a predictable direct object of "eat"), white car (i.e., an object not predicted by "eat," but consistent with "white"), and distractors. In line with our simulation predictions, we found that while listeners fixated white cake most, they also fixated white car more than unrelated distractors in this highly constraining sentence (and visual) context.
  • Impulse processing: A dynamical systems model of incremental eye movements in the visual world paradigm
    Kukona, Anuenue; Tabor, Whitney. The Visual World Paradigm (VWP) presents listeners with a challenging problem: They must integrate two disparate signals, the spoken language and the visual context, in support of action (e.g., complex movements of the eyes across a scene). We present Impulse Processing, a dynamical systems approach to incremental eye movements in the visual world that suggests a framework for integrating language, vision, and action generally. Our approach assumes that impulses driven by the language and the visual context impinge minutely on a dynamical landscape of attractors corresponding to the potential eye-movement behaviors of the system. We test three unique predictions of our approach in an empirical study in the VWP, and describe an implementation in an artificial neural network. We discuss the Impulse Processing framework in relation to other models of the VWP.
  • The time course of anticipatory constraint integration
    Kukona, Anuenue; Fang, Shin-Yi; Aicher, Karen A.; Chen, Helen; Magnuson, James S. Several studies have demonstrated that as listeners hear sentences describing events in a scene, their eye movements anticipate upcoming linguistic items predicted by the unfolding relationship between scene and sentence. While this may reflect active prediction based on structural or contextual expectations, the influence of local thematic priming between words has not been fully examined. In Experiment 1, we presented verbs (e.g., arrest) in active (Subject-Verb-Object) sentences with displays containing verb-related patients (e.g., crook) and agents (e.g., policeman). We examined patient and agent fixations following the verb, after the agent role had been filled by another entity, but prior to bottom-up specification of the object. Participants were nearly as likely to fixate agents "anticipatorily" as patients, even though the agent role was already filled. However, the patient advantage suggested simultaneous influences of both local priming and active prediction. In Experiment 2, using passive sentences (Object-Verb-Subject), we found stronger, but still graded influences of role prediction when more time elapsed between verb and target, and more syntactic cues were available. We interpret anticipatory fixations as emerging from constraint-based processes that involve both non-predictive thematic priming and active prediction.

Click here to view a full listing of Anuenue Baker-Kukona's publications and outputs.

Research interests/expertise

  • Language
  • Psycholinguistics
  • Sentence processing
  • Computational modelling
  • Eye movements
  • Statistics

Areas of teaching

  • Cognitive psychology
  • Research methods
  • Statistics

Qualifications

PhD, MA, BS

Courses taught

  • PSYC1090 Introductory research methods in psychology (Year 1)
  • PSYC1091 Core areas of psychology (Year 1)
  • PSYC2092 Cognitive psychology (Year 2)

Membership of professional associations and societies

  • British Psychological Society
  • Cognitive Science Society
  • Experimental Psychology Society
  • Higher Education Academy

Internally funded research project information

  • Eye tracking in the psychological sciences, Capital equipment, works and IT/AV, SR Research EyeLink 1000 Plus, 2015-2016, Sponsor.
  • Electroencephalography: Understanding brain and behaviour, Research Capital Investment Fund (RCIF2), BioSemi ActiveTwo, 2014-2015, Investigator, Coulthard, H., Hall, J., Lopes, B., Song, J., Van den Tol, A., & Yu, H.
