3,937 results for "Lexical stress"
System for Automatic Assignment of Lexical Stress in Croatian
Voice interfaces are increasingly being integrated into IoT devices. The pronunciation and proper prosody of speech play a major role in the intelligibility and naturalness of synthesized voices. Each language has its own prosodic characteristics. In this paper, we present the results of a study aimed at testing the applicability of methods for modelling and predicting the prosodic features of the Croatian language. We investigated the extent to which their performance can be improved by incorporating linguistic features and peculiarities specific to the Croatian language. In the model learning process, tree classification was used to predict the lexical stress position and the type of stress in a word, and a lexicon of 1,011,785 word forms was used as the model learning set. Separate models were created for predicting the position and type of lexical stress. The results improved significantly after the rules for atonic words (clitics) were applied. A hybrid approach combining rule-based and model-based methods was also proposed. The final accuracy of assigning lexical stress using the hybrid approach was 95.3%.
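The hybrid rule-plus-model pipeline described in this abstract can be sketched roughly as follows. Everything here is an invented placeholder, not data from the paper: the clitic set, the toy lookup standing in for the trained tree classifier, and the stress labels are all hypothetical, included only to show the control flow (rules for atonic words first, model prediction otherwise).

```python
# Rough sketch of a hybrid lexical-stress assigner: rules handle atonic
# words (clitics), a trained model handles everything else. The clitic
# set and the toy "model" below are illustrative stand-ins, not the
# paper's actual lexicon or classifier.

CLITICS = {"se", "je", "na", "li"}  # hypothetical atonic words

# Stand-in for the tree classifier trained on the lexicon:
# word -> (stress position, stress type)
TOY_MODEL = {
    "jezik": (1, "short-falling"),
    "voda": (1, "short-rising"),
}

def assign_stress(word):
    """Return (position, type), or None for atonic (unstressed) words."""
    w = word.lower()
    if w in CLITICS:
        return None  # rule-based step: clitics bear no lexical stress
    # model-based step (here a toy lookup with a fallback guess)
    return TOY_MODEL.get(w, (1, "unknown"))

print(assign_stress("se"))     # None
print(assign_stress("jezik"))  # (1, 'short-falling')
```

Applying the clitic rule before the model mirrors the paper's finding that results improved significantly once atonic words were handled by rule rather than left to the classifier.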
A Cross-Language Study of Perception of Lexical Stress in English
This study investigates the question of whether language background affects the perception of lexical stress in English. Thirty native English speakers and 30 native Chinese learners of English participated in a stressed-syllable identification task and a discrimination task involving three types of stimuli (real words/pseudowords/hums). The results show that both language groups were able to identify and discriminate stress patterns. Lexical and segmental information affected the English and Chinese speakers in varying degrees. English and Chinese speakers showed different response patterns to trochaic vs. iambic stress across the three types of stimuli. An acoustic analysis revealed that the two language groups used different acoustic cues to process lexical stress. The findings suggest that the different degrees of lexical and segmental effects can be explained by language background, which in turn supports the hypothesis that language background affects the perception of lexical stress in English.
Beat gestures influence which speech sounds you hear
Beat gestures—spontaneously produced biphasic movements of the hand—are among the most frequently encountered co-speech gestures in human communication. They are closely temporally aligned to the prosodic characteristics of the speech signal, typically occurring on lexically stressed syllables. Despite their prevalence across speakers of the world's languages, how beat gestures impact spoken word recognition is unclear. Can these simple 'flicks of the hand' influence speech perception? Across a range of experiments, we demonstrate that beat gestures influence the explicit and implicit perception of lexical stress (e.g. distinguishing OBject from obJECT), and in turn can influence what vowels listeners hear. Thus, we provide converging evidence for a manual McGurk effect: relatively simple and widely occurring hand movements influence which speech sounds we hear.
A USAGE-BASED THEORY OF GRAMMATICAL STATUS AND GRAMMATICALIZATION
This article proposes a new way of understanding grammatical status and grammaticalization as distinctive types of linguistic phenomena. The approach is usage-based and links up structural and functional, as well as synchronic and diachronic, aspects of the issue. The proposal brings a range of previously disparate phenomena into a motivated relationship, while certain well-entrenched criteria (such as 'closed paradigms') are shown to be incidental to grammatical status and grammaticalization. The central idea is that grammar is constituted by expressions that by linguistic convention are ancillary and as such discursively secondary in relation to other linguistic expressions, and that grammaticalization is the kind of change that gives rise to such expressions.
From “I dance” to “she danced” with a flick of the hands: Audiovisual stress perception in Spanish
When talking, speakers naturally produce hand movements (co-speech gestures) that contribute to communication. Evidence in Dutch suggests that the timing of simple up-and-down, non-referential “beat” gestures influences spoken word recognition: the same auditory stimulus was perceived as CONtent (noun, capitalized letters indicate stressed syllables) when a beat gesture occurred on the first syllable, but as conTENT (adjective) when the gesture occurred on the second syllable. However, these findings were based on a small number of minimal pairs in Dutch, limiting the generalizability of the findings. We therefore tested this effect in Spanish, where lexical stress is highly relevant in the verb conjugation system, distinguishing bailo , “I dance” with word-initial stress from bailó , “she danced” with word-final stress. Testing a larger sample (N = 100), we also assessed whether individual differences in working memory capacity modulated how much individuals relied on the gestures in spoken word recognition. The results showed that, similar to Dutch, Spanish participants were biased to perceive lexical stress on the syllable that visually co-occurred with a beat gesture, with the effect being strongest when the acoustic stress cues were most ambiguous. No evidence was found for by-participant effect sizes to be influenced by individual differences in phonological or visuospatial working memory. These findings reveal gestural-speech coordination impacts lexical stress perception in a language where listeners are regularly confronted with such lexical stress contrasts, highlighting the impact of gestures’ timing on prominence perception and spoken word recognition.
Orthographic Cues to Stress Affect Reading in Connected Text: Evidence From a Letter-Detection Task
In English, written word endings act as probabilistic cues to a word's lexical stress pattern. Readers are sensitive to these statistical associations between lexical stress and spelling, using word endings' spellings to guide stress placement when reading isolated words. However, we do not yet know if readers' use of endings as stress cues extends to the processing of words in connected text. In the present study, we explored this question with adult readers (N = 53). Participants read texts for comprehension while circling a specified target letter. This letter-detection task can speak to the processes involved in reading words, as readers detect fewer letters when lexical access is relatively quick. Adults detected fewer target letters in words whose endings accurately cued their stress patterns than in words whose endings gave misleading cues to stress (d = 0.41). This suggests that adult readers draw on written word endings as cues to stress during the everyday task of reading for comprehension. These findings clarify the scope of word endings' role as cues to stress in English. Public Significance Statement: Skilled reading is essential to full societal participation. In English, skilled reading involves working out where to put lexical stress (the pattern of emphasis on syllables in a word). Research shows that adults use letter patterns to decide where to put stress in a word (e.g., words ending in -et and -ette tend to have first- and second-syllable stress, respectively). These effects have emerged in studies on the processing of single words. Here we test adults' use of letter patterns to determine stress in a more ecologically valid context, reading of connected texts. We find similar, albeit smaller, effects, extending prior findings to a more naturalistic reading context.
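The ending-based cueing this abstract describes (-et words tending toward first-syllable stress, -ette words toward second-syllable stress) can be illustrated with a toy lookup. The two endings and their associated stress positions follow the abstract's own example; the function, the longest-match rule, and the test words are hypothetical simplifications, since the real cues are probabilistic rather than deterministic.

```python
# Toy illustration of word endings as cues to lexical stress.
# Per the abstract's example, -et tends toward first-syllable stress
# and -ette toward second-syllable stress. Real cues are probabilistic;
# this deterministic lookup is deliberately simplified.

ENDING_CUES = {"ette": 2, "et": 1}  # ending -> typically stressed syllable

def cued_stress(word):
    """Return the stress position suggested by the word's ending, if any."""
    # Check longer endings first so "-ette" wins over its suffix "-et".
    for ending in sorted(ENDING_CUES, key=len, reverse=True):
        if word.endswith(ending):
            return ENDING_CUES[ending]
    return None  # no known ending cue

print(cued_stress("cassette"))  # 2
print(cued_stress("trumpet"))   # 1
```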
Sensitivity to word endings as probabilistic orthographic cues to lexical stress among English as second language learners
Assigning stress to polysyllabic words is a crucial aspect of reading aloud in English. Previous research demonstrated that native English speakers are sensitive to word endings as probabilistic orthographic cues to stress assignment. However, little is known about whether second language (L2) learners of English are sensitive to word endings as cues to lexical stress. The current study investigated whether native Chinese speakers learning English as a second language (ESL) are sensitive to word endings as probabilistic orthographic cues to lexical stress. Our ESL learners demonstrated sensitivity to word endings as cues in a stress-assignment task and a naming task. With the increase in language proficiency, ESL learners responded more accurately in the stress-assignment task. Moreover, stress position and language proficiency moderated the strength of the sensitivity, with a trochaic bias and better proficiency leading to better sensitivity in the stress-assignment task. However, as language proficiency increased, participants' naming speed became faster in iambic but slower in trochaic patterns, reflecting the learners' fledgling knowledge about the specific stress patterns associated with varying orthographic cues, especially in a demanding naming task. Taken together, the evidence from our ESL learners fits the proposed statistical learning mechanism: L2 learners are able to implicitly extract statistical regularities from linguistic materials (in our study, the orthographic cues to lexical stress). Stress position and language proficiency both play a role in developing this sensitivity.
An Imaging‐Guided Neural Model Explains Lexical Stress Alteration in Acquired Apraxia of Speech
Acquired apraxia of speech (AOS) is a disorder of speech motor planning/programming that is induced by a lesion to the left anterior ventral precentral sulcus. This study analyses neuroimaging data from AOS in order to propose and computationally test a mechanistic explanation of how the lesion leads to the characteristic of altered lexical stress in the disorder. Neuroimaging data from 31 participants with left hemisphere stroke (15 AOS) were reanalysed to guide a ‘lesioned’ version of the bilateral GODIVA neuro‐computational model of speech production. Structural MRI and resting‐state functional MRI measurements were used to decide the model's lesion extent and atypical neural processing, respectively. The ‘lesioned’ model was compared with a neurotypical model on the production of an exemplar utterance with different linguistic contexts. Analyses revealed the average lesion in the AOS participants extended over 22.25% of the left anterior ventral precentral sulcus. Functional connectivity in AOS was reduced between the lateral part of that region and the right motor cortex, as well as between the left and right motor cortices themselves. The version of the model that we altered in line with these findings produced lengthening of the second of two consecutive short syllables. The lengthened syllable was a word‐initial unstressed syllable, and consequently, its contrastiveness with the adjacent stressed syllable of the word was reduced. The agreement between simulation results and previously reported acoustic measurements from actual AOS patients lends support to our mechanistic explanation. In conclusion, simulations of the GODIVA model provided empirical support for a mechanistic explanation indicating permanent sub‐threshold cortical activity in AOS. As a result, the speech system becomes biased away from a motor control strategy based on motor programs and toward a strategy based on sensory feedback. 
This both lengthens brief syllables and interferes with the mechanism to shift between syllables, ultimately altering lexical stress. Analysis of the model's neural dynamics suggests the explanation can be generalised to various contexts where lexical stress is altered in AOS. This study reanalysed neuroimaging data from individuals with acquired apraxia of speech (AOS) to simulate lesion effects in the GODIVA neurocomputational speech model. The lesioned model reproduced characteristic lexical stress alterations in AOS, supporting a mechanistic explanation of the disorder involving an engaged feedback control system and altered transitions between syllables.
CONTRASTIVE FOCUS VS. DISCOURSE-NEW: EVIDENCE FROM PHONETIC PROMINENCE IN ENGLISH
The results of a production experiment show that English speakers distinguish elements under contrastive focus from elements that are merely new in the discourse. A novel paradigm eliciting both contrastively focused and merely discourse-new elements in the same sentence avoids differences in information structure and pitch accenting in the context surrounding the target elements that were confounds in previous studies on the topic. Elements under contrastive focus show greater duration, relative intensity, and F0 movement with respect to other elements in the utterance than elements that are new in the discourse but not under contrastive focus. We argue that the phonetic differences revealed here cannot be explained in terms of systematic manipulation of pitch-accent type or phrasal boundaries, and should instead be analyzed as differences in phrase-level phonological prominence for contrastively focused and merely discourse-new elements.
A Maximum Entropy Model of Phonotactics and Phonotactic Learning
The study of phonotactics is a central topic in phonology. We propose a theory of phonotactic grammars and a learning algorithm that constructs such grammars from positive evidence. Our grammars consist of constraints that are assigned numerical weights according to the principle of maximum entropy. The grammars assess possible words on the basis of the weighted sum of their constraint violations. The learning algorithm yields grammars that can capture both categorical and gradient phonotactic patterns. The algorithm is not provided with constraints in advance, but uses its own resources to form constraints and weight them. A baseline model, in which Universal Grammar is reduced to a feature set and an SPE-style constraint format, suffices to learn many phonotactic phenomena. In order for the model to learn nonlocal phenomena such as stress and vowel harmony, it must be augmented with autosegmental tiers and metrical grids. Our results thus offer novel, learning-theoretic support for such representations. We apply the model in a variety of learning simulations, showing that the learned grammars capture the distributional generalizations of these languages and accurately predict the findings of a phonotactic experiment.
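The scoring scheme this abstract describes, in which a grammar assesses a candidate word by the weighted sum of its constraint violations, can be sketched numerically. The two constraints, their weights, and the sample words below are invented for illustration; the paper's learner induces its own constraints and fits the weights by the principle of maximum entropy, which this sketch does not do.

```python
import math

# Minimal sketch of maxent phonotactic scoring: a word's "harmony" is
# the weighted sum of its constraint violations, and its unnormalised
# probability is exp(-harmony). Constraints and weights are invented.

VOWELS = set("aeiou")

def initial_cluster(word):
    """*#CC: 1 violation if the word begins with two consonants."""
    return int(len(word) >= 2 and word[0] not in VOWELS and word[1] not in VOWELS)

def final_consonant(word):
    """*C#: 1 violation if the word ends in a consonant."""
    return int(bool(word) and word[-1] not in VOWELS)

WEIGHTS = {initial_cluster: 2.0, final_consonant: 0.5}  # illustrative weights

def harmony(word):
    """Weighted sum of constraint violations (lower = better-formed)."""
    return sum(w * c(word) for c, w in WEIGHTS.items())

def maxent_value(word):
    """Unnormalised probability; normalising over the space of possible
    words would yield the maximum-entropy distribution."""
    return math.exp(-harmony(word))

print(harmony("blik"))  # 2.5 (initial cluster + final consonant)
print(harmony("pala"))  # 0.0 (no violations)
```

Gradient as well as categorical patterns fall out of this arithmetic: a large weight makes a constraint near-inviolable, while a small weight merely depresses the probability of forms that violate it.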