2,096 result(s) for "Linguistic units"
Lingual Unit’s Marker of Semantic Relations in Javanese Announcement Discourse
Javanese, a language with a rich cultural heritage, has a unique discourse structure when it comes to announcements. This study focuses on examining the linguistic units that mark the semantic relations in Javanese announcement discourse. The data collection used the listening method, and the data analysis employed the distributional method, utilizing the direct element technique. The result of this study shows that titles of Javanese announcement discourses are typically expressed with the lexical unit wara-wara, pariwara, or pengumuman 'announcement' as the linguistic variants. The message conveyed should be responded to by the recipient through action. Older groups tend to use Javanese krama to show respect in communication, while younger people often prefer Javanese ngoko or Indonesian. Semantic relations are mostly expressed explicitly, including phatic relations, gratitude, thanks, notifications, mediators, requests, temporal, emphasis, purpose, causality, additions, hope, specification, possibilities, and closing. These findings are important for the development of linguistic theory, as input for the preparation of discourse books, and as resources for the cultivation of the Javanese language.
Cortical encoding of acoustic and linguistic rhythms reflects L2 narrative comprehension
Highlights:
• Cortical activity tracks acoustic and linguistic rhythms during L2 comprehension.
• Acoustic cues enhance L2 comprehension and neural tracking of linguistic rhythms.
• Neural tracking of linguistic rather than acoustic rhythms reflects L2 comprehension.
• Response power and phase of linguistic rhythm tracking predict L2 comprehension.

Speech comprehension is a multistage process involving both acoustic encoding and linguistic processing. Accumulating evidence has demonstrated that low-frequency cortical activity can track perceived linguistic units (e.g., words) on top of basic acoustic features (e.g., speech envelope). However, it remains unclear how the neural tracking of acoustic and linguistic information relates to second language (L2) speech comprehension in narrative contexts. Here, we investigate neural tracking of narrative speech for L2 listeners using electroencephalography (EEG). Notably, we introduce amplitude modulation (AM) cues aligned with word rhythm onto the basic envelope of speech and employ a frequency-tagging paradigm to measure neural responses to word and AM rhythm separately. When narrative speech was presented to L2 listeners during a speech comprehension task, reliable neural tracking of word and AM rhythm was observed in low-frequency cortical activity. While the introduction of AM cues enhances both comprehension performance and word-tracking responses, listeners with high versus low comprehension performance exhibit differences in their word-tracking responses rather than AM-tracking responses. Furthermore, the power and phase associated with word-tracking responses jointly reflect individual comprehension performance of L2 listeners. Our results indicate that bottom-up acoustic cues and top-down linguistic knowledge predominantly modulate the low-frequency neural tracking of linguistic units, which contributes to speech comprehension in a nonnative language.
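The frequency-tagging logic described in this abstract can be illustrated with a toy simulation (the rates, amplitudes, and noise level below are hypothetical, not the study's actual parameters): a response locked to a tagged rhythm shows up as a peak at exactly that frequency in the spectrum of the recorded signal, so word-tracking and AM-tracking responses can be read off separately.

```python
import numpy as np

fs = 100.0                         # sampling rate, Hz (hypothetical)
t = np.arange(0, 10, 1 / fs)       # 10 s of "recording"
word_rate, am_rate = 2.0, 4.0      # tagged frequencies (hypothetical)

# Simulated "neural" signal: components locked to both tagged rhythms
# plus background noise.
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * word_rate * t)
          + 0.5 * np.sin(2 * np.pi * am_rate * t)
          + 0.2 * rng.standard_normal(t.size))

# Amplitude spectrum; with a 10 s window the bins fall exactly on the
# tagged frequencies (0.1 Hz resolution).
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
resp_word = spectrum[np.argmin(np.abs(freqs - word_rate))]
resp_am = spectrum[np.argmin(np.abs(freqs - am_rate))]
```

Comparing `resp_word` and `resp_am` across conditions or listeners is the essence of separating the two tagged responses.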
Neural dynamics differentially encode phrases and sentences during spoken language comprehension
Human language stands out in the natural world as a biological signal that uses a structured system to combine the meanings of small linguistic units (e.g., words) into larger constituents (e.g., phrases and sentences). However, the physical dynamics of speech (or sign) do not stand in a one-to-one relationship with the meanings listeners perceive. Instead, listeners infer meaning based on their knowledge of the language. The neural readouts of the perceptual and cognitive processes underlying these inferences are still poorly understood. In the present study, we used scalp electroencephalography (EEG) to compare the neural response to phrases (e.g., the red vase) and sentences (e.g., the vase is red), which were close in semantic meaning and had been synthesized to be physically indistinguishable. Differences in structure were well captured in the reorganization of neural phase responses in delta (approximately <2 Hz) and theta bands (approximately 2 to 7 Hz), and in power and power connectivity changes in the alpha band (approximately 7.5 to 13.5 Hz). Consistent with predictions from a computational model, sentences showed more power, more power connectivity, and more phase synchronization than phrases did. Theta–gamma phase–amplitude coupling occurred, but did not differ between the syntactic structures. Spectral–temporal response function (STRF) modeling revealed different encoding states for phrases and sentences, over and above the acoustically driven neural response. Our findings provide a comprehensive description of how the brain encodes and separates linguistic structures in the dynamics of neural responses. They imply that phase synchronization and strength of connectivity are readouts for the constituent structure of language.
The results provide a novel basis for future neurophysiological research on linguistic structure representation in the brain, and, together with our simulations, support time-based binding as a mechanism of structure encoding in neural dynamics.
Failure to consolidate statistical learning in developmental dyslexia
Statistical learning (SL), the ability to pick up patterns in sensory input, serves as one of the building blocks of language acquisition. Although SL has been studied extensively in developmental dyslexia (DD), much less is known about the way SL evolves over time. The handful of studies examining this question were all limited to the acquisition of motor sequential knowledge or highly learned segmented linguistic units. Here we examined memory consolidation of statistical regularities in adults with DD and typically developed (TD) readers by using auditory SL requiring the segmentation of units from continuous input, which represents one of the earliest learning challenges in language acquisition. DD and TD groups were exposed to tones in a probabilistically determined sequential structure varying in difficulty and subsequently tested for recognition of novel short sequences that adhered to this statistical pattern in immediate and delayed-recall sessions separated by a night of sleep. SL performance of the DD group at the easy and hard difficulty levels was poorer than that of the TD group in the immediate-recall session. Importantly, DD participants showed a significant overnight deterioration in SL performance at the medium difficulty level compared to TD, who instead showed overnight stabilization of the learned information. These findings imply that SL difficulties in DD may arise not only from impaired initial learning but also due to a failure to consolidate statistically structured information into long-term memory. We hypothesize that these deficits disrupt the typical course of language acquisition in those with DD.
Compositionality in Different Modalities: A View from Usage-Based Linguistics
The field of linguistics concerns itself with understanding the human capacity for language. Compositionality is a key notion in this research tradition. Compositionality refers to the notion that the meaning of a complex linguistic unit is a function of the meanings of its constituent parts. However, the question as to whether compositionality is a defining feature of human language is a matter of debate: usage-based and constructionist approaches emphasize the pervasive role of idiomaticity in language, and argue that strict compositionality is the exception rather than the rule. We review the major discussion points on compositionality from a usage-based point of view, taking both spoken and signed languages into account. In addition, we discuss theories that aim at accounting for the emergence of compositional language through processes of cultural transmission as well as the debate of whether animal communication systems exhibit compositionality. We argue for a view that emphasizes the analyzability of complex linguistic units, providing a template for accounting for the multimodal nature of human language.
Can Informativity Effects Be Predictability Effects in Disguise?
Recent work in corpus linguistics has observed that informativity predicts articulatory reduction of a linguistic unit above and beyond the unit’s predictability in the local context, i.e., the unit’s probability given the current context. Informativity of a unit is the inverse of average (log-scaled) predictability and corresponds to its information content. Research in the field has interpreted effects of informativity as speakers being sensitive to the information content of a unit in deciding how much effort to put into pronouncing it or as accumulation of memories of pronunciation details in long-term memory representations. However, average predictability can improve the estimate of local predictability of a unit above and beyond the observed predictability in that context, especially when that context is rare. Therefore, informativity can contribute to explaining variance in a dependent variable like reduction above and beyond local predictability simply because informativity improves the (inherently noisy) estimate of local predictability. This paper shows how to estimate the proportion of an observed informativity effect that is likely to be artifactual, due entirely to informativity improving the estimates of predictability, via simulation. The proposed simulation approach can be used to investigate whether an effect of informativity is likely to be real, under the assumption that corpus probabilities are an unbiased estimate of probabilities driving reduction behavior, and how much of it is likely to be due to noise in predictability estimates, in any real dataset.
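The definition this abstract relies on, informativity as the inverse of a unit's average log-scaled predictability across contexts, can be made concrete in a few lines (the toy bigram corpus below is invented purely for illustration):

```python
import math
from collections import defaultdict

def informativity(corpus_bigrams, unit):
    """Informativity of `unit`: the average surprisal (-log2 of local
    predictability P(unit | context)) over all tokens of the unit,
    weighted by how often each context occurs with it."""
    ctx_counts = defaultdict(int)    # how often each context occurs
    pair_counts = defaultdict(int)   # how often (context, unit) occurs
    for ctx, u in corpus_bigrams:
        ctx_counts[ctx] += 1
        if u == unit:
            pair_counts[ctx] += 1
    tokens, total_surprisal = 0, 0.0
    for ctx, n_pair in pair_counts.items():
        p_local = n_pair / ctx_counts[ctx]   # local predictability
        total_surprisal += n_pair * -math.log2(p_local)
        tokens += n_pair
    return total_surprisal / tokens

# Tiny invented corpus of (preceding word, word) bigrams.
bigrams = [("the", "cat"), ("the", "cat"), ("the", "dog"),
           ("a", "cat"), ("big", "cat")]
print(round(informativity(bigrams, "cat"), 3))  # → 0.292
```

The paper's point is that when a context is rare, `p_local` estimated from counts like these is noisy, and the average across contexts can soak up some of that noise, which is exactly why an informativity effect can masquerade as more than a predictability effect.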
THE SHORT ANSWER: IMPLICATIONS FOR DIRECT COMPOSITIONALITY (AND VICE VERSA)
This article is concerned with the analysis of 'short' or 'fragment' answers to questions, and the relationship between these and the hypothesis of DIRECT COMPOSITIONALITY (DC) (e.g. Montague 1970). DC claims that the syntax and semantics work 'in tandem' to prove expressions well formed, while at the same time assigning them a meaning (a model-theoretic object). DC makes it difficult to state any kind of identity condition for 'ellipsis' and would hence lead one to suspect that short answers do not contain hidden linguistic material. This article argues that they indeed do not. Rather, as proposed in Groenendijk & Stokhof 1984, the question and short answer together form a linguistic unit, which I call a Qu-Ans, whose semantics gives the proposition that is understood as following from the pair. Three new arguments are adduced for the Qu-Ans analysis over one making use of silent linguistic material, and a core class of traditional arguments for silent linguistic material are answered. Moreover, it is shown that many of the traditional arguments for silent linguistic material themselves presuppose a non-DC architecture. If (as is claimed) these arguments do not hold, the Qu-Ans analysis of short answers actually supports the DC view, under which no use is made of logical form, and no use is made of representational constraints on structure.
Predicting relative intelligibility from inter-talker distances in a perceptual similarity space for speech
Researchers have generally assumed that listeners perceive speech compositionally, based on the combined processing of local acoustic–phonetic cues associated with individual linguistic units. Yet, these cue-based approaches have failed to fully account for variation in listeners’ identification of the words produced by a talker (i.e., variation in talker intelligibility). The current study adopts an alternative approach, estimating the perceptual representations used to process speech (the perceptual similarity space) using the machine learning technique of self-supervised learning. We assessed intelligibility of 114 second-language (L2) English talkers and 25 L1 American English talkers through a speech-in-noise experiment (collecting data from ten L1 English listeners per talker, each transcribing 120 sentences). For each sample in a speech recording, we obtained a representation from a self-supervised learning model; the sequence of these representations forms a trajectory in the perceptual similarity space. The holistic distance between trajectories (two speakers’ productions of the same sentence) was analyzed. We found that for L2 talkers, the average distance between the trajectories of an L2 talker and the L1 American English talker group predicts relative intelligibility of a given L2 talker. Crucially, the distance measure predicted relative intelligibility among L2 talkers over and above a set of traditional acoustic–phonetic cues. Additionally, we found that the distance measure accounts for some of the relative intelligibility among L1 talkers. These results provide evidence that relative talker intelligibility is better captured with the perceptual similarity space approach, suggesting it is an appropriate tool to study variability in human speech production and perception.
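The abstract does not specify how the holistic distance between two trajectories is computed; as an illustrative stand-in, dynamic time warping (DTW) is one standard way to compare two feature-vector sequences of different lengths as whole trajectories (the study's actual measure may differ):

```python
import math

def dtw_distance(traj_a, traj_b):
    """Dynamic-time-warping distance between two trajectories given as
    lists of equal-dimension feature vectors. Finds the monotone
    frame alignment that minimizes total Euclidean frame distance."""
    n, m = len(traj_a), len(traj_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(traj_a[i - 1], traj_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a frame of a
                                 cost[i][j - 1],      # skip a frame of b
                                 cost[i - 1][j - 1])  # match frames
    return cost[n][m]
```

Averaging such distances between one talker's trajectory and each member of a reference talker group, sentence by sentence, yields a single per-talker score of the kind the study correlates with intelligibility.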
Cortical tracking of lexical speech units in a multi-talker background is immature in school-aged children
Children have more difficulty perceiving speech in noise than adults. Whether this difficulty relates to an immature processing of prosodic or linguistic elements of the attended speech is still unclear. To address the impact of noise on linguistic processing per se, we assessed how babble noise impacts the cortical tracking of intelligible speech devoid of prosody in school-aged children and adults. Twenty adults and twenty children (7-9 years) listened to synthesized French monosyllabic words presented at 2.5 Hz, either randomly or in 4-word hierarchical structures wherein 2 words formed a phrase at 1.25 Hz, and 2 phrases formed a sentence at 0.625 Hz, with or without babble noise. Neuromagnetic responses to words, phrases and sentences were identified and source-localized. Children and adults displayed significant cortical tracking of words in all conditions, and of phrases and sentences only when words formed meaningful sentences. In children compared with adults, the cortical tracking was lower for all linguistic units in conditions without noise. In the presence of noise, the cortical tracking was similarly reduced for sentence units in both groups, but remained stable for phrase units. Critically, when there was noise, adults increased the cortical tracking of monosyllabic words in the inferior frontal gyri and supratemporal auditory cortices but children did not. This study demonstrates that the difficulties of school-aged children in understanding speech in a multi-talker background might be partly due to an immature tracking of lexical but not supra-lexical linguistic units.
System of methods of automated cognitive linguistic analysis of speech signals with noise
For the first time, the article presents a system of methods for automated cognitive linguistic analysis of noisy speech signals in which, unlike existing approaches, the signal is represented by a sequence of phoneme and morpheme codes identified by the criterion of the minimum of the relative-entropy functional; the set of values of this functional is formed by sequentially comparing the results of automated transcription of the studied signal with the reference phonetic alphabet of the target language. The presented system of methods made it possible, in particular, to substantiate the process of phonetic coding of language units through analytical generalization of automated transcription results, to formalize the estimation of the degree of phonation variability of language units within the proposed system of methods, to formalize the interpretation of the concept of cognitive linguistic analysis of noisy speech signals in the frequency space, and to propose an applied use of the obtained system of methods for purifying the speech signal of Gaussian noise. The adequacy of the theoretical apparatus and the functionality of the methods presented in the article have been proven empirically.
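The minimum-relative-entropy identification criterion described here can be sketched in a few lines (the phoneme inventory and feature distributions below are invented for illustration; the article's actual functional operates on automated transcription results, not on these toy vectors):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Relative entropy D(p || q) in bits between two discrete
    probability distributions given as equal-length sequences."""
    return sum(pi * math.log2(pi / max(qi, eps))
               for pi, qi in zip(p, q) if pi > 0)

def identify_phoneme(observed, references):
    """Assign the observed distribution to the reference phoneme whose
    distribution has minimum relative entropy to it."""
    return min(references, key=lambda ph: kl_divergence(observed, references[ph]))

# Hypothetical reference distributions over three acoustic features.
refs = {"a": [0.7, 0.2, 0.1], "i": [0.1, 0.8, 0.1], "u": [0.2, 0.1, 0.7]}
print(identify_phoneme([0.6, 0.3, 0.1], refs))  # → a
```

The noisy observation is closest, in the relative-entropy sense, to the reference for "a", so that code is emitted; a whole signal becomes a sequence of such codes.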