8,116 result(s) for "Grammar theories"
An ATC instruction processing-based trajectory prediction algorithm designing
Radiotelephony communication is currently the voice channel between air traffic service units and aircraft. A control instruction is unstructured data, so automatic systems cannot understand its semantics. If the control instruction is regarded as a special "natural language," methods such as syntactic and semantic analysis can be adopted to generate a structured instruction, which makes correct recognition of the language essential. However, control instructions in Chinese differ in form from general Chinese usage, so that prepositions become important for semantic analysis. This paper proposes a deep neural network-based algorithm that converts Chinese control instructions into structured form for trajectory prediction. In particular, the semantic characteristics of control instructions are analyzed using cognitive linguistics and construction grammar theory, and the control instruction is then modeled as a semantic ontology. An entity-extraction model built on deep neural networks (a BiLSTM-LAN-CRF architecture) takes the word sequence of the instruction as input. Test results demonstrate the effectiveness of the proposed algorithm, with detailed quantitative results for the entity-extraction model.
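The abstract's core idea — treating a control instruction as a restricted "natural language" that maps onto structured slots — can be sketched without the neural model. The following is an illustrative rule-based stand-in (the callsign format, slot names, and example phrasing are hypothetical; the paper's actual approach is a BiLSTM-LAN-CRF network):

```python
import re

# Simplified slot pattern for an English-rendered ATC climb/descend instruction.
# The paper works on Chinese instructions with a learned entity extractor;
# this regex merely illustrates the instruction -> structured frame mapping.
PATTERN = re.compile(
    r"(?P<callsign>[A-Z]{3}\d+),\s*"
    r"(?P<action>climb|descend|maintain)\s*(?:and maintain\s*)?"
    r"(?P<level>\d+)\s*(?P<unit>meters|feet)"
)

def parse_instruction(text: str):
    """Map a free-text control instruction to a structured frame, or None."""
    m = PATTERN.search(text)
    return m.groupdict() if m else None

frame = parse_instruction("CCA1234, climb and maintain 8900 meters")
# frame == {"callsign": "CCA1234", "action": "climb", "level": "8900", "unit": "meters"}
```

A trajectory predictor could then consume the `level` and `unit` slots directly, which is the point of structuring the instruction in the first place.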
A USAGE-BASED THEORY OF GRAMMATICAL STATUS AND GRAMMATICALIZATION
This article proposes a new way of understanding grammatical status and grammaticalization as distinctive types of linguistic phenomena. The approach is usage-based and links up structural and functional, as well as synchronic and diachronic, aspects of the issue. The proposal brings a range of previously disparate phenomena into a motivated relationship, while certain well-entrenched criteria (such as 'closed paradigms') are shown to be incidental to grammatical status and grammaticalization. The central idea is that grammar is constituted by expressions that by linguistic convention are ancillary and as such discursively secondary in relation to other linguistic expressions, and that grammaticalization is the kind of change that gives rise to such expressions.
Instance theory predicts categorization decisions in the absence of categorical structure: A computational analysis of artificial grammar learning without a grammar
Theories of categorization have historically focused on the stimulus characteristics to which people are sensitive. Artificial grammar learning (AGL) provides a clear example of this phenomenon, with theorists debating between knowledge of rules, fragments, whole strings, and so on as the basis of categorization decisions (i.e., stimulus-driven explanations). We argue that this focus loses sight of the more important question of how participants make categorization decisions on a mechanistic level (i.e., process-driven explanations). To address the problem, we derived predictions from an instance-based model of human memory in a pseudo-AGL task in which all study and test strings were generated randomly, a task that stimulus-driven explanations of AGL would have difficulty accommodating. We conducted a standard AGL experiment with human participants using the same strings. The model’s predictions corresponded to participants’ decisions well, even in the absence of any experimenter-generated structure and regardless of whether test stimuli contained any incidental structure. We argue that theories of categorization ought to continue shifting towards the goal of modeling categorization at the level of cognitive processes rather than primarily attempting to identify the stimulus characteristics to which participants are drawn.
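The instance-based account can be sketched in miniature with a MINERVA 2-style memory (one classic model of the kind the abstract invokes; the vector sizes and trace counts below are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)

def echo_intensity(probe: np.ndarray, traces: np.ndarray) -> float:
    """Global familiarity signal: signed similarity to each stored trace,
    cubed and summed (the retrieval rule of Hintzman's MINERVA 2)."""
    sims = (traces @ probe) / probe.size   # match proportion per trace, in [-1, 1]
    return float(np.sum(sims ** 3))

# Random +1/-1 feature vectors stand in for randomly generated study strings,
# mirroring the pseudo-AGL design in which no grammar generates the items.
study = rng.choice([-1, 1], size=(20, 30))   # 20 stored instances
old_probe = study[0]                         # a studied item
new_probe = rng.choice([-1, 1], size=30)     # an unstudied item

old_echo = echo_intensity(old_probe, study)
new_echo = echo_intensity(new_probe, study)
```

Even with purely random study material, a studied probe yields a stronger echo than a novel one, which is how a process model can predict endorsement decisions without any categorical structure in the stimuli.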
Aligning Grammatical Theories and Language Processing Models
We address two important questions about the relationship between theoretical linguistics and psycholinguistics. First, do grammatical theories and language processing models describe separate cognitive systems, or are they accounts of different aspects of the same system? We argue that most evidence is consistent with the one-system view. Second, how should we relate grammatical theories and language processing models to each other?
From Usage to Grammar: The Mind's Response to Repetition
A usage-based view takes grammar to be the cognitive organization of one's experience with language. Aspects of that experience, for instance, the frequency of use of certain constructions or particular instances of constructions, have an impact on representation that is evidenced in speaker knowledge of conventionalized phrases and in language variation and change. It is shown that particular instances of constructions can acquire their own pragmatic, semantic, and phonological characteristics. In addition, it is argued that high-frequency instances of constructions undergo grammaticization processes (which produce further change), function as the central members of categories formed by constructions, and retain their old forms longer than lower-frequency instances under the pressure of newer formations. An exemplar model that accommodates both phonological and semantic representation is elaborated to describe the data considered.
Ideophonic sequences: Challenging the asymmetric syntactic structure hypothesis
This paper challenges the hypothesis that ideophonic sequences are syntactic structures built from lexical roots by the recursive operation Merge through the mediation of a functional head, taken to be either a coordinate (Corver 2015) or a determiner (Corver 2023). Drawing on Generative Grammar theory and new data from Brazilian Portuguese, we argue that the evidence for this hypothesis is weak at best. We first show that these sequences do not behave consistently as constituents. While they can stand alone, be coordinated, and resist intrusion, they fail to undergo movement and ellipsis. Taken together, this suggests that they are most likely linear sequences. They lack the formal features responsible for mediating grammatical interactions with the surrounding syntactic context and on which asymmetric structures are built. Moreover, the evidence on which Corver relies is mostly phonological and as such does not provide strong support for the conclusion that ideophonic sequences are asymmetric structures. Following the general tenets of Minimalism, we conclude that, in the absence of strong and uncontroversial evidence, it is best to assume that sequences formed by canonical ideophones are linear strings or symmetric structures distinct from ordinary syntactic phrases.
Radical substance-free phonology and feature learning
This article argues that phonological features have no substantive properties; instead, segments are assigned features by learning strategies set to the task of devising a computational system for a phonology that is consistent with the requirements of UG. I address two problems for such a substance-free model. The first is the Card Grammar problem, which has been suggested to argue for universal substantive features, on the premise that, otherwise, language data cannot be stored in a fashion necessary to correct learning errors. The Card Grammar problem disappears in a suitably modular theory of mind with learned interfaces, where the mind can still retain information not parsed in a particular grammar. The second problem is the need for a demonstration, not just an assertion, that a reasonable theory of grammar and learning which has no access to phonetic substance can yield a coherent system of feature assignments. This is accomplished by modeling the learning of features necessary for the phonology of Kerewe.
A systematic review of unsupervised approaches to grammar induction
This study systematically reviews existing approaches to unsupervised grammar induction in terms of their theoretical underpinnings, practical implementations and evaluation. Our motivation is to identify the influence of functional-cognitive schools of grammar on language processing models in computational linguistics. This is an effort to fill any gap between the theoretical school and the computational processing models of grammar induction. Specifically, the review aims to answer the following research questions: Which types of grammar theories have been the subjects of grammar induction? Which methods have been employed to support grammar induction? Which features have been used by these methods for learning? How were these methods evaluated? Finally, in terms of performance, how do these methods compare to one another? Forty-three studies were identified for systematic review out of which 33 described original implementations of grammar induction; three provided surveys and seven focused on theories and experiments related to acquisition and processing of grammar in humans. The data extracted from the 33 implementations were stratified into 7 different aspects of analysis: theory of grammar; output representation; how grammatical productivity is processed; how grammatical productivity is represented; features used for learning; evaluation strategy and implementation methodology. In most of the implementations considered, grammar was treated as a generative-formal system, autonomous and independent of meaning. The parser decoding was done in a non-incremental, head-driven fashion by assuming that all words are available for the parsing model and the output representation of the grammar learnt was hierarchical, typically a dependency or a constituency tree. 
However, the theoretical and experimental studies considered suggest that a usage-based, incremental, sequential view of grammar is more appropriate than the formal, non-incremental, hierarchical one. This gap between the theoretical and experimental studies on the one hand and the computational implementations on the other should be addressed to enable further progress in computational grammar induction research.
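The review notes that induced grammars are typically output as constituency trees and evaluated against gold bracketings. As an illustrative sketch (the sentence, gold analysis, and function names here are hypothetical, not drawn from any reviewed system), unlabeled bracketing F1 against a right-branching baseline looks like this:

```python
def bracket_f1(gold: set, pred: set) -> float:
    """Unlabeled bracketing F1 — a standard score for induced constituency trees."""
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def right_branching(n: int) -> set:
    """Spans (i, j) of a purely right-branching tree over n tokens,
    a common baseline that induction systems are expected to beat."""
    return {(i, n) for i in range(n - 1)}

# Gold bracketing for a 4-token sentence analyzed as [[w0 w1] [w2 w3]].
gold = {(0, 4), (0, 2), (2, 4)}
score = bracket_f1(gold, right_branching(4))
```

The baseline recovers the root and one constituent here, illustrating why right-branching is a surprisingly strong yardstick for English-like data.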
A TEST OF THE RELATION BETWEEN WORKING-MEMORY CAPACITY AND SYNTACTIC ISLAND EFFECTS
The source of syntactic island effects has been a topic of considerable debate within linguistics and psycholinguistics. Explanations fall into three basic categories: grammatical theories, which posit specific grammatical constraints that exclude extraction from islands; grounded theories, which posit grammaticized constraints that have arisen to adapt to constraints on learning or parsing; and reductionist theories, which analyze island effects as emergent consequences of nongrammatical constraints on the sentence parser, such as limited processing resources. In this article we present two studies designed to test a fundamental prediction of one of the most prominent reductionist theories: that the strength of island effects should vary across speakers as a function of individual differences in processing resources. We tested over three hundred native speakers of English on four different island-effect types (whether, complex NP, subject, and adjunct islands) using two different acceptability rating tasks (seven-point scale and magnitude estimation) and two different measures of working-memory capacity (serial recall and n-back). We find no evidence of a relationship between working-memory capacity and island effects using a variety of statistical analysis techniques, including resampling simulations. These results suggest that island effects are more likely to be due to grammatical constraints or grounded grammaticized constraints than to limited processing resources.
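The resampling logic the abstract mentions can be sketched on simulated data (the numbers below are invented for illustration and generated independently, mirroring the article's null result; they are not the study's data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-participant measures: a working-memory score and an
# island-effect size (a differences-in-differences of acceptability ratings).
n = 300
wm = rng.normal(0.0, 1.0, n)    # e.g. serial-recall z-scores
dd = rng.normal(1.0, 0.5, n)    # effect sizes, independent of wm here

observed_r = float(np.corrcoef(wm, dd)[0, 1])

# Resampling simulation: permute wm to build a null distribution for r,
# then ask how often a null correlation is at least as extreme.
null_rs = [float(np.corrcoef(rng.permutation(wm), dd)[0, 1]) for _ in range(2000)]
p_value = float(np.mean(np.abs(null_rs) >= abs(observed_r)))
```

Under the reductionist prediction, participants with lower working-memory capacity should show larger island effects; a permutation test of this kind asks whether the observed correlation is distinguishable from chance.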
A Multimodal Discourse Study of Visual Images in Select Online News Discourse on the 2023 General Elections in Nigeria
This multimodal discourse study examines visual images in selected online news discourse on the 2023 presidential elections in Nigeria to identify the various meanings which the images have been used to communicate. Two online newspapers, namely, Vanguard and Business Day, served as the sources of data. Drawing insights from Kress and van Leeuwen's (2006) Visual Grammar Theory, ten images (five from each newspaper) were purposively sampled and subjected to critical analysis using four key components (participants, representation, interaction and composition) of the theory. The results showed that the analyzed visual images are representative of the major presidential candidates' political, religious, and cultural affiliations; voters' religious and cultural orientation; voters' patience and tenacity in exercising their right to vote; the inadequate electoral system; the serenity and tranquility observed in certain polling locations; the presence of military personnel; and the millions of naira lost during the election. Furthermore, the visual depictions explicitly summarized what was stated in writing and speaking. The findings corroborate visual grammar theory and underline the importance of visual images as semiotic resources in transmitting various meanings.