428 result(s) for "Syntax, Semantics, Information Structure"
The Syntax of Topic, Focus, and Contrast
This book addresses how core notions of information structure (topic, focus and contrast) are expressed in syntax. The authors propose that the syntactic effects of information structure come about as a result of mapping rules that are flexible enough to allow topics and foci to be expressed in a variety of positions, but strict enough to capture certain cross-linguistic generalisations about their distribution. In particular, the papers argue that only contrastive topics and contrastive foci undergo movement, and that this is because such movement has the function of marking the scope of contrast. Several predictions are derived from this proposal, such as that a focus cannot move across a topic, whether the latter is in situ or not. Syntactic and semantic evidence in support of this proposal is presented from a wide range of languages (including Dutch, English, Japanese, Korean and Russian), and its theoretical consequences are explored. The first chapter not only outlines the book's theoretical aims, but also provides an introduction to information structure. As a consequence, the book is accessible to advanced students as well as professional linguists.
Natural language processing: state of the art, current trends and challenges
Natural language processing (NLP) has recently gained much attention for representing and analyzing human language computationally. Its applications have spread across various fields such as machine translation, email spam detection, information extraction, summarization, medical applications, and question answering. In this paper, we first distinguish four phases by discussing the different levels of NLP and the components of Natural Language Generation, and then present the history and evolution of NLP. We then discuss the state of the art in detail, presenting the various applications of NLP, current trends, and challenges. Finally, we discuss some available datasets, models, and evaluation metrics in NLP.
The Surface-Compositional Semantics of English Intonation
This article proposes a syntax and a semantics for intonation in English and some related languages. The semantics is 'surface-compositional', in the sense that syntactic derivation constructs information-structural logical form monotonically, without rules of structural revision, and without autonomous rules of 'focus projection'. This is made possible by the generalized notion of syntactic constituency afforded by combinatory categorial grammar (CCG)—in particular, the fact that its rules are restricted to string-adjacent type-driven combination. In this way, the grammar unites intonation structure and information structure with surface-syntactic derivational structure and Montague-style compositional semantics, even when they deviate radically from traditional surface structure. The article revises and extends earlier CCG-based accounts of intonational semantics, grounding hitherto informal notions like 'theme' and 'rheme' (a.k.a. 'topic' and 'comment', 'presupposition' and 'focus', etc.) and 'background' and 'contrast' (a.k.a. 'given' and 'new', 'focus', etc.) in a logic of speaker/hearer supposition and update, using a version of Rooth's alternative semantics. A CCG grammar fragment is defined that constrains language-specific intonation and its interpretation more narrowly than previous attempts.
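As an aside for readers unfamiliar with CCG: the account above relies on string-adjacent, type-driven combination. The minimal sketch below illustrates only forward and backward application on toy categories; the category notation, the naive parenthesis handling, and the example lexicon are illustrative assumptions, not the article's grammar fragment.

```python
# Minimal sketch of string-adjacent, type-driven combination in the spirit of
# CCG, restricted to forward and backward application. Category notation,
# parenthesis handling, and the toy lexicon are illustrative assumptions.

def strip_parens(cat):
    """Remove one pair of outer parentheses (naive; enough for this toy)."""
    if cat.startswith("(") and cat.endswith(")"):
        return cat[1:-1]
    return cat

def forward_apply(left, right):
    """X/Y combined with a string-adjacent Y yields X."""
    if "/" in left:
        functor, arg = left.rsplit("/", 1)
        if strip_parens(arg) == right:
            return strip_parens(functor)
    return None

def backward_apply(left, right):
    """Y combined with a string-adjacent X\\Y yields X."""
    if "\\" in right:
        functor, arg = right.rsplit("\\", 1)
        if strip_parens(arg) == left:
            return strip_parens(functor)
    return None

def combine(left, right):
    """Try the two application rules on adjacent categories."""
    return forward_apply(left, right) or backward_apply(left, right)

# Toy derivation for "Anna married Manny":
#   Anna := NP, married := (S\NP)/NP, Manny := NP
vp = combine("(S\\NP)/NP", "NP")   # forward application  -> "S\NP"
s = combine("NP", vp)              # backward application -> "S"
print(vp, s)
```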
Deconstructing information structure
The paper argues that a core part of what is traditionally referred to as ‘information structure’ can be deconstructed into genuine morphosyntactic features that are visible to syntactic operations, contribute to discourse-related expressive meanings, and just happen to be spelled out prosodically in Standard American and British English. We motivate two features, [FoC] and [G], and we track the fate of those features at and beyond the syntax-semantics and the syntax-phonology interfaces. [FoC] and [G] are responsible for two distinct obligatory strategies for establishing discourse coherence. A [G]-marked constituent signals a match with a discourse referent, whereas a [FoC]-marked constituent invokes alternatives and thereby signals a contrast. In Standard American and British English, [FoC] aims for the highest prosodic prominence in the intonational phrase, whereas [G] lacks phrase-level prosodic properties. There is no grammatical marking of newness: the apparent prosodic effects of newness are the result of default prosody.
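As a reading aid, the sketch below restates the prosodic spell-out the abstract describes for Standard American and British English: a [FoC]-marked word attracts the nuclear accent, a [G]-marked word receives no phrase-level prosodic marking, and unmarked (new) material falls back to default prosody. The three-way labels and the toy example are assumptions for illustration, not the authors' formalism.

```python
# Restating the spell-out described above for Standard American/British
# English. The three-way feature labels and the toy example are assumptions.

def spell_out_prosody(words):
    """words: list of (word, feature) pairs, feature in {'FoC', 'G', None}."""
    prosody = []
    for word, feature in words:
        if feature == "FoC":
            prosody.append((word, "nuclear accent"))   # highest prominence in the phrase
        elif feature == "G":
            prosody.append((word, "unaccented"))       # no phrase-level prosodic marking
        else:
            prosody.append((word, "default prosody"))  # newness itself is not marked
    return prosody

# (What did Mary do with the beans?)  "Mary COOKED the beans."
example = [("Mary", "G"), ("cooked", "FoC"), ("the", "G"), ("beans", "G")]
print(spell_out_prosody(example))
```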
Revisiting the information structure of English verbo-nominal prepositional phrases in predication
The theory of functional sentence perspective (FSP) is centred on communicative dynamism and its distribution among communicative units, i.e. individual sentence elements (Firbas, 1996; 1999). When a context-dependent subject is further specified by more dynamic elements, the sentence follows the Quality Scale; conversely, if a context-independent subject is the most dynamic element, it follows the Presentation Scale. This corpus-based study, building on Adam & Headlandová Kalischová (2009), examines English sentences with prepositional predications of the pattern BE + PREPOSITIONAL PHRASE (specifically, be at fault, be at large, be in full swing, be in place, be on guard, be on display). Unlike the 2009 study, which categorized these structures based on semantic interpretation and paraphrasing potential, the present analysis explores their textual, syntactic, and information-structure characteristics irrespective of typological classification. The aim is to determine whether, and under what circumstances, these predicates express existence or appearance on the scene. The findings suggest that the FSP status of verbo-nominal prepositional structures is best understood as a continuum, ranging from predominantly presentational to primarily qualitative, rather than a strict binary categorization. Keywords: FSP, lexical semantics, prepositional, presentation, quality, scale, verbo-nominal. DOI: https://doi.org/10.14712/18059635.2025.2.3
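For readers who want to try a comparable extraction step, the sketch below shows one way to pull candidate BE + PREPOSITIONAL PHRASE sentences for the six patterns named in the abstract out of a plain-text corpus before manual FSP annotation. It is not the authors' Sketch Engine workflow; the regular expression, the fixed list of BE forms, and the sample sentences are assumptions.

```python
# Hedged sketch (not the authors' Sketch Engine workflow): pulling candidate
# BE + PREPOSITIONAL PHRASE sentences from a plain-text corpus for manual FSP
# annotation. The list of BE forms and the sample sentences are assumptions.
import re

PATTERNS = ["at fault", "at large", "in full swing", "in place", "on guard", "on display"]
BE_FORMS = r"(?:am|is|are|was|were|be|been|being)"

REGEX = re.compile(
    rf"\b{BE_FORMS}\s+(?:{'|'.join(re.escape(p) for p in PATTERNS)})\b",
    re.IGNORECASE,
)

def find_candidates(sentences):
    """Return (sentence, matched span) pairs for later FSP annotation."""
    hits = []
    for sentence in sentences:
        match = REGEX.search(sentence)
        if match:
            hits.append((sentence, match.group(0)))
    return hits

sample = [
    "The investigation found that the driver was at fault.",
    "Preparations for the festival are in full swing.",
    "The suspect remains unknown.",
]
print(find_candidates(sample))
```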
An experimental approach to linguistic representation
Within the cognitive sciences, most researchers assume that it is the job of linguists to investigate how language is represented, and that they do so largely by building theories based on explicit judgments about patterns of acceptability – whereas it is the task of psychologists to determine how language is processed, and that in doing so, they do not typically question the linguists' representational assumptions. We challenge this division of labor by arguing that structural priming provides an implicit method of investigating linguistic representations that should end the current reliance on acceptability judgments. Moreover, structural priming has now reached sufficient methodological maturity to provide substantial evidence about such representations. We argue that evidence from speakers' tendency to repeat their own and others' structural choices supports a linguistic architecture involving a single shallow level of syntax connected to a semantic level containing information about quantification, thematic relations, and information structure, as well as to a phonological level. Many of the linguistic distinctions often used to support complex (or multilevel) syntactic structure are instead captured by semantics; however, the syntactic level includes some specification of “missing” elements that are not realized at the phonological level. We also show that structural priming provides evidence about the consistency of representations across languages and about language development. In sum, we propose that structural priming provides a new basis for understanding the nature of language.
Learning Human-Written Commit Messages to Document Code Changes
Commit messages are important complementary information for understanding code changes. To address message scarcity, several approaches have been proposed for automatically generating commit messages. However, most of these approaches focus on summarizing the changed software entities at a superficial level, without considering the intent behind the code changes (e.g., the existing approaches cannot generate a message such as "fixing null pointer exception"). Considering that developers often describe the intent behind a code change when writing its message, we propose ChangeDoc, an approach that reuses existing messages in version control systems for automatic commit message generation. Our approach combines syntax, semantic, pre-syntax, and pre-semantic similarities. For a given commit without a message, it discovers the most similar past commit in a large commit repository and recommends that commit's message as the message of the given commit. Our repository contains half a million commits collected from SourceForge. We evaluate our approach on the commits of 10 projects. The results show that 21.5% of the messages recommended by ChangeDoc can be used directly without modification, and 62.8% require only minor modifications. To evaluate the quality of the commit messages recommended by ChangeDoc, we performed two empirical studies involving a total of 40 participants (10 professional developers and 30 students). The results indicate that the recommended messages are very good approximations of the ones written by developers and often include important intent information that is not included in the messages generated by other tools.
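The retrieval idea behind ChangeDoc can be pictured with a very small sketch: recommend the message of the most similar past commit. Here a bag-of-words cosine over diff text stands in for the paper's syntax, semantic, pre-syntax, and pre-semantic similarities, and the toy commit history is invented.

```python
# Hedged sketch of the retrieval idea only: recommend the message of the most
# similar past commit. A bag-of-words cosine over diff text stands in for
# ChangeDoc's actual syntax/semantic/pre-syntax/pre-semantic similarities,
# and the example commits are invented.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend_message(new_diff, past_commits):
    """past_commits: list of (diff_text, commit_message) pairs; returns the
    message of the most similar past commit."""
    query = Counter(new_diff.split())
    best_message, best_score = "", -1.0
    for diff_text, message in past_commits:
        score = cosine(query, Counter(diff_text.split()))
        if score > best_score:
            best_message, best_score = message, score
    return best_message

history = [
    ("if user != null : return user.name", "fixing null pointer exception"),
    ("cache.put(key, value) evict oldest entry", "add LRU cache for sessions"),
]
print(recommend_message("return user.name if user != null else None", history))
```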
Machine reading comprehension combined with semantic dependency for Chinese zero pronoun resolution
Pronoun-drop is a common phenomenon in Chinese. Zero pronoun resolution aims to recover the dropped pronoun and resolve its anaphoric antecedent, which is important for NLP tasks such as machine translation and information extraction. Most existing methods predefine a set of zero pronouns, carry out zero pronoun recovery through multi-class classification, and then predict in turn the coreference chains between each pronoun and its candidate antecedents. However, most previous methods focus only on the relationship between pronouns and arguments, ignoring the deep semantic relationship between predicates and arguments as the core semantic component. In addition, such models take the parse tree (gold tree) as prior knowledge, which is costly in practical applications. In this paper, we propose a Machine Reading Comprehension model combined with Semantic Dependency (MRC-SD). It exploits the fact that semantic dependencies can directly access deep semantic information across the surface syntactic structure of a sentence to capture the semantic relations between predicates and arguments, and it reinforces these relations in the form of questions and answers through machine reading comprehension, so as to extract coreference chains more accurately. In addition, we propose a method that combines semantic dependency with a language model to realize zero pronoun recovery at the deep semantic level. Experimental results on our self-constructed public opinion dataset (FS-PO) show that the MRC-SD model significantly outperforms the state-of-the-art zero pronoun resolution model.
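To make the question-answering framing concrete, the sketch below turns a predicate with a dropped argument into a question and scores candidate antecedents; a crude lexical-overlap scorer stands in for the MRC-SD model, and the English example, query template, and window size are all illustrative assumptions (the actual task is Chinese zero pronoun resolution).

```python
# A sketch of the question-answering framing only, not the MRC-SD model.
# The English example, the query template, the window size, and the
# overlap-based scorer are all illustrative assumptions.

def build_question(predicate, role):
    """Turn the predicate of the dropped argument into a query,
    e.g. ('finished', 'agent') -> 'who finished'."""
    wh = "who" if role == "agent" else "what"
    return f"{wh} {predicate}"

def score_candidate(question, context_tokens, candidate, window=3):
    """Crude stand-in for an MRC span scorer: lexical overlap between the
    question and the words around the candidate's mention in the context."""
    if candidate not in context_tokens:
        return 0
    i = context_tokens.index(candidate)
    neighbourhood = context_tokens[max(0, i - window): i + window + 1]
    q_tokens = set(question.lower().split())
    return sum(1 for tok in neighbourhood if tok.lower() in q_tokens)

def resolve_zero_pronoun(predicate, role, candidates, context):
    question = build_question(predicate, role)
    tokens = context.split()
    return max(candidates, key=lambda c: score_candidate(question, tokens, c))

# "The manager praised the intern because [dropped pronoun] finished the report early."
context = "The manager praised the intern because finished the report early"
print(resolve_zero_pronoun("finished", "agent", ["manager", "intern"], context))
```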
Cortical representation of the constituent structure of sentences
Linguistic analyses suggest that sentences are not mere strings of words but possess a hierarchical structure with constituents nested inside each other. We used functional magnetic resonance imaging (fMRI) to search for the cerebral mechanisms of this theoretical construct. We hypothesized that the neural assembly that encodes a constituent grows with its size, which can be approximately indexed by the number of words it encompasses. We therefore searched for brain regions where activation increased parametrically with the size of linguistic constituents, in response to a visual stream always comprising 12 written words or pseudowords. The results isolated a network of left-hemispheric regions that could be dissociated into two major subsets. Inferior frontal and posterior temporal regions showed constituent size effects regardless of whether actual content words were present or were replaced by pseudowords (jabberwocky stimuli). This observation suggests that these areas operate autonomously of other language areas and can extract abstract syntactic frames based on function words and morphological information alone. On the other hand, regions in the temporal pole, anterior superior temporal sulcus and temporo-parietal junction showed constituent size effect only in the presence of lexico-semantic information, suggesting that they may encode semantic constituents. In several inferior frontal and superior temporal regions, activation was delayed in response to the largest constituent structures, suggesting that nested linguistic structures take increasingly longer time to be computed and that these delays can be measured with fMRI.
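The abstract indexes a constituent's size by the number of words it spans. The sketch below counts words under every constituent of a bracketed parse; the bracketing format and the example phrase are invented for illustration and are not the authors' stimulus-construction code.

```python
# Hedged sketch: count the words under every constituent of a bracketed parse,
# i.e. the size measure the abstract uses. The bracketing and example phrase
# are invented for illustration.

def constituent_sizes(bracketed):
    """Return one word count per constituent for a parse written with
    parentheses, e.g. '((the boy) (read (a book)))'."""
    sizes = []
    stack = []                      # one word counter per open constituent
    for token in bracketed.replace("(", " ( ").replace(")", " ) ").split():
        if token == "(":
            stack.append(0)
        elif token == ")":
            count = stack.pop()
            sizes.append(count)
            if stack:               # a constituent also counts toward its parent
                stack[-1] += count
        else:                       # an ordinary word
            if stack:
                stack[-1] += 1
    return sizes

print(constituent_sizes("((the boy) (read (a book)))"))   # -> [2, 2, 3, 5]
```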
Meaning before grammar: A review of ERP experiments on the neurodevelopmental origins of semantic processing
According to traditional linguistic theories, the construction of complex meanings relies firmly on syntactic structure-building operations. Recently, however, new models have been proposed in which semantics is viewed as being partly autonomous from syntax. In this paper, we discuss some of the developmental implications of syntax-based and autonomous models of semantics. We review event-related brain potential (ERP) studies on semantic processing in infants and toddlers, focusing on experiments reporting modulations of N400 amplitudes using visual or auditory stimuli and different temporal structures of trials. Our review suggests that infants can relate or integrate semantic information from temporally overlapping stimuli across modalities by 6 months of age. The ability to relate or integrate semantic information over time, within and across modalities, emerges by 9 months. The capacity to relate or integrate information from spoken words in sequences and sentences appears by 18 months. We also review behavioral and ERP studies showing that grammatical and syntactic processing skills develop only later, between 18 and 32 months. These results provide preliminary evidence for the availability of some semantic processes prior to the full developmental emergence of syntax: non-syntactic meaning-building operations are available to infants, albeit in restricted ways, months before the abstract machinery of grammar is in place. We discuss this hypothesis in light of research on early language acquisition and human brain development.