8,861 result(s) for "Sentence Structure"
What Makes Writing Great? First Experiments on Article Quality Prediction in the Science Journalism Domain
Great writing is rare and highly admired. Readers seek out articles that are beautifully written, informative and entertaining. Yet information-access technologies lack capabilities for predicting article quality at this level. In this paper we present first experiments on article quality prediction in the science journalism domain. We introduce a corpus of great pieces of science journalism, along with typical articles from the genre. We implement features to capture aspects of great writing, including surprising, visual and emotional content, as well as general features related to discourse organization and sentence structure. We show that the distinction between great and typical articles can be detected fairly accurately, and that the entire spectrum of our features contributes to the distinction.
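A minimal sketch of the kind of pipeline this abstract describes, with surface features feeding a great-versus-typical classifier. The specific features and the toy data below are illustrative assumptions, not the authors' system.

```python
# Sketch only: crude surface features (sentence length statistics as a proxy
# for sentence structure, type-token ratio for lexical variety) feeding a
# binary classifier. The paper's actual feature set is far richer.
import re
import statistics

from sklearn.linear_model import LogisticRegression

def surface_features(text: str) -> list[float]:
    """Mean and variance of sentence length, plus type-token ratio."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lengths = [len(s.split()) for s in sentences]
    tokens = text.lower().split()
    return [
        statistics.mean(lengths),
        statistics.pvariance(lengths) if len(lengths) > 1 else 0.0,
        len(set(tokens)) / max(len(tokens), 1),  # lexical diversity
    ]

# Toy corpus: 1 = "great" article, 0 = "typical" article (made-up examples).
texts = [
    "Short. Then a long, winding sentence follows it. Short again.",
    "Every sentence here has the same shape. Every sentence here has the same shape.",
]
labels = [1, 0]

X = [surface_features(t) for t in texts]
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```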
Artificial Intelligence Can Generate Fraudulent but Authentic-Looking Scientific Medical Articles: Pandora’s Box Has Been Opened
Artificial intelligence (AI) has advanced substantially in recent years, transforming many industries and improving the way people live and work. In scientific research, AI can enhance the quality and efficiency of data analysis and publication. However, AI has also opened up the possibility of generating high-quality fraudulent papers that are difficult to detect, raising important questions about the integrity of scientific research and the trustworthiness of published papers. The aim of this study was to investigate the capabilities of current AI language models in generating high-quality fraudulent medical articles. We hypothesized that modern AI models can create highly convincing fraudulent papers that can easily deceive readers and even experienced researchers. This proof-of-concept study used ChatGPT (Chat Generative Pre-trained Transformer) powered by the GPT-3 (Generative Pre-trained Transformer 3) language model to generate a fraudulent scientific article related to neurosurgery. GPT-3 is a large language model developed by OpenAI that uses deep learning algorithms to generate human-like text in response to prompts given by users. The model was trained on a massive corpus of text from the internet and is capable of generating high-quality text in a variety of languages and on various topics. The authors posed questions and prompts to the model and refined them iteratively as the model generated the responses. The goal was to create a completely fabricated article including the abstract, introduction, material and methods, discussion, references, charts, etc. Once the article was generated, it was reviewed for accuracy and coherence by experts in the fields of neurosurgery, psychiatry, and statistics and compared to existing similar articles. The study found that the AI language model could create a highly convincing fraudulent article that resembled a genuine scientific paper in terms of word usage, sentence structure, and overall composition. The AI-generated article included standard sections such as introduction, material and methods, results, and discussion, as well as a data sheet. It consisted of 1992 words and 17 citations, and the whole process of article creation took approximately 1 hour without any special training of the human user. However, some concerns and specific mistakes were identified in the generated article, particularly in the references. The study demonstrates the potential of current AI language models to generate completely fabricated scientific articles. Although the papers look sophisticated and seemingly flawless, expert readers may identify semantic inaccuracies and errors upon closer inspection. We highlight the need for increased vigilance and better detection methods to combat the potential misuse of AI in scientific research. At the same time, it is important to recognize the potential benefits of using AI language models in genuine scientific writing and research, such as manuscript preparation and language editing.
The Sentence Structure of Reason Sentences in the New Approach Japanese Intermediate Course Book: A Semantic Study
In Japanese, many sentence structures and words share synonymous meanings; such synonyms are called ruigigo. Because they carry the same meaning, ruigigo are often misused in sentences, and it can also be difficult to find the right Indonesian equivalent for them. The New Approach Japanese Intermediate Course book contains many ruigigo that can serve as objects of research, among them the sentence structures and words that mean "because". From the research conducted, the author found 10 sentence structures related to words with the meaning of "because". The purpose of this semantic study of the reason-expressing sentence structures in the New Approach Japanese Intermediate Course book is to help learners of Japanese understand the meaning of sentence structures and words that express reasons.
You That Read Wrong Again! A Transposed-Word Effect in Grammaticality Judgments
We report a novel transposed-word effect in speeded grammaticality judgments made about five-word sequences. The critical ungrammatical test sequences were formed by transposing two adjacent words from either a grammatical base sequence (e.g., “The white cat was big” became “The white was cat big”) or an ungrammatical base sequence (e.g., “The white cat was slowly” became “The white was cat slowly”). These were intermixed with an equal number of correct sentences for the purpose of the grammaticality judgment task. In a laboratory experiment (N = 57) and an online experiment (N = 94), we found that ungrammatical decisions were harder to make when the ungrammatical sequence originated from a grammatically correct base sequence. This provides the first demonstration that the encoding of word order retains a certain amount of uncertainty. We further argue that the novel transposed-word effect reflects parallel processing of words during written sentence comprehension combined with top-down constraints from sentence-level structures.
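The stimulus construction described above is easy to reproduce in code. The sketch below swaps two adjacent inner words of a five-word base sequence; the choice of positions mirrors the abstract's example and is otherwise an assumption.

```python
# Build a transposed-word test item by swapping two adjacent words.
def transpose_words(sentence: str, i: int = 2) -> str:
    """Swap the words at positions i and i+1 (0-indexed)."""
    words = sentence.split()
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

print(transpose_words("The white cat was big"))     # -> "The white was cat big"
print(transpose_words("The white cat was slowly"))  # -> "The white was cat slowly"
```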
Comprehending irony via sentence-final particles by Chinese children with autism spectrum disorders
We examined the effects of sentence-final particles (SFPs) in comprehending different types of irony by Chinese-speaking children with autism spectrum disorders (ASDs). We tested 15 children with ASDs, along with another 15 typically developing (TD) children. In our test, by manipulating the use of the prototypical SFP /a/, participants were required to judge the speaker's attitude and real intention in ironic utterances of 16 stories and to further explain the language phenomenon. The results of a three-way analysis of variance showed a significant difference between the two groups: first, children with ASDs performed significantly worse than did their TD counterparts; second, while TD children relied more on SFPs to understand irony of compliment, children with ASDs only performed better with SFPs in comprehending irony of criticism. The differences are discussed in relation to theory of mind, the frequency of utterance, and rules of cognition.
Sentence-Structure Priming in Young Children Who Do and Do Not Stutter
The purpose of this study was to use an age-appropriate version of the sentence-structure priming paradigm (e.g., K. Bock, 1990; K. Bock, H. Loebell, & R. Morey, 1992) to assess experimentally the syntactic processing abilities of children who stutter (CWS) and children who do not stutter (CWNS). Participants were 16 CWS and 16 CWNS between the ages of 3;3 (years; months) and 5;5, matched for gender and age (±4 months). All participants had speech, language, and hearing development within normal limits, with the exception of stuttering for the CWS. All children participated in a sentence-structure priming task in which they were shown, on a computer screen, black-on-white line drawings of children, adults, and animals performing activities that could be appropriately described using simple active affirmative declarative (SAAD) sentences (e.g., "The man is walking the dog") and were asked to describe them. Speech reaction time (SRT) was measured from the onset of the picture presentation to the onset of the child's verbal response, in the absence and presence of priming sentences, counterbalanced for order. Main findings indicated that CWS exhibited slower SRTs in the absence of priming sentences and greater syntactic-priming effects than CWNS. These findings suggest that CWS may have difficulty rapidly and efficiently planning and/or retrieving sentence-structure units, difficulties that may contribute to their inability to establish fluent speech-language production.
How Much Do Language Models Copy From Their Training Data? Evaluating Linguistic Novelty in Text Generation Using RAVEN
Current language models can generate high-quality text. Are they simply copying text they have seen before, or have they learned generalizable linguistic abstractions? To tease apart these possibilities, we introduce RAVEN, a suite of analyses for assessing the novelty of generated text, focusing on sequential structure (n-grams) and syntactic structure. We apply these analyses to four neural language models trained on English (an LSTM, a Transformer, Transformer-XL, and GPT-2). For local structure (e.g., individual dependencies), text generated with a standard sampling scheme is substantially less novel than our baseline of human-generated text from each model's test set. For larger-scale structure (e.g., overall sentence structure), model-generated text is as novel or even more novel than the human-generated baseline, but models still sometimes copy substantially, in some cases duplicating passages over 1,000 words long from the training set. We also perform extensive manual analysis, finding evidence that GPT-2 uses both compositional and analogical generalization mechanisms and showing that GPT-2's novel text is usually well-formed morphologically and syntactically but has reasonably frequent semantic issues (e.g., being self-contradictory).
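The sequential (n-gram) side of such a novelty analysis can be sketched compactly: count how many of the generated text's n-grams never occur in the training data. RAVEN itself is far more extensive; the toy corpora below are assumptions for illustration.

```python
# Sketch of n-gram novelty: the share of generated n-grams absent from the
# training text. Larger n tends to yield higher novelty, echoing the
# local-vs-larger-scale contrast described in the abstract.
def ngrams(tokens: list[str], n: int) -> set[tuple[str, ...]]:
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novelty(generated: str, training: str, n: int) -> float:
    """Fraction of the generated text's n-grams absent from the training text."""
    gen = ngrams(generated.split(), n)
    train = ngrams(training.split(), n)
    return len(gen - train) / max(len(gen), 1)

training_text = "the cat sat on the mat and the dog sat on the rug"
generated_text = "the dog sat on the sofa"
for n in (2, 4):
    print(n, novelty(generated_text, training_text, n))  # 0.2, then ~0.33
```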
Sentence-Based Text Analysis for Customer Reviews
Firms collect an increasing amount of consumer feedback in the form of unstructured consumer reviews. These reviews contain free-form text about consumer experiences with products and services, unlike surveys, which query consumers for specific information. A challenge in analyzing unstructured consumer reviews is making sense of the topics expressed in the words used to describe these experiences. We propose a new model for text analysis that makes use of the sentence structure contained in the reviews and show that it leads to improved inference and prediction of consumer ratings relative to existing models, using data from www.expedia.com and www.we8there.com. Sentence-based topics are found to be more distinguished and coherent than those identified from a word-based analysis. Data, as supplemental material, are available at https://doi.org/10.1287/mksc.2016.0993.
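As a rough illustration of the idea (not the authors' model), one can shift topic granularity from documents to sentences by running an off-the-shelf topic model over individual review sentences. The sketch below uses scikit-learn's LDA on made-up reviews.

```python
# Sentence-level topic analysis, sketched with a standard topic model.
# This is an assumption for illustration, not the paper's proposed model.
import re

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

reviews = [
    "The room was spotless. Check-in took almost an hour.",
    "Staff were friendly at check-in. The room smelled of smoke.",
]
# Treat each sentence, not each review, as the unit of analysis.
sentences = [s for r in reviews for s in re.split(r"(?<=[.!?])\s+", r) if s]

counts = CountVectorizer(stop_words="english").fit_transform(sentences)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
# Each sentence now gets its own topic mixture (e.g., cleanliness vs. service).
print(lda.transform(counts).round(2))
```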
Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations
We investigate the extent to which modern neural language models are susceptible to structural priming, the phenomenon whereby the structure of a sentence makes the same structure more probable in a follow-up sentence. We explore how priming can be used to study the potential of these models to learn abstract structural information, which is a prerequisite for good performance on tasks that require natural language understanding skills. We introduce a novel metric and release a large corpus in which we control for various linguistic factors that interact with priming strength. We find that Transformer models indeed show evidence of structural priming, but also that the generalizations they learned are to some extent modulated by semantic information. Our experiments also show that the representations acquired by the models may not only encode abstract sequential structure but also involve a certain level of hierarchical syntactic information. More generally, our study shows that the priming paradigm is a useful additional tool for gaining insights into the capacities of language models and opens the door to future priming-based investigations that probe the model's internal states.
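One simple way to probe priming of this kind, sketched under the assumption of a Hugging Face GPT-2 and made-up prime/target pairs (this is not the study's released code or metric), is to compare the model's log-probability of a target sentence after a structurally congruent versus incongruent prime.

```python
# Sketch: does a structurally matching prime raise the probability of a
# target sentence under GPT-2? Prime/target pairs are illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def target_logprob(prime: str, target: str) -> float:
    """Summed log-probability of the target's tokens given the prime context."""
    prime_ids = tokenizer(prime, return_tensors="pt").input_ids
    target_ids = tokenizer(" " + target, return_tensors="pt").input_ids
    ids = torch.cat([prime_ids, target_ids], dim=1)
    with torch.no_grad():
        logits = model(ids).logits
    logp = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    start = prime_ids.shape[1] - 1                    # predictor of first target token
    rows = logp[start : start + target_ids.shape[1]]
    return rows.gather(1, target_ids[0].unsqueeze(1)).sum().item()

# Double-object target after a double-object (congruent) vs. a
# prepositional-dative (incongruent) prime.
target = "The chef gave the customer the bill."
congruent = "The teacher gave the student the book."
incongruent = "The teacher gave the book to the student."

print("congruent prime:  ", target_logprob(congruent, target))
print("incongruent prime:", target_logprob(incongruent, target))
```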