274 result(s) for "Potts, Christopher"
Relevance-guided Supervision for OpenQA with ColBERT
Systems for Open-Domain Question Answering (OpenQA) generally depend on a retriever for finding candidate passages in a large corpus and a reader for extracting answers from those passages. In much recent work, the retriever is a learned component that uses coarse-grained vector representations of questions and passages. We argue that this modeling choice is insufficiently expressive for dealing with the complexity of natural language questions. To address this, we define ColBERT-QA, which adapts the scalable neural retrieval model ColBERT to OpenQA. ColBERT creates fine-grained interactions between questions and passages. We propose an efficient weak supervision strategy that iteratively uses ColBERT to create its own training data. This greatly improves OpenQA retrieval on Natural Questions, SQuAD, and TriviaQA, and the resulting system attains state-of-the-art extractive OpenQA performance on all three datasets.
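The "fine-grained interactions" this abstract refers to are ColBERT's late-interaction (MaxSim) scoring: each query token is compared against every passage token, and per-token maxima are summed. A minimal sketch with toy random embeddings follows; the function names and dimensions are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def normalize(x: np.ndarray) -> np.ndarray:
    """L2-normalize each row so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def maxsim_score(q_emb: np.ndarray, d_emb: np.ndarray) -> float:
    """ColBERT-style late interaction: for each query token, take the
    maximum similarity against any passage token, then sum."""
    sim = q_emb @ d_emb.T              # (n_query_tokens, n_doc_tokens)
    return float(sim.max(axis=1).sum())

rng = np.random.default_rng(0)
q = normalize(rng.normal(size=(4, 8)))   # 4 query tokens, dim 8
d = normalize(rng.normal(size=(20, 8)))  # 20 passage tokens, dim 8
score = maxsim_score(q, d)
```

Because each per-token maximum is a cosine similarity bounded by 1, the score of a 4-token query can never exceed 4, and a query scored against itself attains exactly that bound.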
Postmortem memory of public figures in news and social media
Deceased public figures are often said to live on in collective memory. We quantify this phenomenon by tracking mentions of 2,362 public figures in English-language online news and social media (Twitter) 1 y before and after death. We measure the sharp spike and rapid decay of attention following death and model collective memory as a composition of communicative and cultural memory. Clustering reveals four patterns of postmortem memory, and regression analysis shows that boosts in media attention are largest for premortem popular anglophones who died a young, unnatural death; that long-term boosts are smallest for leaders and largest for artists; and that, while both the news and Twitter are triggered by young and unnatural deaths, the news additionally curates collective memory when old persons or leaders die. Overall, we illuminate the age-old question of who is remembered by society, and the distinct roles of news and social media in collective memory formation.
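The decomposition of collective memory into a fast-decaying communicative component and a slow-decaying cultural component can be illustrated with a simple two-exponential attention curve. The functional form and all parameter values below are assumptions for illustration, not the paper's fitted model.

```python
import math

def attention(t_days: float, a: float = 100.0, tau_comm: float = 7.0,
              b: float = 5.0, tau_cult: float = 365.0) -> float:
    """Illustrative mentions-per-day t days after death: a sharp
    communicative spike (time constant ~1 week) plus a slowly decaying
    cultural baseline (time constant ~1 year)."""
    return a * math.exp(-t_days / tau_comm) + b * math.exp(-t_days / tau_cult)
```

Under these toy parameters the communicative term dominates the first weeks and becomes negligible within a few months, leaving only the cultural component, which is the qualitative "sharp spike and rapid decay" pattern the abstract describes.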
Femoral component sagittal alignment and its influence on short-term outcomes in total knee arthroplasty: a systematic review
Purpose Variations in femoral component sagittal alignment may influence clinical outcomes and survival rates following primary total knee arthroplasty (TKA). This study aims to systematically review the influence of femoral component sagittal alignment on short-term outcomes (≥ 6-month follow-up) and potentially characterize an optimal orientation. Methods A systematic review was carried out following the PRISMA guidelines to identify studies in the PubMed, Cochrane Library, and SPORTDiscus with Full Text databases between January 2010 and April 2025. Eligible studies included adults undergoing primary TKA, quantified sagittal femoral component alignment, and reported at least one of the following outcomes with ≥ 6 months of follow-up: patient-reported outcome measures (PROMs), range of motion (ROM), anterior knee pain, or revision/survivorship. Studies not stratifying outcomes by sagittal angle were excluded. After screening 2,137 initial results, 10 studies met the inclusion and exclusion criteria: 9 prospective/retrospective cohort studies and 1 randomized controlled trial (RCT). Data extraction included study design, demographics, implant type, imaging/measurement method, sagittal alignment definitions, PROMs, ROM, and revision rates. Owing to heterogeneity in alignment grouping and outcome reporting, a narrative synthesis was performed rather than a meta-analysis. Results Ten studies comprising 5,205 TKAs were included: two robotic-assisted TKA cohorts and eight manual TKA cohorts. Mild flexion of the femoral component was frequently associated with improved short-term outcomes, but results differed by measurement method and outcome definition. Five of eight studies evaluating functional scores and five of six studies evaluating ROM reported improved outcomes with mild flexion compared with neutral or extended alignment. Only two studies examined failure rates, and anterior knee pain was reported by only one study.
Conclusions Across studies evaluating sagittal alignment in primary TKA, mild flexion of the femoral component was more commonly associated with improved functional outcomes, ROM, implant survival, and pain compared with neutral or extended alignment. However, these associations were inconsistent, and certainty of evidence was generally low or very low for most outcomes, indicating additional RCTs with standardized femoral flexion angle groupings and outcome measurements are needed to further investigate optimal sagittal component positioning.
ReCOGS: How Incidental Details of a Logical Form Overshadow an Evaluation of Semantic Interpretation
Compositional generalization benchmarks for semantic parsing seek to assess whether models can accurately compute meanings for novel sentences, but operationalize this in terms of logical form (LF) prediction. This raises the concern that semantically irrelevant details of the chosen LFs could shape model performance. We argue that this concern is realized for the COGS benchmark (Kim and Linzen, 2020). COGS poses generalization splits that appear impossible for present-day models, which could be taken as an indictment of those models. However, we show that the negative results trace to incidental features of COGS LFs. Converting these LFs to semantically equivalent ones and factoring out capabilities unrelated to semantic interpretation, we find that even baseline models get traction. A recent variable-free translation of COGS LFs suggests similar conclusions, but we observe that this format is not semantically equivalent; it is incapable of accurately representing some COGS meanings. These findings inform our proposal for ReCOGS, a modified version of COGS that comes closer to assessing the target semantic capabilities while remaining very challenging. Overall, our results reaffirm the importance of compositional generalization and careful benchmark task design.
Characterizing English Preposing in PP constructions
The English Preposing in PP construction (PiPP; e.g., Happy though/as we were) is extremely rare but displays an intricate set of stable syntactic properties. How do people become proficient with this construction despite such limited evidence? It is tempting to posit innate learning mechanisms, but present-day large language models seem to learn to represent PiPPs as well, even though such models employ only very general learning mechanisms and experience very few instances of the construction during training. This suggests an alternative hypothesis on which knowledge of more frequent constructions helps shape knowledge of PiPPs. I seek to make this idea precise using model-theoretic syntax (MTS). In MTS, a grammar is essentially a set of constraints on forms. In this context, PiPPs can be seen as arising from a mix of construction-specific and general-purpose constraints, all of which seem inferable from general linguistic experience.
A case for deep learning in semantics: Response to Pater
Pater's (2019) target article builds a persuasive case for establishing stronger ties between theoretical linguistics and connectionism (deep learning). This commentary extends his arguments to semantics, focusing in particular on issues of learning, compositionality, and lexical meaning.
May I Cut in? Gene Editing Approaches in Human Induced Pluripotent Stem Cells
In the decade since Yamanaka and colleagues described methods to reprogram somatic cells into a pluripotent state, human induced pluripotent stem cells (hiPSCs) have demonstrated tremendous promise in numerous disease modeling, drug discovery, and regenerative medicine applications. More recently, the development and refinement of advanced gene transduction and editing technologies have further accelerated the potential of hiPSCs. In this review, we discuss the various gene editing technologies that are being implemented with hiPSCs. Specifically, we describe the emergence of technologies including zinc-finger nuclease (ZFN), transcription activator-like effector nuclease (TALEN), and clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 that can be used to edit the genome at precise locations, and discuss the strengths and weaknesses of each of these technologies. In addition, we present the current applications of these technologies in elucidating the mechanisms of human development and disease, developing novel and effective therapeutic molecules, and engineering cell-based therapies. Finally, we discuss the emerging technological advances in targeted gene editing methods.
The Syntax and Semantics of As-Parentheticals
This paper is a detailed investigation of the syntax and semantics of a single type of cross-linguistically common parenthetical expression, here dubbed As-parentheticals (e.g., Ames, as you know, was a spy). I show that a treatment of such clauses as adverbial modifiers combines with a motivated semantic analysis to account for a wide range of ambiguities concerning negation in particular, but also tense, modal, and adverbial operators. I provide a principled explanation for the impossibility of variable binding into, and extraction from, As-parentheticals, and argue that this construction yields novel support for the view (of Ladusaw 1992, and others) that negative DPs like no one are actually non-negated indefinites licensed by an abstract, clause-level negation. Overall, the analysis shows that parentheticals, in addition to being a rich source of puzzles in their own right, provide a useful probe into clause structure in general.
A data-driven framework for mapping domains of human neurobiology
Functional neuroimaging has been a mainstay of human neuroscience for the past 25 years. Interpretation of functional magnetic resonance imaging (fMRI) data has often occurred within knowledge frameworks crafted by experts, which have the potential to amplify biases that limit the replicability of findings. Here, we use a computational approach to derive a data-driven framework for neurobiological domains that synthesizes the texts and data of nearly 20,000 human neuroimaging articles. Across multiple levels of domain specificity, the structure–function links within domains better replicate in held-out articles than those mapped from dominant frameworks in neuroscience and psychiatry. We further show that the data-driven framework partitions the literature into modular subfields, for which domains serve as generalizable prototypes of structure–function patterns in single articles. The approach to computational ontology we present here is the most comprehensive characterization of human brain circuits quantifiable with fMRI and may be extended to synthesize other scientific literatures. Beam et al. created a data-driven mapping of human brain function, drawing on full texts and coordinate data reported in neuroimaging studies. This validated framework outperformed leading and widely used knowledge frameworks, namely Research Domain Criteria (RDoC) and the Diagnostic and Statistical Manual of Mental Disorders (DSM).