Search Results
697 results for "Collins, Evan T."
The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Headlines Increases Perceived Accuracy of Headlines Without Warnings
What can be done to combat political misinformation? One prominent intervention involves attaching warnings to headlines of news stories that have been disputed by third-party fact-checkers. Here we demonstrate a hitherto unappreciated potential consequence of such a warning: an implied truth effect, whereby false headlines that fail to get tagged are considered validated and thus are seen as more accurate. With a formal model, we demonstrate that Bayesian belief updating can lead to such an implied truth effect. In Study 1 (n = 5,271 MTurkers), we find that although warnings do lead to a modest reduction in perceived accuracy of false headlines relative to a control condition (particularly for politically concordant headlines), we also observe the hypothesized implied truth effect: the presence of warnings caused untagged headlines to be seen as more accurate than in the control. In Study 2 (n = 1,568 MTurkers), we find the same effects in the context of decisions about which headlines to consider sharing on social media. We also find that attaching verifications to some true headlines—which removes the ambiguity about whether untagged headlines have not been checked or have been verified—eliminates, and in fact slightly reverses, the implied truth effect. Together these results contest theories of motivated reasoning while identifying a potential challenge for the policy of using warning tags to fight misinformation—a challenge that is particularly concerning given that it is much easier to produce misinformation than it is to debunk it. This paper was accepted by Elke Weber, judgment and decision making.
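The Bayesian mechanism behind the implied truth effect can be sketched in a few lines. This is a toy model, not the authors' formal model; the 50% prior and the coverage values are illustrative, chosen only to show the direction of the effect:

```python
def p_true_given_untagged(prior_true, warn_coverage, verify_coverage=0.0):
    """P(headline is true | no tag on it), by Bayes' rule.

    warn_coverage: fraction of false headlines that fact-checkers tag.
    verify_coverage: fraction of true headlines that get a verification tag.
    All parameter values used below are illustrative, not fit to the studies.
    """
    untag_if_true = 1.0 - verify_coverage
    untag_if_false = 1.0 - warn_coverage
    num = untag_if_true * prior_true
    return num / (num + untag_if_false * (1.0 - prior_true))

base = 0.5  # prior that a random headline is true
# Warnings only: the absence of a tag becomes evidence of truth
with_warnings = p_true_given_untagged(base, warn_coverage=0.3)
# Warnings plus verifications at equal coverage: "no tag" is uninformative again
with_both = p_true_given_untagged(base, warn_coverage=0.3, verify_coverage=0.3)
```

With warnings alone the posterior for an untagged headline rises above the prior (the implied truth effect); adding verifications at matching coverage restores the prior, mirroring the paper's finding that verifying some true headlines eliminates the effect.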
Diagnostics and correction of batch effects in large‐scale proteomic studies: a tutorial
Advancements in mass spectrometry‐based proteomics have enabled experiments encompassing hundreds of samples. While these large sample sets deliver much‐needed statistical power, handling them introduces technical variability known as batch effects. Here, we present a step‐by‐step protocol for the assessment, normalization, and batch correction of proteomic data. We review established methodologies from related fields and describe solutions specific to proteomic challenges, such as ion intensity drift and missing values in quantitative feature matrices. Finally, we compile a set of techniques that enable control of batch effect adjustment quality. We provide an R package, "proBatch", containing functions required for each step of the protocol. We demonstrate the utility of this methodology on five proteomic datasets each encompassing hundreds of samples and consisting of multiple experimental designs. In conclusion, we provide guidelines and tools to make the extraction of true biological signal from large proteomic studies more robust and transparent, ultimately facilitating reliable and reproducible research in clinical proteomics and systems biology. Graphical Abstract: In mass spectrometry‐based proteomics, handling large sample sets introduces technical variability known as batch effects. This tutorial provides guidelines and tools for the assessment, normalization, and batch correction of proteomics data.
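One elementary step of the kind this tutorial covers is per-batch median centering. The sketch below is a deliberately simplified Python stand-in, not the proBatch API (proBatch is an R package and adds diagnostics, drift correction, and quality-control on top of steps like this); the example matrix is invented:

```python
import numpy as np

def median_center_batches(log_intensities, batch_labels):
    """Per-feature median centering within each batch.

    log_intensities: features x samples array (log-scale), NaNs allowed.
    batch_labels: one batch label per sample (column).
    Shifts each batch so its per-feature median matches the overall median.
    """
    corrected = np.asarray(log_intensities, dtype=float).copy()
    overall = np.nanmedian(corrected, axis=1, keepdims=True)
    for batch in set(batch_labels):
        cols = [i for i, b in enumerate(batch_labels) if b == batch]
        batch_median = np.nanmedian(corrected[:, cols], axis=1, keepdims=True)
        corrected[:, cols] += overall - batch_median  # align batch medians
    return corrected

# Two features, two batches of two samples; batch "b2" runs ~2 units hot
raw = np.array([[10.0, 10.5, 12.0, 12.5],
                [ 8.0,  8.5, 10.0, 10.5]])
fixed = median_center_batches(raw, ["b1", "b1", "b2", "b2"])
```

After correction the per-feature medians of the two batches coincide, removing the systematic offset while preserving within-batch variation.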
Identification of phagocytosis regulators using magnetic genome-wide CRISPR screens
Phagocytosis is required for a broad range of physiological functions, from pathogen defense to tissue homeostasis, but the mechanisms required for phagocytosis of diverse substrates remain incompletely understood. Here, we developed a rapid magnet-based phenotypic screening strategy, and performed eight genome-wide CRISPR screens in human cells to identify genes regulating phagocytosis of distinct substrates. After validating select hits in focused miniscreens, orthogonal assays and primary human macrophages, we show that (1) the previously uncharacterized gene NHLRC2 is a central player in phagocytosis, regulating RhoA-Rac1 signaling cascades that control actin polymerization and filopodia formation, (2) very-long-chain fatty acids are essential for efficient phagocytosis of certain substrates and (3) the previously uncharacterized Alzheimer’s disease–associated gene TM2D3 can preferentially influence uptake of amyloid-β aggregates. These findings illuminate new regulators and core principles of phagocytosis, and more generally establish an efficient method for unbiased identification of cellular uptake mechanisms across diverse physiological and pathological contexts. Eight genome-wide CRISPR screens identify genes required for substrate-specific phagocytosis. The study highlights roles for NHLRC2 in filopodia formation, very-long-chain fatty acids in substrate-specific phagocytosis and TM2D3 in uptake of amyloid-β aggregates.
Quantum approximate optimization of non-planar graph problems on a planar superconducting processor
Faster algorithms for combinatorial optimization could prove transformative for diverse areas such as logistics, finance and machine learning. Accordingly, the possibility of quantum enhanced optimization has driven much interest in quantum technologies. Here we demonstrate the application of the Google Sycamore superconducting qubit quantum processor to combinatorial optimization problems with the quantum approximate optimization algorithm (QAOA). Like past QAOA experiments, we study performance for problems defined on the planar connectivity graph native to our hardware; however, we also apply the QAOA to the Sherrington–Kirkpatrick model and MaxCut, non-native problems that require extensive compilation to implement. For hardware-native problems, which are classically efficient to solve on average, we obtain an approximation ratio that is independent of problem size and observe that performance increases with circuit depth. For problems requiring compilation, performance decreases with problem size. Circuits involving several thousand gates still present an advantage over random guessing but not over some efficient classical algorithms. Our results suggest that it will be challenging to scale near-term implementations of the QAOA for problems on non-native graphs. As these graphs are closer to real-world instances, we suggest more emphasis should be placed on such problems when using the QAOA to benchmark quantum processors. It is hoped that quantum computers may be faster than classical ones at solving optimization problems. Here the authors implement a quantum optimization algorithm over 23 qubits but find more limited performance when an optimization problem structure does not match the underlying hardware.
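The approximation ratio and the random-guessing baseline mentioned above can be made concrete on a toy MaxCut instance. This is a hypothetical 5-vertex example for illustration only; the paper's non-native instances are far larger and are run through compiled QAOA circuits, not brute force:

```python
from itertools import product

def cut_value(edges, assignment):
    # number of edges crossing the 0/1 partition
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

def max_cut(n, edges):
    # brute force over all 2^n assignments (fine only for toy sizes)
    return max(cut_value(edges, bits) for bits in product([0, 1], repeat=n))

# Toy instance: a 5-vertex ring
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
opt = max_cut(5, edges)                      # an odd ring cannot be fully cut
mean_random_cut = len(edges) / 2             # a uniform random assignment cuts half the edges in expectation
approx_ratio_random = mean_random_cut / opt  # the baseline any QAOA result must beat
```

An approximation ratio is the achieved cut value divided by the optimum; "advantage over random guessing" means exceeding the 0.5·|E|/opt baseline computed here.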
Dupilumab in Adults and Adolescents with Eosinophilic Esophagitis
Dupilumab, a fully human monoclonal antibody, blocks interleukin-4 and interleukin-13 signaling, which have key roles in eosinophilic esophagitis. We conducted a three-part, phase 3 trial in which patients 12 years of age or older underwent randomization in a 1:1 ratio to receive subcutaneous dupilumab at a weekly dose of 300 mg or placebo (Part A) or in a 1:1:1 ratio to receive 300 mg of dupilumab either weekly or every 2 weeks or weekly placebo (Part B) up to week 24. Eligible patients who completed Part A or Part B continued the trial in Part C, in which those who completed Part A received dupilumab at a weekly dose of 300 mg up to week 52 (the Part A-C group); the portion of Part C that includes the eligible patients from Part B is ongoing. The two primary end points at week 24 were histologic remission (≤6 eosinophils per high-power field) and the change from baseline in the Dysphagia Symptom Questionnaire (DSQ) score (range, 0 to 84, with higher values indicating more frequent or more severe dysphagia). In Part A, histologic remission occurred in 25 of 42 patients (60%) who received weekly dupilumab and in 2 of 39 patients (5%) who received placebo (difference, 55 percentage points; 95% confidence interval [CI], 40 to 71; P<0.001). In Part B, histologic remission occurred in 47 of 80 patients (59%) with weekly dupilumab, in 49 of 81 patients (60%) with dupilumab every 2 weeks, and in 5 of 79 patients (6%) with placebo (difference between weekly dupilumab and placebo, 54 percentage points; 95% CI, 41 to 66 [P<0.001]; difference between dupilumab every 2 weeks and placebo, 56 percentage points; 95% CI, 43 to 69 [not significant per hierarchical testing]). The mean (±SD) DSQ scores at baseline were 33.6±12.41 in Part A and 36.7±11.22 in Part B; the scores improved with weekly dupilumab as compared with placebo, with differences of -12.32 (95% CI, -19.11 to -5.54) in Part A and -9.92 (95% CI, -14.81 to -5.02) in Part B (both P<0.001) but not with dupilumab every 2 weeks (difference in Part B, -0.51; 95% CI, -5.42 to 4.41). Serious adverse events occurred in 9 patients during the Part A or B treatment period (in 7 who received weekly dupilumab, 1 who received dupilumab every 2 weeks, and 1 who received placebo) and in 1 patient in the Part A-C group during the Part C treatment period who received placebo in Part A and weekly dupilumab in Part C. Among patients with eosinophilic esophagitis, subcutaneous dupilumab administered weekly improved histologic outcomes and alleviated symptoms of the disease. (Funded by Sanofi and Regeneron Pharmaceuticals; ClinicalTrials.gov number, NCT03633617.).
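The percentage-point differences reported above come straight from the remission counts. The sketch below computes the unadjusted risk difference with a naive Wald interval; the trial's published estimates use covariate-adjusted methods, so these crude figures are close to, but not identical with, the reported 55 points (95% CI, 40 to 71):

```python
import math

def risk_difference(events_a, n_a, events_b, n_b, z=1.96):
    """Unadjusted risk difference with a naive Wald 95% CI, in percentage
    points. Not the trial's adjusted analysis; illustrative only."""
    pa, pb = events_a / n_a, events_b / n_b
    diff = pa - pb
    se = math.sqrt(pa * (1 - pa) / n_a + pb * (1 - pb) / n_b)
    return 100 * diff, 100 * (diff - z * se), 100 * (diff + z * se)

# Part A: histologic remission, weekly dupilumab (25/42) vs placebo (2/39)
diff, lo, hi = risk_difference(25, 42, 2, 39)
```

Running this gives a difference of about 54 percentage points with an interval spanning the published point estimate, confirming the arithmetic behind the abstract's headline figure.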
Wearable materials with embedded synthetic biology sensors for biomolecule detection
Integrating synthetic biology into wearables could expand opportunities for noninvasive monitoring of physiological status, disease states and exposure to pathogens or toxins. However, the operation of synthetic circuits generally requires the presence of living, engineered bacteria, which has limited their application in wearables. Here we report lightweight, flexible substrates and textiles functionalized with freeze-dried, cell-free synthetic circuits, including CRISPR-based tools, that detect metabolites, chemicals and pathogen nucleic acid signatures. The wearable devices are activated upon rehydration from aqueous exposure events and report the presence of specific molecular targets by colorimetric changes or via an optical fiber network that detects fluorescent and luminescent outputs. The detection limits for nucleic acids rival current laboratory methods such as quantitative PCR. We demonstrate the development of a face mask with a lyophilized CRISPR sensor for wearable, noninvasive detection of SARS-CoV-2 at room temperature within 90 min, requiring no user intervention other than the press of a button. Wearable materials are endowed with synthetic biology circuits to detect biomolecules, including SARS-CoV-2 RNA.
Resolving catastrophic error bursts from cosmic rays in large arrays of superconducting qubits
Scalable quantum computing can become a reality with error correction, provided that coherent qubits can be constructed in large arrays [1,2]. The key premise is that physical errors can remain both small and sufficiently uncorrelated as devices scale, so that logical error rates can be exponentially suppressed. However, impacts from cosmic rays and latent radioactivity violate these assumptions. An impinging particle can ionize the substrate and induce a burst of quasiparticles that destroys qubit coherence throughout the device. High-energy radiation has been identified as a source of error in pilot superconducting quantum devices [3–5], but the effect on large-scale algorithms and error correction remains an open question. Elucidating the physics involved requires operating large numbers of qubits at the same rapid timescales necessary for error correction. Here, we use space- and time-resolved measurements of a large-scale quantum processor to identify bursts of quasiparticles produced by high-energy rays. We track the events from their initial localized impact as they spread, simultaneously and severely limiting the energy coherence of all qubits and causing chip-wide failure. Our results provide direct insights into the impact of these damaging error bursts and highlight the necessity of mitigation to enable quantum computing to scale. Cosmic rays flying through superconducting quantum devices create bursts of excitations that destroy qubit coherence. Rapid, spatially resolved measurements of qubit error rates make it possible to observe the evolution of the bursts across a chip.
Time-crystalline eigenstate order on a quantum processor
Quantum many-body systems display rich phase structure in their low-temperature equilibrium states [1]. However, much of nature is not in thermal equilibrium. Remarkably, it was recently predicted that out-of-equilibrium systems can exhibit novel dynamical phases [2–8] that may otherwise be forbidden by equilibrium thermodynamics, a paradigmatic example being the discrete time crystal (DTC) [7,9–15]. Concretely, dynamical phases can be defined in periodically driven many-body-localized (MBL) systems via the concept of eigenstate order [7,16,17]. In eigenstate-ordered MBL phases, the entire many-body spectrum exhibits quantum correlations and long-range order, with characteristic signatures in late-time dynamics from all initial states. It is, however, challenging to experimentally distinguish such stable phases from transient phenomena, or from regimes in which the dynamics of a few select states can mask typical behaviour. Here we implement tunable controlled-phase (CPHASE) gates on an array of superconducting qubits to experimentally observe an MBL-DTC and demonstrate its characteristic spatiotemporal response for generic initial states [7,9,10]. Our work employs a time-reversal protocol to quantify the impact of external decoherence, and leverages quantum typicality to circumvent the exponential cost of densely sampling the eigenspectrum. Furthermore, we locate the phase transition out of the DTC with an experimental finite-size analysis. These results establish a scalable approach to studying non-equilibrium phases of matter on quantum processors. A study establishes a scalable approach to engineer and characterize a many-body-localized discrete time crystal phase on a superconducting quantum processor.
Eosinophil Depletion with Benralizumab for Eosinophilic Esophagitis
In a randomized, placebo-controlled trial involving patients 12 to 65 years of age with eosinophilic esophagitis, benralizumab reduced esophageal eosinophil counts but did not reduce dysphagia symptoms.
Suppressing quantum errors by scaling a surface code logical qubit
Practical quantum computing will require error rates well below those achievable with physical qubits. Quantum error correction [1,2] offers a path to algorithmically relevant error rates by encoding logical qubits within many physical qubits, for which increasing the number of physical qubits enhances protection against physical errors. However, introducing more qubits also increases the number of error sources, so the density of errors must be sufficiently low for logical performance to improve with increasing code size. Here we report the measurement of logical qubit performance scaling across several code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find that our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, in terms of both logical error probability over 25 cycles and logical error per cycle ((2.914 ± 0.016)% compared to (3.028 ± 0.023)%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7 × 10⁻⁶ logical error per cycle floor set by a single high-energy event (1.6 × 10⁻⁷ excluding this event). We accurately model our experiment, extracting error budgets that highlight the biggest challenges for future systems. These results mark an experimental demonstration in which quantum error correction begins to improve performance with increasing qubit number, illuminating the path to reaching the logical error rates required for computation. A study demonstrating increasing error suppression with larger surface code logical qubits, implemented on a superconducting quantum processor.
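The relation between the per-cycle rates quoted above and the 25-cycle error probability can be sketched with the standard independent-flip convention. This is an illustrative assumption on my part; the paper itself extracts per-cycle rates from a logical-fidelity decay fit rather than this closed form:

```python
def cumulative_error(per_cycle, cycles):
    """Probability of an odd number of logical flips after `cycles` rounds,
    assuming independent, symmetric flip errors each cycle (a common
    textbook convention; treat the resulting figures as illustrative)."""
    return 0.5 * (1.0 - (1.0 - 2.0 * per_cycle) ** cycles)

# Reported per-cycle logical error rates, accumulated over 25 cycles
e25_d5 = cumulative_error(0.02914, 25)  # distance-5 surface code
e25_d3 = cumulative_error(0.03028, 25)  # distance-3 ensemble average
```

Under this convention the distance-5 code's small per-cycle advantage compounds to a visibly lower 25-cycle error probability, which is the sense in which it "modestly outperforms" the distance-3 ensemble.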