777 result(s) for "Artifact identification"
ArtUnmasked: A Multimodal Classifier for Real, AI, and Imitated Artworks
Differentiating AI-generated, real, and imitated artworks is becoming a tedious and computationally challenging problem in digital art analysis. AI-generated art has become nearly indistinguishable from human-made works, posing a significant threat to copyrighted content. This content appears on online platforms, at exhibitions, and in commercial galleries, escalating the risk of copyright infringement. The sudden increase in generative images raises concerns about authenticity, intellectual property, and the preservation of cultural heritage. Without an automated, comprehensible system to determine whether an artwork is AI-generated, authentic (real), or imitated, artists risk having their unique works devalued, and institutions struggle to curate and safeguard authentic pieces. As the variety of generative models continues to grow, building a robust, efficient, and transparent framework for determining whether a piece of art or an artist is involved in potential copyright infringement becomes a cultural necessity. To address these challenges, we introduce ArtUnmasked, a practical and interpretable framework comprising (i) a lightweight Spectral Artifact Identification (SPAI) module that efficiently distinguishes AI-generated artworks from real ones, (ii) a TagMatch-based artist filtering module for stylistic attribution, and (iii) a DINOv3–CLIP similarity module with patch-level correspondence that leverages the one-shot generalization ability of modern vision transformers to determine whether an artwork is authentic or imitated. We also created a custom dataset of ∼24K imitated artworks to complement our evaluation and support future research. The complete implementation is available in our GitHub repository.
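The SPAI module itself is not specified in this snippet, but the general idea behind spectral artifact detection (generative upsampling tends to leave tell-tale high-frequency energy in an image's spectrum) can be sketched in a few lines of NumPy. The function name, the radial cutoff, and the toy images below are all invented for illustration:

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.75) -> float:
    """Fraction of spectral energy beyond a radial frequency cutoff.

    Upsampling artifacts in generated images often concentrate energy
    at high spatial frequencies; this ratio is one simple proxy.
    """
    # 2D power spectrum, DC component shifted to the centre
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalised radial distance from the spectrum centre (0..1)
    r = np.hypot(yy - h / 2, xx - w / 2)
    r /= r.max()
    return float(spectrum[r > cutoff].sum() / spectrum.sum())

# Toy check: a smooth gradient keeps its energy at low frequencies,
# while white noise spreads energy across the whole spectrum.
smooth = np.linspace(0, 1, 64)[None, :] * np.ones((64, 1))
noisy = np.random.default_rng(0).standard_normal((64, 64))
assert high_freq_energy_ratio(smooth) < high_freq_energy_ratio(noisy)
```

A real classifier would learn a decision boundary over many such spectral descriptors rather than thresholding a single ratio.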
Single Channel EEG Artifact Identification Using Two-Dimensional Multi-Resolution Analysis
As a diagnostic monitoring approach, electroencephalogram (EEG) signals can be decoded by signal processing methodologies for various health monitoring purposes. However, EEG recordings are contaminated by other interferences, particularly facial and ocular artifacts generated by the user. This is especially an issue during continuous EEG recording sessions; identifying such artifacts among the useful EEG components is therefore a key step in using EEG signals for physiological monitoring, diagnosis, or brain–computer interfaces. In this study, we aim to design a new generic framework to process and characterize an EEG recording as a multi-component, non-stationary signal, with the aim of localizing and identifying its components (e.g., artifacts). In the proposed method, we bring together three complementary algorithms to enhance the efficiency of the system: time–frequency (TF) analysis and representation, two-dimensional multi-resolution analysis (2D MRA), and feature extraction and classification. A combination of spectro-temporal and geometric features is then extracted by combining key instantaneous TF space descriptors, which enables the system to characterize the non-stationarities in the EEG dynamics. We fit a curvelet transform (as an MRA method) to the 2D TF representation of EEG segments to decompose the given space into various levels of resolution. Such a decomposition efficiently improves the analysis of TF spaces with different characteristics (e.g., resolution). Our experimental results demonstrate that the combination of expansion into TF space, analysis using MRA, and extraction of a suitable feature set with a proper predictive model is effective in enhancing EEG artifact identification performance. We also compare the performance of the designed system with another common EEG signal processing technique, the 1D wavelet transform; our experimental results reveal that the proposed method outperforms it.
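A loose sketch of the first stage described above: expanding a 1D EEG segment into a time-frequency space and localizing frames whose low-frequency power spikes, a crude stand-in for ocular-artifact localization. The window length, band edge, and median-based decision rule are invented here, not parameters from the paper:

```python
import numpy as np

def stft_power(x, win=128, hop=64):
    """Short-time power spectrum: one row per frame, one column per bin."""
    window = np.hanning(win)
    frames = [x[i:i + win] * window for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)) ** 2

def flag_artifact_frames(x, fs=256.0, win=128, hop=64, k=5.0):
    """Flag frames whose power below 4 Hz exceeds k times the median
    across frames -- slow, high-amplitude deflections such as blinks
    dominate that band."""
    p = stft_power(x, win, hop)
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    low = p[:, freqs < 4.0].sum(axis=1)
    return low > k * np.median(low)

# Toy trace: background noise plus one large, slow "blink" transient.
rng = np.random.default_rng(1)
t = np.arange(1024)
eeg = rng.standard_normal(1024) + 8.0 * np.exp(-((t - 512) ** 2) / (2 * 30.0 ** 2))
flags = flag_artifact_frames(eeg)
# Only frames overlapping the transient should be flagged.
```

A median-relative threshold is used because a single large artifact inflates the mean and standard deviation, making a z-score rule unreliable on short segments.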
Slope of the tangent line: a connecting element for the derivative as a limit or rate of change, mediated by GeoGebra
The study aims to report how university students connect various notions related to the concept of derivative by mobilizing the notion of the slope of the tangent line, mediated by GeoGebra. The derivative as slope, limit, or rate of change are notions that are connected by performing transformations between their representations; by using GeoGebra, schemes of use of this artifact can be identified that support this process. Aspects of the Theory of Semiotic Representation Registers and the Instrumental Approach are therefore considered. In the experimental part, the students made conversions between the representations of the derivative as the limit of a quotient of variations and as an instantaneous rate of change, connected through the notion of the slope of the tangent line. Keywords: derivative; conversions; Instrumental Approach; GeoGebra.
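The connection the study builds on can be stated compactly: the derivative at a point is at once the limit of a quotient of variations, the instantaneous rate of change, and the slope of the tangent line,

```latex
f'(a) \;=\; \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}
      \;=\; \text{slope of the tangent line } \; y = f(a) + f'(a)\,(x - a).
```

For example, with f(x) = x^2 and a = 1, the quotient (2h + h^2)/h tends to 2, so the tangent line a student would trace in GeoGebra is y = 1 + 2(x - 1).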
MULTI-seq: sample multiplexing for single-cell RNA sequencing using lipid-tagged indices
Sample multiplexing facilitates scRNA-seq by reducing costs and identifying artifacts such as cell doublets. However, universal and scalable sample barcoding strategies have not been described. We therefore developed MULTI-seq: multiplexing using lipid-tagged indices for single-cell and single-nucleus RNA sequencing. MULTI-seq reagents can barcode any cell type or nucleus from any species with an accessible plasma membrane. The method involves minimal sample processing, thereby preserving cell viability and endogenous gene expression patterns. When cells are classified into sample groups using MULTI-seq barcode abundances, data quality is improved through doublet identification and recovery of cells with low RNA content that would otherwise be discarded by standard quality-control workflows. We use MULTI-seq to track the dynamics of T-cell activation, perform a 96-plex perturbation experiment with primary human mammary epithelial cells and multiplex cryopreserved tumors and metastatic sites isolated from a patient-derived xenograft mouse model of triple-negative breast cancer.
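The barcode-abundance classification step can be illustrated with a toy demultiplexer. The count matrix, the `min_frac` rule, and the function name are invented here and only mirror the spirit of the published classifier, not its actual algorithm:

```python
import numpy as np

def classify_cells(counts: np.ndarray, labels, min_frac: float = 0.2):
    """Assign each cell (row) to the sample whose barcode dominates.

    A cell is called a doublet when two or more barcodes each carry at
    least `min_frac` of its total barcode counts, and negative when no
    barcode reads are present.
    """
    calls = []
    for row in counts:
        total = row.sum()
        if total == 0:
            calls.append("negative")
            continue
        strong = np.flatnonzero(row / total >= min_frac)
        if len(strong) >= 2:
            calls.append("doublet")
        else:
            calls.append(labels[int(row.argmax())])
    return calls

counts = np.array([
    [95,  3,  2],   # clearly sample A
    [ 4, 90,  6],   # clearly sample B
    [50, 45,  5],   # two strong barcodes -> doublet
    [ 0,  0,  0],   # no barcode reads -> negative
])
print(classify_cells(counts, ["A", "B", "C"]))
# -> ['A', 'B', 'doublet', 'negative']
```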
SQANTI3: curation of long-read transcriptomes for accurate identification of known and novel isoforms
SQANTI3 is a tool designed for the quality control, curation and annotation of long-read transcript models obtained with third-generation sequencing technologies. Leveraging its annotation framework, SQANTI3 calculates quality descriptors of transcript models, junctions and transcript ends. With this information, potential artifacts can be identified and replaced with reliable sequences. Furthermore, the integrated functional annotation feature enables subsequent functional iso-transcriptomics analyses. SQANTI3 offers a flexible tool for quality control, curation and annotation of long-read RNA sequencing data.
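As a toy illustration of descriptor-based curation (junction splice motif plus short-read coverage), assuming made-up field names and thresholds rather than SQANTI3's actual rules:

```python
def curate(transcripts, min_cov=3):
    """Flag transcript models whose junctions look artifactual: any
    junction with a non-canonical splice motif and short-read coverage
    below `min_cov` marks the model for review rather than acceptance."""
    kept, flagged = [], []
    for t in transcripts:
        bad = any(j["motif"] != "GT-AG" and j["coverage"] < min_cov
                  for j in t["junctions"])
        (flagged if bad else kept).append(t["id"])
    return kept, flagged

transcripts = [
    {"id": "tx1", "junctions": [{"motif": "GT-AG", "coverage": 12}]},
    {"id": "tx2", "junctions": [{"motif": "CT-GG", "coverage": 0}]},
]
kept, flagged = curate(transcripts)
# tx1 is kept; tx2 is flagged for review
```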
A Systematic Review of Techniques for Artifact Detection and Artifact Category Identification in Electroencephalography from Wearable Devices
Wearable electroencephalography (EEG) enables brain monitoring in real-world environments beyond clinical settings; however, the relaxed constraints of the acquisition setup often compromise signal quality. This review examines methods for artifact detection and for the identification of artifact categories (e.g., ocular) and specific sources (e.g., eye blink) in wearable EEG. A systematic search was conducted across six databases using the query: (“electroencephalographic” OR “electroencephalography” OR “EEG”) AND (“Artifact detection” OR “Artifact identification” OR “Artifact removal” OR “Artifact rejection”) AND “wearable”. Following PRISMA guidelines, 58 studies were included. Artifacts in wearable EEG exhibit specific features due to dry electrodes, reduced scalp coverage, and subject mobility, yet only a few studies explicitly address these peculiarities. Most pipelines integrate detection and removal phases but rarely separate their impact on performance metrics, mainly accuracy (71%) when the clean signal is the reference and selectivity (63%), assessed with respect to physiological signal. Wavelet transforms and ICA, often using thresholding as a decision rule, are among the most frequently used techniques for managing ocular and muscular artifacts. ASR-based pipelines are widely applied for ocular, movement, and instrumental artifacts. Deep learning approaches are emerging, especially for muscular and motion artifacts, with promising applications in real-time settings. Auxiliary sensors (e.g., IMUs) are still underutilized despite their potential in enhancing artifact detection under ecological conditions. Only two studies addressed artifact category identification. A mapping of validated pipelines per artifact type and a survey of public datasets are provided to support benchmarking and reproducibility.
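Wavelet decomposition with a thresholding decision rule, the most common combination the review identifies, can be sketched with a single-level Haar transform. The universal threshold and robust noise estimate below are generic textbook defaults, not taken from any reviewed pipeline:

```python
import numpy as np

def haar_detail_flags(x: np.ndarray) -> np.ndarray:
    """One-level Haar decomposition with a universal-threshold decision
    rule: detail coefficients above sigma * sqrt(2 * ln n) mark artifact
    candidates (sigma estimated robustly from the details themselves)."""
    n = len(x) // 2 * 2
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)     # Haar detail coefficients
    sigma = np.median(np.abs(d)) / 0.6745      # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(d)))  # universal threshold
    return np.abs(d) > thr

# Toy trace: unit-variance noise plus one sharp, high-amplitude artifact.
rng = np.random.default_rng(2)
x = rng.standard_normal(1024)
x[300] += 20.0
flags = haar_detail_flags(x)   # flags[150] covers samples 300-301
```

A full pipeline would threshold across several decomposition levels and reconstruct the cleaned signal; this sketch only shows the detection decision.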
Single-cell genome sequencing of human neurons identifies somatic point mutation and indel enrichment in regulatory elements
Accurate somatic mutation detection from single-cell DNA sequencing is challenging due to amplification-related artifacts. To reduce this artifact burden, an improved amplification technique, primary template-directed amplification (PTA), was recently introduced. We analyzed whole-genome sequencing data from 52 PTA-amplified single neurons using SCAN2, a new genotyper we developed to leverage mutation signatures and allele balance in identifying somatic single-nucleotide variants (SNVs) and small insertions and deletions (indels) in PTA data. Our analysis confirms an increase in nonclonal somatic mutation in single neurons with age, but revises the estimated rate of this accumulation to 16 SNVs per year. We also identify artifacts in other amplification methods. Most importantly, we show that somatic indels increase by at least three per year per neuron and are enriched in functional regions of the genome such as enhancers and promoters. Our data suggest that indels in gene-regulatory elements have a considerable effect on genome integrity in human neurons. Single-cell DNA sequencing data are generated from human neurons using primary template-directed amplification and analyzed using SCAN2, an improved genotyping tool. Indels are enriched in neuronal regulatory elements and may be deleterious.
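The allele-balance idea can be illustrated with a crude variant-allele-fraction filter; the cutoffs below are arbitrary and do not reproduce SCAN2's actual statistical model:

```python
import numpy as np

def allele_balance_filter(ref: np.ndarray, alt: np.ndarray,
                          lo: float = 0.3, hi: float = 0.7) -> np.ndarray:
    """Keep candidate heterozygous somatic SNVs whose variant allele
    fraction is consistent with a true heterozygote (~0.5); amplification
    artifacts tend to sit at heavily skewed fractions."""
    depth = ref + alt
    vaf = np.divide(alt, depth, out=np.zeros(len(alt)), where=depth > 0)
    return (vaf >= lo) & (vaf <= hi)

ref = np.array([10, 48,  2, 30])
alt = np.array([12,  2, 40, 28])
keep = allele_balance_filter(ref, alt)
# first and last candidates pass; the skewed middle two are rejected
```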
Large-scale analysis of whole genome sequencing data from formalin-fixed paraffin-embedded cancer specimens demonstrates preservation of clinical utility
Whole genome sequencing (WGS) provides comprehensive, individualised cancer genomic information. However, routine tumour biopsies are formalin-fixed and paraffin-embedded (FFPE), which damages DNA and has historically limited their use in WGS. Here we analyse FFPE cancer WGS datasets from England’s 100,000 Genomes Project, comparing 578 FFPE samples with 11,014 fresh frozen (FF) samples across multiple tumour types. We use an approach that characterises rather than discards artefacts. We identify three artefactual signatures, including one known (SBS57) and two previously uncharacterised (SBS FFPE, ID FFPE), and develop an “FFPEImpact” score that quantifies sample artefacts. Despite inferior sequencing quality, FFPE-derived data identifies clinically actionable variants and mutational signatures and permits algorithmic stratification. Matched FF/FFPE validation cohorts show good concordance while acknowledging SBS, ID and copy-number artefacts. While FF-derived WGS data remains the gold standard, FFPE samples can be used for WGS if required, using the analytical advancements developed here, potentially opening whole cancer genomics to many. Formalin fixation is commonly used in tissue storage; however, this process has traditionally limited downstream whole genome sequencing. Here, the authors identify artefactual signatures in FFPE-derived sequencing data and demonstrate the preservation of clinical utility, thus enabling FFPE whole genome sequencing when required.
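The notion of an artefact-burden score can be illustrated with a toy calculation, assuming signature attributions per sample are already available. The real FFPEImpact definition is not given in this snippet, and the formula below is invented:

```python
def artefact_burden(signature_weights: dict) -> float:
    """Toy artefact-burden score: the fraction of a sample's mutations
    attributed to the artefactual signatures named in the abstract.
    Illustrative only; not the published FFPEImpact formula."""
    artefactual = {"SBS57", "SBS FFPE", "ID FFPE"}
    total = sum(signature_weights.values())
    return sum(w for s, w in signature_weights.items() if s in artefactual) / total

# A sample with 30% of mutations assigned to an FFPE signature:
score = artefact_burden({"SBS1": 40, "SBS5": 30, "SBS FFPE": 30})
# -> 0.3
```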
Insights into the Problem of Alarm Fatigue with Physiologic Monitor Devices: A Comprehensive Observational Study of Consecutive Intensive Care Unit Patients
Physiologic monitors are plagued with alarms that create a cacophony of sounds and visual alerts, causing "alarm fatigue", which creates an unsafe patient environment because a life-threatening event may be missed in this milieu of sensory overload. Using a state-of-the-art technology acquisition infrastructure, all monitor data including 7 ECG leads, all pressure, SpO2, and respiration waveforms as well as user settings and alarms were stored for 461 adults treated in intensive care units. Using a well-defined alarm annotation protocol, nurse scientists with 95% inter-rater reliability annotated 12,671 arrhythmia alarms. A total of 2,558,760 unique alarms occurred in the 31-day study period: arrhythmia, 1,154,201; parameter, 612,927; technical, 791,632. There were 381,560 audible alarms, for an audible alarm burden of 187/bed/day. Of the 12,671 annotated arrhythmia alarms, 88.8% were false positives. Conditions causing excessive alarms included inappropriate alarm settings, persistent atrial fibrillation, and non-actionable events such as PVCs and brief spikes in ST segments. Low-amplitude QRS complexes in some, but not all, available ECG leads caused undercounting and false arrhythmia alarms. Wide QRS complexes due to bundle branch block or ventricular pacemaker rhythm caused false alarms. 93% of the 168 true ventricular tachycardia alarms were not sustained long enough to warrant treatment. The excessive number of physiologic monitor alarms is a complex interplay of inappropriate user settings, patient conditions, and algorithm deficiencies. Device solutions should focus on using all available ECG leads to identify non-artifact leads and leads with adequate QRS amplitude. Devices should provide prompts to aid in more appropriate tailoring of alarm settings to individual patients. Atrial fibrillation alarms should be limited to new onset and termination of the arrhythmia, and delays for ST-segment and other parameter alarms should be configurable. Because computer devices are more reliable than humans, an opportunity exists to improve physiologic monitoring and reduce alarm fatigue.
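The headline counts in this abstract are internally consistent, which is easy to verify:

```python
# Sanity-check the alarm counts reported in the abstract.
arrhythmia, parameter, technical = 1_154_201, 612_927, 791_632
total_alarms = arrhythmia + parameter + technical
assert total_alarms == 2_558_760   # matches the reported 31-day total

# False-positive burden among the annotated arrhythmia alarms.
annotated = 12_671
false_positive_rate = 0.888
false_alarms = round(annotated * false_positive_rate)   # about 11,252 alarms
```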
Towards Unified Defense for Face Forgery and Spoofing Attacks via Dual Space Reconstruction Learning
Real-world face recognition systems are vulnerable to diverse face attacks, ranging from digitally manipulated artifacts to physically crafted spoofing attacks. Existing works primarily focus on using an image classification network to address one type of attack while disregarding the other. However, face recognition systems in real-world scenarios often encounter diverse simultaneous attacks, rendering such single-attack detection solutions ineffective. Besides, excessive reliance on a classifier can easily fail when encountering face attacks with unknown patterns, as the category-level differences learned by classification backbones do not generalize well to new attacks. Considering that real data are captured from actual individuals, while attack samples are generated by various distinct techniques, our focus is on extracting compact representations of real faces. This approach allows us to identify the fundamental differences between genuine and attack images, enabling us to address both manipulated artifacts and spoofing attacks simultaneously. Concretely, we propose a dual space reconstruction learning framework that models the commonalities of genuine faces in both the spatial and frequency domains. With the learned characteristics of real faces, the model is more likely to segregate diverse attack samples as outliers from genuine images. We also introduce a dynamic filtering module that filters out the redundant information retained by the reconstruction and enhances the critical divergence between real and attack samples to achieve better classification features. Since the training samples cover only limited style variations, which hampers generalization to unseen domains, we further design a consistency-regularized training strategy that mimics distribution shifts during training and imposes specific constraints to encourage style-irrelevant features.
Moreover, in view of the lack of accessible benchmarks for unified evaluation of the detection competence against both face forgery and spoofing attacks, we set up a new challenging benchmark, named UniAttack, to foster the exploration of effective solutions to face attack detection. Both qualitative and quantitative results from existing and proposed benchmarks unequivocally demonstrate the superiority of our methods over state-of-the-art approaches.
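The core idea, learning a compact model of genuine samples and treating anything reconstructed poorly as an attack, can be illustrated with a linear stand-in (PCA in place of the paper's dual-space reconstruction network; all data below is synthetic):

```python
import numpy as np

def fit_pca(real: np.ndarray, k: int):
    """Fit a k-component linear subspace to genuine samples only."""
    mean = real.mean(axis=0)
    _, _, vt = np.linalg.svd(real - mean, full_matrices=False)
    return mean, vt[:k]

def recon_error(x: np.ndarray, mean: np.ndarray, comps: np.ndarray):
    """Distance from the genuine subspace; attacks of any kind should
    land farther from it than real samples do."""
    z = (x - mean) @ comps.T
    return np.linalg.norm(x - (z @ comps + mean), axis=1)

# Synthetic "genuine" data near a 2D subspace of a 10D feature space;
# "attack" samples are drawn off that subspace.
rng = np.random.default_rng(0)
basis = rng.standard_normal((2, 10))
real = rng.standard_normal((200, 2)) @ basis + 0.05 * rng.standard_normal((200, 10))
attack = 2.0 * rng.standard_normal((20, 10))

mean, comps = fit_pca(real, k=2)
assert recon_error(attack, mean, comps).mean() > recon_error(real, mean, comps).mean()
```

Fitting on genuine data alone is what gives this family of detectors its generalization to unseen attack types: nothing about any specific attack is baked into the model.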