7,014 results for "Completeness"
Two necessary and sufficient conditions for the completeness of L1(c) space
We study the completeness of the space L1(c), where L1(c) is the space of integrable random variables (ε(|X|) < ∞), ε is a sublinear expectation, and the metric on L1(c) is defined as ε(|X − Y|).
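Spelled out in standard notation (a restatement of the definitions above, writing \(\mathbb{E}\) for the sublinear expectation that the abstract renders as ε):

```latex
% The space of integrable random variables under a sublinear expectation E:
L^1(c) \;=\; \bigl\{\, X : \mathbb{E}(|X|) < \infty \,\bigr\},
\qquad
d(X, Y) \;=\; \mathbb{E}\bigl(|X - Y|\bigr).
% Completeness of L^1(c) means every d-Cauchy sequence (X_n) has a limit
% X in L^1(c) with d(X_n, X) \to 0.
```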
The Effect of Data Quality on Decision-Making. A Quasi Experimental Study
The current study aimed to investigate the relationship between data quality dimensions (completeness and timeliness) and decision-making efficiency. The researcher adopted a quasi-experimental approach to answer the research questions and hypotheses. The participants consisted of 60 subjects distributed into two groups: 20 sales executives from Saudi beverage manufacturing companies and 40 participants from Al-Imam University. The experiment consisted of four scenarios, two per dimension, each presenting data and asking participants to make the best decision based on it. The scenarios were administered in face-to-face meetings, and participants recorded the time taken to decide. For the first dimension, completeness, both groups received scenarios offering complete and incomplete data and were asked to choose the best available option based on the data offered. For the second dimension, timeliness, the groups were offered up-to-date and obsolete data and were asked to make the best decisions based on the scenarios. The responses of the two groups were then analyzed for correlations. The study found strong evidence of a correlational relationship between data quality dimensions and decision-making efficiency, at the 0.05 and 0.01 significance levels for the student and employee groups, respectively, and found that high-quality data leads to both better and faster decisions in both groups. Neither occupation nor gender was significantly related to decision time. The study highlights the importance of data quality dimensions and urges organizations to base their decisions on up-to-date, complete data sets.
BUSCO Applications from Quality Assessments to Gene Prediction and Phylogenomics
Genomics promises comprehensive surveying of genomes and metagenomes, but rapidly changing technologies and expanding data volumes make evaluation of completeness a challenging task. Technical sequencing quality metrics can be complemented by quantifying completeness of genomic data sets in terms of the expected gene content of Benchmarking Universal Single-Copy Orthologs (BUSCO, http://busco.ezlab.org). The latest software release implements a complete refactoring of the code to make it more flexible and extendable to facilitate high-throughput assessments. The original six lineage assessment data sets have been updated with improved species sampling, 34 new subsets have been built for vertebrates, arthropods, fungi, and prokaryotes that greatly enhance resolution, and data sets are now also available for nematodes, protists, and plants. Here, we present BUSCO v3 with example analyses that highlight the wide-ranging utility of BUSCO assessments, which extend beyond quality control of genomics data sets to applications in comparative genomics analyses, gene predictor training, metagenomics, and phylogenomics.
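BUSCO's headline score is a simple tally over the expected set of universal single-copy orthologs; as a hedged sketch (not the BUSCO code itself, with hypothetical ortholog IDs), the scoring reduces to:

```python
# Hedged sketch of BUSCO-style scoring: each expected ortholog in the
# lineage data set is classified as complete single-copy (S), complete
# duplicated (D), fragmented (F), or missing (M); the summary reports
# percentages, with "complete" (C) = S + D.
from collections import Counter

def busco_summary(statuses):
    """statuses: dict ortholog_id -> one of 'S', 'D', 'F', 'M'."""
    counts = Counter(statuses.values())
    n = len(statuses)
    pct = lambda k: 100.0 * counts[k] / n
    return {
        "C": pct("S") + pct("D"),  # complete = single-copy + duplicated
        "S": pct("S"), "D": pct("D"), "F": pct("F"), "M": pct("M"),
    }

# Ten hypothetical orthologs: 7 single-copy, 1 duplicated, 1 fragmented,
# 1 missing -> C:80%, S:70%, D:10%, F:10%, M:10%.
statuses = {f"og{i}": s for i, s in enumerate("SSSSSSSDFM")}
print(busco_summary(statuses))
```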
BUSCO Update: Novel and Streamlined Workflows along with Broader and Deeper Phylogenetic Coverage for Scoring of Eukaryotic, Prokaryotic, and Viral Genomes
Methods for evaluating the quality of genomic and metagenomic data are essential to aid genome assembly procedures and to correctly interpret the results of subsequent analyses. BUSCO estimates the completeness and redundancy of processed genomic data based on universal single-copy orthologs. Here, we present new functionalities and major improvements of the BUSCO software, as well as the renewal and expansion of the underlying data sets in sync with the OrthoDB v10 release. Among the major novelties, BUSCO now enables phylogenetic placement of the input sequence to automatically select the most appropriate BUSCO data set for the assessment, allowing the analysis of metagenome-assembled genomes of unknown origin. A newly introduced genome workflow improves efficiency and runtimes, especially on large eukaryotic genomes. BUSCO is the only tool capable of assessing both eukaryotic and prokaryotic species, and can be applied to various data types, from genome assemblies and metagenomic bins to transcriptomes and gene sets.
Expecting the Unexpected
As crowdsourced user-generated content becomes an important source of data for organizations, a pressing question is how to ensure that data contributed by ordinary people outside of traditional organizational boundaries is of suitable quality to be useful for both known and unanticipated purposes. This research examines the impact of different information quality management strategies, and corresponding data collection design choices, on key dimensions of information quality in crowdsourced user-generated content. We conceptualize a contributor-centric information quality management approach focusing on instance-based data collection. We contrast it with the traditional consumer-centric fitness-for-use conceptualization of information quality that emphasizes class-based data collection. We present laboratory and field experiments conducted in a citizen science domain that demonstrate trade-offs between the quality dimensions of accuracy, completeness (including discoveries), and precision between the two information management approaches and their corresponding data collection designs. Specifically, we show that instance-based data collection results in higher accuracy, dataset completeness, and number of discoveries, but this comes at the expense of lower precision. We further validate the practical value of the instance-based approach by conducting an applicability check with potential data consumers (scientists, in our context of citizen science). In a follow-up study, we show, using human experts and supervised machine learning techniques, that substantial precision gains on instance-based data can be achieved with post-processing. We conclude by discussing the benefits and limitations of different information quality and data collection design choices for information quality in crowdsourced user-generated content.
CheckV assesses the quality and completeness of metagenome-assembled viral genomes
Millions of new viral sequences have been identified from metagenomes, but the quality and completeness of these sequences vary considerably. Here we present CheckV, an automated pipeline for identifying closed viral genomes, estimating the completeness of genome fragments and removing flanking host regions from integrated proviruses. CheckV estimates completeness by comparing sequences with a large database of complete viral genomes, including 76,262 identified from a systematic search of publicly available metagenomes, metatranscriptomes and metaviromes. After validation on mock datasets and comparison to existing methods, we applied CheckV to large and diverse collections of metagenome-assembled viral sequences, including IMG/VR and the Global Ocean Virome. This revealed 44,652 high-quality viral genomes (that is, >90% complete), although the vast majority of sequences were small fragments, which highlights the challenge of assembling viral genomes from short-read metagenomes. Additionally, we found that removal of host contamination substantially improved the accurate identification of auxiliary metabolic genes and interpretation of viral-encoded functions. The quality of viral genomes assembled from metagenome data is assessed by CheckV.
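The core idea behind database-comparison completeness estimation can be sketched very simply (a deliberately simplified illustration, not CheckV's actual AAI-based pipeline; the lengths below are hypothetical):

```python
# Hedged sketch: estimate the completeness of a viral genome fragment by
# comparing its length to the expected genome length derived from
# related complete reference genomes. CheckV's real method matches
# references by protein similarity; here we just assume the related
# genomes are given.
def estimate_completeness(query_len, related_genome_lens):
    """Return estimated completeness as a percentage, capped at 100."""
    expected = sum(related_genome_lens) / len(related_genome_lens)
    return min(100.0, 100.0 * query_len / expected)

# A 45 kb contig whose closest complete relatives average 50 kb:
print(estimate_completeness(45_000, [48_000, 50_000, 52_000]))  # → 90.0
```

Under CheckV's reported threshold, a fragment estimated at >90% complete would count as a high-quality genome.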
Causal Identification from Counterfactual Data: Completeness and Bounding Results
Previous work establishing completeness results for counterfactual identification has been circumscribed to the setting where the input data belongs to observational or interventional distributions (Layers 1 and 2 of Pearl's Causal Hierarchy), since it was generally presumed impossible to obtain data from counterfactual distributions, which belong to Layer 3. However, recent work (Raghavan & Bareinboim, 2025) has formally characterized a family of counterfactual distributions which can be directly estimated via experimental methods - a notion they call counterfactual realizability. This leaves open the question of what additional counterfactual quantities now become identifiable, given this new access to (some) Layer 3 data. To answer this question, we develop the CTFIDU+ algorithm for identifying counterfactual queries from an arbitrary set of Layer 3 distributions, and prove that it is complete for this task. Building on this, we establish the theoretical limit of which counterfactuals can be identified from physically realizable distributions, thus implying the fundamental limit to exact causal inference in the non-parametric setting. Finally, given the impossibility of identifying certain critical types of counterfactuals, we derive novel analytic bounds for such quantities using realizable counterfactual data, and corroborate using simulations that counterfactual data helps tighten the bounds for non-identifiable quantities in practice.
The r-dynamic chromatic number of corona of order two of any graphs with complete graph
The corona of two graphs G and H is a graph G ⊙ H formed from one copy of G and |V(G)| copies of H, where the ith vertex of G is adjacent to every vertex in the ith copy of H. For any integer l ≥ 2, we define the graph G ⊙l H recursively from G ⊙ H as G ⊙l H = (G ⊙l-1 H) ⊙ H; G ⊙l H is also called the l-corona product of G and H. An r-dynamic coloring of a graph G is a proper k-coloring of G such that the neighbors of any vertex v receive at least min{r, d(v)} different colors; the r-dynamic chromatic number, denoted χr(G), is the minimum k such that G has an r-dynamic k-coloring. In this paper, we analyze the r-dynamic chromatic number of the corona of order two, G ⊙2 H, where G is a complete graph and H is any graph.
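The coloring condition defined above is easy to check mechanically; a minimal sketch (the adjacency-list representation and the C4 example are illustrative, not from the paper):

```python
# Hedged sketch: verify that a vertex coloring is an r-dynamic coloring,
# i.e. it is a proper coloring and every vertex v sees at least
# min(r, deg(v)) distinct colors among its neighbors.
def is_r_dynamic(adj, coloring, r):
    """adj: dict vertex -> set of neighbors; coloring: dict vertex -> color."""
    for v, nbrs in adj.items():
        # proper: no neighbor shares v's color
        if any(coloring[u] == coloring[v] for u in nbrs):
            return False
        # dynamic condition: neighborhood must exhibit min(r, deg(v)) colors
        if len({coloring[u] for u in nbrs}) < min(r, len(nbrs)):
            return False
    return True

# C4 (the 4-cycle): its proper 2-coloring is 1-dynamic but not 2-dynamic,
# because each vertex's two neighbors share a single color.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
two_col = {0: 0, 1: 1, 2: 0, 3: 1}
print(is_r_dynamic(c4, two_col, 1))  # → True
print(is_r_dynamic(c4, two_col, 2))  # → False
```

χr(G) is then the least number of colors for which such a coloring exists, which is the quantity the paper computes for G ⊙2 H.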
NP-Completeness Proofs of Puzzles using the T-Metacell Framework
Pencil puzzles are puzzles that can be solved by writing down solutions on paper, using only logical reasoning. In this paper, we utilize the "T-metacell" framework developed by Tang and the MIT Hardness Group to prove the NP-completeness of four new pencil puzzles: Grand Tour, Entry Exit, Zahlenschlange, and Yagit. Additionally, the first three are also proven to be ASP-complete. The results demonstrate how versatile the framework is, offering new insights into the computational complexity of puzzles with various constraints.