Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
97 result(s) for "Sansone, Susanna-Assunta"
Discovering and linking public omics data sets using the Omics Discovery Index
2017
[...] public deposition of omics data is on the increase. In 2016, a group of researchers, publishers and research funders published the first guidelines to make data "findable, accessible, interoperable and re-usable" (FAIR; https://www.force11.org/group/fairgroup/fairprinciples).
[Figure 2: Distributions of OmicsDI data sets. (a) Data sets per omics type and organism category: model organisms, non-model organisms (excluding human) and human. (b) The data set view showing other related omics data sets, with an ontology-highlighting option to extract the most relevant terms in the metadata. (c) Pearson-correlation plot of the metadata similarity score and the biological similarity score across transcriptomics (T), proteomics (P) and metabolomics (M) data sets. (d) The shared-molecules box showing all data sets with a biological similarity score above 0.5, with a slider allowing a user to raise the cutoff (here set to 0.81).]
[...] the publication associated with a data set can be used to link data sets that are deposited in different repositories.
Journal Article
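The abstract above notes that the publication associated with a data set can be used to link entries deposited in different repositories. The Python sketch below illustrates that idea under simple assumptions: it groups dataset records by a shared publication identifier; the record structure, repository names and identifiers are invented for illustration and are not drawn from OmicsDI itself.

# Minimal sketch: link data sets across repositories via their shared publication.
# All records and identifiers below are illustrative placeholders.
from collections import defaultdict

datasets = [
    {"accession": "DS-0001", "repository": "proteomics-repo", "pubmed_id": "12345678"},
    {"accession": "DS-0002", "repository": "metabolomics-repo", "pubmed_id": "12345678"},
    {"accession": "DS-0003", "repository": "transcriptomics-repo", "pubmed_id": "87654321"},
]

linked = defaultdict(list)
for record in datasets:
    linked[record["pubmed_id"]].append((record["repository"], record["accession"]))

# Data sets sharing a publication identifier are treated as linked.
for pubmed_id, entries in linked.items():
    if len(entries) > 1:
        print(f"PMID {pubmed_id} links: {entries}")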
The OBO Foundry: coordinated evolution of ontologies to support biomedical data integration
by Bard, Jonathan; Whetzel, Patricia L; Ceusters, Werner
in Agriculture, Bioinformatics, Biological and medical sciences
2007
The value of any kind of data is greatly enhanced when it exists in a form that allows it to be integrated with other data. One approach to integration is through the annotation of multiple bodies of data using common controlled vocabularies or 'ontologies'. Unfortunately, the very success of this approach has led to a proliferation of ontologies, which itself creates obstacles to integration. The Open Biomedical Ontologies (OBO) consortium is pursuing a strategy to overcome this problem. Existing OBO ontologies, including the Gene Ontology, are undergoing coordinated reform, and new ontologies are being created on the basis of an evolving set of shared principles governing ontology development. The result is an expanding family of ontologies designed to be interoperable and logically well formed and to incorporate accurate representations of biological reality. We describe this OBO Foundry initiative and provide guidelines for those who might wish to become involved.
Journal Article
Machine actionable metadata models
by Sansone, Susanna-Assunta; Rocca-Serra, Philippe; Gonzalez-Beltran, Alejandra
in Check lists, Metadata
2022
Community-developed minimum information checklists are designed to drive the rich and consistent reporting of metadata, underpinning the reproducibility and reuse of data. These reporting guidelines, however, are usually narratives intended for human consumption. Modular and reusable machine-readable versions are also needed: first, to provide quantitative and verifiable measures of the degree to which metadata descriptors meet these community requirements, as the FAIR Principles demand; and second, to encourage the creation of standards-driven templates for metadata authoring, especially when describing complex experiments that require multiple reporting guidelines to be used in combination or extended. We present new functionalities to support the creation and improvement of machine-readable models. We apply the approach to an exemplar set of reporting guidelines in the life sciences and discuss the challenges. Our work, targeted at developers of standards and those familiar with standards, promotes the concept of compositional metadata elements and encourages the creation of community standards that are modular and interoperable from the outset.
Journal Article
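The abstract above calls for machine-readable versions of reporting checklists that allow quantitative, verifiable checks of whether metadata descriptors meet community requirements. The Python sketch below shows one generic way to express that idea, encoding a checklist as a JSON Schema and validating a metadata record with the jsonschema package; the descriptor names are hypothetical and are not taken from the paper's exemplar guidelines.

# Minimal sketch: a reporting checklist as a machine-readable schema,
# with a simple quantitative compliance measure. Field names are hypothetical.
from jsonschema import Draft7Validator

checklist = {
    "type": "object",
    "properties": {
        "organism": {"type": "string"},
        "assay_type": {"type": "string"},
        "protocol_ref": {"type": "string"},
    },
    "required": ["organism", "assay_type", "protocol_ref"],
}

record = {"organism": "Homo sapiens", "assay_type": "RNA-Seq"}  # protocol_ref is missing

validator = Draft7Validator(checklist)
for error in validator.iter_errors(record):
    print("requirement not met:", error.message)

# Fraction of required descriptors present: one verifiable measure of compliance.
present = [field for field in checklist["required"] if field in record]
print(f"compliance: {len(present)}/{len(checklist['required'])} required descriptors")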
COordination of Standards in MetabOlomicS (COSMOS): facilitating integrated metabolomics data access
by Marin, Silvia; Luchinat, Claudio; Ebbels, Timothy
in Biochemistry, Bioinformatics, Biomedical and Life Sciences
2015
Metabolomics has become a crucial phenotyping technique in a range of research fields including medicine, the life sciences, biotechnology and the environmental sciences. This necessitates the transfer of experimental information between research groups, as well as potentially to publishers and funders. After the initial efforts of the metabolomics standards initiative, minimum reporting standards were proposed, which included the concepts for metabolomics databases. Built by the community, standards and infrastructure for metabolomics are still needed to allow storage, exchange, comparison and re-utilization of metabolomics data. The Framework Programme 7 EU Initiative ‘coordination of standards in metabolomics’ (COSMOS) is developing a robust data infrastructure and exchange standards for metabolomics data and metadata, to support workflows for a broad range of metabolomics applications within the European metabolomics community and to enable participation by the wider metabolomics and biomedical communities. Here we announce our concepts and efforts and ask the metabolomics community (academics and industry, journal publishers, software and hardware vendors, and those interested in standardisation worldwide) to join us in updating and implementing metabolomics standards, addressing missing metabolomics ontologies, complex metadata capture and an XML-based open-source data exchange format.
Journal Article
Towards a new vision of PaNET: enhancing reasoning capabilities for better photon and neutron data discovery
by Collins, Stephen P.; Millar, Paul; Koumoutsos, Giannis
in data catalogues, ExPaNDS, experimental techniques
2025
The Photon and Neutron Experimental Techniques (PaNET) ontology was released in 2021 as an ontology for two major European research infrastructure communities. It provides a standardized taxonomy of experimental techniques employed across the photon and neutron scientific domain, and is part of a wider effort to apply the FAIR (findable, accessible, interoperable, reusable) principles within the community. Specifically, it is used to enhance the quality of metadata in photon and neutron data catalogue services. However, PaNET currently relies on a manual definition approach, which is time-consuming and incomplete. A new structure of PaNET is proposed to address this by including logical frameworks that enable automatic reasoning, as opposed to the manual approach in the original ontology, resulting in over a hundred new technique subclass relationships that are currently missing in PaNET. These new relationships, which have been evaluated by the PaNET working group and other domain experts, will improve data catalogue searches by connecting users to more relevant datasets, thereby enhancing data discoverability. In addition, the results of this work serve as a validation mechanism for PaNET, as the very process of building the logical frameworks, as well as any incorrect inferences made by the reasoner, has exposed existing issues within the original ontology. This evolution of PaNET builds on the initial foundational work, which established the infrastructure and incorporated domain expert knowledge, by introducing logical frameworks that provide enhanced reasoning capabilities, and is proposed to address the limitations of the current version of the ontology. This under-the-hood development has resulted in a more complete and robust knowledge representation system for use in data catalogue services within the photon and neutron community.
Journal Article
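The abstract above describes adding logical frameworks to PaNET so that a reasoner can infer technique subclass relationships instead of relying solely on manual assertion. The Python sketch below shows what reasoner-driven subclass discovery can look like using the general-purpose owlready2 library (not the PaNET working group's own tooling); the local file path is a placeholder for wherever a copy of the ontology is stored.

# Minimal sketch: compare asserted and inferred subclass links in an OWL ontology.
# "file:///data/PaNET.owl" is a placeholder path, not an official location.
from owlready2 import get_ontology, sync_reasoner

onto = get_ontology("file:///data/PaNET.owl").load()

# Subclass links asserted before reasoning.
asserted = {(sub, sup) for sup in onto.classes() for sub in sup.subclasses()}

# Run the bundled HermiT reasoner; logical axioms such as defined classes and
# property restrictions let it infer subclass links that were never asserted.
with onto:
    sync_reasoner()

inferred = {(sub, sup) for sup in onto.classes() for sub in sup.subclasses()}
for sub, sup in sorted(inferred - asserted, key=str):
    print(f"inferred: {sub.name} SubClassOf {sup.name}")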
Orchestrating and sharing large multimodal data for transparent and reproducible research
2021
Reproducibility is essential to open science, as findings that cannot be reproduced by independent research groups have limited relevance, regardless of their validity. It is therefore crucial for scientists to describe their experiments in sufficient detail so they can be reproduced, scrutinized, challenged, and built upon. However, the intrinsic complexity and continuous growth of biomedical data make it increasingly difficult to process, analyze, and share with the community in a FAIR (findable, accessible, interoperable, and reusable) manner. To overcome these issues, we created a cloud-based platform called ORCESTRA (orcestra.ca), which provides a flexible framework for the reproducible processing of multimodal biomedical data. It enables processing of clinical, genomic and perturbation profiles of cancer samples through automated, user-customizable processing pipelines. ORCESTRA creates integrated and fully documented data objects with persistent identifiers (DOIs) and manages multiple dataset versions, which can be shared for future studies.
It is no secret that a significant part of scientific research is difficult to reproduce. Here, the authors present a cloud-computing platform called ORCESTRA that facilitates reproducible processing of multimodal biomedical data using customizable pipelines and well-documented data objects.
Journal Article
The Genomic Standards Consortium
2011
A vast and rich body of information has grown up as a result of the world's enthusiasm for 'omics technologies. Finding ways to describe and make available this information that maximise its usefulness has become a major effort across the 'omics world. At the heart of this effort is the Genomic Standards Consortium (GSC), an open-membership organization that drives community-based standardization activities. Here we provide a short history of the GSC, give an overview of its range of current activities, and call on the scientific community to join forces to improve the quality and quantity of contextual information about our public collections of genomes, metagenomes, and marker gene sequences.
Journal Article
FAIR in action - a flexible framework to guide FAIRification
2023
The COVID-19 pandemic has highlighted the need for FAIR (Findable, Accessible, Interoperable, and Reusable) data more than any other scientific challenge to date. We developed a flexible, multi-level, domain-agnostic FAIRification framework, providing practical guidance to improve the FAIRness of both existing and future clinical and molecular datasets. We validated the framework in collaboration with several major public-private partnership projects, demonstrating and delivering improvements across all aspects of FAIR and across a variety of datasets and their contexts. We thereby established the reproducibility and far-reaching applicability of our approach to FAIRification tasks.
Journal Article
Measures for interoperability of phenotypic data: minimum information requirements and formatting
by Mazurek, Cezary; Scholz, Uwe; Seren, Ümit
in Bioinformatics, Biological Techniques, Biomedical and Life Sciences
2016
Background
Plant phenotypic data holds a wealth of information which, when accurately analysed and linked to other data types, sheds light on the mechanisms of life. As phenotyping is a field of research comprising manifold, diverse and time-consuming experiments, the findings can be fostered by reusing and combining existing datasets. Their correct interpretation, and thus replicability, comparability and interoperability, is possible provided that the collected observations are equipped with an adequate set of metadata. So far there have been no common standards governing phenotypic data description, which has hampered data exchange and reuse.
Results
In this paper we propose guidelines for proper handling of the information about plant phenotyping experiments, in terms of both the recommended content of the description and its formatting. We provide a document called “Minimum Information About a Plant Phenotyping Experiment”, which specifies what information about each experiment should be given, and a Phenotyping Configuration for the ISA-Tab format, which allows this information to be organised in practice within a dataset. We provide examples of ISA-Tab-formatted phenotypic data, and a general description of a few systems where the recommendations have been implemented.
Conclusions
Acceptance of the rules described in this paper by the plant phenotyping community will help to achieve findable, accessible, interoperable and reusable data.
Journal Article
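The abstract above points to a Phenotyping Configuration for the ISA-Tab format as the practical way to organise this information within a dataset. The Python sketch below writes a small ISA-Tab-style, tab-delimited study sample table; the characteristic and factor columns shown are illustrative choices and are not the exact fields mandated by the minimum information document or the Phenotyping Configuration.

# Minimal sketch: an ISA-Tab-style study sample table written as a
# tab-delimited file. Column choices beyond the generic ISA-Tab headers
# (organism, genotype, watering regime) are illustrative assumptions.
import csv

rows = [
    {
        "Source Name": "plant_001",
        "Characteristics[organism]": "Arabidopsis thaliana",
        "Characteristics[genotype]": "Col-0",
        "Factor Value[watering regime]": "well-watered",
        "Sample Name": "leaf_sample_001",
    },
    {
        "Source Name": "plant_002",
        "Characteristics[organism]": "Arabidopsis thaliana",
        "Characteristics[genotype]": "Col-0",
        "Factor Value[watering regime]": "drought",
        "Sample Name": "leaf_sample_002",
    },
]

with open("s_phenotyping_study.txt", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=list(rows[0]), delimiter="\t")
    writer.writeheader()
    writer.writerows(rows)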