Catalogue Search | MBRL
Explore the vast range of titles available.
3,123 result(s) for "Databases as Topic - trends"
Next-generation phenotyping: requirements and strategies for enhancing our understanding of genotype–phenotype relationships and its relevance to crop improvement
2013
More accurate and precise phenotyping strategies are necessary to empower high-resolution linkage mapping and genome-wide association studies and for training genomic selection models in plant improvement. Within this framework, the objective of modern phenotyping is to increase the accuracy, precision and throughput of phenotypic estimation at all levels of biological organization while reducing costs and minimizing labor through automation, remote sensing, improved data integration and experimental design. Much like the efforts to optimize genotyping during the 1980s and 1990s, designing effective phenotyping initiatives today requires multi-faceted collaborations between biologists, computer scientists, statisticians and engineers. Robust phenotyping systems are needed to characterize the full suite of genetic factors that contribute to quantitative phenotypic variation across cells, organs and tissues, developmental stages, years, environments, species and research programs. Next-generation phenotyping generates significantly more data than previously and requires novel data management, access and storage systems, increased use of ontologies to facilitate data integration, and new statistical tools for enhancing experimental design and extracting biologically meaningful signal from environmental and experimental noise. To ensure relevance, the implementation of efficient and informative phenotyping experiments also requires familiarity with diverse germplasm resources, population structures, and target populations of environments. Today, phenotyping is quickly emerging as the major operational bottleneck limiting the power of genetic analysis and genomic prediction. The challenge for the next generation of quantitative geneticists and plant breeders is not only to understand the genetic basis of complex trait variation, but also to use that knowledge to efficiently synthesize twenty-first century crop varieties.
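To make the abstract's reference to training genomic selection models concrete, here is a minimal sketch (not taken from the article) of ridge-regression genomic prediction on simulated marker data; the dimensions, the penalty value, and all variable names are illustrative assumptions.

```python
import numpy as np

# Illustrative only: ridge-regression genomic prediction (rrBLUP-style) on
# simulated marker data. Dimensions and the penalty value are arbitrary choices.
rng = np.random.default_rng(0)
n_lines, n_markers = 200, 1000                                    # phenotyped lines x SNP markers
X = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)   # 0/1/2 genotype codes
true_effects = rng.normal(0, 0.05, n_markers)
y = X @ true_effects + rng.normal(0, 1.0, n_lines)                # noisy phenotypes

# Ridge solution: beta = (X'X + lambda*I)^-1 X'y
lam = 50.0
beta = np.linalg.solve(X.T @ X + lam * np.eye(n_markers), X.T @ y)

# Predict breeding values for new, genotyped-but-unphenotyped lines
X_new = rng.integers(0, 3, size=(20, n_markers)).astype(float)
predicted = X_new @ beta
print(predicted[:5])
```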
Journal Article
Definition of Health 2.0 and Medicine 2.0: A Systematic Review
by Van De Belt, Tom H; Berben, Sivera AA; Engelen, Lucien JLPG
in Activities of daily living, Collaboration, Communication
2010
During the last decade, the Internet has become increasingly popular and is now an important part of our daily life. When new "Web 2.0" technologies are used in health care, the terms "Health 2.0" or "Medicine 2.0" may be used.
The objective was to identify unique definitions of Health 2.0/Medicine 2.0 and recurrent topics within the definitions.
A systematic literature review of electronic databases (PubMed, Scopus, CINAHL) and gray literature on the Internet using the search engines Google, Bing, and Yahoo was performed to find unique definitions of Health 2.0/Medicine 2.0. We assessed all literature, extracted unique definitions, and selected recurrent topics by using the constant comparison method.
We found a total of 1937 articles, 533 in scientific databases and 1404 in the gray literature. We selected 46 unique definitions for further analysis and identified 7 main topics.
Health 2.0/Medicine 2.0 are still developing areas. Many articles concerning this subject were found, primarily on the Internet. However, there is still no general consensus regarding the definition of Health 2.0/Medicine 2.0. We hope that this study will contribute to building the concept of Health 2.0/Medicine 2.0 and facilitate discussion and further research.
Journal Article
Ten quick tips for biocuration
2019
[...]ensuring quality and standardisation of data usually requires more effort. [...]heterogeneity of data and the rate at which they are produced make it difficult to develop novel analytics or maximise the impact of data on scientific or clinical decision-making. [...]curators’ or submitters’ mistakes may be revealed during the data reuse phase and need to be rectified. There are many different free database solutions available, covering many different use cases [17] [18]. Besides facilitating data sharing, centralised storage also aids backup procedures, which can be automated, ensuring that valuable curated data can be recovered should there be any technical problems (see also Tip 3). The tips above provide a point of entry for researchers wanting to start contributing to biocuration as part of their projects or in their research area in general. Because of its diverse nature, curation offers great potential for expanding one’s skill set and may benefit careers.
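The tip that centralised storage makes automated backups straightforward can be illustrated with a small sketch. The example below assumes a hypothetical local SQLite curation database (curation.db); it is not code from the article.

```python
import sqlite3
from datetime import datetime
from pathlib import Path

# Hypothetical paths; adjust to the curation database actually in use.
SOURCE_DB = Path("curation.db")
BACKUP_DIR = Path("backups")

def backup_curated_db() -> Path:
    """Copy the curation database to a timestamped backup file."""
    BACKUP_DIR.mkdir(exist_ok=True)
    target = BACKUP_DIR / f"curation-{datetime.now():%Y%m%d-%H%M%S}.db"
    with sqlite3.connect(SOURCE_DB) as src, sqlite3.connect(target) as dst:
        src.backup(dst)  # online backup API, safe while the source DB is in use
    return target

if __name__ == "__main__":
    print(f"Backup written to {backup_curated_db()}")
```

A script like this can be run from a scheduler (cron, Task Scheduler) so that curated data is recoverable after technical problems, as the tip suggests.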
Journal Article
Literature mining for the biologist: from information retrieval to biological discovery
by Jensen, Lars Juhl; Bork, Peer; Saric, Jasmin
in Agriculture, Animal Genetics and Genomics, Biological and medical sciences
2006
Recent advances in tools for extracting facts from the scientific literature will soon enable the automatic annotation and analysis of the growing number of system-wide experimental data sets. Mining the literature is also rapidly becoming useful for both hypothesis generation and biological discovery.
Key Points
Literature-mining tools are becoming essential to researchers because of the growth of the scientific literature and the shift from studying individual genes and proteins to entire systems.
Currently, information-retrieval tools such as PubMed are by far the most commonly used literature-mining methods among biologists.
Methods for identifying the genes, proteins and other entities that are mentioned in the literature — known as entity recognition — are key components of most complex literature-mining systems.
Recently, methods for extracting biomedical facts from text have improved considerably. Such methods will probably soon become mainstream tools for the annotation and analysis of large-scale experimental data sets.
By combining facts that have been extracted from several papers, text-mining methods can both discover global trends and generate new hypotheses based on the existing literature.
To realize the full discovery potential of literature mining, it should be integrated with other data types. Protein networks are well suited for unifying large-scale experimental data with knowledge that has been extracted from the biomedical literature.
Data-integration methods have also been developed for ranking candidate genes for inherited diseases and for associating genes with phenotypic characteristics.
Bridging the gap between biologists and computational linguists will be crucial to the success of approaches that integrate literature mining with high-throughput experimental data. We hope that this review will inspire more biologists to become actively involved in the development of literature-mining tools.
For the average biologist, hands-on literature mining currently means a keyword search in PubMed. However, methods for extracting biomedical facts from the scientific literature have improved considerably, and the associated tools will probably soon be used in many laboratories to automatically annotate and analyse the growing number of system-wide experimental data sets. Owing to the increasing body of text and the open-access policies of many journals, literature mining is also becoming useful for both hypothesis generation and biological discovery. However, the latter will require the integration of literature and high-throughput data, which should encourage close collaborations between biologists and computational linguists.
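As a purely illustrative example of the entity-recognition step mentioned in the key points, the sketch below tags gene/protein mentions by dictionary matching; the synonym dictionary and sample sentence are invented, and production literature-mining systems use far richer approaches.

```python
import re

# Toy synonym dictionary (invented for illustration); real systems draw on
# curated resources and must handle ambiguity, case variants and context.
GENE_SYNONYMS = {
    "TP53": ["TP53", "p53", "tumor protein p53"],
    "BRCA1": ["BRCA1", "breast cancer 1"],
}

def tag_entities(text: str) -> list[tuple[str, str]]:
    """Return (matched text, canonical gene symbol) pairs found in `text`."""
    hits = []
    for symbol, synonyms in GENE_SYNONYMS.items():
        for syn in synonyms:
            for m in re.finditer(rf"\b{re.escape(syn)}\b", text, flags=re.IGNORECASE):
                hits.append((m.group(0), symbol))
    return hits

sentence = "Loss of p53 function cooperates with BRCA1 mutations in tumourigenesis."
print(tag_entities(sentence))  # [('p53', 'TP53'), ('BRCA1', 'BRCA1')]
```

Dictionary matching is only a baseline; disambiguating short, reused symbols across organisms and contexts is where real entity-recognition systems differ.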
Journal Article
Trial Registration at ClinicalTrials.gov between May and October 2005
by Tse, Tony; Ide, Nicholas C; Zarin, Deborah A
in Biological and medical sciences, Clinical trials, Clinical Trials as Topic - legislation & jurisprudence
2005
As of September 13, 2005, the International Committee of Medical Journal Editors has required that all trials submitted for publication be registered in a public database. This study examines the records registered in ClinicalTrials.gov from May 11 to October 11, 2005. The number of trials in the database approximately doubled, and the information about the trials became more specific.
Concern about previously undisclosed safety problems with drugs such as paroxetine (Paxil, GlaxoSmithKline) and rofecoxib (Vioxx, Merck) has increased the public's desire for more complete information about clinical research studies [1, 2]. The provision of basic information about clinical trial protocols in a publicly accessible registry and the public identification of all trials, whether or not their results are subsequently published, have been advocated as ways to address this issue [3–6]. Numerous groups have called for comprehensive registration by issuing statements or convening meetings to discuss policy and implementation details [7–15]. In the United States, the Food and Drug Administration (FDA) . . .
Journal Article
2020 computing: science in an exponential world
by Szalay, Alexander; Gray, Jim
in Automatic Data Processing - trends, Computers - trends, Computing Methodologies
2006
The amount of scientific data is doubling every year. Szalay and Gray discuss how scientific methods are evolving from paper notebooks to huge online databases.
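For scale, annual doubling compounds to roughly a thousand-fold increase per decade (2^10 = 1024).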
Journal Article
The International Spinal Cord Injury Pain Basic Data Set
by Siddall, P J; Cardenas, D D; Bryce, T
in Activities of Daily Living - psychology, Anatomy, Biological and medical sciences
2008
Objective:
To develop a basic pain data set (International Spinal Cord Injury Basic Pain Data Set, ISCIPDS:B) within the framework of the International spinal cord injury (SCI) data sets that would facilitate consistent collection and reporting of pain in the SCI population.
Setting:
International.
Methods:
The ISCIPDS:B was developed by a working group consisting of individuals with published evidence of expertise in SCI-related pain regarding taxonomy, psychophysics, psychology, epidemiology and assessment, and one representative of the Executive Committee of the International SCI Standards and Data Sets. The members were appointed by four major organizations with an interest in SCI-related pain (International Spinal Cord Society, ISCoS; American Spinal Injury Association, ASIA; American Pain Society, APS and International Association for the Study of Pain, IASP). The initial ISCIPDS:B was revised based on suggestions from members of the Executive Committee of the International SCI Standards and Data Sets, the ISCoS Scientific Committee, ASIA and APS Boards, and the Neuropathic Pain Special Interest Group of the IASP, individual reviewers and societies and the ISCoS Council.
Results:
The final ISCIPDS:B contains core questions about clinically relevant information concerning SCI-related pain that can be collected by health-care professionals with expertise in SCI in various clinical settings. The questions concern pain severity, physical and emotional function and include a pain-intensity rating, a pain classification and questions related to the temporal pattern of pain for each specific pain problem. The impact of pain on physical, social and emotional function, and sleep is evaluated for each pain.
Journal Article
The NIFSTD and BIRNLex Vocabularies: Building Comprehensive Ontologies for Neuroscience
by Martone, Maryann E.; Grethe, Jeffrey S.; Gupta, Amarnath
in Academic Medical Centers - methods, Academic Medical Centers - trends, Animals
2008
A critical component of the Neuroscience Information Framework (NIF) project is a consistent, flexible terminology for describing and retrieving neuroscience-relevant resources. Although the original NIF specification called for a loosely structured controlled vocabulary for describing neuroscience resources, as the NIF system evolved, the requirement for a formally structured ontology for neuroscience with sufficient granularity to describe and access a diverse collection of information became obvious. This requirement led to the NIF standardized (NIFSTD) ontology, a comprehensive collection of common neuroscience domain terminologies woven into an ontologically consistent, unified representation of the biomedical domains typically used to describe neuroscience data (e.g., anatomy, cell types, techniques), as well as digital resources (tools, databases) being created throughout the neuroscience community. NIFSTD builds upon a structure established by the BIRNLex, a lexicon of concepts covering clinical neuroimaging research developed by the Biomedical Informatics Research Network (BIRN) project. Each distinct domain module is represented using the Web Ontology Language (OWL). As much as has been practical, NIFSTD reuses existing community ontologies that cover the required biomedical domains, building the more specific concepts required to annotate NIF resources. By following this principle, an extensive vocabulary was assembled in a relatively short period of time for NIF information annotation, organization, and retrieval, in a form that promotes easy extension and modification. We report here on the structure of NIFSTD and its predecessor BIRNLex, describe the principles followed in their construction, and provide examples of their use within NIF.
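As an illustrative sketch of how an OWL module such as those composing NIFSTD might be loaded and inspected programmatically, the snippet below uses the rdflib library; the file name is a placeholder, not an actual NIFSTD location.

```python
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

# Placeholder path: substitute the actual OWL module you want to inspect.
ONTOLOGY_FILE = "nif-anatomy-module.owl"

g = Graph()
g.parse(ONTOLOGY_FILE, format="xml")  # OWL serialized as RDF/XML

# List every named class together with its label, if one is provided.
for cls in g.subjects(RDF.type, OWL.Class):
    label = g.value(cls, RDFS.label)
    print(cls, "-", label)
```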
Journal Article
The perspectives of computational chemistry modeling
2012
Issue Title: Special Issue: The next 25 years: Commemorating the 25th anniversary of the Journal of Computer-Aided Molecular Design
On-line tools for computational chemistry modeling will be increasingly used in the future. This will bring advantages for both authors and readers.
Journal Article