Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
41 result(s) for "findability"
Enabling reusability of plant phenomic datasets with MIAPPE 1.1
by Athanasiadis, Ioannis N.; Scholz, Uwe; Arnaud, Elizabeth
in Agricultural sciences; Agronomy; Coverage
2020
• Enabling data reuse and knowledge discovery is increasingly critical in modern science, and requires an effort towards standardising data publication practices. This is particularly challenging in the plant phenotyping domain, due to its complexity and heterogeneity.
• We have produced the MIAPPE 1.1 release, which enhances the existing MIAPPE standard in coverage, to support perennial plants, in structure, through an explicit data model, and in clarity, through definitions and examples.
• We evaluated MIAPPE 1.1 by using it to express several heterogeneous phenotyping experiments in a range of different formats, to demonstrate its applicability and the interoperability between the various implementations. Furthermore, the extended coverage is demonstrated by the fact that one of the datasets could not have been described under MIAPPE 1.0.
• MIAPPE 1.1 marks a major step towards enabling plant phenotyping data reusability, thanks to its extended coverage, and especially the formalisation of its data model, which facilitates its implementation in different formats. Community feedback has been critical to this development, and will be a key part of ensuring adoption of the standard.
Journal Article
Standardizing Survey Data Collection to Enhance Reproducibility: Development and Comparative Evaluation of the ReproSchema Ecosystem
by Linkersdörfer, Janosch; Chen, Yibei; Kennedy, David
in Analysis; Data Collection - methods; Data Collection - standards
2025
Inconsistencies in survey-based (eg, questionnaire) data collection across biomedical, clinical, behavioral, and social sciences pose challenges to research reproducibility. ReproSchema is an ecosystem that standardizes survey design and facilitates reproducible data collection through a schema-centric framework, a library of reusable assessments, and computational tools for validation and conversion. Unlike conventional survey platforms that primarily offer graphical user interface-based survey creation, ReproSchema provides a structured, modular approach for defining and managing survey components, enabling interoperability and adaptability across diverse research settings.
This study examines ReproSchema's role in enhancing research reproducibility and reliability. We introduce its conceptual and practical foundations, compare it against 12 platforms to assess its effectiveness in addressing inconsistencies in data collection, and demonstrate its application through 3 use cases: standardizing required mental health survey common data elements, tracking changes in longitudinal data collection, and creating interactive checklists for neuroimaging research.
We describe ReproSchema's core components, including its schema-based design; reusable assessment library with >90 assessments; and tools to validate data, convert survey formats (eg, REDCap [Research Electronic Data Capture] and Fast Healthcare Interoperability Resources), and build protocols. We compared 12 platforms (Center for Expanded Data Annotation and Retrieval, formr, KoboToolbox, Longitudinal Online Research and Imaging System, MindLogger, OpenClinica, Pavlovia, PsyToolkit, Qualtrics, REDCap, SurveyCTO, and SurveyMonkey) against 14 findability, accessibility, interoperability, and reusability (FAIR) principles and assessed their support of 8 survey functionalities (eg, multilingual support and automated scoring). Finally, we applied ReproSchema to 3 use cases (NIMH-Minimal, the Adolescent Brain Cognitive Development and HEALthy Brain and Child Development Studies, and the Committee on Best Practices in Data Analysis and Sharing Checklist) to illustrate ReproSchema's versatility.
ReproSchema provides a structured framework for standardizing survey-based data collection while ensuring compatibility with existing survey tools. Our comparison results showed that ReproSchema met 14 of 14 FAIR criteria and supported 6 of 8 key survey functionalities: provision of standardized assessments, multilingual support, multimedia integration, data validation, advanced branching logic, and automated scoring. Three use cases illustrating ReproSchema's flexibility include standardizing essential mental health assessments (NIMH-Minimal), systematically tracking changes in longitudinal studies (Adolescent Brain Cognitive Development and HEALthy Brain and Child Development), and converting a 71-page neuroimaging best practices guide into an interactive checklist (Committee on Best Practices in Data Analysis and Sharing).
ReproSchema enhances reproducibility by structuring survey-based data collection through a structured, schema-driven approach. It integrates version control, manages metadata, and ensures interoperability, maintaining consistency across studies and compatibility with common survey tools. Planned developments, including ontology mappings and semantic search, will broaden its use, supporting transparent, scalable, and reproducible research across disciplines.
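As a rough illustration of the modular, schema-centric survey definition this abstract describes, the sketch below models a protocol that composes reusable assessments and items using plain Python dataclasses. The class and field names here are illustrative assumptions for this listing only, not ReproSchema's actual schema vocabulary; consult the ReproSchema documentation for the real format.

```python
# Minimal sketch of a modular, schema-style survey definition.
# Class and field names are illustrative assumptions, not ReproSchema's
# actual vocabulary.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    """A single question with a constrained response type."""
    item_id: str
    question: str
    response_type: str = "text"   # e.g. "text", "integer", "choice"
    choices: List[str] = field(default_factory=list)

@dataclass
class Assessment:
    """A reusable bundle of items (one survey instrument)."""
    assessment_id: str
    items: List[Item]

@dataclass
class Protocol:
    """A study protocol that composes reusable assessments."""
    protocol_id: str
    version: str
    assessments: List[Assessment]

# The same assessment definition can be reused across protocols, which is
# the property that supports consistency across studies.
mood_screen = Assessment(
    assessment_id="mood-screen",
    items=[Item("q1", "How would you rate your mood today?",
                response_type="choice",
                choices=["Poor", "Fair", "Good"])],
)
baseline = Protocol(protocol_id="study-baseline", version="1.0.0",
                    assessments=[mood_screen])
print(len(baseline.assessments[0].items))  # 1
```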
Journal Article
Encontro e descoberta da informação em ambientes digitais [Encountering and discovering information in digital environments]
by Salcedo, Diego A.; Bezerra, Vinícius Cabral Accioly
in findability; interactive epistemography; iterative representation
2020
This paper presents findability and serendipity as fundamental concepts for the retrieval and use of information in the post-custodial paradigm of Information Science. The information explosion in digital environments, alongside physical ones, creates a high probability of information anxiety, mainly when the desired or imagined information is not easily retrieved. Findability and serendipity are studied in an interdisciplinary way within the post-custodial paradigm of Information Science, with a focus on the use of information. Through exploratory bibliographical research, this article presents ways of organizing information in digital environments so that researchers can retrieve information and feel like participants in the classification community, facilitating retrieval for themselves and for other researchers. Therefore, iterative representation and interactive epistemography are proposed as conceptual tools for the retrieval of shared knowledge, in order to improve the active participation of information researchers and reduce information anxiety.
Journal Article
Information Architecture Strategies in the Classroom: How Do Increasingly Complex Digital Ecosystems in Higher Education Shape the Contours of Instructor-Student Communication?
2024
The proliferation of digital software is an increasingly accepted part of everyday life in higher education in the United States. While this software affords some opportunities, it can create confusing experiences for students as well. In this paper, I ask how increasingly complex digital ecosystems in higher education might shape the contours of instructor-student communication. To answer this question, I conducted an exploratory case study in the form of an online survey (n=83) and subsequent interviews (n=18) with user experience (UX) design students at a large public university in the southeastern United States. The research showed that students experienced confusion about digital software protocols in their classes and about how those protocols varied from class to class, an inability to remember when and how to communicate with instructors outside of class, uncertainty about where to locate information, and a preference for messaging applications over email. Research results suggest that instructor-student communication in higher education can be productively viewed through the lens of information architecture. In doing so, I argue for the need for instructors to implement strong information architecture strategies that help make sense of information in increasingly complex academic ecosystems.
Journal Article
Unique, Persistent, Resolvable: Identifiers as the Foundation of FAIR
by Clark, Tim; Wimalaratne, Sarala M.; Soiland-Reyes, Stian
in data infrastructures; Digital Object Identifier; FAIR data
2020
The FAIR principles describe characteristics intended to support access to and reuse of digital artifacts in the scientific research ecosystem. Persistent, globally unique identifiers, resolvable on the Web, and associated with a set of additional descriptive metadata, are foundational to FAIR data. Here we describe some basic principles and exemplars for their design, use and orchestration with other system elements to achieve FAIRness for digital research objects.
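As a concrete illustration of "resolvable on the Web and associated with descriptive metadata", the sketch below resolves a DOI through the doi.org resolver and requests machine-readable metadata via HTTP content negotiation. The DOI value is a placeholder (an assumption, not taken from this record); any registered DOI would work, and the fields returned depend on the registration agency.

```python
# Sketch: resolve a persistent identifier (a DOI) and fetch descriptive
# metadata via content negotiation on the doi.org resolver.
import json
import urllib.request

DOI = "10.1000/example"          # placeholder identifier (assumption)
url = f"https://doi.org/{DOI}"

req = urllib.request.Request(
    url,
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
)
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        record = json.load(resp)
    # Typical CSL-JSON fields include "title", "author", and "DOI".
    print(record.get("title"), record.get("DOI"))
except Exception as exc:         # unregistered DOIs return an HTTP error
    print(f"Could not resolve {DOI}: {exc}")
```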
Journal Article
Maximizing Information Yield From Pheromone-Baited Monitoring Traps: Estimating Plume Reach, Trapping Radius, and Absolute Density of Cydia pomonella (Lepidoptera: Tortricidae) in Michigan Apple
by Brunner, J. F.; Miller, J. R.; Schenker, J. H.
in Agricultural practices; Agricultural production; Animals
2017
Novel methods of data analysis were used to interpret codling moth (Cydia pomonella) catch data from central-trap, multiple-release experiments using a standard codlemone-baited monitoring trap in commercial apple orchards not under mating disruption. The main objectives were to determine consistency and reliability for measures of: 1) the trapping radius, composed of the trap's behaviorally effective plume reach and the maximum dispersive distance of a responder population; and 2) the proportion of the population present in the trapping area that is caught. Two moth release designs were used: 1) moth releases at regular intervals in the four cardinal directions, and 2) evenly distributed moth releases across entire approximately 18-ha orchard blocks using both high and low codling moth populations. For both release designs, at high populations, the mean proportion catch was 0.01, and for the even release of low populations, that value was approximately 0.02. Mean maximum dispersive distance for released codling moth males was approximately 260 m. Behaviorally effective plume reach for the standard codling moth trap was < 5 m, and total trapping area for a single trap was approximately 21 ha. These estimates were consistent across three growing seasons and are supported by extraordinarily high replication for this type of field experiment. Knowing the trapping area and mean proportion caught, catch number per single monitoring trap can be translated into absolute pest density using the equation: males per trapping area = catch per trapping area/proportion caught. Thus, catches of 1, 3, 10, and 30 codling moth males per trap translate to approximately 5, 14, 48, and 143 males/ha, respectively, and reflect equal densities of females, because the codling moth sex ratio is 1:1. Combined with life-table data on codling moth fecundity and mortality, along with data on crop yield per trapping area, this fundamental knowledge of how to interpret catch numbers will enable pest managers to make considerably more precise projections of damage and therefore more precise and reliable decisions on whether insecticide applications are justified. The principles and methods established here for estimating absolute codling moth density may be broadly applicable to pests generally and thereby could set a new standard for integrated pest management decisions based on trapping.
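Since the abstract reduces density estimation to a single equation (males per trapping area = catch per trapping area / proportion caught), here is a small worked sketch using the values reported above (proportion caught of about 0.01 and a trapping area of about 21 ha); the function name and rounding are my own, not part of the study.

```python
# Worked sketch of the density equation from the abstract:
#   males per trapping area = catch per trapping area / proportion caught
# using the reported estimates: proportion caught ~0.01, trapping area ~21 ha.
def males_per_hectare(catch: int,
                      proportion_caught: float = 0.01,
                      trapping_area_ha: float = 21.0) -> float:
    """Translate a single trap's catch into absolute density (males/ha)."""
    males_in_trapping_area = catch / proportion_caught
    return males_in_trapping_area / trapping_area_ha

for catch in (1, 3, 10, 30):
    print(catch, round(males_per_hectare(catch)))
# Matches the abstract's approximate figures: 5, 14, 48, and 143 males/ha.
```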
Journal Article
EFSA statement on the interpretation of FAIR principles for mechanistic effect models in the regulatory environmental risk assessment of pesticides
by Linguadoca, Alberto; Focks, Andreas; Gibin, Davide
in accessibility; Data management; Documentation
2025
Among its objectives, the European Food Safety Authority working group on effect models in environmental risk assessment has worked towards the development and maintenance of a framework to facilitate the assessment of effect models within the scope of the European Food Safety Authority's activities. To fulfil this objective, the working group tasked itself with laying the groundwork for interpreting the FAIR (i.e. Findability, Accessibility, Interoperability, Reusability) guiding principles – originally developed for scientific data management and stewardship – for mechanistic effect models in the regulatory environmental risk assessment of pesticides. The working group identified three main areas in which the FAIR guiding principles may be applied, namely: (I) the data underlying a particular model (or model use), (II) the computer model and (III) the model assessment. The document explores existing resources that may support the implementation of the FAIR guiding principles and provides a specific interpretation of the principles for each of the areas relevant to mechanistic effect models. Important challenges and potential blockers are identified, but the working group argues that working towards more ‘FAIRness’ would ultimately lead to a more efficient review process and better integration of mechanistic effect models in the regulatory environmental risk assessment of pesticides, with benefits for all stakeholders. While the aim of the present exercise is to stimulate discussion within the modelling community and avoid being overly prescriptive, recommendations for future action are given to address some of the challenges.
Journal Article
Mapping materials, drawing Europe: Sample and data quality and accessibility in the biobanking infrastructure BBMRI-ERIC
2025
The European biobanking infrastructure BBMRI-ERIC was established in the context of European Union science policy to facilitate collaboration between European repositories of biological samples and associated data, or so-called biobanks. To allow the exchange of research materials, the infrastructure has created several platforms for quality management and making materials visible and accessible. In this article, I develop the metaphor of mapping to explore the workings of these platforms. I consider maps as devices for giving directions, as representations of science, and as (symbolic) outlines of the territory of Europe to explore how BBMRI-ERIC's activities in quality management and IT-platforms for making materials visible and accessible (aim to) facilitate circulation. Across the different platforms, efforts to harmonize and integrate European biobanking activities confront both scientific and European tendencies toward fragmentation. The map of a European biobanking community drawn by BBMRI-ERIC consequently both makes and unmakes Europe, sketching a Europe of both harmonization and fragmentation. In laying bare these paradoxical effects of the construction of a European biobanking infrastructure, using mapping as a lens shows how, rather than realizing its much-cited role as a “facilitator” for research, BBMRI-ERIC embodies key tensions at the center of efforts to advance “data-based” (big) science.
Journal Article
Limits of social mobilization
2013
The Internet and social media have enabled the mobilization of large crowds to achieve time-critical feats, ranging from mapping crises in real time, to organizing mass rallies, to conducting search-and-rescue operations over large geographies. Despite significant success, selection bias may lead to inflated expectations of the efficacy of social mobilization for these tasks. What are the limits of social mobilization, and how reliable is it in operating at these limits? We build on recent results on the spatiotemporal structure of social and information networks to elucidate the constraints they pose on social mobilization. We use the DARPA Network Challenge as our working scenario, in which social media were used to locate 10 balloons across the United States. We conduct high-resolution simulations for referral-based crowdsourcing and obtain a statistical characterization of the population recruited, geography covered, and time to completion. Our results demonstrate that the outcome is plausible without the presence of mass media but lies at the limit of what time-critical social mobilization can achieve. Success relies critically on highly connected individuals willing to mobilize people in distant locations, overcoming the local trapping of diffusion in highly dense areas. However, even under these highly favorable conditions, the risk of unsuccessful search remains significant. These findings have implications for the design of better incentive schemes for social mobilization. They also call for caution in estimating the reliability of this capability.
Journal Article