Catalogue Search | MBRL
Explore the vast range of titles available.
651 result(s) for "survey metadata"
HBIM for Conservation: A New Proposal for Information Modeling
2019
Thanks to its capability of archiving and organizing all the information about a building, HBIM (Historical Building Information Modeling) is considered a promising resource for the planned conservation of historical assets. However, its usage remains limited and scarcely adopted by the parties in charge of conservation, mainly because of its rather complex 3D modeling requirements and a lack of shared regulatory references and guidelines as far as semantic data are concerned. In this study, we developed an HBIM methodology to support the documentation, management, and planned conservation of historic buildings, with particular focus on non-geometric information: organized and coordinated storage and management of historical data, easy analysis and query, time management, flexibility, user-friendliness, and information sharing. The system is based on a standalone, purpose-designed database linked to the 3D model of the asset, built with BIM software, and it is highly adaptable to different assets. The database is accessible both through a desktop application, which acts as a plug-in for the BIM software, and through a web interface, implemented to ensure data sharing and easy usability by skilled and unskilled users. The paper describes the implemented system in detail, covering the semantic decomposition of the building, the database design, and the system architecture and capabilities. Two case studies, the Cathedral of Parma and the Ducal Palace of Mantua (Italy), are then presented to show the results of the system’s application.
Journal Article
Application of Deep Learning Approach for the Classification of Buildings’ Degradation State in a BIM Methodology
by
Rodrigues, Hugo
,
Rocha, Eugénio
,
Matos, Raquel
in
Algorithms
,
Architecture
,
Artificial intelligence
2022
Currently, there is extensive research focused on automatic strategies for the segmentation and classification of 3D point clouds, which can accelerate the study of a landmark and integrate it with heterogeneous data and attributes, useful to facilitate the digital management of architectural heritage data. In this work, an automated image-based survey was carried out using a Region-Based Convolutional Neural Network. The training phase was executed by providing examples of images with the anomalies to be detected. At the same time, a laser scanning process was conducted to obtain a point cloud, which acts as a reference for the BIM process. In a final step, a process of projecting information from the images onto the BIM recreates the pathology shapes on the model’s objects, which generates a decision support system for the built environment. The innovation of this research concerns the development of a workflow in which it is possible to automate the recognition and classification of defects in historical buildings, and finally to integrate this geometric and numerical information with a BIM methodology, obtaining a representation and quantification of the information adapted to the facility management process. The use of innovative techniques such as artificial intelligence algorithms and different plug-ins is the main strength of this project.
Journal Article
Consolidation and Standardization of Survey Operations at a Decentralized Federal Statistical Agency
2013
With tighter federal budgets on the horizon, the National Agricultural Statistics Service decided in 2009 to pursue three architectural transformations, primarily to provide savings in staff resource costs by enabling the centralization or regionalization of survey operations. The transformational initiatives involved: (1) centralizing and consolidating network services from 48 locations; (2) standardizing survey metadata and integrating survey data into easily accessible databases across all surveys; and (3) consolidating and generalizing survey applications for the agency’s diverse survey program. The three architectural transformations will be described as well as initial efforts to consolidate and standardize survey operations across the agency.
Journal Article
A RESTORATION ORIENTED HBIM SYSTEM FOR CULTURAL HERITAGE DOCUMENTATION: THE CASE STUDY OF PARMA CATHEDRAL
by
Roncella, R.
,
Bruno, N.
2018
The need to safeguard and preserve Cultural Heritage (CH) is increasing, and especially in Italy, where the number of historical buildings is considerable, having efficient and standardized processes of CH management and conservation becomes strategic. At present, there are no tools capable of fulfilling all the specific functions required by Cultural Heritage documentation and, due to the complexity of historical assets, there are no solutions as flexible and customizable as CH-specific needs require. Nevertheless, BIM methodology can represent the most effective solution, on condition that proper methodologies, tools and functions are made available. The paper describes ongoing research on the implementation of a Historical BIM system for the Parma cathedral, aimed at its maintenance, conservation and restoration. Its main goal was to give a concrete answer to the lack of specific tools required by Cultural Heritage documentation: organized and coordinated storage and management of historical data, easy analysis and query, time management, 3D modelling of irregular shapes, flexibility, user-friendliness, etc. The paper will describe the project and the implemented methodology, focusing mainly on the survey and modelling phases. In describing the methodology, critical issues in the creation of an HBIM will be highlighted, trying to outline a workflow applicable also in other similar contexts.
Journal Article
Digital cultural heritage standards: from silo to semantic web
2022
This paper is a survey of standards being used in the domain of digital cultural heritage with focus on the Metadata Encoding and Transmission Standard (METS) created by the Library of Congress in the United States of America. The process of digitization of cultural heritage requires silo breaking in a number of areas—one area is that of academic disciplines to enable the performance of rich interdisciplinary work. This lays the foundation for breaking the second form of silo: the silos of knowledge, both traditional and born-digital, held in individual institutions such as galleries, libraries, archives and museums. Disciplinary silo breaking is the key to unlocking these institutional knowledge silos. Interdisciplinary teams, such as developers and librarians, work together to make the data accessible as open data on the “semantic web”. Description logic is the area of mathematics which underpins many ontology-building applications today. Creating these ontologies requires a human–machine symbiosis. Currently in the cultural heritage domain, the institutions’ role is that of provider of this open data to the national aggregator, which in turn can make the data available to the trans-European aggregator known as Europeana. Current ingests to the aggregators are in the form of machine-readable cataloguing metadata, which is limited in the richness it provides to disparate object descriptions. METS can provide this richness.
Journal Article
Mental Health Analysis in Social Media Posts: A Survey
2023
The surge in internet use to express personal thoughts and beliefs makes it increasingly feasible for the social NLP research community to find and validate associations between social media posts and mental health status. Cross-sectional and longitudinal studies of social media data bring to the fore the importance of real-time responsible AI models for mental health analysis. Aiming to classify the research directions for social computing and tracking advances in the development of machine learning (ML) and deep learning (DL) based models, we propose a comprehensive survey on quantifying mental health on social media. We compose a taxonomy for mental healthcare and highlight recent attempts in examining social well-being with personal writings on social media. We define all the possible research directions for mental healthcare and investigate a thread of handling online social media data for stress, depression and suicide detection for this work. The key features of this manuscript are (i) feature extraction and classification, (ii) recent advancements in AI models, (iii) publicly available datasets, (iv) new frontiers and future research directions. We compile this information to introduce young researchers and academic practitioners to the field of computational intelligence for mental health analysis on social media. In this manuscript, we carry out a quantitative synthesis and a qualitative review with a corpus of over 92 potential research articles. In this context, we release the collection of existing work on suicide detection in an easily accessible and updatable repository: https://github.com/drmuskangarg/mentalhealthcare.
Journal Article
Data sharing, management, use, and reuse: Practices and perceptions of scientists worldwide
by
Baird, Lynn
,
Olendorf, Robert
,
Borycz, Josh
in
Academic libraries
,
Academic publications
,
Adult
2020
With data becoming a centerpiece of modern scientific discovery, data sharing by scientists is now a crucial element of scientific progress. This article aims to provide an in-depth examination of the practices and perceptions of data management, including data storage, data sharing, and data use and reuse by scientists around the world.
The Usability and Assessment Working Group of DataONE, an NSF-funded environmental cyberinfrastructure project, distributed a survey to a multinational and multidisciplinary sample of scientific researchers in a two-wave approach in 2017–2018. We focused our analysis on examining the differences across age groups, sub-disciplines of science, and sectors of employment.
Most respondents displayed what we describe as high- and mediocre-risk data practices by storing their data on their personal computers, departmental servers, or USB drives. Respondents appeared to be satisfied with short-term storage solutions; however, only half of them are satisfied with available mechanisms for storing data beyond the life of the process. Data sharing and data reuse were viewed positively: over 85% of respondents admitted they would be willing to share their data with others and said they would use data collected by others if it could be easily accessed. A vast majority of respondents felt that the lack of access to data generated by other researchers or institutions was a major impediment to progress in science at large, yet only about half thought that it restricted their own ability to answer scientific questions. Although attitudes towards data sharing and data use and reuse are mostly positive, practice does not always support data storage, sharing, and future reuse. Assistance through data managers or data librarians, readily available data repositories for both long-term and short-term storage, and educational programs for both awareness and to help engender good data practices are clearly needed.
Journal Article
Factors influencing healthcare provider respondent fatigue answering a globally administered in-app survey
Respondent fatigue, also known as survey fatigue, is a common problem in the collection of survey data. Factors that are known to influence respondent fatigue include survey length, survey topic, question complexity, and open-ended question type. There is a great deal of interest in understanding the drivers of physician survey responsiveness due to the value of information received from these practitioners. With the recent explosion of mobile smartphone technology, it has been possible to obtain survey data from users of mobile applications (apps) on a question-by-question basis. The author obtained basic demographic survey data as well as survey data related to an anesthesiology-specific drug called sugammadex and leveraged nonresponse rates to examine factors that influenced respondent fatigue.
Primary data were collected between December 2015 and February 2017. Surveys and in-app analytics were collected from global users of a mobile anesthesia calculator app. Key independent variables were user country, healthcare provider role, rating of importance of the app to personal practice, length of time in practice, and frequency of app use. Key dependent variable was the metric of respondent fatigue.
Provider role and World Bank country income level were predictive of the rate of respondent fatigue for this in-app survey. Importance of the app to the provider and length of time in practice were moderately associated with fatigue. Frequency of app use was not associated with fatigue. This study focused on a survey with a topic closely related to the subject area of the app. Respondent fatigue rates will likely change dramatically if the topic does not align closely.
Although apps may serve as powerful platforms for data collection, responses rates to in-app surveys may differ on the basis of important respondent characteristics. Studies should be carefully designed to mitigate fatigue as well as powered with the understanding of the respondent characteristics that may have higher rates of respondent fatigue.
Journal Article
Volunteer-run cameras as distributed sensors for macrosystem mammal research
by
McShea, William J
,
Costello, Robert
,
Forrester, Tavis
in
Biomedical and Life Sciences
,
camera trapping
,
Cameras
2016
CONTEXT: Variation in the abundance of animals affects a broad range of ecosystem processes. However, patterns of abundance for large mammals, and the effects of human disturbances on them are not well understood because we lack data at the appropriate scales. We created eMammal to effectively camera-trap at landscape scale. Camera traps detect animals with infrared sensors that trigger the camera to take a photo, a sequence of photos, or a video clip. Through photography, camera traps create records of wildlife from known locations and dates, and can be set in arrays to quantify animal distribution across a landscape. This allows linkage to other distributed networks of ecological data. OBJECTIVES: Through the eMammal program, we demonstrate that volunteer-based camera trapping can meet landscape scale spatial data needs, while also engaging the public in nature and science. We assert that camera surveys can be effectively scaled to a macrosystem level through citizen science, but only after solving challenges of data and volunteer management. METHOD: We present study design and technology solutions for landscape scale camera trapping to effectively recruit, train and retain volunteers while providing efficient data workflows and quality control. RESULTS: Our initial work with > 400 volunteers across six contiguous U.S. states has proven that citizen scientists can deploy these camera traps properly (94% of volunteer deployments correct) and tag the photos accurately for most species (67–100%). Using these tools we processed 2.6 million images over a 2-year period. The eMammal cyberinfrastructure made it possible to process far more data than any participating researcher had previously achieved. The core components include an upload application using a standard metadata format, an expert review tool to ensure data quality, and a curated data repository.
CONCLUSION: Macrosystem scale monitoring of wildlife by volunteer-run camera traps can produce the data needed to address questions concerning broadly distributed mammals, and also help to raise public awareness on the science of conservation. This scale of data will allow for linkage of large mammals to ecosystem processes now measured through national programs.
Journal Article
Analysis Ready Data: Enabling Analysis of the Landsat Archive
by
Roy, David P.
,
Dwyer, John L.
,
Zhang, Hankui K.
in
Algorithms
,
analysis ready data
,
Archives & records
2018
Data that have been processed to allow analysis with a minimum of additional user effort are often referred to as Analysis Ready Data (ARD). The ability to perform large scale Landsat analysis relies on the ability to access observations that are geometrically and radiometrically consistent, and have had non-target features (clouds) and poor quality observations flagged so that they can be excluded. The United States Geological Survey (USGS) has processed all of the Landsat 4 and 5 Thematic Mapper (TM), Landsat 7 Enhanced Thematic Mapper Plus (ETM+), Landsat 8 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) archive over the conterminous United States (CONUS), Alaska, and Hawaii, into Landsat ARD. The ARD are available to significantly reduce the burden of pre-processing on users of Landsat data. Provision of pre-prepared ARD is intended to make it easier for users to produce Landsat-based maps of land cover and land-cover change and other derived geophysical and biophysical products. The ARD are provided as tiled, georegistered, top of atmosphere and atmospherically corrected products defined in a common equal area projection, accompanied by spatially explicit quality assessment information, and appropriate metadata to enable further processing while retaining traceability of data provenance.
Journal Article