61 result(s) for "López-de-Ipiña, Diego"
A cascading model for nudging employees towards energy-efficient behaviour in tertiary buildings
Energy-related occupant behaviour in the built environment is considered crucial when aiming towards Energy Efficiency (EE), especially given that people are most often unaware of, and disengaged from, the impacts of their energy-consuming habits. Various approaches have been employed to influence such energy-related behaviour, the most common being the provision of recommendations for more energy-efficient actions. In this work, the authors extend prior research findings in an effort to automatically identify the optimal Persuasion Strategy (PS), out of ten pre-selected by experts, tailored to a user (i.e., the context in which to trigger a message, allocate a task, or provide cues to enact an action). This process aims to successfully influence employees’ decisions about EE in tertiary buildings. The framework presented in this study utilizes cultural traits and socio-economic information. It is based on one of the largest survey datasets on this subject, comprising responses from 743 users collected through an online survey in four countries across Europe (Spain, Greece, Austria and the UK). The resulting framework was designed as a cascade of sequential data-driven prediction models. The first step employs a particular case of matrix factorisation to rank the ten PSs in terms of preference for each user, followed by a random forest regression model that uses these rankings as a filtering step to compute scores for each PS and conclude with the best selection for each user. An ex-post assessment of the individual steps and the combined ensemble revealed increased accuracy over baseline non-personalised methods. Furthermore, the analysis also sheds light on important user characteristics to take into account for future interventions related to EE and the most effective persuasion strategies to adopt based on user data.
Discussion and implications of the reported results are provided, with reference to the flourishing field of personalisation to motivate pro-environmental behaviour change in tertiary buildings.
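The cascade described in this abstract (a matrix-factorisation ranker followed by a random forest regressor over the top-ranked strategies) can be sketched roughly as follows. This is a minimal illustrative stand-in, not the paper's implementation: the strategy names, feature vectors, and both scoring functions are assumptions, with the first stage reduced to a dot-product ranker and the second to a trivial re-scorer.

```python
# Minimal sketch of the two-step cascade: step 1 ranks all strategies per
# user (stand-in for matrix factorisation), step 2 re-scores only the
# top-k shortlist (stand-in for the random forest regressor).

def rank_strategies(user_features, strategy_vectors):
    """Step 1: rank strategies by a dot-product preference score."""
    scores = {
        name: sum(u * v for u, v in zip(user_features, vec))
        for name, vec in strategy_vectors.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

def cascade_select(user_features, strategy_vectors, top_k=3):
    """Step 2: filter to the top-k ranked strategies, re-score them with a
    second model, and return the best selection for this user."""
    shortlist = rank_strategies(user_features, strategy_vectors)[:top_k]
    def second_model(name):  # trivial stand-in for the random forest
        return sum(strategy_vectors[name]) + len(name) * 0.01
    return max(shortlist, key=second_model)

# Hypothetical example: 3 user traits, 4 strategies with trait affinities.
user = [0.9, 0.1, 0.5]
strategies = {
    "reminder": [0.8, 0.2, 0.1],
    "social_comparison": [0.1, 0.9, 0.3],
    "goal_setting": [0.7, 0.4, 0.6],
    "reward": [0.3, 0.3, 0.9],
}
print(cascade_select(user, strategies, top_k=2))
```

The point of the cascade is that the cheap first stage prunes the candidate set, so the second, more expensive model only scores a shortlist per user.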
Development of Continuous Assessment of Muscle Quality and Frailty in Older Patients Using Multiparametric Combinations of Ultrasound and Blood Biomarkers: Protocol for the ECOFRAIL Study
Frailty resulting from the loss of muscle quality can potentially be delayed through early detection and physical exercise interventions. There is a demand for cost-effective tools for the objective evaluation of muscle quality, in both cross-sectional and longitudinal assessments. Literature suggests that quantitative analysis of ultrasound data captures morphometric, compositional, and microstructural muscle properties, while biological assays derived from blood samples are associated with functional information. This study aims to assess multiparametric combinations of ultrasound and blood-based biomarkers to offer a cross-sectional evaluation of the patient frailty phenotype and to track changes in muscle quality associated with supervised exercise programs. This prospective observational multicenter study will include patients aged 70 years and older who are capable of providing informed consent. We aim to recruit 100 patients from hospital environments and 100 from primary care facilities. Each patient will undergo at least two examinations (baseline and follow-up), totaling a minimum of 400 examinations. In hospital environments, 50 patients will be measured before/after a 16-week individualized and supervised exercise program, while another 50 patients will be followed up after the same period without intervention. Primary care patients will undergo a 1-year follow-up evaluation. The primary objective is to compare cross-sectional evaluations of physical performance, functional capacity, body composition, and derived scales of sarcopenia and frailty with biomarker combinations obtained from muscle ultrasound and blood-based assays. We will analyze ultrasound raw data obtained with a point-of-care device, along with a set of biomarkers previously associated with frailty, using quantitative real-time polymerase chain reaction and enzyme-linked immunosorbent assay. 
Additionally, we will examine the sensitivity of these biomarkers to detect short-term muscle quality changes and functional improvement after a supervised exercise intervention compared with usual care. At the time of manuscript submission, the enrollment of volunteers is ongoing. Recruitment started on March 1, 2022, and ends on June 30, 2024. The outlined study protocol will integrate portable technologies, using quantitative muscle ultrasound and blood biomarkers, to facilitate an objective cross-sectional assessment of muscle quality in both hospital and primary care settings. The primary objective is to generate data that can be used to explore associations between biomarker combinations and the cross-sectional clinical assessment of frailty and sarcopenia. Additionally, the study aims to investigate musculoskeletal changes following multicomponent physical exercise programs. ClinicalTrials.gov NCT05294757; https://clinicaltrials.gov/ct2/show/NCT05294757. DERR1-10.2196/50325.
An Image-Based Sensor System for Low-Cost Airborne Particle Detection in Citizen Science Air Quality Monitoring
Air pollution poses significant public health risks, necessitating accurate and efficient monitoring of particulate matter (PM). These particles may be released from natural sources like trees and vegetation, as well as from anthropogenic (human-made) sources, including industrial activities and motor vehicle emissions. Therefore, measuring PM concentrations is paramount to understanding people’s exposure levels to pollutants. This paper introduces a novel image processing technique utilizing photographs of Do-it-Yourself (DiY) sensors for the detection and quantification of PM10 particles, enhancing community involvement and data collection accuracy in Citizen Science (CS) projects. A synthetic data generation algorithm was developed to overcome the challenge of data scarcity commonly associated with citizen-based data collection and to validate the image processing technique. This algorithm generates images by precisely defining parameters such as image resolution, image dimension, and PM airborne particle density. To ensure these synthetic images mimic real-world conditions, variations like Gaussian noise, focus blur, and white balance adjustments, alone and in combination, were introduced, simulating the environmental and technical factors affecting image quality in typical smartphone digital cameras. The detection algorithm for PM10 particles demonstrates robust performance across varying levels of noise, maintaining effectiveness in realistic mobile imaging conditions. Therefore, the methodology retains sufficient accuracy, suggesting its practical applicability for environmental monitoring in diverse real-world conditions using mobile devices.
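The synthetic-image idea in this abstract can be sketched in a few lines: render dark "particles" at a chosen density onto a bright background grid, then add Gaussian noise to mimic smartphone camera conditions. The parameter names, pixel values, and the naive threshold detector below are illustrative assumptions, not the paper's actual algorithm.

```python
# Sketch: generate a noisy synthetic "particle" image, then count dark
# pixels as a naive stand-in for the detection step.
import random

def synth_image(width, height, particle_density, noise_sigma, seed=0):
    """Return a grayscale image (list of rows, values 0-255) with randomly
    placed dark pixels ("particles") plus additive Gaussian noise."""
    rng = random.Random(seed)
    n_particles = int(width * height * particle_density)
    img = [[230.0] * width for _ in range(height)]  # bright background
    for _ in range(n_particles):
        x, y = rng.randrange(width), rng.randrange(height)
        img[y][x] = 30.0  # dark particle pixel
    # Additive Gaussian noise, clipped to the valid 0-255 range.
    for y in range(height):
        for x in range(width):
            img[y][x] = min(255.0, max(0.0, img[y][x] + rng.gauss(0, noise_sigma)))
    return img

def count_dark_pixels(img, threshold=128):
    """Naive detector: count pixels darker than the threshold."""
    return sum(1 for row in img for px in row if px < threshold)

img = synth_image(64, 64, particle_density=0.01, noise_sigma=10)
print(count_dark_pixels(img))
```

Because the density is specified explicitly, the detector's count can be checked against the known number of rendered particles, which is exactly the validation role the synthetic data plays in the paper.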
TRIP: A Low-Cost Vision-Based Location System for Ubiquitous Computing
Sentient Computing provides computers with perception so that they can react and provide assistance to user activities. Physical spaces are made sentient when they are wired with networks of sensors capturing context data, which is communicated to computing devices spread through the environment. These devices interpret the information provided and react by performing the actions expected by the user. Among the types of context information provided by sensors, location has proven to be especially useful. Since location is an important context that changes whenever the user moves, a reliable location-tracking system is critical to many sentient applications. However, the sensor technologies used in indoor location tracking are expensive and complex to deploy, configure and maintain. These factors have prevented a wider adoption of Sentient Computing in our living and working spaces. This paper presents TRIP, a low-cost and easily deployable vision-based sensor technology addressing these issues. TRIP employs off-the-shelf hardware (low-cost CCD cameras and PCs) and printable 2-D circular markers for entity identification and location. The usability of TRIP is illustrated through the implementation of several sentient applications.
Analyzing Particularities of Sensor Datasets for Supporting Data Understanding and Preparation
Data scientists spend much time on data cleaning tasks, and this is especially important when dealing with data gathered from sensors, as finding failures is not unusual (there is an abundance of research on anomaly detection in sensor data). This work analyzes several aspects of the data generated by different sensor types to understand particularities in the data, linking them with existing data mining methodologies. Using data from different sources, this work analyzes how the type of sensor used and its measurement units have an important impact on basic statistics such as variance and mean, because of the statistical distributions of the datasets. The work also analyzes the behavior of outliers, how to detect them, and how they affect the equivalence of sensors, as equivalence is used in many solutions for identifying anomalies. Based on the previous results, the article presents guidance on how to deal with data coming from sensors, in order to understand the characteristics of sensor datasets, and proposes a parallelized implementation. Finally, the article shows that the proposed decision-making processes work well with a new type of sensor and that parallelizing with several cores enables calculations to be executed up to four times faster.
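One common outlier-detection step of the kind this abstract discusses is the interquartile-range (IQR) rule. The sketch below is one illustrative choice under simple assumptions (linear-interpolation quantiles, the usual k = 1.5 fence), not the specific method evaluated in the paper; the temperature readings are hypothetical.

```python
# Sketch of IQR-based outlier detection for a stream of sensor readings:
# flag values outside [Q1 - k*IQR, Q3 + k*IQR].
def iqr_outliers(values, k=1.5):
    """Return the values flagged as outliers by the IQR rule."""
    s = sorted(values)
    def quantile(q):  # linear interpolation between closest ranks
        pos = q * (len(s) - 1)
        lo, hi = int(pos), min(int(pos) + 1, len(s) - 1)
        return s[lo] + (pos - lo) * (s[hi] - s[lo])
    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < low or v > high]

# Hypothetical temperature readings with one sensor-failure spike:
readings = [21.0, 21.3, 20.8, 21.1, 21.4, 85.0, 21.2, 20.9]
print(iqr_outliers(readings))
```

A distribution-aware caveat from the abstract applies here too: because the fences are derived from quartiles, what counts as an outlier depends on the sensor type and its measurement units.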
A Spatial Crowdsourcing Engine for Harmonizing Volunteers’ Needs and Tasks’ Completion Goals
This work addresses the task allocation problem in spatial crowdsensing with altruistic participation, tackling challenges like declining engagement and user fatigue from task overload. Unlike typical models relying on financial incentives, this context requires alternative strategies to sustain participation. This paper presents a new solution, the Volunteer Task Allocation Engine (VTAE), to address these challenges. This solution is not based on economic incentives, and it has two primary goals. The first is to improve user experience by limiting the workload and creating a user-centric task allocation solution. The second is to create an even distribution of tasks over the spatial locations to make the solution robust against possible decreases in participation. Two approaches are used to test the performance of this solution against different conditions: computer simulations and a real-world experiment with real users, the latter including a qualitative evaluation. The simulations tested system performance in controlled environments, while the real-world experiment assessed the effectiveness and usability of the VTAE with real users. This research highlights the importance of user-centered design in citizen science applications with altruistic participation. The findings demonstrate that the VTAE algorithm ensures equitable task distribution across geographical areas while actively involving users in the decision-making process.
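The two goals named in this abstract (capping each volunteer's workload, and spreading tasks across spatial zones) can be illustrated with a greedy allocator. This is purely an assumption for illustration, not the actual VTAE algorithm; the task/volunteer/zone names are hypothetical.

```python
# Sketch of workload-capped, zone-aware task allocation: each task goes to
# a same-zone volunteer with the lightest current load, and no volunteer
# ever exceeds max_load.
def allocate(tasks, volunteers, max_load):
    """tasks: list of (task_id, zone); volunteers: list of (vol_id, zone).
    Returns (assignment dict, per-volunteer load dict)."""
    load = {vid: 0 for vid, _ in volunteers}
    assignment = {}
    for task_id, zone in tasks:
        candidates = [vid for vid, vzone in volunteers
                      if vzone == zone and load[vid] < max_load]
        if not candidates:
            continue  # leave unassigned rather than overload anyone
        chosen = min(candidates, key=lambda vid: load[vid])
        assignment[task_id] = chosen
        load[chosen] += 1
    return assignment, load

tasks = [("t1", "north"), ("t2", "north"), ("t3", "south"), ("t4", "north")]
volunteers = [("alice", "north"), ("bob", "north"), ("carol", "south")]
assignment, load = allocate(tasks, volunteers, max_load=1)
print(assignment, load)
```

Note the design choice the abstract motivates: when a zone's volunteers are all at capacity, the task stays unassigned instead of being pushed onto a fatigued user.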
A Wearable Sensor Node for Measuring Air Quality Through Citizen Science Approach: Insights from the SOCIO-BEE Project
Air pollution is a major environmental and public health challenge, especially in urban areas where fine-grained air quality data are essential to effective interventions. Traditional monitoring networks, while accurate, often lack spatial resolution and public engagement. This study presents a novel wearable wireless sensor node (WSN) that was developed within the Horizon Europe SOCIO-BEE project to support air quality monitoring through citizen science (CS). The low-cost, body-mounted WSN measures NO2, O3, and PM2.5. Three pilot campaigns were conducted in Ancona (Italy), Maroussi (Greece), and Zaragoza (Spain), and involved diverse user groups—seniors, commuters, and students, respectively. PM2.5 sensor data were validated through two approaches: direct comparison with reference stations and spatial clustering analysis using K-means. The results show strong correlation with official PM2.5 data (R2 = 0.75), with an average absolute error of 0.54 µg/m3 and a statistical confidence interval of ±3.3 µg/m3. In Maroussi and Zaragoza, where no reference stations were available, the clustering approach yielded low intra-cluster coefficients of variation (CV = 0.50 ± 0.40 in Maroussi, CV = 0.28 ± 0.30 in Zaragoza), indicating that the measurements had high internal consistency and spatial homogeneity. Beyond technical validation, user engagement and perceptions were evaluated through pre-/post-campaign surveys. Across all pilots, over 70% of participants reported satisfaction with the system’s usability and inclusiveness. The findings demonstrate that wearable low-cost sensors, when supported by a structured engagement and data validation framework, can provide reliable, actionable air quality data, empowering citizens and informing evidence-based environmental policy.
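The validation idea used for the pilots without reference stations can be sketched as follows: group PM2.5 readings by spatial cluster and compute the intra-cluster coefficient of variation (CV = standard deviation / mean), where a low CV indicates internally consistent measurements. The cluster labels and readings below are hypothetical; the paper derives its clusters via K-means.

```python
# Sketch: intra-cluster coefficient of variation for clustered PM2.5 data.
import statistics

def intra_cluster_cv(readings):
    """readings: dict cluster_label -> list of PM2.5 values (µg/m³).
    Returns dict cluster_label -> coefficient of variation (pstdev/mean)."""
    return {
        label: statistics.pstdev(vals) / statistics.mean(vals)
        for label, vals in readings.items()
    }

# Hypothetical readings from two spatial clusters:
readings = {
    "cluster_a": [10.0, 11.0, 10.5, 9.5],
    "cluster_b": [22.0, 20.0, 21.0, 21.0],
}
cvs = intra_cluster_cv(readings)
print({k: round(v, 3) for k, v in cvs.items()})
```

Because CV is dimensionless, it lets clusters with different mean concentrations (e.g. a busy road versus a park) be compared on the same internal-consistency scale.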
Artificial Intelligence in Business-to-Customer Fashion Retail: A Literature Review
Many industries, including healthcare, banking, the auto industry, education, and retail, have already undergone significant changes because of artificial intelligence (AI). Business-to-Customer (B2C) e-commerce has considerably increased the use of AI in recent years. The purpose of this research is to examine the significance and impact of AI in the realm of fashion e-commerce. To that end, a systematic review of the literature was carried out, in which data from the Web of Science and Scopus databases were used to analyze 219 publications on the subject. The articles were first categorized by AI technique, and then divided into two categories within the realm of fashion e-commerce. These categorizations allowed for the identification of research gaps in the use of AI. These gaps offer opportunities for further research.
PyFF: A Fog-Based Flexible Architecture for Enabling Privacy-by-Design IoT-Based Communal Smart Environments
The advent of the Internet of Things (IoT) and the massive growth of devices connected to the Internet are reshaping modern societies. However, human lifestyles are not evolving at the same pace as technology, which often results in users’ reluctance and aversion. Although it is essential to consider user involvement and privacy while deploying IoT devices in a human-centric environment, current IoT architecture standards tend to neglect the degree of trust that humans require to adopt these technologies on a daily basis. In this regard, this paper proposes an architecture to enable privacy-by-design in human-in-the-loop IoT environments. It first distills two IoT use-cases with high human interaction to analyze the interactions between human beings and IoT devices in an environment which had not previously been subject to the Internet of People principles. Leveraging the lessons learned in these use-cases, the Privacy-enabling Fog-based and Flexible (PyFF) human-centric and human-aware architecture is proposed, bringing together distributed and intelligent systems. PyFF aims to maintain end-users’ privacy by involving them in the whole data lifecycle, allowing them to decide which information can be monitored, where it can be computed and the appropriate feedback channels, in accordance with human-in-the-loop principles.
Measuring Software Timing Errors in the Presentation of Visual Stimuli in Cognitive Neuroscience Experiments
Because of the features provided by an abundance of specialized experimental software packages, personal computers have become prominent and powerful tools in cognitive research. Most of these programs have mechanisms to control the precision and accuracy with which visual stimuli are presented, as well as the response times. However, external factors, often related to the technology used to display the visual information, can have a noticeable impact on the actual performance and may be easily overlooked by researchers. The aim of this study is to measure the precision and accuracy of the timing mechanisms of some of the most popular software packages used in a typical laboratory scenario, in order to assess whether the presentation times configured by researchers differ from the measured times by more than what is expected due to hardware limitations. Despite the apparent precision and accuracy of the results, important issues related to timing setups in the presentation of visual stimuli were found, and they should be taken into account by researchers in their experiments.
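The kind of analysis this abstract describes (comparing configured presentation times against externally measured ones) boils down to two statistics: accuracy (mean timing error) and precision (spread of the errors). The measured values below are hypothetical, chosen to show the characteristic one-frame slips on a 60 Hz display; this is a sketch of the analysis, not the study's measurement apparatus.

```python
# Sketch: accuracy and precision of stimulus presentation timing, given
# intended durations and externally measured durations (both in ms).
import statistics

def timing_errors(intended_ms, measured_ms):
    """Return (mean_error, std_error) in ms: mean error reflects accuracy,
    standard deviation of the errors reflects precision."""
    errors = [m - i for i, m in zip(intended_ms, measured_ms)]
    return statistics.mean(errors), statistics.pstdev(errors)

# E.g. a 200 ms stimulus on a 60 Hz display (16.7 ms frames) can only end
# on frame boundaries, so slipped trials land one frame (~16.7 ms) late.
intended = [200.0] * 5
measured = [200.2, 216.8, 200.1, 199.9, 216.6]  # two trials slipped a frame
mean_err, std_err = timing_errors(intended, measured)
print(round(mean_err, 2), round(std_err, 2))
```

The bimodal pattern in the hypothetical data is why precision matters separately from accuracy: a small mean error can hide individual trials that were a whole frame off.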