8 results for "Coraci, Davide"
Online Implementation of a Soft Actor-Critic Agent to Enhance Indoor Temperature Control and Energy Efficiency in Buildings
Recently, growing interest has been observed in HVAC control systems based on Artificial Intelligence, which aim to improve comfort conditions while avoiding unnecessary energy consumption. In this work, a model-free algorithm belonging to the Deep Reinforcement Learning (DRL) class, Soft Actor-Critic, was implemented to control the supply water temperature to the radiant terminal units of a heating system serving an office building. The controller was trained online, and a preliminary sensitivity analysis on hyperparameters was performed to assess their influence on the agent's performance. The best-performing DRL agent was compared to a rule-based controller, assumed as the baseline, over a three-month heating season. The DRL controller outperformed the baseline after two weeks of deployment, with an overall improvement in the control of indoor temperature conditions. Moreover, the adaptability of the DRL agent was tested for various control scenarios, simulating changes in external weather conditions, indoor temperature setpoint, building envelope features and occupancy patterns. Despite a slight increase in energy consumption, the dynamically deployed agent improved indoor temperature control, reducing the cumulative sum of temperature violations, averaged over all scenarios, by 75% and 48% compared to the baseline and to the statically deployed agent, respectively.
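The abstract above describes an online Soft Actor-Critic agent together with a preliminary hyperparameter sensitivity analysis. The fragment below is only a minimal sketch of that workflow using Stable-Baselines3; the environment ("Pendulum-v1"), the hyperparameter grid and the training budget are placeholders, not the paper's heating-system setup.

    # Hedged sketch, not the authors' code: one SAC agent per hyperparameter
    # combination, trained online and scored with a short evaluation rollout.
    import gymnasium as gym
    from stable_baselines3 import SAC

    def train_and_score(learning_rate: float, tau: float, timesteps: int = 5_000) -> float:
        env = gym.make("Pendulum-v1")  # placeholder for the heating-system environment
        model = SAC("MlpPolicy", env, learning_rate=learning_rate, tau=tau, verbose=0)
        model.learn(total_timesteps=timesteps)  # online training
        obs, _ = env.reset(seed=0)
        total_reward = 0.0
        for _ in range(200):  # deterministic evaluation rollout
            action, _ = model.predict(obs, deterministic=True)
            obs, reward, terminated, truncated, _ = env.step(action)
            total_reward += float(reward)
            if terminated or truncated:
                break
        env.close()
        return total_reward

    # crude sensitivity analysis over two hyperparameters
    for lr in (3e-4, 1e-3):
        for tau in (0.005, 0.02):
            print(f"lr={lr}, tau={tau}: return={train_and_score(lr, tau):.1f}")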
Exploring the Potentialities of Deep Reinforcement Learning for Incentive-Based Demand Response in a Cluster of Small Commercial Buildings
Demand Response (DR) programs represent an effective way to optimally manage building energy demand while increasing the integration of Renewable Energy Sources (RES) and grid reliability, supporting the decarbonization of the electricity sector. To fully exploit such opportunities, buildings are required to become sources of energy flexibility, adapting their energy demand to meet specific grid requirements. However, in most cases the energy flexibility of a single building is too small to be exploited in the flexibility market, highlighting the necessity of performing the analysis at the scale of multiple buildings. This study explores the economic benefits associated with the implementation of a Reinforcement Learning (RL) control strategy for the participation of a cluster of commercial buildings in an incentive-based demand response program. To this purpose, optimized Rule-Based Control (RBC) strategies are compared with an RL controller. Moreover, a hybrid control strategy exploiting both RBC and RL is proposed. Results show that the RL algorithm outperforms the RBC in reducing the total energy cost, but it is less effective in fulfilling DR requirements. The hybrid controller reduces energy consumption and energy costs by 7% and 4%, respectively, compared to a manually optimized RBC, while fulfilling DR constraints during incentive-based events.
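As a rough illustration of the hybrid strategy described above (an RL policy for normal operation, rule-based logic during incentive-based DR events), the sketch below uses invented names, states and thresholds; the real controller's interface is not specified in the abstract.

    # Hedged sketch of the hybrid idea: the rule-based strategy takes over
    # during DR events, while an RL policy handles normal operation.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class BuildingState:
        hour: int
        indoor_temp: float      # degC
        dr_event_active: bool   # set by the grid operator's incentive signal

    def rule_based_action(s: BuildingState) -> float:
        """Curtail HVAC power during DR events, otherwise track a setpoint."""
        if s.dr_event_active:
            return 0.3                       # cap HVAC power at 30 % (illustrative)
        return 1.0 if s.indoor_temp > 24.0 else 0.5

    def hybrid_controller(s: BuildingState, rl_policy: Callable[[BuildingState], float]) -> float:
        """Delegate to the RBC when a DR event is active, to the RL agent otherwise."""
        return rule_based_action(s) if s.dr_event_active else rl_policy(s)

    # toy usage with a dummy RL policy
    dummy_policy = lambda s: 0.8
    print(hybrid_controller(BuildingState(14, 25.1, True), dummy_policy))   # -> 0.3
    print(hybrid_controller(BuildingState(14, 25.1, False), dummy_policy))  # -> 0.8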
Engineering Disputed Concepts and the Meeting of Minds
Critical discussions can often require conceptual engineering, a process in which speakers are engaged in revising each other’s concepts. We show that the analysis of conceptual engineering can benefit from integrating argumentation theory with models of conceptual representation. Argumentation theory accounts for the argumentative moves of the discussants, allowing the detection of speakers’ conceptual disagreements, for which some fallacies can be seen as cues. Models of conceptual representation, such as Conceptual spaces and the theory of meeting of minds, allow us to study the cognitive side of engineering practices. However, when this integrated framework is applied to practical scenarios, conceptual engineering faces different challenges. In particular, assuming a psychological view about concepts, revisionary strategies are significantly narrowed down, if not impossible, in practice. These criticisms lead to a kind of dilemma for conceptual engineers, highlighting the necessity of further work on the definition of concept embraced by this research program.
An innovative heterogeneous transfer learning framework to enhance the scalability of deep reinforcement learning controllers in buildings with integrated energy systems
Deep Reinforcement Learning (DRL)-based control shows enhanced performance in the management of integrated energy systems when compared with Rule-Based Controllers (RBCs), but it still lacks scalability and generalisation due to the necessity of using tailored models for the training process. Transfer Learning (TL) is a potential solution to address this limitation. However, existing TL applications in building control have mostly been tested among buildings with similar features, not addressing the need to scale up advanced control in real-world scenarios with diverse energy systems. This paper assesses the performance of an online heterogeneous TL strategy, comparing it with RBC and with offline and online DRL controllers in a simulation setup using EnergyPlus and Python. The study tests the transfer, in both transductive and inductive settings, of a DRL policy designed to manage a chiller coupled with a Thermal Energy Storage (TES). The control policy is pre-trained on a source building and transferred to various target buildings characterised by an integrated energy system including photovoltaic and battery energy storage systems, and by different building envelope features, occupancy schedules and boundary conditions (e.g., weather and price signal). The TL approach incorporates model slicing, imitation learning and fine-tuning to handle the different state spaces and reward functions of the source and target buildings. Results show that the proposed methodology leads to a reduction of 10% in electricity cost and of between 10% and 40% in the mean daily average temperature violation rate compared to the RBC and online DRL controllers. Moreover, online TL increases self-sufficiency and self-consumption by 9% and 11%, respectively, with respect to the RBC. Conversely, online TL achieves worse performance than offline DRL in both transductive and inductive settings; however, offline DRL agents must be trained for at least 15 episodes to reach the same level of performance as online TL. Therefore, the proposed online TL methodology is effective and completely model-free, and it can be directly implemented in real buildings with satisfactory performance.
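The abstract names three ingredients of the transfer strategy: model slicing, imitation learning and fine-tuning. The PyTorch fragment below is only a schematic of how those pieces could fit together; the layer sizes, the assumed overlap between source and target state spaces, and the random "demonstration" data are illustrative assumptions, not the paper's architecture.

    # Schematic only, not the paper's code.
    import copy
    import torch
    import torch.nn as nn

    SRC_OBS, TGT_OBS, ACT = 8, 12, 1          # assumed state/action dimensions

    source_policy = nn.Sequential(            # policy pre-trained on the source building
        nn.Linear(SRC_OBS, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, ACT),
    )

    # 1) model slicing: reuse every layer except the source-specific input layer
    shared_trunk = copy.deepcopy(source_policy[2:])

    # 2) new input head for the target building's larger state space
    target_policy = nn.Sequential(nn.Linear(TGT_OBS, 64), nn.ReLU(), *shared_trunk)

    # imitation learning: regress the new policy onto source-policy actions
    optimizer = torch.optim.Adam(target_policy.parameters(), lr=1e-3)
    for _ in range(200):
        tgt_obs = torch.randn(64, TGT_OBS)    # placeholder target observations
        src_obs = tgt_obs[:, :SRC_OBS]        # assumed overlap of state features
        with torch.no_grad():
            demo_actions = source_policy(src_obs)
        loss = nn.functional.mse_loss(target_policy(tgt_obs), demo_actions)
        optimizer.zero_grad(); loss.backward(); optimizer.step()

    # 3) fine-tuning: online DRL updates on the target building would follow
    #    from this warm start (omitted here).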
Comparison of two deep reinforcement learning algorithms towards an optimal policy for smart building thermal control
Heating, Ventilation, and Air Conditioning (HVAC) systems are the main providers of occupant comfort and, at the same time, represent a significant source of energy consumption. Improving their efficiency is essential for reducing the environmental impact of buildings. However, traditional rule-based and model-based strategies are often inefficient in real-world applications due to complex building thermal dynamics and the influence of heterogeneous disturbances, such as unpredictable occupant behavior. To address this issue, the performance of two state-of-the-art model-free Deep Reinforcement Learning (DRL) algorithms, Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC), has been compared in managing the percentage valve opening of a thermally activated building system, modeled in a simulated environment from data collected in an existing office building in Switzerland. Results show that PPO reduced energy costs by 18% and decreased temperature violations by 33%, while SAC achieved a 14% reduction in energy costs and 64% fewer temperature violations, compared to the onsite Rule-Based Controller (RBC).
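A like-for-like comparison of PPO and SAC such as the one summarised above can be reproduced in outline with Stable-Baselines3. In this sketch "Pendulum-v1" stands in for the simulated valve-opening environment and the training budget is arbitrary; it is not the study's setup.

    # Hedged sketch: train both algorithms on the same placeholder task and
    # compare mean evaluation return.
    import gymnasium as gym
    from stable_baselines3 import PPO, SAC
    from stable_baselines3.common.evaluation import evaluate_policy

    def compare(timesteps: int = 10_000) -> dict:
        results = {}
        for name, algo in (("PPO", PPO), ("SAC", SAC)):
            env = gym.make("Pendulum-v1")              # placeholder environment
            model = algo("MlpPolicy", env, verbose=0)
            model.learn(total_timesteps=timesteps)
            mean_r, std_r = evaluate_policy(model, env, n_eval_episodes=5)
            results[name] = (mean_r, std_r)
            env.close()
        return results

    print(compare())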
The prediction of floods in Venice: methods, models and uncertainty (review article)
This paper reviews the state of the art in storm surge forecasting and its particular application in the northern Adriatic Sea. The city of Venice already depends on operational storm surge forecasting systems to warn the population and economy of imminent flood threats, as well as to help protect the extensive cultural heritage. This will be even more important in the future, with the new mobile barriers called MOSE (MOdulo Sperimentale Elettromeccanico, Experimental Electromechanical Module) that will be completed by 2021. The barriers will depend on accurate storm surge forecasting to control their operation. In this paper, the physics behind the flooding of Venice is discussed, and the state of the art of storm surge forecasting in Europe is reviewed. The challenges for surge forecasting systems are analyzed, especially in view of uncertainty. This includes consideration of selected historic extreme events that were particularly difficult to forecast. Four potential improvements are identified: (1) improve meteorological forecasts, (2) develop ensemble forecasting, (3) assimilate water level measurements and (4) develop a multimodel approach.
Intensive Care Unit-Acquired Weakness and Positioning-Related Peripheral Nerve Injuries in COVID-19: A Case Series of Three Patients and the Latest Literature Review
A subgroup of COVID-19 patients requires intensive respiratory care. The prolonged immobilization and aggressive treatments predispose these patients to develop intensive care unit-acquired weakness (ICUAW). Furthermore, this condition could increase the chance of positioning-related peripheral nerve injuries. On the basis of the latest literature review, we describe a case series of three patients with COVID-19 who developed ICUAW complicated by positioning-related peripheral nerve injuries. Every patient presented sensorimotor axonal polyneuropathy and concomitant myopathy in electrophysiological studies. Furthermore, muscle MRI supported the diagnosis of ICUAW, showing massive damage predominantly in the proximal muscles. Notably, nerve ultrasound detected positioning-related peripheral nerve injuries, even though the concomitant ICUAW substantially masked their clinical features. During the acute phase of severe COVID-19 infection, most medical attention tends to be devoted to critical care management, and neuromuscular complications such as ICUAW and positioning-related peripheral nerve injuries may be underestimated. Hence, when starting post-ICU care for COVID-19 cases, the combination of electrophysiological and imaging studies will aid appropriate evaluation of patients with COVID-19-related ICUAW.
Reliability, Concurrent Validity, and Clinical Performances of the Shorter Version of the Roland Morris Disability Questionnaire in a Sample of Italian People with Non-Specific Low Back Pain
Background. To evaluate the psychometric and clinical performances of the RM-18, the shorter version of the Roland Morris Disability Questionnaire (RMQ), in Italian people with non-specific low back pain (NSLBP) as a time-saving and clinically useful method of assessing disability. Methods. This cross-sectional study included 74 people (52 females and 22 males, 53.03 ± 15.25 years old) with NSLBP. The RM-18, the RMQ, the Oswestry Disability Index (ODI), and a pain intensity numerical rating scale (NRS) were administered. Psychometric testing included reliability, assessed by internal consistency (Cronbach’s alpha) and test–retest measurement (Intraclass Correlation Coefficient, ICC(2,1)), and concurrent validity, assessed by comparing the RM-18 with the RMQ and the ODI (Pearson’s r correlation). Two separate regression analyses were performed to investigate the respective impact of the RM-18 and the RMQ on the NRS. Results. Cronbach’s α of the RM-18 was 0.92 and ICC(2,1) = 0.96. Strong correlations were found with the RMQ and the ODI (r = 0.98 and r = 0.78, respectively). The regression models showed that the RM-18 and the RMQ had a similar impact on the NRS (p < 0.001). Conclusion. The RM-18 showed satisfactory psychometric properties and an impact on the NRS similar to that of the RMQ. It can be recommended for clinical and research purposes in Italian people with NSLBP.
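For readers unfamiliar with the reported statistics, the short fragment below computes Cronbach's alpha and Pearson's r on simulated item scores; the printed numbers have nothing to do with the study's data, and the simulation design is purely illustrative.

    # Illustrative only: simulated binary item responses driven by one latent trait.
    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects, n_items = 74, 18                     # sample size / RM-18 item count
    trait = rng.normal(size=(n_subjects, 1))
    p = 1.0 / (1.0 + np.exp(-trait))                 # item endorsement probability
    items = (rng.random((n_subjects, n_items)) < p).astype(float)

    def cronbach_alpha(x: np.ndarray) -> float:
        """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
        k = x.shape[1]
        item_var = x.var(axis=0, ddof=1).sum()
        total_var = x.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    rm18_total = items.sum(axis=1)
    comparator = rm18_total + rng.normal(0, 2, n_subjects)   # fake comparator scale
    pearson_r = np.corrcoef(rm18_total, comparator)[0, 1]
    print(f"alpha = {cronbach_alpha(items):.2f}, r = {pearson_r:.2f}")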