211 result(s) for "Osman, Onur"
An intelligent job scheduling and real-time resource optimization for edge-cloud continuum in next generation networks
While cloud-edge infrastructures demand flexible and sophisticated resource management, 6G networks necessitate very low latency, high dependability, and broad connectivity. Cloud computing’s scalability and agility enable it to prioritize service delivery at various levels of detail while serving billions of users. However, due to resource inefficiencies, virtual machine (VM) issues, response delays, and deadline violations, real-time task scheduling is challenging in these settings. This study develops an AI-powered task scheduling system based on the recently published Unfair Semi-Greedy (USG) algorithm, Earliest Deadline First (EDF), and Enhanced Deadline Zero-Laxity (EDZL) algorithms. The system chooses the best scheduler based on load and task criticality by combining reinforcement-learning adaptive logic with a dynamic resource table. Over 10,000 soft real-time task sets were used to evaluate the framework across various cloud-edge scenarios. Compared to standalone EDF and EDZL solutions, the proposed hybrid method reduced average response times by up to 26.3% and deadline violations by 41.7%. The USG component achieved 98.6% task schedulability under saturated edge settings, even amid significant workload changes. These findings suggest that the method is well suited to applications that require fast turnaround. The architecture is especially appropriate for autonomous systems, remote healthcare, and immersive media, all of which require low latency and dependability, and it may be extended to AI-native 6G networks.
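The abstract names EDF and EDZL as the base schedulers an adaptive selector chooses between. A minimal sketch of the two selection rules (the `Task` fields and function names here are illustrative assumptions, not the paper's implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    name: str
    deadline: float  # absolute deadline
    laxity: float    # deadline minus remaining execution time

def pick_edf(ready):
    # Earliest Deadline First: run the ready task whose deadline is nearest.
    return min(ready, key=lambda t: t.deadline)

def pick_edzl(ready):
    # Enhanced Deadline Zero-Laxity: a task that has run out of slack
    # (laxity <= 0) takes priority; otherwise fall back to plain EDF.
    urgent = [t for t in ready if t.laxity <= 0]
    return min(urgent, key=lambda t: t.deadline) if urgent else pick_edf(ready)
```

A hybrid system like the one described would then select between such rules per scheduling decision, based on load and task criticality.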
Risk sensitive twin distributional critics with a lambda lower confidence bound for continuous control reinforcement learning
Off-policy actor–critic methods such as Twin Delayed Deep Deterministic Policy Gradient (TD3) are the workhorse of continuous-control reinforcement learning. However, they rely on scalar value estimates and offer no explicit way to control risk in temporal-difference targets. We introduce Twin Distributional Critics with λ-Lower Confidence Bound (TDC-λ), a TD3-style algorithm that learns two distributional critics and, for each transition, forms its target from a lower confidence bound of the form (μ − λσ) across critics. The risk parameter λ smoothly interpolates between a distributional TD3 limit and increasingly conservative targets. A single implementation supports either a deterministic actor or a tanh-squashed Gaussian policy, while evaluation always uses the deterministic mean action. We evaluate TDC-λ on five standard MuJoCo benchmarks (HalfCheetah-v4, Hopper-v4, Ant-v4, Walker2d-v4, and Humanoid-v4) against strong entropy-regularized baselines. Across tasks, TDC-λ matches or improves final return while consistently reducing variance. Sweeping λ further shows that stronger penalties on high-variance critics improve robustness on challenging, high-dimensional domains. These results indicate that distributional critics combined with simple risk-sensitive target selection can substantially improve stability in off-policy reinforcement learning without sacrificing sample efficiency.
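The (μ − λσ) target can be sketched directly from the abstract's description; how the two critics' bounds are combined is not specified there, so taking the minimum (the conservative choice, in the spirit of TD3's clipped double-Q) is an assumption of this sketch:

```python
import math

def mu_sigma(samples):
    # Mean and standard deviation of one critic's predicted return distribution.
    mu = sum(samples) / len(samples)
    var = sum((x - mu) ** 2 for x in samples) / len(samples)
    return mu, math.sqrt(var)

def lcb_target(critic_a, critic_b, lam):
    # Risk-sensitive target: penalize each critic's mean by lam * std,
    # then keep the most conservative (smallest) lower confidence bound.
    bounds = []
    for samples in (critic_a, critic_b):
        mu, sigma = mu_sigma(samples)
        bounds.append(mu - lam * sigma)
    return min(bounds)
```

At λ = 0 this reduces to the smaller of the two critic means; larger λ increasingly penalizes critics whose return distributions are wide.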
A Multi-Agent Advisory Board Reinforcement Learning Framework for Adaptive Cooperative Control
This study proposes Advisory Board Reinforcement Learning (AdvB-RL), a cooperative reinforcement-learning framework that integrates multiple advisory neural networks to guide policy optimization. Unlike conventional single-agent architectures, AdvB-RL maintains a set of independently trained advisory networks that contribute to action selection through a dynamic aggregation mechanism. This design preserves diverse experiential knowledge while improving learning stability and the exploration–exploitation balance. The framework is evaluated on three benchmark control tasks, namely LunarLander-v2, CartPole-v1, and MountainCar-v0, using advisory board sizes of 1, 5, and 10 members against a Double Deep Q-Network (DDQN) baseline. The best-performing configuration, 10 AdvB, achieved 270.02 ± 24.74 on LunarLander-v2 versus 227.92 ± 86.02 for DDQN, 497.79 ± 5.18 on CartPole-v1 versus 304.37 ± 144.04, and −103.16 ± 15.46 on MountainCar-v0 versus −130.71 ± 31.64, indicating higher returns together with markedly lower variability. Across the three environments, these results show that increasing the number of advisory members improves both reward consistency and overall robustness, with the 10-member setting providing the strongest performance. Within the tested configurations, the advisory board mechanism remains computationally feasible, while preliminary experiments beyond 10 advisors show diminishing returns relative to added complexity. Overall, AdvB-RL provides a robust and modular alternative to single-policy reinforcement learning for adaptive cooperative control.
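A dynamic aggregation over advisory networks could, in its simplest form, be a weighted average of per-action scores followed by a greedy pick. The structure below (Q-values as plain lists, uniform default weights) is an illustrative assumption, not the paper's architecture:

```python
def board_action(q_values, weights=None):
    # q_values: one list of per-action scores from each advisory network.
    # Aggregate by (optionally weighted) averaging, then act greedily.
    n_boards = len(q_values)
    n_actions = len(q_values[0])
    weights = weights or [1.0 / n_boards] * n_boards
    scores = [0.0] * n_actions
    for w, q in zip(weights, q_values):
        for a in range(n_actions):
            scores[a] += w * q[a]
    return max(range(n_actions), key=scores.__getitem__)
```

Making the weights adaptive (e.g., trust advisors in proportion to recent accuracy) would be one way to realize the "dynamic" part of the aggregation mechanism.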
Fuzzy time series for short-term residential load forecasting in smart grids
Load forecasting is used primarily to predict future loads for a particular system over a given time period. Short-term loads are typically treated as variable elements influenced by factors such as historical load information and weather data, including precipitation, wind speed, and temperature. Precise forecasting with a single model is almost impossible, and accurately forecasting energy consumption is the primary challenge for utility companies worldwide. Accurate short-term load forecasting (STLF) is a cornerstone of smart grid operation, enabling demand-side management (DSM), demand response programs, and efficient integration of distributed energy resources. This study proposes a fuzzy time series (FTS)-based methodology for residential electricity consumption forecasting at hourly, daily, and weekly scales. By addressing overfitting during data partitioning and refining the fuzzification process, our approach improves prediction accuracy compared to traditional FTS models. Simulation results using real consumption data demonstrate up to 40% improvement in hourly forecasting and up to 58% and 84% improvements in daily and weekly forecasts, respectively. These results highlight the potential of FTS-based models to enhance residential demand forecasting, reduce peak-demand uncertainty, and support grid operators in achieving more resilient, flexible, and sustainable smart grid systems.
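For readers unfamiliar with FTS, a first-order forecaster in the classic Chen style illustrates the pipeline the paper refines (partition the universe of discourse, fuzzify, learn transition rules, defuzzify). Equal-width partitioning is an assumption here; the paper's contribution lies precisely in improving the partitioning and fuzzification steps:

```python
def fts_forecast(series, n_intervals=4):
    # 1. Partition the universe of discourse into equal-width intervals.
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_intervals or 1.0  # guard against a constant series
    mid = [lo + (i + 0.5) * width for i in range(n_intervals)]

    # 2. Fuzzify: map each observation to its interval index (fuzzy set label).
    def fuzzify(x):
        return min(int((x - lo) / width), n_intervals - 1)
    labels = [fuzzify(x) for x in series]

    # 3. Learn first-order fuzzy logical relationships A_i -> {A_j}.
    rules = {}
    for a, b in zip(labels, labels[1:]):
        rules.setdefault(a, set()).add(b)

    # 4. Defuzzify: forecast the mean midpoint of the rule's consequents.
    nxt = rules.get(labels[-1], {labels[-1]})
    return sum(mid[j] for j in nxt) / len(nxt)
```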
Modified Glasgow Prognostic Score May Be Useful to Predict Major Adverse Cardiac Events in Heart Failure Patients Undergoing Cardiac Resynchronization Treatment
OBJECTIVE: Whether the modified Glasgow prognostic score predicts prognosis in patients receiving cardiac resynchronization therapy with defibrillation is unknown. Our aim was to investigate the association of the modified Glasgow prognostic score with death and hospitalization in cardiac resynchronization therapy with defibrillation patients. METHODS: A total of 306 heart failure with reduced ejection fraction patients who underwent cardiac resynchronization therapy with defibrillation implantation were categorized into 3 groups based on their modified Glasgow prognostic score levels. C-reactive protein >10 mg/L or albumin <35 g/L was assigned 1 point each, and the patients were classified into 0, 1, and 2 points, respectively. Remodeling was determined according to the clinical event and myocardial remodeling criteria. Major adverse cardiac events were defined as mortality and/or hospitalization for heart failure. RESULTS: Age, New York Heart Association functional class, modified Glasgow prognostic score prior to cardiac resynchronization therapy with defibrillation, sodium levels, and left atrial diameter were higher in the major adverse cardiac events (+) group. Age, left atrial diameter, and higher modified Glasgow prognostic score were found to be predictors of heart failure hospitalization/death in multivariable penalized Cox regression analysis. In addition, patients with a lower modified Glasgow prognostic score showed better reverse left ventricular remodeling, demonstrated by an increase in left ventricular ejection fraction and a decline in left ventricular end-systolic volume. CONCLUSION: The modified Glasgow prognostic score prior to cardiac resynchronization therapy with defibrillation can be used as a predictor of long-term heart failure hospitalization and death, in addition to age and left atrial diameter. These results can guide patient selection for cardiac resynchronization therapy with defibrillation and highlight the importance of nutritional status.
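The scoring rule described in the methods (1 point each for C-reactive protein >10 mg/L and albumin <35 g/L) is simple enough to state directly:

```python
def mgps(crp_mg_per_l, albumin_g_per_l):
    # Modified Glasgow prognostic score: CRP > 10 mg/L scores 1 point,
    # albumin < 35 g/L scores 1 point, giving a total of 0, 1, or 2.
    return int(crp_mg_per_l > 10) + int(albumin_g_per_l < 35)
```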
A deep dictionary clustering approach for unsupervised image retrieval using convolutional sparse coding
Medical image repositories have been growing rapidly due to the widespread use of imaging techniques, making manual annotation infeasible. Efficient image retrieval systems are crucial for diagnosing diseases, planning treatments, and conducting medical research. This paper presents Deep Dictionary Clustering for Image Retrieval (DDicCIR), a novel framework that integrates deep learning with dictionary clustering for unsupervised medical image retrieval. The method employs DenseNet121 to extract image features, followed by a two-level dictionary learning process. In the first dictionary layer, sparse representations are learned to group similar images, while the second layer refines these representations to capture higher-level abstractions and improve feature discrimination. An iterative clustering mechanism, based on k-means, updates the clusters until convergence, enhancing sparsity, reducing noise, and strengthening category separation. Experimental results on the NIH Chest X-ray and IRMA datasets show that DDicCIR achieves significant improvements in precision, recall, and mean average precision (mAP), demonstrating its effectiveness for medical image retrieval. The code is available at https://github.com/sucharithasu/DDicCIR.
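The iterative clustering step is stated to be k-means-based; a plain k-means over feature vectors (tuples here stand in for the learned sparse representations, and first-k initialization is a simplification) looks like:

```python
def kmeans(points, k, iters=20):
    # points: list of equal-length tuples (feature vectors).
    centroids = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid (squared Euclidean).
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # Recompute centroids; keep the old one if a cluster emptied out.
        new = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[j]
               for j, cl in enumerate(clusters)]
        if new == centroids:
            break  # assignments have converged
        centroids = new
    return centroids
```

In the framework described, these cluster updates would alternate with refinement of the dictionary-learned representations rather than run on raw features.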
The digital orchard: advanced data-driven technologies in apple breeding and genetic modification
The apple (Malus × domestica), a globally significant perennial fruit crop, faces immense pressure from climate change, evolving pathogens, and consumer demand for novel traits, yet its improvement remains constrained by slow trait selection despite technological advances. Traditional breeding methods are slow and resource-intensive, hampered by the apple’s long juvenile period and high heterozygosity. This systematic literature review (SLR) synthesizes the state of the art in advanced data-driven technologies for accelerating apple breeding and genetic modification. Following the PRISMA-EcoEvo protocol, 47 selected studies were analyzed from databases including Web of Science, Scopus, and PubMed. Our thematic synthesis reveals a paradigm shift towards a “digital breeding” model, characterized by the convergence of three core technological pillars. First, high-throughput phenotyping (HTP), which leverages sensor modalities such as RGB-D, hyperspectral imaging, and LiDAR, is automating the collection of trait data at an unprecedented scale. Second, machine learning (ML) and deep learning (DL) algorithms are being deployed for diverse applications, including cultivar identification with over 96% accuracy, non-destructive quality prediction, and genomic selection, thereby boosting predictive ability for key traits by up to 18%. Third, precise and efficient genome editing, predominantly using Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR)/CRISPR-associated protein 9 (Cas9), is enabling the rapid introduction of desirable traits, such as disease resistance, enhanced shelf life, and improved nutrient uptake. Demonstrated transgene-free editing protocols are accelerating the path to commercialization.
We further explore the integration of these pillars through the agricultural internet of things (AIoT) and discuss emerging frontiers, including federated learning for data privacy, explainable AI (XAI) for model transparency, and the implications of recent regulatory frameworks. This review identifies critical research gaps, including the need for standardized open-access datasets and integrated end-to-end system validation. It concludes that the synergistic application of these technologies is poised to revolutionize the speed, precision, and resilience of apple improvement programs worldwide.
Prognostic value of the obstetric specific scoring systems and four years’ experience of a tertiary center
Background Obstetric sepsis is a significant and preventable cause of maternal mortality. The obstetric modified SOFA (omSOFA) score was created to diagnose sepsis in this population, and the sepsis in obstetrics score (SOS) was developed to predict the need for ICU care. This study investigated the prognostic value of these scores in patients treated in the ICU for obstetric sepsis, analyzed the patients’ characteristics, and examined the perinatal outcomes associated with sepsis. Methods The study was retrospectively designed, and patients hospitalized in the ICU due to maternal sepsis were evaluated. These patients were divided into two groups based on the length of ICU stay; an ICU stay exceeding 72 h was considered prolonged. The data, including demographics, laboratory results, and obstetric outcomes, were compared between the groups. Receiver operating characteristic (ROC) curve analyses were performed to investigate the performance of omSOFA and SOS scores in predicting prolonged ICU stay and septic shock. Results The study included 25 women with maternal sepsis. Sepsis developed during pregnancy in 40% of patients, after cesarean section in 28%, after abortion in 20%, and after vaginal delivery in 12%. Septic shock developed in 12% (n = 3) of patients during ICU hospitalization. SOS showed better performance in predicting an ICU stay longer than 72 h (cut-off: 3, AUC: 0.865, CI: 0.721-1.000, sensitivity: 75%, specificity: 100%, PPV: 100%, NPV: 92.7%, p = 0.003), while omSOFA performed better in predicting septic shock (cut-off: 5, AUC: 0.985, CI: 0.941-1.000, sensitivity: 100%, specificity: 95%, PPV: 73.2%, NPV: 100%, p = 0.007). Discussion Considering that prolonged stay in the ICU and development of septic shock are important determinants of morbidity and mortality, SOS and omSOFA scores could serve as valuable prognostic tools in obstetric sepsis cases.
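The sensitivity and specificity figures reported at each ROC-derived cut-off correspond to a simple "score ≥ cut-off" rule. A sketch of how such a point estimate is computed (the labels and scores below are made-up illustrations, not study data):

```python
def sens_spec(labels, scores, cutoff):
    # labels: 1 = event occurred, 0 = no event; the rule predicts positive
    # whenever score >= cutoff. Returns (sensitivity, specificity).
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= cutoff)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < cutoff)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < cutoff)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping the cut-off over all observed score values traces out the ROC curve whose area (AUC) the study reports.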
Transferable CNN-based data mining approaches for medical imaging: application to spine DXA scans for osteoporosis detection
Osteoporosis is the leading cause of sudden bone fractures. It is a silent and deadly disease that can affect any part of the body, such as the spine, hips, and knees. To measure bone mineral density, dual-energy X-ray absorptiometry (DXA) scans help radiologists and other medical professionals identify early signs of osteoporosis in the spine. A proposed 21-layer convolutional neural network (CNN) model is implemented and validated to automatically detect osteoporosis in spine DXA images. The dataset contains 174 spine DXA images, 114 affected by osteoporosis and the remainder normal (non-fractured). To improve training, the dataset is expanded using various data augmentation techniques. The classification performance of the proposed model is compared with that of four popular pre-trained models: ResNet-50, Visual Geometry Group 16 (VGG-16), VGG-19, and InceptionV3. With an F1-score of 97.16%, recall of 95.41%, classification accuracy of 97.14%, and precision of 99.04%, the proposed model consistently outperforms the competing approaches. The proposed paradigm would therefore be very valuable to radiologists and other medical professionals: its capacity to detect, monitor, and diagnose osteoporosis may help reduce the risks associated with the condition through earlier intervention.
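The reported F1-score, recall, accuracy, and precision all derive from the model's confusion matrix; for reference, the standard definitions:

```python
def classification_metrics(tp, fp, fn, tn):
    # Standard binary-classification metrics from confusion-matrix counts.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)        # also called sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy
```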