925 results for "custom metrics"
Horizontal Pod Autoscaling in Kubernetes for Elastic Container Orchestration
Kubernetes, an open-source container orchestration platform, enables high availability and scalability through diverse autoscaling mechanisms such as Horizontal Pod Autoscaler (HPA), Vertical Pod Autoscaler and Cluster Autoscaler. Amongst them, HPA helps provide seamless service by dynamically scaling up and down the number of resource units, called pods, without having to restart the whole system. Kubernetes monitors default Resource Metrics including CPU and memory usage of host machines and their pods. On the other hand, Custom Metrics, provided by external software such as Prometheus, are customizable to monitor a wide collection of metrics. In this paper, we investigate HPA through diverse experiments to provide critical knowledge on its operational behaviors. We also discuss the essential difference between Kubernetes Resource Metrics (KRM) and Prometheus Custom Metrics (PCM) and how they affect HPA’s performance. Lastly, we provide deeper insights and lessons on how to optimize the performance of HPA for researchers, developers, and system administrators working with Kubernetes in the future.
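The scaling behavior the abstract investigates is driven by a simple target-tracking rule documented by Kubernetes itself. A minimal sketch of that desired-replica formula (for reference only; this is not code from the paper):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """HPA scaling rule from the Kubernetes documentation:
    desired = ceil(current_replicas * current_metric / target_metric).
    The metric may be a Resource Metric (e.g. CPU) or a Custom Metric from Prometheus."""
    return math.ceil(current_replicas * current_metric / target_metric)

# e.g. 4 pods averaging 180m CPU against a 100m target scale out to 8 pods
print(desired_replicas(4, 180, 100))  # 8
```

The same rule applies whether the input comes from Kubernetes Resource Metrics or Prometheus Custom Metrics; as the paper notes, what differs is how (and how often) the metric values are collected.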
NNBSVR: Neural Network-Based Semantic Vector Representations of ICD-10 codes
Automatically predicting ICD-10 codes from clinical notes using machine learning models can reduce the burden of manual coding. However, existing methods often overlook the semantic relationships between ICD-10 codes, resulting in inaccurate evaluations when clinically similar codes are considered completely different. Traditional evaluation metrics, which rely on equality-based matching, fail to capture the clinical relevance of predicted codes. This study introduces NNBSVR (Neural Network-Based Semantic Vector Representations), a novel approach for generating semantic-based vector representations of ICD-10 codes. Unlike traditional approaches that rely on exact code matching, NNBSVR incorporates contextual and hierarchical information to enhance both prediction accuracy and evaluation methods. We validate NNBSVR using intrinsic and extrinsic evaluation methods. Intrinsic evaluation assesses the vectors' ability to reconstruct the ICD-10 hierarchy and identify clinically meaningful clusters. Extrinsic evaluation compares our relevancy-based approach, which includes customized evaluation metrics, to traditional equality-based metrics on an ICD-10 code prediction task using a corpus of 9.57 million clinical notes. NNBSVR demonstrates significant improvements over equality-based metrics, achieving a 9.81% gain in micro-F1 score on the training set and a 12.73% gain on the test set. A manual review by medical experts on a sample of 10,000 predictions confirms an accuracy of 92.58%, further validating our approach. This study makes two significant contributions: first, the development of semantic-based vector representations that encapsulate ICD-10 code relationships and context; second, the customization of evaluation metrics to incorporate clinical relevance. By addressing the limitations of traditional equality-based evaluation metrics, NNBSVR enhances the automated assignment of ICD-10 codes in clinical settings, demonstrating superior performance over existing methods.
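The contrast between equality-based and relevancy-based scoring can be sketched with toy code vectors. The embeddings and similarity function below are hypothetical illustrations, not the paper's actual NNBSVR representations:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical 3-d embeddings: E11.8 and E11.9 are clinically related
# diabetes codes; S72.0 (femur fracture) is unrelated.
vec = {
    "E11.9": [0.90, 0.10, 0.00],
    "E11.8": [0.85, 0.20, 0.05],
    "S72.0": [0.00, 0.10, 0.95],
}

def equality_score(pred: str, gold: str) -> float:
    """Traditional metric: full credit only for an exact code match."""
    return 1.0 if pred == gold else 0.0

def relevancy_score(pred: str, gold: str) -> float:
    """Relevancy-based metric: partial credit for clinically similar codes."""
    return cosine(vec[pred], vec[gold])
```

Under equality-based matching, predicting E11.8 for a gold label of E11.9 scores zero; under the relevancy-based view it scores close to 1, while an unrelated code still scores near zero.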
Custom (ʿUrf): A Study in Islamic Legal Theory and Sociology
This study examines custom (ʿurf) as an influential force in the building and development of human societies. Using a comparative analytical method, it reviews the principal conclusions that human knowledge has reached in explaining the emergence and evolution of customs, whether grounded in the authority of revelation (the Qur'an and the Sunnah), as with the scholars of uṣūl al-fiqh, or in positivist sources, as held by scholars of law and sociology. It also seeks suitable methodological tools for studying customs and for uncovering the social laws that govern them, with the aim of harnessing these insights to develop societies in a way that integrates their material needs with their spiritual requirements.
Reliability Testing of a Low-Cost, Multi-Purpose Arduino-Based Data Logger Deployed in Several Applications Such as Outdoor Air Quality, Human Activity, Motion, and Exhaust Gas Monitoring
This contribution shows the possibilities of applying a low-cost, multi-purpose data logger built around an Arduino Mega 2560 microcontroller board. Most projects use this kind of hardware to build single-purpose data loggers. In this work, a data logger with a more general hardware and software architecture was built to run measurement campaigns in very different domains. The wide applicability of this data logger was demonstrated with short-term monitoring campaigns covering outdoor air quality, human activity in an office, the motion of a bicycle journey, and exhaust gas monitoring of a diesel generator. In addition, an assessment process and a corresponding evaluation framework are proposed to assess the credibility of low-cost scientific devices built in-house. The experience acquired during the development of the system and the short measurement campaigns served as input to the assessment process. The assessment showed that the system scores positively on most product-related targets. However, unexpected events affect the assessment over the longer term, which makes developing low-cost scientific devices harder than expected. To assure the stability and long-term performance of this type of design, continuous evaluation and regular engineering corrections are needed throughout longer testing periods.
Comparative analysis of YOLO models for green coffee bean detection and defect classification
The quality and uniformity of green coffee beans significantly influence the overall flavor and value of the product. In the coffee industry, automated flaw and bean-type identification offers numerous advantages. This study examines the effectiveness of multiple YOLO (You Only Look Once) models for identifying and classifying green coffee beans. Various YOLO variants, including YOLOv3, YOLOv4, YOLOv5, YOLOv7, YOLOv8, and custom models, are compared with a focus on computational efficiency, accuracy, and speed. We assessed performance using a dataset of 4,032 training and 506 testing images encompassing diverse bean types, defects, and lighting conditions. The bounding boxes generated by our models accurately encompass coffee beans, with the background typically uniform and set to black. Our analysis reveals the superior performance of the custom-YOLOv8n model, which we meticulously customized for green coffee bean detection. This model achieved high precision, recall, F1-score, and mAP, demonstrating its potential for real-world use in coffee bean quality control systems. The customization process involved fine-tuning the model to focus on significant features relevant to green coffee bean detection and employing specific labeling strategies; this sensitivity ensures that the model can effectively distinguish between different bean types and detect even subtle defects. This paper clarifies our primary objective of evaluating YOLO models' performance in identifying and categorizing green coffee beans, with potential implications for improving efficiency and consistency in the coffee industry. In short, our results offer guidance for readers seeking efficient YOLO model selection and implementation in agricultural systems.
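The precision, recall, and F1 figures the abstract reports are standard detection metrics computed from true-positive, false-positive, and false-negative counts. A minimal sketch with illustrative counts (not the paper's results):

```python
def detection_metrics(tp: int, fp: int, fn: int):
    """Precision, recall, and F1 as used to score object detectors such as YOLO.
    tp/fp/fn are matched-detection counts at a chosen IoU threshold."""
    precision = tp / (tp + fp)   # fraction of detections that are correct
    recall = tp / (tp + fn)      # fraction of ground-truth beans found
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts: 90 correct detections, 10 spurious, 10 missed.
p, r, f = detection_metrics(90, 10, 10)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.9 0.9 0.9
```

mAP extends this by averaging precision over recall levels and classes, which is why it is reported alongside the threshold-dependent scores above.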
Novel Custom Loss Functions and Metrics for Reinforced Forecasting of High and Low Day-Ahead Electricity Prices Using Convolutional Neural Network–Long Short-Term Memory (CNN-LSTM) and Ensemble Learning
Day-ahead electricity price forecasting (DAEPF) is vital for participants in energy markets, particularly in regions with high integration of renewable energy sources (RESs), where price volatility poses significant challenges. The accurate forecasting of high and low electricity prices is particularly essential, as market participants seek to optimize their strategies by selling electricity when prices are high and purchasing when prices are low to maximize profits and minimize costs. In Japan, the increasing integration of RES has caused day-ahead electricity prices to frequently fall to almost zero JPY/kWh during periods of high RES output, creating significant profitability challenges for electricity retailers. This paper introduces novel custom loss functions and metrics specifically designed to improve the forecasting accuracy of extreme prices (high and low prices) in DAEPF, with a focus on the Japanese wholesale electricity market, addressing the unique challenges posed by the volatility of RES. To implement this, we integrate these custom loss functions into a Convolutional Neural Network–Long Short-Term Memory (CNN-LSTM) model, augmented by an ensemble learning approach and multimodal features. The proposed custom loss functions and metrics were rigorously validated, demonstrating their effectiveness in accurately predicting high and low electricity prices, thereby indicating their practical application in enhancing the economic strategies of market participants.
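The abstract does not give the form of its custom loss functions; a common way to emphasize extreme prices is a weighted squared error that up-weights samples whose true price falls in the tails. The sketch below is an assumption-labeled illustration of that idea, not the paper's actual loss:

```python
def extreme_weighted_mse(y_true, y_pred, low_q, high_q, weight=5.0):
    """Weighted MSE sketch: errors on prices at or below `low_q` or at or
    above `high_q` (e.g. tail quantiles of the training prices) count
    `weight` times more than errors in the normal range. Hypothetical
    illustration; the paper's exact loss is not specified in the abstract."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        w = weight if (t <= low_q or t >= high_q) else 1.0
        total += w * (t - p) ** 2
    return total / len(y_true)

# Prices in JPY/kWh: one near-zero price, one ordinary, one spike.
loss = extreme_weighted_mse([0.0, 10.0, 50.0], [1.0, 10.0, 49.0], low_q=1.0, high_q=40.0)
```

Because a model trained with such a loss is penalized more for missing near-zero and spike prices, it trades a little average accuracy for better behavior exactly where market participants buy low and sell high.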
Data-driven Decision Support in Custom Manufacturing Planning
In design for manufacturing, the choice of the best processing activities and of the production schedule plays an important role. The aim of this research is to create an automated process estimation system based on the processing plans of previously manufactured artifacts, which supports scheduling software in a custom manufacturing environment. We present a method with corresponding similarity metrics and evaluate its performance on a set of real-life manufacturing plans and design data.
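The abstract does not define its similarity metrics; one simple candidate for comparing processing plans is Jaccard similarity over their sets of operations. The metric and the plans below are hypothetical illustrations, not the paper's method:

```python
def jaccard(plan_a: set, plan_b: set) -> float:
    """Jaccard similarity of two processing plans, each modeled as the set
    of operations it contains. 1.0 = identical operation sets, 0.0 = disjoint."""
    if not plan_a and not plan_b:
        return 1.0
    return len(plan_a & plan_b) / len(plan_a | plan_b)

# Hypothetical plans: a previously manufactured part vs. a new custom order.
old_plan = {"milling", "drilling", "deburring", "anodizing"}
new_part = {"milling", "drilling", "tapping"}
print(jaccard(old_plan, new_part))  # 0.4
```

A retrieval step would rank archived plans by such a score and reuse the timing data of the closest matches as the estimate handed to the scheduler.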
Exploring consistent ratios in morphometry of the proximal tibia: insights for knee arthroplasty
Introduction: The current study, which delves into proximal tibia morphometric parameters in a Greek sample, not only analyzes whether specific linear distance ratios are consistent but also paves the way for a potential novel metric system for knee arthroplasty imaging studies using constant ratios. These findings could have significant implications for future, larger studies and for clinical practice. Methods: A total of 38 dried tibiae were evaluated by two independent investigators. The following distances were measured with a digital Vernier sliding caliper: (1) the mediolateral distance of the proximal surface (A); (2) the anteroposterior distance of the proximal surface (B); (3) the longitudinal length of the bone (C); (4) the line connecting the anterior margin of the proximal surface with the highest peak of the tibial tuberosity (D); (5) the depth of the proximal margin of the medial articular facet (medial plateau) (E); and (6) the depth of the proximal margin of the lateral articular facet (lateral plateau) (F). Results: The mean A, B, C, D, E, and F distances were 71.3 mm, 47.4 mm, 340.2 mm, 37.1 mm, 42 mm, and 35.9 mm, respectively. Reliability analysis for each observer across all measurements revealed an intraclass correlation (ICC) score of 0.975 (observer 1) and 0.971 (observer 2). The ratio A/B was 1.5, A/C was a constant 0.2, and D/C was 0.1. The ratio E/F was 1.2. The six measurements (A-F) showed excellent inter-observer reliability (all ICC values > 0.990). Conclusions: The study established constant ratios of the studied linear distances around the proximal tibia. Considering these ratios, asymmetrical tibial components in knee arthroplasty seem to replicate the native anatomy more closely. Furthermore, the distance from the anterior margin of the proximal surface to the tibial tuberosity peak, constituting one-tenth of the longitudinal length of the tibia, shows promise as a metric system for imaging studies, especially in assessing lesions around tibial components.
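The reported ratios follow directly from the mean distances, as a quick arithmetic check shows (values taken from the abstract, rounded to one decimal place as in the study):

```python
# Mean distances (mm) reported in the study.
A, B, C, D, E, F = 71.3, 47.4, 340.2, 37.1, 42.0, 35.9

ratios = {
    "A/B": round(A / B, 1),  # mediolateral / anteroposterior: 1.5
    "A/C": round(A / C, 1),  # mediolateral / longitudinal length: 0.2
    "D/C": round(D / C, 1),  # tuberosity distance / longitudinal length: 0.1
    "E/F": round(E / F, 1),  # medial / lateral plateau depth: 1.2
}
print(ratios)  # {'A/B': 1.5, 'A/C': 0.2, 'D/C': 0.1, 'E/F': 1.2}
```

Each computed ratio matches the value quoted in the Results, including the one-tenth D/C relationship proposed as an imaging landmark.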
Additive manufacturing and supply chain configuration: Modelling and performance evaluation
Purpose: The aim of the study is to compare the performance of different supply chain configurations adopting additive manufacturing. Five input factors were varied to test the response of the supply chain to different starting conditions. To evaluate supply chain performance, a set of key performance indicators was identified covering both manufacturing and logistics processes. Design/methodology/approach: A discrete event simulation model was developed to reproduce the behavior of the players according to their role in the supply chain. Different supply chain configurations were modelled to assess the performance of each solution combined with different input factors. Many scenarios were tested with the aim of identifying suitable applications of the additive technology. Findings: In general, the decentralized configuration has better logistic performance than the centralized supply chain: it is more flexible, suitable for high service levels, and less affected by demand variability. However, when the distances among players are very short and the average demand is low, the benefits of a decentralized configuration are very limited. Concerning the performance of the production phase, the centralized structure provides better capacity utilization, exploiting the potential of a high-cost machine with a larger build chamber volume and higher speed. Practical implications: The outcomes allow useful guidelines to be derived that could help practitioners identify suitable applications of the additive technology. Originality/value: First, the model provides a quantitative evaluation. Moreover, the study analyzes the performance of the additive technology combined with different supply chain configurations. This is a strong point, since it is well known that emerging manufacturing technologies can affect the structure and performance of the whole supply chain.
Effective Deep Learning Models for the Semantic Segmentation of 3D Human MRI Kidney Images
Recent studies indicate that millions of individuals suffer from renal diseases, with renal carcinoma, a type of kidney cancer, emerging as both a chronic illness and a significant cause of mortality. Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) have become essential tools for diagnosing and assessing kidney disorders. However, accurate analysis of these medical images is critical for detecting and evaluating tumor severity. This study introduces an integrated hybrid framework that combines three complementary deep learning models for kidney tumor segmentation from MRI images. The proposed framework fuses a customized U-Net and Mask R-CNN using a weighted scheme to achieve semantic and instance-level segmentation. The fused outputs are further refined through edge detection using Stochastic Feature Mapping Neural Networks (SFMNN), while volumetric consistency is ensured through Improved Mini-Batch K-Means (IMBKM) clustering integrated with an Encoder-Decoder Convolutional Neural Network (EDCNN). The outputs of these three stages are combined through a weighted fusion mechanism, with optimal weights determined empirically. Experiments on MRI scans from the TCGA-KIRC dataset demonstrate that the proposed hybrid framework significantly outperforms standalone models, achieving a Dice Score of 92.5%, an IoU of 87.8%, a Precision of 93.1%, a Recall of 90.8%, and a Hausdorff Distance of 2.8 mm. These findings validate that the weighted integration of complementary architectures effectively overcomes key limitations in kidney tumor segmentation, leading to improved diagnostic accuracy and robustness in medical image analysis.
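The Dice score and IoU reported above are standard overlap measures between a predicted segmentation mask and the ground truth. A minimal sketch on toy masks (illustration only, not the paper's data):

```python
def dice_iou(pred: set, truth: set):
    """Dice coefficient and IoU between two segmentation masks, each
    represented as the set of voxel indices labeled as tumor."""
    inter = len(pred & truth)
    dice = 2 * inter / (len(pred) + len(truth))  # 2|A∩B| / (|A|+|B|)
    iou = inter / len(pred | truth)              # |A∩B| / |A∪B|
    return dice, iou

# Toy masks: the prediction misses voxel 5 and adds voxel 1.
pred = {1, 2, 3, 4}
truth = {2, 3, 4, 5}
print(dice_iou(pred, truth))  # (0.75, 0.6)
```

Dice is always at least as large as IoU for the same pair of masks, which is why the paper's Dice of 92.5% sits above its IoU of 87.8%.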