118 results for "Soylu, Ahmet"
Human activity recognition using wearable sensors, discriminant analysis, and long short-term memory-based neural structured learning
Healthcare based on body sensor data has attracted wide research attention because of its practical applications, such as smart health care systems. For instance, a smart wearable sensor-based behavior recognition system can observe elderly people in a smart eldercare environment to improve their lifestyle and warn them about unforeseen events such as falls or other health risks, prolonging their independent life. Although there are many ways of using different sensors to observe people's behavior, wearable sensors mostly provide reliable data for monitoring an individual's functionality and lifestyle. In this paper, we propose a body sensor-based activity modeling and recognition system using time-sequential information-based deep Neural Structured Learning (NSL), a promising deep learning algorithm. First, we obtain data from multiple wearable sensors while the subjects conduct several daily activities. Once the data is collected, the time-sequential information goes through statistical feature processing. Kernel-based discriminant analysis (KDA) is then applied to achieve better clustering of the features from different activity classes by minimizing intra-class scatter while maximizing inter-class scatter of the samples. The robust time-sequential features are then fed to Neural Structured Learning (NSL) based on Long Short-Term Memory (LSTM) for activity modeling. The proposed approach achieved a recall rate of around 99% on a public dataset. It is also compared to conventional machine learning methods such as the Deep Belief Network (DBN), Convolutional Neural Network (CNN), and Recurrent Neural Network (RNN), which yielded a maximum recall rate of 94%. Furthermore, a fast and efficient explainable Artificial Intelligence (XAI) algorithm, Local Interpretable Model-Agnostic Explanations (LIME), is used to explain and check the machine learning decisions. The robust activity recognition system can be adopted for understanding people's behavior in their daily life in different environments such as homes, clinics, and offices.
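To make the described pipeline concrete, below is a minimal Python sketch of its shape, using ordinary linear discriminant analysis and a plain Keras LSTM as stand-ins for the paper's kernel-based discriminant analysis and NSL; all data, dimensions, and class counts are placeholders, not the paper's configuration.

```python
# Hypothetical sketch: statistical features per sensor window ->
# discriminant analysis -> LSTM classifier. LDA and a vanilla LSTM are
# stand-ins for the paper's KDA and Neural Structured Learning.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))      # placeholder statistical features per window
y = rng.integers(0, 6, size=1000)    # six hypothetical activity classes

# Discriminant analysis: project features to separate activity classes.
lda = LinearDiscriminantAnalysis(n_components=5)
X_lda = lda.fit_transform(X, y)

# Group the projected features into short sequences for the LSTM.
seq_len = 10
n_seq = len(X_lda) // seq_len
X_seq = X_lda[: n_seq * seq_len].reshape(n_seq, seq_len, -1)
y_seq = y[seq_len - 1 : n_seq * seq_len : seq_len]  # label = last window's class

model = models.Sequential([
    layers.Input(shape=(seq_len, X_lda.shape[1])),
    layers.LSTM(64),
    layers.Dense(6, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_seq, y_seq, epochs=3, verbose=0)
```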
Vision transformer and explainable transfer learning models for auto detection of kidney cyst, stone and tumor from CT-radiography
Renal failure, a public health concern, and the global scarcity of nephrologists have necessitated the development of AI-based systems to auto-diagnose kidney diseases. This research addresses three major categories of renal disease: kidney stones, cysts, and tumors. A total of 12,446 CT whole-abdomen and urogram images were gathered and annotated in order to construct an AI-based kidney disease diagnostic system and to contribute to the AI community's research scope, e.g., modeling a digital twin of renal functions. The collected images were subjected to exploratory data analysis, which revealed that the images from all classes had the same type of mean color distribution. Six machine learning models were then built: three based on state-of-the-art Vision transformer variants (EANet, CCT, and Swin transformer), and three based on the well-known deep learning models ResNet, VGG16, and Inception v3, adjusted in their last layers. While the VGG16 and CCT models performed admirably, the Swin transformer outperformed all of them in accuracy, reaching 99.30 percent. A comparison of F1 score, precision, and recall shows that the Swin transformer outperforms all other models and is the quickest to train. The study also opened the black box of the VGG16, ResNet50, and Inception models, demonstrating that VGG16 is superior to ResNet50 and Inception v3 in attending to the relevant anatomical abnormalities. We believe that the superior accuracy of our Swin transformer-based and VGG16-based models can be useful in diagnosing kidney tumors, cysts, and stones.
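A hedged illustration of the "pretrained transformer with adjusted last layers" setup the abstract describes; the timm model name, the four-class head, and the freezing strategy are assumptions for illustration, not the authors' exact configuration.

```python
# Illustrative transfer learning: take a pretrained Swin transformer and
# replace/fine-tune only its classification head. The four-class layout
# (e.g., normal/cyst/stone/tumor) is assumed for this sketch.
import timm
import torch

model = timm.create_model("swin_tiny_patch4_window7_224",
                          pretrained=True, num_classes=4)

# Freeze the backbone; train only the new head ("adjust the last layers").
for name, param in model.named_parameters():
    if "head" not in name:
        param.requires_grad = False

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

x = torch.randn(2, 3, 224, 224)          # stand-in for a CT image batch
loss = criterion(model(x), torch.tensor([0, 2]))
loss.backward()
optimizer.step()
```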
Building Semantic Knowledge Graphs from (Semi-)Structured Data: A Review
Knowledge graphs have, for the past decade, been a hot topic both in public and private domains, typically used for large-scale integration and analysis of data using graph-based data models. One of the central concepts in this area is the Semantic Web, with the vision of providing a well-defined meaning to information and services on the Web through a set of standards. Particularly, linked data and ontologies have been quite essential for data sharing, discovery, integration, and reuse. In this paper, we provide a systematic literature review on knowledge graph creation from structured and semi-structured data sources using Semantic Web technologies. The review takes into account four prominent publication venues, namely, Extended Semantic Web Conference, International Semantic Web Conference, Journal of Web Semantics, and Semantic Web Journal. The review highlights the tools, methods, types of data sources, ontologies, and publication methods, together with the challenges, limitations, and lessons learned in the knowledge graph creation processes.
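As a concrete (hypothetical) example of the kind of mapping such pipelines perform, the sketch below turns rows of structured data into RDF triples with rdflib; the namespace, fields, and row contents are invented for illustration.

```python
# Minimal structured-data-to-RDF mapping with rdflib: each row becomes a
# typed resource with literal properties, serialized as Turtle.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import FOAF

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

rows = [{"id": "p1", "name": "Ada Lovelace", "affiliation": "Analytical Engine Ltd"}]
for row in rows:
    person = EX[row["id"]]
    g.add((person, RDF.type, FOAF.Person))
    g.add((person, FOAF.name, Literal(row["name"])))
    g.add((person, EX.affiliation, Literal(row["affiliation"])))

print(g.serialize(format="turtle"))
```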
Projecting Türkiye’s CO2 Emissions Future: Multivariate Forecast of Energy–Economy–Environment Interactions and Anthropogenic Drivers
Global warming has become a top priority on the international environmental policy agenda. The recent rise in CO2 emissions observed in Türkiye has further emphasized the country’s critical role in addressing climate change. This study aims to estimate Türkiye’s CO2 emissions through 2030 and identify the key socioeconomic and environmental factors driving these emissions, using multiple linear regression (MLR) and time series analysis methods. Six primary variables are examined: population, gross domestic product (GDP), CO2 intensity, per capita energy consumption, total greenhouse gas (GHG) emissions, and forest area. This study introduces a new multivariate forecasting framework that integrates time series projections with multiple linear regression and elasticity-based sensitivity analysis, providing novel insight into the relative influence of key emission drivers compared to prior research. The results suggest that, if current policy trends persist, Türkiye’s CO2 emissions will increase substantially by 2030. Variables such as GHG emissions, energy consumption, and population growth are found to have an increasing effect on emissions, while the limited expansion of forest areas is insufficient to offset this trend. In contrast, the negative correlation between GDP and CO2 emissions suggests that economic growth can occur in alignment with environmental sustainability. The model’s validity is supported by a high R² value (0.99) and low error rates. The findings indicate that Türkiye must reassess its current strategies and strengthen policies targeting renewable energy, energy efficiency, and carbon sinks to achieve its climate goals. The proposed framework provides a transparent basis for climate planning and policy prioritization in Türkiye.
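A minimal sketch of the regression-plus-elasticity core of such a framework, on synthetic data: the six predictor names follow the abstract, but the numbers, model settings, and the simplified elasticity calculation are illustrative assumptions.

```python
# Multiple linear regression over the abstract's six predictors, plus a
# simple elasticity-style sensitivity check (all data synthetic).
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
cols = ["population", "gdp", "co2_intensity",
        "energy_per_capita", "total_ghg", "forest_area"]
df = pd.DataFrame(rng.uniform(1.0, 10.0, size=(30, 6)), columns=cols)  # 30 fake years
true_w = rng.uniform(0.5, 2.0, size=6)
y = df @ true_w + rng.normal(scale=0.1, size=30)       # synthetic CO2 series

mlr = LinearRegression().fit(df, y)
print(dict(zip(cols, mlr.coef_.round(3))))
print("R^2:", round(mlr.score(df, y), 3))

# Elasticity-style sensitivity: % change in prediction per 1% change in a
# predictor, evaluated at the mean (a simplified reading of the method).
x0 = df.mean().to_frame().T
base = mlr.predict(x0)[0]
for c in cols:
    x1 = x0.copy()
    x1[c] *= 1.01
    print(c, round((mlr.predict(x1)[0] - base) / base * 100, 3), "% per 1%")
```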
Towards a Sustainable Workforce in Big Data Analytics: Skill Requirements Analysis from Online Job Postings Using Neural Topic Modeling
Big data analytics has become a cornerstone of modern industries, driving advancements in business intelligence, competitive intelligence, and data-driven decision-making. This study applies Neural Topic Modeling (NTM) using the BERTopic framework and N-gram-based textual content analysis to examine job postings related to big data analytics in real-world contexts. A structured analytical process was conducted to derive meaningful insights into workforce trends and skill demands in the big data analytics domain. First, expertise roles and tasks were identified by analyzing job titles and responsibilities. Next, key competencies were categorized into analytical, technical, developer, and soft skills and mapped to corresponding roles. Workforce characteristics such as job types, education levels, and experience requirements were examined to understand hiring patterns. In addition, essential tasks, tools, and frameworks in big data analytics were identified, providing insights into critical technical proficiencies. The findings show that big data analytics requires expertise in data engineering, machine learning, cloud computing, and AI-driven automation. They also emphasize the importance of continuous learning and skill development to sustain a future-ready workforce. By connecting academia and industry, this study provides valuable implications for educators, policymakers, and corporate leaders seeking to strengthen workforce sustainability in the era of big data analytics.
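The neural topic modeling step can be illustrated directly with the BERTopic library the study names; the toy "postings" below are invented stand-ins for the study's corpus, and the defaults used here are not the study's settings.

```python
# Minimal BERTopic run over toy job-posting snippets: embed, cluster,
# and inspect the discovered topics.
from bertopic import BERTopic

postings = [
    "Data engineer needed: build Spark and Kafka pipelines on AWS",
    "Machine learning engineer: deploy TensorFlow models with Kubernetes",
    "Data analyst: SQL, Tableau dashboards, stakeholder reporting",
    "Big data developer: Hadoop, Hive, and ETL workflow design",
    "Cloud architect: design Azure data lakes and governance",
    "Data scientist: Python, statistics, A/B testing, communication",
    "MLOps engineer: CI/CD for models, monitoring, Docker",
    "BI developer: Power BI, data warehousing, dimensional modeling",
] * 15   # BERTopic needs a reasonably sized corpus to form topics

topic_model = BERTopic(min_topic_size=10)
topics, probs = topic_model.fit_transform(postings)
print(topic_model.get_topic_info().head())
```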
Smart Data Placement Using Storage-as-a-Service Model for Big Data Pipelines
Big data pipelines are developed to process data characterized by one or more of the three big data features, commonly known as the three Vs (volume, velocity, and variety), through a series of steps (e.g., extract, transform, and move), laying the groundwork for the use of advanced analytics and ML/AI techniques. The computing continuum (i.e., cloud/fog/edge) allows access to a virtually infinite amount of resources, where data pipelines can be executed at scale; however, implementing data pipelines on the continuum is a complex task that needs to take computing resources, data transmission channels, triggers, data transfer methods, integration of message queues, etc., into account. The task becomes even more challenging when data storage is considered as part of the pipeline. Local storage is expensive, hard to maintain, and comes with several challenges (e.g., data availability, data security, and backup). Using cloud storage, i.e., storage-as-a-service (StaaS), instead of local storage has the potential to provide more flexibility in terms of scalability, fault tolerance, and availability. In this article, we propose a generic approach to integrate StaaS with data pipelines, i.e., computation runs on an on-premise server or on a specific cloud while storage is integrated through StaaS, and develop a ranking method for available storage options based on five key parameters: cost, proximity, network performance, server-side encryption, and user weights/preferences. The evaluation carried out demonstrates the effectiveness of the proposed approach in terms of data transfer performance, the utility of the individual parameters, and the feasibility of dynamically selecting a storage option based on four primary user scenarios.
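A toy version of the ranking idea: score candidate storage options on the parameters the abstract names, combined with user-supplied weights (the fifth parameter). The scale, normalization, and example numbers are invented; the article's actual method may combine these differently.

```python
# Weighted scoring of storage options over cost, proximity, network
# performance, and server-side encryption; WEIGHTS encodes user preferences.
OPTIONS = {
    # cost ($/GB-month), proximity (ms RTT), network (MB/s), sse (0/1)
    "bucket_region_a": {"cost": 0.023, "proximity": 12, "network": 480, "sse": 1},
    "bucket_region_b": {"cost": 0.019, "proximity": 85, "network": 310, "sse": 1},
    "bucket_region_c": {"cost": 0.026, "proximity": 40, "network": 520, "sse": 0},
}
WEIGHTS = {"cost": 0.35, "proximity": 0.25, "network": 0.25, "sse": 0.15}

def score(opt):
    # Lower is better for cost and proximity, higher for network and sse,
    # so invert the "lower is better" metrics before weighting.
    c = max(o["cost"] for o in OPTIONS.values())
    p = max(o["proximity"] for o in OPTIONS.values())
    n = max(o["network"] for o in OPTIONS.values())
    return (WEIGHTS["cost"] * (1 - opt["cost"] / c)
            + WEIGHTS["proximity"] * (1 - opt["proximity"] / p)
            + WEIGHTS["network"] * opt["network"] / n
            + WEIGHTS["sse"] * opt["sse"])

for name, opt in sorted(OPTIONS.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(opt):.3f}")
```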
Complete Heart Block Following Anaphylaxis: A Case Report and Literature Review
Allergic reactions can range from mild symptoms to life-threatening anaphylaxis, and they may involve significant cardiovascular consequences. Anaphylaxis can cause disturbances in the conduction system, including complete heart block. This case report describes a patient who developed complete heart block following the administration of cefixime, a third-generation cephalosporin. The condition required the insertion of a pacemaker during the patient’s follow-up. This report underscores the importance of monitoring and managing cardiovascular effects in patients experiencing severe allergic reactions, particularly when using certain antibiotics like cefixime.
HPV infection in urology practice
Human papillomavirus (HPV) is the most common sexually transmitted pathogen worldwide. While HPV is responsible for low-grade benign lesions in the anogenital area such as condyloma acuminatum, it is also strongly associated with cervical, anal, vulvar/vaginal, and penile carcinomas. In addition to being an oncogenic virus, HPV causes a substantial socioeconomic burden due to the recurrence of benign lesions, the lack of a definitive treatment option that provides a complete cure, and the high cost of treatment. The global incidence of HPV infection is rising, especially among young and sexually active individuals; as a result, these infections have in recent years become increasingly conspicuous in urology practice, both as incidental findings and as primary complaints. The aim of this review is to evaluate the pathogenesis, diagnosis, and treatment modalities of HPV infections in light of the current literature, from the urologist’s perspective.
Cost modelling and optimisation for cloud: a graph-based approach
Cloud computing has become popular among individuals and enterprises due to its convenience, scalability, and flexibility. However, a major concern for many cloud service users is the rising cost of cloud resources. Since cloud computing uses a pay-per-use model, costs can add up quickly, and unexpected expenses can arise from a lack of visibility and control. The cost structure becomes even more complicated when working with multi-cloud or hybrid environments. Businesses may spend much of their IT budget on cloud computing, and any savings can improve their competitiveness and financial stability. Hence, efficient cloud cost management is crucial. To address this difficulty, new approaches and tools are being developed to provide greater oversight and control over cloud costs. In this paper, we propose a graph-based approach for modelling cost elements and cloud resources, and a potential way to solve the resulting constraint problem of cost optimisation. In this context, we primarily consider utilisation, cost, performance, and availability. The proposed approach is evaluated on three different user scenarios, and the results indicate that it could be effective in cost modelling, cost optimisation, and scalability. This approach will eventually help organisations make informed decisions about cloud resource placement and manage the costs of software applications and data workflows deployed in single, hybrid, or multi-cloud environments.
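One possible way to picture the graph-based model: resources as nodes with cost attributes, transfer costs as edges, and a placement chosen by minimizing total cost. Everything below (node names, costs, and the brute-force search standing in for a proper constraint solver) is an invented sketch, not the paper's formulation.

```python
# Toy graph model of cloud resources and cost elements, with a brute-force
# search over pipeline-step placements that minimizes compute + transfer cost.
import itertools
import networkx as nx

g = nx.Graph()
# Nodes: candidate compute locations with hourly cost attributes.
g.add_node("on_prem", compute_cost=0.00)
g.add_node("cloud_a", compute_cost=0.09)
g.add_node("cloud_b", compute_cost=0.07)
# Edges: data transfer cost between locations ($/GB).
g.add_edge("on_prem", "cloud_a", transfer_cost=0.08)
g.add_edge("on_prem", "cloud_b", transfer_cost=0.12)
g.add_edge("cloud_a", "cloud_b", transfer_cost=0.02)

steps = ["extract", "transform", "train"]   # a tiny three-step pipeline
data_gb, hours = 50, 2

def plan_cost(placement):
    cost = sum(g.nodes[loc]["compute_cost"] * hours for loc in placement)
    for a, b in zip(placement, placement[1:]):   # moving data between steps
        if a != b:
            cost += g.edges[a, b]["transfer_cost"] * data_gb
    return cost

best = min(itertools.product(g.nodes, repeat=len(steps)), key=plan_cost)
print(dict(zip(steps, best)), "->", round(plan_cost(best), 2))
```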
Learning from Imbalanced Data: Integration of Advanced Resampling Techniques and Machine Learning Models for Enhanced Cancer Diagnosis and Prognosis
Background/Objectives: This study aims to evaluate the performance of various classification algorithms and resampling methods across multiple diagnostic and prognostic cancer datasets, addressing the challenges of class imbalance. Methods: A total of five datasets were analyzed, including three diagnostic datasets (Wisconsin Breast Cancer Database, Cancer Prediction Dataset, Lung Cancer Detection Dataset) and two prognostic datasets (Seer Breast Cancer Dataset, Differentiated Thyroid Cancer Recurrence Dataset). Nineteen resampling methods from three categories were employed, and ten classifiers from four distinct categories were utilized for comparison. Results: The results demonstrated that hybrid sampling methods, particularly SMOTEENN, achieved the highest mean performance at 98.19%, followed by IHT (97.20%) and RENN (96.48%). In terms of classifiers, Random Forest showed the best performance with a mean value of 94.69%, with Balanced Random Forest and XGBoost following closely. The baseline method (no resampling) yielded a significantly lower performance of 91.33%, highlighting the effectiveness of resampling techniques in improving model outcomes. Conclusions: This research underscores the importance of resampling methods in enhancing classification performance on imbalanced datasets, providing valuable insights for researchers and healthcare professionals. The findings serve as a foundation for future studies aimed at integrating machine learning techniques in cancer diagnosis and prognosis, with recommendations for further research on hybrid models and clinical applications.
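The best-performing combination reported (SMOTEENN hybrid resampling followed by a Random Forest) can be reproduced in miniature with imbalanced-learn and scikit-learn; the synthetic dataset below is a stand-in for the study's five cancer datasets.

```python
# SMOTEENN oversamples the minority class (SMOTE) then cleans noisy samples
# (ENN); a Random Forest is trained on the resampled data.
from collections import Counter
from imblearn.combine import SMOTEENN
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_res, y_res = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)
print("before:", Counter(y_tr), "after:", Counter(y_res))

clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)
print(classification_report(y_te, clf.predict(X_te)))
```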