111 result(s) for "Humayun, Mamoona"
Information security handbook
"This handbook provides a comprehensive collection of knowledge for emerging multidisciplinary research areas such as cybersecurity, IoT, Blockchain, Machine Learning, Data Science, and AI. This book brings together in one resource information security across multiple domains. Information Security Handbook addresses the knowledge for emerging multidisciplinary research. It explores basic and high-level concepts and serves as a manual for industry, while also helping beginners to understand both basic and advanced aspects of security-related issues. The handbook explores security and privacy issues through the IoT ecosystem and implications to the real world, and at the same time explains the concepts of IoT-related technologies, trends, and future directions. University graduates and postgraduates, as well as research scholars, developers, and end-users, will find this handbook very useful" -- Provided by publisher.
Industrial Revolution 5.0 and the Role of Cutting Edge Technologies
IR 4.0 emphasizes the interconnection of machines and systems to achieve optimal performance and productivity gains. IR 5.0 takes this a step further by fine-tuning the human-machine connection: the ultra-fast accuracy of automated technology is combined with human intelligence and creativity. The driving force behind IR 5.0 is customer demand for customization and personalization, which necessitates greater human involvement in the production process. As IR 5.0 evolves, breakthroughs can be expected across various industries. However, merely automating jobs or digitizing processes will not be enough; the finest and most successful businesses will be those that can combine the dual powers of technology and human ingenuity. IR 5.0 focuses on the use of modern cutting-edge technologies (CET), namely AI, IoT, big data, cloud computing, blockchain, digital twins, edge computing, collaborative robots, and 6G, while leveraging human creativity and intelligence. Wherever possible, IR 5.0 will change industrial processes worldwide by removing mundane, dirty, and repetitive activities from human workers. Intelligent robots and systems will have unparalleled access to industrial supply networks and production floors. However, to understand and leverage the benefits of IR 5.0, the role of modern CET within it must first be understood. To fill this gap, this article examines the prospects, uses, supporting technologies, opportunities, and issues of IR 5.0.
Computer-Vision- and Edge-Enabled Real-Time Assistance Framework for Visually Impaired Persons with LPWAN Emergency Signaling
In recent decades, various assistive technologies have emerged to support visually impaired individuals. However, there remains a gap in terms of solutions that provide efficient, universal, and real-time capabilities by combining robust object detection, robust communication, continuous data processing, and emergency signaling in dynamic environments. Many existing systems trade off range, latency, or reliability when applied in changing outdoor or indoor scenarios. In this study, we propose a comprehensive framework specifically tailored for visually impaired people, integrating computer vision, edge computing, and a dual-channel communication architecture that includes low-power wide-area network (LPWAN) technology. The system utilizes the YOLOv5 deep-learning model for real-time detection of obstacles, paths, and assistive tools (such as the white cane) with high performance: precision 0.988, recall 0.969, and mAP 0.985. Edge-computing devices are introduced to offload computational load from central servers, enabling fast local processing and decision-making. The communications subsystem uses Wi-Fi as the primary link, while a LoRaWAN channel acts as a fail-safe emergency alert network. An IoT-based panic button is incorporated to transmit immediate location-tagged alerts, enabling rapid response by authorities or caregivers. The experimental results demonstrate the system's low latency and reliable operation under varied real-world conditions, indicating significant potential to improve independent mobility and quality of life for visually impaired people. The proposed solution offers a cost-effective and scalable architecture suitable for deployment in complex and challenging environments where real-time assistance is essential.
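The dual-channel design above (Wi-Fi primary, LoRaWAN fail-safe) can be sketched as a simple failover routine. This is a hypothetical illustration of the idea, not the paper's code; the `Channel` and `send_alert` names are invented for the sketch.

```python
from dataclasses import dataclass

# Hypothetical sketch of the dual-channel alert logic: Wi-Fi is the primary
# link; LoRaWAN is the fail-safe used when the primary is down.
@dataclass
class Channel:
    name: str
    up: bool

    def send(self, payload: dict) -> bool:
        # In a real system this would transmit over the radio link;
        # here we only report whether the link is available.
        return self.up

def send_alert(payload: dict, primary: Channel, backup: Channel) -> str:
    """Try the primary (Wi-Fi) link first; fall back to LoRaWAN."""
    if primary.send(payload):
        return primary.name
    if backup.send(payload):
        return backup.name
    return "undelivered"

alert = {"type": "panic", "lat": 24.45, "lon": 54.38}
used = send_alert(alert, Channel("wifi", up=False), Channel("lorawan", up=True))
```

With Wi-Fi down, the alert is carried over the LoRaWAN channel, matching the fail-safe role described in the abstract.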
Deep Learning-Based Prediction of Diabetic Retinopathy Using CLAHE and ESRGAN for Enhancement
Vision loss can be avoided if diabetic retinopathy (DR) is diagnosed and treated promptly. The five main DR stages are none, mild, moderate, proliferative, and severe. In this study, a deep learning (DL) model is presented that diagnoses all five stages of DR with more accuracy than previous methods. The suggested method considers two scenarios: case 1 with image enhancement using a contrast limited adaptive histogram equalization (CLAHE) filtering algorithm in conjunction with an enhanced super-resolution generative adversarial network (ESRGAN), and case 2 without image enhancement. Augmentation techniques were then performed to generate a balanced dataset utilizing the same parameters for both cases. Using Inception-V3 applied to the Asia Pacific Tele-Ophthalmology Society (APTOS) dataset, the developed model achieved an accuracy of 98.7% for case 1 and 80.87% for case 2, which is higher than that of existing methods for detecting the five stages of DR. It was demonstrated that using CLAHE and ESRGAN improves a model's performance and learning ability.
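The CLAHE step above rests on clipping the image histogram and redistributing the excess before equalization. The sketch below illustrates only that clip-and-redistribute principle on a whole image; real CLAHE (e.g. OpenCV's `createCLAHE`) operates on local tiles with interpolation, so this global, assumption-laden version is illustrative rather than the paper's pipeline.

```python
import numpy as np

def clipped_hist_eq(img: np.ndarray, clip: int = 40) -> np.ndarray:
    """Global histogram equalization with a CLAHE-style clip limit."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    excess = np.maximum(hist - clip, 0).sum()       # mass above the clip limit
    hist = np.minimum(hist, clip) + excess // 256   # redistribute uniformly
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)  # normalize to [0, 1]
    lut = (cdf * 255).astype(np.uint8)              # intensity lookup table
    return lut[img]

# Toy 8x256 grayscale gradient standing in for a fundus image.
img = np.tile(np.arange(256, dtype=np.uint8), (8, 1))
out = clipped_hist_eq(img)
```

Limiting each histogram bin to `clip` counts is what keeps CLAHE from over-amplifying noise in nearly uniform regions of the retina image.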
Role of Emerging IoT Big Data and Cloud Computing for Real Time Application
The Internet of Things (IoT), cloud computing (CC), and big data (BGD) are three different approaches that evolved independently of each other; over time, however, they have become increasingly interconnected. The convergence of IoT, CC, and BGD provides new opportunities in various real-time applications, including telecommunication, healthcare, business, education, science, and engineering. Together, these approaches face various challenges during data gathering, processing, and management. This paper pinpoints the emerging trends in IoT, CC, and BGD; the convergence of these approaches and its impact on various real-time applications; the benefits and challenges associated with them; current industry trends; and future research directions, with a special focus on the healthcare domain. The paper also provides a conceptual framework that integrates IoT, CC, and BGD into an IoT-centric cloud infrastructure using BGD. Finally, the paper offers directions for researchers and practitioners on how to leverage the benefits of combining these approaches.
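The IoT-centric data path in such a framework can be pictured as devices streaming readings into a cloud-side store that the big-data layer then summarizes. The sketch below is purely hypothetical (the `CloudStore` class and sensor names are invented); it only illustrates the ingest-then-analyze flow, not any architecture from the paper.

```python
from collections import defaultdict
from statistics import mean

class CloudStore:
    """Toy cloud-side store: devices ingest readings, analytics summarize."""
    def __init__(self):
        self.readings = defaultdict(list)   # sensor_id -> list of values

    def ingest(self, sensor_id: str, value: float) -> None:
        self.readings[sensor_id].append(value)

    def summarize(self, sensor_id: str) -> float:
        # Stand-in for the big-data processing layer (here, a simple mean).
        return mean(self.readings[sensor_id])

store = CloudStore()
for v in (36.5, 36.7, 36.6):            # e.g. a healthcare wearable's readings
    store.ingest("thermo-1", v)
avg = store.summarize("thermo-1")
```

In the healthcare setting the abstract highlights, the same pattern scales out: many wearables ingest continuously, and the cloud layer runs the aggregate analytics.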
Melanoma Detection Using Deep Learning-Based Classifications
One of the most prevalent cancers worldwide is skin cancer, and it is becoming more common as the population ages. As a general rule, the earlier skin cancer can be diagnosed, the better. As a result of the success of deep learning (DL) algorithms in other industries, there has been a substantial increase in automated diagnosis systems in healthcare. This work proposes DL as a method for extracting a lesion zone with precision. First, the image quality is improved using an Enhanced Super-Resolution Generative Adversarial Network (ESRGAN). Then, Regions of Interest (ROI) are segmented from the full image. We employed data augmentation to rectify the class imbalance. The image is then analyzed with a convolutional neural network (CNN) and a modified version of ResNet-50 to classify skin lesions. This analysis utilized an unequal sample of seven kinds of skin cancer from the HAM10000 dataset. With an accuracy of 0.86, a precision of 0.84, a recall of 0.86, and an F-score of 0.86, the proposed CNN-based model outperformed the earlier study's results by a significant margin. The study culminates with an improved automated method for diagnosing skin cancer that benefits medical professionals and patients.
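The "data augmentation to rectify the class imbalance" step can be sketched as naive oversampling: minority-class examples are re-drawn until every class matches the majority count. This toy resampler over index arrays is an assumption-level illustration; the paper's actual augmentation would also transform the images (flips, rotations, etc.), not merely duplicate them.

```python
import numpy as np

def balance_indices(labels: np.ndarray, rng=np.random.default_rng(0)) -> np.ndarray:
    """Return sample indices oversampled so every class has equal count."""
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()                      # majority-class count
    picks = []
    for c in classes:
        idx = np.flatnonzero(labels == c)
        # Draw with replacement until the minority class reaches the target.
        picks.append(rng.choice(idx, size=target, replace=True))
    return np.concatenate(picks)

# Toy imbalanced labels standing in for three of HAM10000's seven classes.
labels = np.array([0] * 50 + [1] * 10 + [2] * 5)
balanced = balance_indices(labels)
```

After resampling, each class contributes the same number of training indices, so the CNN no longer sees the majority class disproportionately often.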
Detection of Skin Cancer Based on Skin Lesion Images Using Deep Learning
An increasing number of genetic and metabolic anomalies have been determined to lead to cancer, which is generally fatal. Cancerous cells may spread to any body part, where they can be life-threatening. Skin cancer is one of the most common types of cancer, and its frequency is increasing worldwide. The main subtypes of skin cancer are squamous and basal cell carcinomas, and melanoma, which is clinically aggressive and responsible for most deaths. Therefore, skin cancer screening is necessary. One of the best methods to accurately and swiftly identify skin cancer is deep learning (DL). In this research, a deep-learning convolutional neural network (CNN) was used to detect the two primary types of tumors, malignant and benign, using the ISIC2018 dataset. This dataset comprises 3533 skin lesions, including benign, malignant, nonmelanocytic, and melanocytic tumors. The images were first retouched and improved using ESRGAN, then augmented, normalized, and resized during the preprocessing step. Skin lesion images were classified using a CNN method based on an aggregate of results obtained over many runs. Then, multiple transfer-learning models, such as ResNet50, InceptionV3, and Inception-ResNet, were used for fine-tuning. In addition to experimenting with several models (the designed CNN, ResNet50, InceptionV3, and Inception-ResNet), this study's innovation and contribution are the use of ESRGAN as a preprocessing step. Our designed model showed results comparable to the pretrained models. Simulations using the ISIC 2018 skin lesion dataset showed that the suggested strategy was successful. An 83.2% accuracy rate was achieved by the CNN, in comparison to the ResNet50 (83.7%), InceptionV3 (85.8%), and Inception-ResNet (84%) models.
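The "normalized and resized" preprocessing mentioned above reduces to mapping each image to a fixed spatial size and scaling pixel values to a unit range. The sketch below uses a crude nearest-neighbour resize in pure NumPy for illustration; the paper presumably used a library resizer (e.g. OpenCV or PIL), so treat the exact mechanics as an assumption.

```python
import numpy as np

def preprocess(img: np.ndarray, size: int = 224) -> np.ndarray:
    """Nearest-neighbour resize to (size, size), then scale pixels to [0, 1]."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size      # source row for each output row
    cols = np.arange(size) * w // size      # source column for each output column
    resized = img[rows][:, cols]
    return resized.astype(np.float32) / 255.0

# Toy RGB "lesion photo" of arbitrary size.
img = np.random.default_rng(0).integers(0, 256, (300, 400, 3), dtype=np.uint8)
x = preprocess(img)
```

A fixed 224x224 input matches what ImageNet-pretrained backbones such as ResNet50 and InceptionV3 conventionally expect, which is why the resize precedes the transfer-learning step.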
Traffic Management: Multi-Scale Vehicle Detection in Varying Weather Conditions Using YOLOv4 and Spatial Pyramid Pooling Network
Detecting and counting on-road vehicles is a key task in intelligent transport management and surveillance systems. The applicability lies both in urban and highway traffic monitoring and control, particularly in difficult weather and traffic conditions. In the past, the task was performed with data acquired from sensors and conventional image-processing toolboxes. However, with the advent of emerging deep-learning-based smart computer vision systems, the task has become computationally efficient and reliable. The data acquired from road-mounted surveillance cameras can be used to train models that detect and track on-road vehicles for smart traffic analysis and for handling problems such as traffic congestion, particularly in harsh weather conditions with poor visibility caused by low illumination and blurring. Different vehicle detection algorithms addressing the same issue deal with only one or two specific conditions. In this research, we address detecting vehicles in a scene in multiple weather scenarios, including haze, dust and sandstorms, and snowy and rainy weather, in both day and nighttime. The proposed architecture uses CSPDarknet53 as the baseline architecture, modified with a spatial pyramid pooling (SPP-NET) layer and reduced batch normalization layers. We also augment the DAWN dataset with different techniques, including hue, saturation, exposure, brightness, darkness, blur, and noise. This not only increases the size of the dataset but also makes detection more challenging. The model obtained a mean average precision of 81% during training and detected the smallest vehicles present in the images.
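The mean-average-precision figure reported above is built on intersection-over-union (IoU) between predicted and ground-truth boxes: a detection counts as correct only when its IoU exceeds a threshold. The helper below is an illustrative IoU computation over `(x1, y1, x2, y2)` boxes, not the paper's code.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])   # intersection bottom-right
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

score = iou((0, 0, 10, 10), (5, 5, 15, 15))   # partially overlapping boxes
```

Small vehicles are the hard case precisely because a few pixels of localization error shifts a small box's IoU below the matching threshold, which is why detecting the smallest vehicles is worth reporting.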
Detection of COVID-19 Based on Chest X-rays Using Deep Learning
The coronavirus disease (COVID-19) is rapidly spreading around the world. Early diagnosis and isolation of COVID-19 patients has proven crucial in slowing the disease's spread. One of the best options for detecting COVID-19 reliably and easily is to use deep learning (DL) strategies. Two different DL approaches based on a pretrained neural network model (ResNet-50) for COVID-19 detection using chest X-ray (CXR) images are proposed in this study. Augmenting, enhancing, normalizing, and resizing CXR images to a fixed size are all part of the preprocessing stage. This research proposes a DL method for classifying CXR images based on an ensemble employing multiple runs of a modified version of the ResNet-50. The proposed system is evaluated against two publicly available benchmark datasets that are frequently used by several researchers: COVID-19 Image Data Collection (IDC) and CXR Images (Pneumonia). Based on the performance results obtained, the proposed system validates its dominance over existing methods such as VGG or DenseNet, with values exceeding 99.63% in many metrics, such as accuracy, precision, recall, F1-score, and area under the curve (AUC).
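The ensemble over "multiple runs of a modified version of the ResNet-50" can be sketched as averaging the per-run class probabilities and then taking the argmax per image. The three "runs" below are synthetic probability arrays, purely illustrative; the paper's exact combination rule is an assumption here.

```python
import numpy as np

def ensemble_predict(prob_runs):
    """Average per-run class probabilities, then take argmax per image."""
    mean = np.mean(prob_runs, axis=0)   # shape: (n_images, n_classes)
    return mean.argmax(axis=1)

# Synthetic softmax outputs from three training runs, two images, two classes.
runs = [
    np.array([[0.9, 0.1], [0.4, 0.6]]),
    np.array([[0.8, 0.2], [0.6, 0.4]]),
    np.array([[0.7, 0.3], [0.3, 0.7]]),
]
preds = ensemble_predict(runs)          # one class index per image
```

Averaging smooths out run-to-run variance from random initialization and augmentation, which is the usual motivation for ensembling repeated runs of the same network.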
Autonomous Traffic System for Emergency Vehicles
An emergency can occur at any time. To handle an emergency efficiently, an Emergency Vehicle (EV) requires seamless movement on the road to reach its destination within a limited time. This paper proposes an emergency vehicle management solution (EVMS) to determine an efficient vehicle-passing sequence that allows the EV to cross a junction without any delay. The proposed system passes the EV while minimally affecting the travel times of other vehicles at the junction. In the presence of an EV in the communication range, the proposed system prioritizes the EV by creating space for it in the lane adjacent to the shoulder lane. The shoulder lane is a lane that cyclists and motorcyclists use in normal situations; however, when an EV enters the communication range, traffic from the adjacent lane moves to the shoulder lane. As the number of vehicles on the road increases rapidly, passing the EV through the junction in the shortest possible time is crucial. The EVMS and its algorithms are presented in this study to find the optimal vehicle sequence that gives EVs the highest priority. The proposed solution uses cutting-edge technologies (IoT sensors, GPS, 5G, and cloud computing) to collect and pass EVs' information to the Roadside Units (RSU). The proposed solution was evaluated through mathematical modeling. The results show that the EVMS can reduce the travel times of EVs significantly without causing any performance degradation of normal vehicles.
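The core of a vehicle-passing sequence that "gives EVs the highest priority" can be sketched as a two-level priority queue: EVs sort ahead of all other vehicles, and ties break by arrival time. This toy ordering is a hypothetical illustration of the idea, not the EVMS algorithm from the paper.

```python
import heapq

def passing_sequence(vehicles):
    """vehicles: list of (vehicle_id, arrival_time, is_ev) tuples.

    Returns vehicle ids in crossing order: EVs first, then by arrival time.
    """
    # Priority key: 0 for EVs, 1 for normal vehicles; arrival time breaks ties.
    heap = [(0 if is_ev else 1, t, vid) for vid, t, is_ev in vehicles]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

queue = [("car1", 1.0, False), ("amb1", 3.0, True), ("car2", 2.0, False)]
order = passing_sequence(queue)   # the EV crosses first despite arriving last
```

A real EVMS would also model lane changes into the shoulder lane and junction signal timing; the queue above captures only the priority rule that drives the sequence.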