248 result(s) for "TensorFlow"
Deep learning with TensorFlow : take your machine learning knowledge to the next level with the power of TensorFlow 1.x
Deep learning is the step beyond machine learning, with more advanced implementations. Machine learning is no longer just for academics: it is becoming mainstream practice through wide adoption, and deep learning has taken the front seat. If you are a data scientist who wants to explore data abstraction layers, this book will be your guide. It shows how deep learning can be applied in the real world to complex raw data using TensorFlow 1.x. Throughout the book, you'll learn how to implement deep learning algorithms for machine learning systems and integrate them into your product offerings, including search, image recognition, and language processing. You'll also learn how to analyze and improve the performance of deep learning models by comparing algorithms against benchmarks, along with machine intelligence, to learn from the information and determine ideal behaviors within a specific context. After finishing the book, you will be familiar with machine learning techniques, in particular the use of TensorFlow for deep learning, and will be ready to apply your knowledge to research or commercial projects. -- Publisher description.
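To make the "implement deep learning algorithms" idea concrete, here is a minimal numpy sketch of the forward and backward pass of a one-layer logistic model, the kind of computation TensorFlow expresses as a graph and differentiates automatically. The data, learning rate, and step count are illustrative assumptions, not from the book.

```python
import numpy as np

# Hand-written training loop for a one-layer logistic model. TensorFlow
# builds the same computation as a graph and derives the gradients
# automatically; here both passes are spelled out by hand.

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                             # toy inputs
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)   # toy labels

w = np.zeros(3)
b = 0.0
lr = 0.5

def forward(w, b):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))                         # sigmoid activation
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    return p, loss

losses = []
for _ in range(100):
    p, loss = forward(w, b)
    losses.append(loss)
    grad_z = (p - y) / len(y)                            # d(loss)/d(logits)
    w -= lr * (X.T @ grad_z)                             # gradient descent step
    b -= lr * grad_z.sum()

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In TensorFlow the `grad_z` line and everything below it would be replaced by automatic differentiation and an optimizer step.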
Trends and Challenges in AIoT/IIoT/IoT Implementation
In the coming years, metaverse, digital twin, and autonomous vehicle applications will be the leading technologies for many complex applications hitherto inaccessible, such as health and life sciences, smart homes, smart agriculture, smart cities, smart cars and logistics, Industry 4.0, entertainment (video games), and social media, thanks to recent tremendous developments in process modeling, supercomputing, cloud data analytics (deep learning, etc.), communication networks, and AIoT/IIoT/IoT technologies. AIoT/IIoT/IoT is a crucial research field because it provides the essential data that fuel metaverse, digital twin, real-time Industry 4.0, and autonomous vehicle applications. However, the science of AIoT is inherently multidisciplinary, which makes it difficult for readers to understand its evolution and impact. Our main contribution in this article is to analyze and highlight the trends and challenges of the AIoT technology ecosystem, including core hardware (MCUs, MEMS/NEMS sensors, and the wireless access medium), core software (operating systems and the communication protocol stack), and middleware (deep learning on a microcontroller: TinyML). Two low-power AI technologies are emerging, TinyML and neuromorphic computing; as a case study, we examine a single AIoT/IIoT/IoT device implementation that uses TinyML for strawberry disease detection. Despite the very rapid progress of AIoT/IIoT/IoT technologies, several challenges remain to be overcome, such as safety, security, latency, interoperability, and the reliability of sensor data, all essential characteristics for meeting the requirements of metaverse, digital twin, autonomous vehicle, and Industry 4.0 applications.
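The core trick that lets TinyML models fit microcontroller flash and RAM is quantization. The sketch below illustrates symmetric per-tensor int8 post-training quantization in numpy; the tensor shape and scale convention are assumptions for illustration, not taken from the article or any specific TinyML toolchain.

```python
import numpy as np

# Symmetric int8 post-training quantization of a weight tensor: store
# 8-bit integers plus one float scale, for roughly a 4x size reduction.
# Minimal sketch of the idea only.

rng = np.random.default_rng(42)
w = rng.normal(scale=0.1, size=(128, 32)).astype(np.float32)  # fp32 weights

scale = np.abs(w).max() / 127.0                # one scale for the tensor
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_hat = q.astype(np.float32) * scale           # dequantized approximation

err = np.abs(w - w_hat).max()
print(f"max abs quantization error: {err:.5f} (scale={scale:.5f})")
```

The rounding error per weight is bounded by half the scale, which is why quantization preserves accuracy well when weight magnitudes are moderate.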
Investigations of diabetic retinopathy using deep learning techniques
Diabetes is a chronic metabolic disorder that frequently leads to diabetic retinopathy (DR), a major cause of preventable vision loss worldwide. Early DR detection remains challenging due to the subtle appearance of microaneurysms, haemorrhages, and exudates in retinal fundus images. To address this challenge, this study presents an automated deep learning framework for four-stage DR classification using three benchmark datasets: Messidor, APTOS, and EyePACS. Fundus images were preprocessed using normalization, contrast enhancement, and extensive class-balanced augmentation to improve lesion visibility. A hybrid feature extraction approach was developed by integrating handcrafted gray-level co-occurrence matrix (GLCM) texture descriptors with deep convolutional neural network (CNN) representations to enhance discrimination of fine vascular abnormalities. DenseNet201 and ResNet152 were selected for strong hierarchical feature learning, while MobileNet supported efficient edge deployment. The proposed approach demonstrated strong multi-dataset performance. On the Messidor dataset, the CNN model achieved 91% accuracy, 0.92 precision, 0.89 recall, and a 0.91 F1-score; DenseNet121 achieved 90% accuracy, 0.89 precision, 0.90 recall, and a 0.90 F1-score; and ResNet101 achieved 84% accuracy, 0.90 precision, 0.80 recall, and a 0.85 F1-score. On the APTOS dataset, DenseNet201 achieved 95% accuracy, 0.90 precision, 0.95 recall, and a 0.93 F1-score, CNN achieved 93% accuracy, 0.91 precision, 0.92 recall, and a 0.92 F1-score, and ResNet152 achieved 89% accuracy, 0.87 precision, 0.80 recall, and a 0.84 F1-score. On the EyePACS dataset, MobileNet achieved 89% accuracy, 0.80 precision, 0.87 recall, and a 0.83 F1-score, outperforming InceptionV3 (80% accuracy, 0.78 precision, 0.77 recall, and a 0.78 F1-score) and VGG16 (72% accuracy, 0.70 precision, 0.71 recall, and a 0.71 F1-score). 
Finally, real-time deployment was validated using an NVIDIA Jetson Nano platform, demonstrating the feasibility of lightweight DR screening in telemedicine and mobile healthcare applications. The integration of GLCM–CNN feature fusion with embedded edge deployment represents a novel contribution not explored in previous DR studies, supporting scalable and accessible DR screening solutions.
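The gray-level co-occurrence matrix mentioned above can be computed in a few lines. The following is an illustrative numpy re-implementation of a GLCM for one horizontal offset plus the classic contrast descriptor; it is not the authors' code, and the tiny test images are invented for demonstration.

```python
import numpy as np

# Gray-level co-occurrence matrix (GLCM) for offset (dx=1, dy=0),
# plus the contrast texture descriptor derived from it.

def glcm(img, levels):
    """Count horizontally adjacent gray-level pairs, then normalize."""
    m = np.zeros((levels, levels), dtype=np.float64)
    left, right = img[:, :-1].ravel(), img[:, 1:].ravel()
    np.add.at(m, (left, right), 1)             # accumulate pair counts
    return m / m.sum()

def contrast(p):
    """GLCM contrast: sum over i, j of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

# Tiny 4-level images: vertical stripes have high horizontal contrast,
# a uniform patch has zero contrast.
stripes = np.tile([0, 3], (4, 2))              # rows of 0 3 0 3
flat = np.zeros((4, 4), dtype=int)

print(contrast(glcm(stripes, 4)))              # every pair differs by 3 -> 9.0
print(contrast(glcm(flat, 4)))                 # identical neighbors -> 0.0
```

In the hybrid approach described above, such handcrafted descriptors are concatenated with CNN feature vectors before classification.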
IoT Based Sign Language Detection and Voice Conversion with Image Processing
The paper introduces an IoT-enabled system for real-time sign language recognition and voice conversion to improve communication for people with hearing or speech impairments. Using deep learning with TensorFlow, the model accurately detects hand gestures from American and Chinese Sign Language through a standard webcam, with OpenCV handling image processing and pyttsx3 converting recognized signs into speech. An ESP32 microcontroller transmits the interpreted data over Wi-Fi and hosts a mobile-friendly web page, eliminating the need for extra hardware or dedicated apps. This low-cost, efficient solution achieves high real-time accuracy, offering both audio and visual feedback, and showcases the effective integration of AI and IoT in bridging communication gaps.
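The recognize-then-speak control flow described above can be sketched as follows. The real system uses a webcam via OpenCV, a TensorFlow gesture model, and pyttsx3; here `capture_frame`, `classify`, `speak`, and the `LABELS` vocabulary are hypothetical stand-ins so the loop runs without hardware.

```python
import numpy as np

# Skeleton of a sign-recognition loop: grab a frame, classify the
# gesture, and voice the label only when it changes.

LABELS = ["hello", "thanks", "yes"]            # assumed gesture vocabulary

def capture_frame(rng):
    """Stand-in for cv2.VideoCapture: returns a fake grayscale frame."""
    return rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

def classify(frame):
    """Stand-in for the TensorFlow model: label from pixel statistics."""
    return LABELS[int(frame.mean()) % len(LABELS)]

def speak(text, spoken):
    """Stand-in for pyttsx3 text-to-speech."""
    spoken.append(text)

def run(n_frames=5, seed=0):
    rng = np.random.default_rng(seed)
    spoken, last = [], None
    for _ in range(n_frames):
        label = classify(capture_frame(rng))
        if label != last:                      # only voice new gestures
            speak(label, spoken)
        last = label
    return spoken

print(run())
```

Suppressing repeats of the previous label keeps the audio channel from stuttering while a gesture is held, which matters for real-time use.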
Imtidad: A Reference Architecture and a Case Study on Developing Distributed AI Services for Skin Disease Diagnosis over Cloud, Fog and Edge
Several factors are motivating the development of preventive, personalized, connected, virtual, and ubiquitous healthcare services. These factors include declining public health, an increase in chronic diseases, an ageing population, rising healthcare costs, and the need to bring intelligence near the user for privacy, security, performance, and cost reasons, as well as COVID-19. Motivated by these drivers, this paper proposes, implements, and evaluates a reference architecture called Imtidad that provides Distributed Artificial Intelligence (AI) as a Service (DAIaaS) over cloud, fog, and edge, using a service catalog case study containing 22 AI skin disease diagnosis services. These services belong to four service classes distinguished by software platform (containerized gRPC, gRPC, Android, and Android Nearby) and are executed on a range of hardware platforms (Google Cloud, an HP Pavilion laptop, NVIDIA Jetson Nano, Raspberry Pi Model B, Samsung Galaxy S9, and Samsung Galaxy Note 4) and four network types (fiber, cellular, Wi-Fi, and Bluetooth). The AI models for diagnosis include two standard deep neural networks and two Tiny AI deep models that enable execution at the edge, trained and tested on 10,015 real-life dermatoscopic images. The services are evaluated using several benchmarks, including model service value, response time, energy consumption, and network transfer time. A DL service on a local smartphone provides the best service in terms of both energy and speed, followed by a Raspberry Pi edge device and a laptop in the fog. The services are designed to enable different use cases, such as diagnosing a patient at home or sending diagnosis requests to travelling medical professionals through a fog device or the cloud.
This is pioneering work in providing a reference architecture and such a detailed implementation and treatment of DAIaaS services, and it is expected to have an extensive impact on the development of smart distributed service infrastructures for healthcare and other sectors.
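One of the benchmarks above, response time, can be measured with a loop like the following. This is a generic latency-measurement sketch, not the paper's evaluation harness; `fake_service` and the 1 ms sleep are hypothetical stand-ins for a deployed diagnosis endpoint.

```python
import time
import statistics

# Minimal response-time benchmark: call a service repeatedly and
# report mean and (approximate) 95th-percentile latency.

def fake_service():
    time.sleep(0.001)                          # pretend inference takes ~1 ms
    return "diagnosis"

def benchmark(service, n=30):
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        service()
        samples.append(time.perf_counter() - t0)
    samples.sort()
    return {"mean_s": statistics.mean(samples),
            "p95_s": samples[int(0.95 * (len(samples) - 1))]}

stats = benchmark(fake_service)
print(stats)
```

Reporting a high percentile alongside the mean matters for interactive services, since tail latency dominates the user experience.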
Efficient Distributed Image Recognition Algorithm of Deep Learning Framework TensorFlow
Deep learning requires training on massive data to gain the ability to handle unfamiliar data later, but obtaining a good model from training on massive data is not easy, and deep learning frameworks have emerged to meet the requirements of such tasks. This article studies efficient distributed image recognition algorithms for the deep learning framework TensorFlow. It examines TensorFlow itself and the theory behind its parallel execution, laying a foundation for the design and implementation of a TensorFlow distributed parallel optimization algorithm. The paper then designs and implements a more efficient TensorFlow distributed parallel algorithm, with separate optimization algorithms for TensorFlow data parallelism and model parallelism. Through multiple sets of comparative experiments, the paper verifies that the two implemented optimization algorithms improve the speed of TensorFlow's distributed parallel iteration. All 12 sets of experiments achieved a stable model accuracy, with each set above 97%. This shows that a suitable distributed algorithm built on the deep learning framework TensorFlow can effectively reduce model training time without reducing the accuracy of the final model.
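The data parallelism the article optimizes rests on one idea: every worker holds a replica of the model and a shard of the batch, computes a local gradient, and the averaged gradient updates every replica identically. The following numpy sketch of synchronous data-parallel training illustrates that idea on a least-squares problem; it is not TensorFlow's actual distribution runtime, and the data, worker count, and learning rate are illustrative assumptions.

```python
import numpy as np

# Synchronous data parallelism on a toy least-squares model: shard the
# batch across workers, compute local gradients, average (all-reduce),
# and apply one shared update.

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 4))
true_w = np.array([2.0, -1.0, 0.5, 3.0])
y = X @ true_w + 0.01 * rng.normal(size=120)

def local_grad(w, Xs, ys):
    """Least-squares gradient on one worker's shard."""
    return 2 * Xs.T @ (Xs @ w - ys) / len(ys)

w = np.zeros(4)
n_workers, lr = 4, 0.1
shards = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))

for _ in range(200):
    grads = [local_grad(w, Xs, ys) for Xs, ys in shards]  # parallel in reality
    w -= lr * np.mean(grads, axis=0)                      # all-reduce average

print(np.round(w, 2))
```

Because the shards are equal in size, the averaged gradient equals the full-batch gradient, so the distributed run converges to the same model as a single-worker run, only faster in wall-clock terms when the workers truly run in parallel.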