Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
4 result(s) for "NanoDet"
Visual-Based Children and Pet Rescue from Suffocation and Incidence of Hyperthermia Death in Enclosed Vehicles
2023
Over the past several years, many children have died of suffocation after being left inside a closed vehicle on a sunny day. Vehicle manufacturers have proposed a variety of technologies to locate an unattended child in a vehicle, including pressure sensors, passive infrared motion sensors, temperature sensors, and microwave sensors. However, these methods have not yet reliably located forgotten children in the vehicle. Recently, visual-based methods have attracted the attention of manufacturers following the emergence of deep learning technology. However, existing methods focus only on a forgotten child and neglect a forgotten pet. Furthermore, these systems only detect the presence of a child in the car with or without their parents. Therefore, this research introduces a visual-based framework to reduce hyperthermia deaths in enclosed vehicles. The system detects objects inside a vehicle; if a child or pet is present without an adult, a notification is sent to the parents. First, a dataset of vehicle interiors containing children, pets, and adults is constructed. The dataset is collected from different online sources, with varying illumination, skin color, pet type, clothing, and car brands to guarantee model robustness. Second, blurring, sharpening, brightness, contrast, noise, perspective-transform, and fog-effect augmentation algorithms are applied to these images to increase the training data. The augmented images are annotated with three classes: child, pet, and adult. This research concentrates on fine-tuning different state-of-the-art real-time detection models to detect objects inside the vehicle: NanoDet, YOLOv6_1, YOLOv6_3, and YOLOv7. The simulation results demonstrate that YOLOv6_1 performs best, with 96% recall, 95% precision, and a 95% F1 score.
Journal Article
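The recall/precision/F1 figures reported for YOLOv6_1 are mutually consistent, since F1 is the harmonic mean of precision and recall. A minimal sketch (the function name is ours, not from the paper):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Figures reported in the abstract for YOLOv6_1: 95% precision, 96% recall.
print(round(f1_score(0.95, 0.96), 3))  # → 0.955, consistent with the 95% F1 reported
```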
NanoDet Model‐Based Tracking and Inspection of Net Cage Using ROV
2025
Open-sea cage culture has become a major trend in mariculture owing to its strong resistance to wind, waves, and currents, its high degree of intensification, high stocking density, and high yield. However, damage to a cage causes severe economic losses; to take effective and timely countermeasures, farmers must identify and assess cage damage without delay. At present, net-damage detection relies mainly on underwater work by divers, which is risky, inefficient, expensive, and offers poor real-time performance. Here, a remotely operated vehicle (ROV)-based autonomous net detection method is proposed. The system comprises two parts. The first is sonar-image target detection based on NanoDet: the sonar continuously collects data ahead of and around the ROV, and the trained NanoDet model is embedded in the ROV control end, outputting the angle and distance between the ROV and the net. The second is the robot control part: the ROV tracks the net cage based on the angle and distance information from target detection. In addition, when obstacles appear in front of the ROV, or it strays far from the net, the D‐STAR algorithm is used for local path planning. Experimental results indicate that NanoDet target detection achieves an average accuracy of 77.2% at approximately 10 fps, which satisfies the ROV's tracking accuracy and speed requirements. The average tracking error during ROV inspection is less than 0.5 m. The system addresses the high risk and low efficiency of manual detection of net damage in large-scale marine cage culture, and the images and videos returned from the net can be analyzed further for prediction. https://youtu.be/NKcgPcej5sI
Journal Article
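The "angle information" step described above, turning a detection's horizontal position in the sonar image into a bearing for the tracking controller, can be illustrated with a simple linear pixel-to-angle mapping. This is an illustrative sketch only; the function, parameters, and mapping are our assumptions, not the paper's method:

```python
def bearing_from_detection(cx: float, img_width: float, hfov_deg: float) -> float:
    """Approximate bearing (degrees) of a detection whose horizontal centre
    is at pixel column cx, assuming a linear pixel-to-angle mapping across
    a sensor with horizontal field of view hfov_deg."""
    offset = cx / img_width - 0.5  # normalised offset in [-0.5, 0.5]
    return offset * hfov_deg

# A detection centred in a 640-pixel-wide image lies on the sensor axis.
print(bearing_from_detection(320, 640, 90.0))  # → 0.0
```

A positive bearing would steer the ROV right, a negative one left, until the net is centred in the sonar image.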
Lightweight-CancerNet: a deep learning approach for brain tumor detection
2025
Detecting brain tumors in medical imaging is challenging, requiring precise and rapid diagnosis. Deep learning techniques have shown encouraging results in this field, but current models demand significant computational resources. To overcome these constraints, we propose a new deep learning architecture, Lightweight-CancerNet, designed to detect brain tumors efficiently and accurately. The proposed framework uses MobileNet as the backbone and NanoDet as the primary detection component, achieving a notable mean average precision (mAP) of 93.8% and an accuracy of 98%. In addition, we implemented enhancements that minimize computing time without compromising accuracy, making the model suitable for real-time object detection. Extensive tests combining two magnetic resonance imaging (MRI) datasets demonstrate the framework's ability to detect brain tumors under different image distortions, showing that it is both resilient and reliable. The proposed model can improve patient outcomes and facilitate decision-making in brain surgery while contributing to the development of deep learning in medical imaging.
Journal Article
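The mean average precision (mAP) quoted above is computed by matching predicted boxes to ground truth via intersection-over-union (IoU); a prediction typically counts as a true positive when IoU exceeds a threshold such as 0.5. A minimal IoU helper (our own sketch, not code from the paper):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 2x2 boxes overlapping in a unit square: intersection 1, union 7.
print(round(iou((0, 0, 2, 2), (1, 1, 3, 3)), 4))  # → 0.1429
```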
A novel road attribute detection system for autonomous vehicles using sensor fusion
by Thomas, Anoop; Antony, Jobin K.; Isaac, Ashish V.
in Algorithms, Artificial Intelligence, Artificial neural networks
2025
This study examines the development of Society of Automotive Engineers (SAE) Level 5 Autonomous Vehicles (AVs), which are capable of navigating a variety of roads and weather conditions on their own. By applying sophisticated computational algorithms based on Convolutional Neural Networks (CNNs), namely an improved You Only Look Once (YOLO) v5, the Single Shot multibox Detector (SSD), Mask Region-based Convolutional Neural Networks (Mask R-CNN), and NanoDet, the research aims to enhance perception, prediction, and decision-making for safe and efficient autonomous navigation. The focus is on detecting road attributes such as humps and potholes. Comparative analysis on both standard and custom datasets guides the selection of algorithms for real-time implementation. The proposed system employs a high-resolution camera mounted on a vehicle, connected to a Graphics Processing Unit (GPU)-accelerated embedded board, and is implemented on a Robot Operating System (ROS)-based software platform. Data collected on a specific route serves as valuable input for autonomous navigation. Additionally, the paper explores the fusion of camera and Light Detection and Ranging (LiDAR) sensor data, introducing a novel software architecture that seamlessly integrates road attribute detection into existing AV navigation pipelines on the ROS platform.
Journal Article
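Camera–LiDAR fusion of the kind described above typically begins by projecting 3D LiDAR points into the camera image with a pinhole model, so that detected road attributes can be associated with range measurements. A minimal sketch (the intrinsics and points are illustrative values, not from the paper, and extrinsic calibration into the camera frame is assumed to have already happened):

```python
def project_point(pt, fx, fy, cx, cy):
    """Project a 3D point (x, y, z), already expressed in the camera frame,
    onto the image plane with pinhole intrinsics (fx, fy, cx, cy)."""
    x, y, z = pt
    if z <= 0:
        return None  # point is behind the camera
    return (fx * x / z + cx, fy * y / z + cy)

# A point 5 m straight ahead projects onto the principal point.
print(project_point((0.0, 0.0, 5.0), 500.0, 500.0, 320.0, 240.0))  # → (320.0, 240.0)
```

Pixels falling inside a detector's bounding box can then inherit the corresponding LiDAR range, giving each detected hump or pothole a distance estimate.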