Catalogue Search | MBRL
3 result(s) for "Naddaf-Sh, Amir-M."
Leveraging Segment Anything Model (SAM) for Weld Defect Detection in Industrial Ultrasonic B-Scan Images
by Baburao, Vinay S.; Naddaf-Sh, Amir-M.; Zargarzadeh, Hassan
in Algorithms; automated ultrasonic testing; Automation
2025
Automated ultrasonic testing (AUT) is a critical tool for infrastructure evaluation in industries such as oil and gas; while skilled operators manually analyze complex AUT data, artificial intelligence (AI)-based methods show promise for automating interpretation. However, improving the reliability and effectiveness of these methods remains a significant challenge. This study employs the Segment Anything Model (SAM), a vision foundation model, to design an AI-assisted tool for weld defect detection in real-world ultrasonic B-scan images. It utilizes a proprietary dataset of B-scan images generated from AUT data collected during automated girth weld inspections of oil and gas pipelines, detecting a specific defect type: lack of fusion (LOF). The implementation integrates knowledge from the B-scan image context into the natural-image-based SAM 1 and SAM 2 through a fully automated, promptable process. As part of designing a practical AI-assisted tool, the experiments apply both vanilla and low-rank adaptation (LoRA) fine-tuning techniques to the image encoder and mask decoder of different variants of both models, while keeping the prompt encoder unchanged. The results demonstrate that the method achieves improved performance compared to a previous study on the same dataset.
Journal Article
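The low-rank adaptation (LoRA) fine-tuning this abstract mentions can be sketched in a few lines. This is an illustrative NumPy sketch of the general technique, not the authors' implementation; the layer dimensions, rank, and scaling factor are assumed values.

```python
import numpy as np

# LoRA sketch: a frozen weight matrix W is adapted via a low-rank update
# B @ A, so only r * (d_in + d_out) parameters are trained instead of
# d_in * d_out. B starts at zero, so training begins from the frozen model.

def lora_forward(x, W, A, B, alpha=16, r=8):
    """Forward pass with a low-rank adapter: y = x @ (W + (alpha/r) * B A)^T."""
    return x @ (W + (alpha / r) * (B @ A)).T

rng = np.random.default_rng(0)
d_out, d_in, r = 256, 256, 8
W = rng.normal(size=(d_out, d_in))           # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d_in))   # trainable, small random init
B = np.zeros((d_out, r))                     # trainable, zero init -> no-op at start

x = rng.normal(size=(4, d_in))
# With B = 0, the adapted output equals the frozen model's output exactly.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)

full = d_out * d_in            # parameters in a full fine-tune of W
lora = r * (d_in + d_out)      # parameters the adapter trains
print(f"trainable params: {lora} vs full fine-tune {full}")
```

Keeping the prompt encoder unchanged, as the study does, corresponds here to simply not attaching adapters to those weights.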
Clinical implementation of a bionic hand controlled with kineticomyographic signals
by Moradi, Ali; Daliri, Mahla; Naddaf-Sh, Sadra
in 639/166/985; 692/1537/805; Humanities and Social Sciences
2022
Sensing the proper signal could be a vital piece of the solution to the long-sought attributes of prosthetic hands, such as robustness to noise, ease of connectivity, and intuitive movement. Towards this end, magnetic tags have recently been suggested as an alternative sensing mechanism to the more common EMG signals. Such sensing technology, however, is inherently invasive and has therefore remained at the simulation stage of magnet localization to date. Here, for the first time, we report on the clinical implementation of implanted magnetic tags for an amputee's prosthetic hand from both the medical and engineering perspectives. Specifically, the proposed approach introduces a flexor–extensor tendon transfer surgical procedure to implant the tags; artificial neural networks to extract human intention directly from the implanted magnets' magnetic fields (in short, KineticoMyoGraphy (KMG) signals) rather than localizing them; and a game strategy to examine the proposed algorithms and rehabilitate the patient with his new prosthetic hand. The bionic hand's ability is then tested against the patient's intended gesture type and grade. The statistical results confirm the possible utility of surgically implanted magnetic tags as an accurate sensing interface for recognizing the intended gesture and degree of movement between an amputee and his bionic hand.
Journal Article
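The abstract's core idea, mapping raw magnetic-field readings directly to an intended gesture without localizing the magnets, can be sketched as a small feed-forward network. This NumPy sketch is illustrative only: the layer sizes, tag count, and gesture list are assumptions, not details from the paper.

```python
import numpy as np

# Illustrative KMG-style pipeline: flattened 3-axis magnetometer readings from
# implanted tags are fed to a tiny MLP that outputs gesture probabilities,
# skipping the magnet-localization step entirely.

GESTURES = ["rest", "open", "close", "pinch"]  # assumed gesture set

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict_gesture(field_readings, W1, b1, W2, b2):
    """field_readings: (n_tags * 3,) flattened magnetic-field vector."""
    h = np.tanh(field_readings @ W1 + b1)   # hidden features
    p = softmax(h @ W2 + b2)                # class probabilities
    return GESTURES[int(np.argmax(p))], p

rng = np.random.default_rng(1)
n_in, n_hid = 4 * 3, 16                     # e.g. 4 tags x 3 axes (assumed)
W1 = rng.normal(scale=0.1, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_hid, len(GESTURES))); b2 = np.zeros(len(GESTURES))

gesture, probs = predict_gesture(rng.normal(size=n_in), W1, b1, W2, b2)
```

In practice the weights would be trained on labeled recordings of the patient's intended movements; here they are random, so only the pipeline shape is shown.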
Automated Weld Defect Classification Enhanced by Synthetic Data Augmentation in Industrial Ultrasonic Images
by Baburao, Vinay S.; Naddaf-Sh, Amir-M.; Zargarzadeh, Hassan
in automated ultrasonic testing; Automation; Classification
2025
Automated ultrasonic testing (AUT) serves as a vital method for evaluating critical infrastructure in industries such as oil and gas. However, a significant challenge in deploying artificial intelligence (AI)-based interpretation methods for AUT data lies in improving their reliability and effectiveness, particularly due to the inherent scarcity of real-world defective data. This study directly addresses data scarcity in a weld defect classification task, specifically the detection of lack of fusion (LOF) defects in weld inspections, using a proprietary industrial ultrasonic B-scan image dataset. The paper leverages state-of-the-art generative models spanning Generative Adversarial Networks (GANs) and Denoising Diffusion Probabilistic Models (DDPMs) (StyleGAN3, VQGAN with an unconditional transformer, and Stable Diffusion) to produce realistic B-scan images depicting LOF defects. Transformer-based classifiers (ViT-Base, Swin-Tiny, and MobileViT-Small), fine-tuned on the regular B-scan image dataset, are then applied to retain only high-confidence positive synthetic samples from each method. The impact of these synthetic images on the classification performance of a ResNet-50 model is evaluated by fine-tuning it with cumulative additions of synthetic images, ranging from 10 to 200 images. Its accuracy on the test set increases by 38.9% relative to the baseline when either 80 synthetic images from VQGAN with an unconditional transformer or 200 synthetic images from StyleGAN3 are added to the training set, and by 36.8% when 150 synthetic images from Stable Diffusion are added. This also outperforms Transformer-based vision models trained on regular training data. Concurrently, knowledge distillation experiments train ResNet-50 as a student model, leveraging the expertise of ViT-Base and Swin-Tiny as teacher models, to demonstrate the effectiveness of adding the synthetic data to the training set; the greatest enhancement observed is 34.7% relative to the baseline. This work contributes to advancing robust, AI-assisted tools for critical infrastructure inspection and offers practical pathways for enhancing available models in resource-constrained industrial environments.
Journal Article
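The filtering step this abstract describes, keeping only synthetic images that a fine-tuned classifier labels as defective with high confidence, reduces to a simple thresholding rule. The sketch below is illustrative: the threshold value and the confidence scores are made up, not taken from the paper.

```python
# Confidence-based filtering of synthetic samples: each generated B-scan image
# gets a classifier probability for the positive (LOF) class, and only images
# scoring at or above the threshold are kept for training-set augmentation.

def filter_high_confidence(scores, threshold=0.9):
    """Return indices of synthetic images whose positive-class score >= threshold."""
    return [i for i, s in enumerate(scores) if s >= threshold]

# Simulated classifier confidences for 6 generated images (illustrative values).
scores = [0.97, 0.42, 0.91, 0.88, 0.99, 0.65]
kept = filter_high_confidence(scores)
print(kept)  # → [0, 2, 4]
```

In the study's setup, an ensemble of fine-tuned Transformer classifiers plays the role of the scorer; here a single score list stands in for that step.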