Catalogue Search | MBRL
498 result(s) for "MATLAB image processing"
Study and analysis of different segmentation methods for brain tumor MRI application
2023
Magnetic Resonance Imaging (MRI) is one of the preferred imaging methods for brain tumor diagnosis, providing detailed information on tumor type, location, and size for identification and detection. Segmentation divides an image into multiple segments, separating the suspicious region from the pre-processed MRI image to produce a simpler representation that is more meaningful and easier to examine. Many segmentation methods exist, each embedded in detection devices, and the response of each method is different. This article compares the performance of several image segmentation algorithms for brain tumor diagnosis: Otsu's thresholding, watershed, level set, K-means, Haar Discrete Wavelet Transform (DWT), and a Convolutional Neural Network (CNN). All of the techniques are simulated in MATLAB using online images from the Brain Tumor Image Segmentation Benchmark (BRATS) 2018 dataset. The performance of these methods is analyzed in terms of response time and measures such as recall, precision, F-measure, and accuracy. The measured accuracy of the Otsu's, watershed, level set, K-means, DWT, and CNN methods is 71.42%, 78.26%, 80.45%, 84.34%, 86.95%, and 91.39%, respectively. The response time of the CNN is 2.519 s in the MATLAB simulation environment for the designed algorithm. The novelty of the work is that the CNN proved the best of all compared methods for brain tumor image segmentation. The simulated and estimated parameters give researchers direction in choosing a specific algorithm for embedded hardware solutions and in developing optimal machine-learning models, as industry is seeking optimal CNN- and deep learning-based hardware models for brain tumor detection.
Journal Article
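The abstract above compares several classical segmentation methods. As a minimal illustration of one of them (not the authors' MATLAB code, and in Python rather than MATLAB), Otsu's method picks the threshold that maximizes between-class variance of the intensity histogram:

```python
def otsu_threshold(pixels, levels=256):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b = 0.0   # cumulative intensity of the "background" class
    w_b = 0       # cumulative pixel count of the "background" class
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / w_b
        mean_f = (sum_all - sum_b) / w_f
        var_between = w_b * w_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy "image": two clearly separated intensity populations.
dark = [10, 12, 11, 13] * 10
bright = [200, 198, 202, 201] * 10
t = otsu_threshold(dark + bright)  # lands between the two populations
```

On a real MRI slice the pixel list would come from the flattened grayscale image; the threshold then separates the bright suspicious region from the background.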
A Novel Experiment Approach for Measurement Breakup Length, Cone Angle, Sheet Velocity, and Film Thickness in Swirl Air-Blast Atomizers
2024
Measuring the dynamic parameters of liquid fragments generated in the near-field of atomizing sprays poses a significant challenge due to the random nature of the fragments, the instability of the spray, and the limitations of current measuring technology. Precise determination of these parameters can aid in improving the control of the atomization process, which is necessary for providing suitable spray structures with appropriate flow rates and droplet size distributions for various applications such as those used in heat engines. In piston and gas turbine engines, controlling spray characteristics such as penetration, cone angle, particle size, and droplet size distribution is crucial to improve combustion efficiency and decrease exhaust emissions. This can be accomplished by adjusting the structural and/or operating parameters of the fuel supply system. This article aims to measure the breakup length, spray cone angle, axial velocity, breakup time, and liquid sheet film thickness for a swirl air-blast atomizer used in a gas-steam engine. The measurement was conducted using a shadowgraph imaging system developed specifically for this study, consisting of a high-speed camera, a lens, and a light source. While lasers are commonly used as light sources in the literature, this study utilized a special LED high-speed pulse light generator, which is cheaper, easier to handle, and provides a more uniform background. Images were processed using a MATLAB code developed for this study. Although the breakup zone is naturally random and the breakup location significantly varies with time, the novel method developed in this study helps quantify critical parameters under different operating conditions.
Journal Article
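The cone angle mentioned in this abstract is a geometric quantity recoverable from a binarized shadowgraph. The sketch below is only a toy illustration of the idea, not the authors' MATLAB code; it assumes the nozzle sits at the top-center of the frame and estimates the half-angle from the slope of the right spray boundary:

```python
import math

def cone_half_angle(mask):
    """Estimate a spray cone half-angle (degrees) from a binary image.

    mask: list of rows (lists of 0/1); nozzle assumed at top-center.
    The half-angle comes from the least-squares slope (through the
    origin) of the right boundary's lateral extent versus depth.
    """
    cx = len(mask[0]) / 2.0
    pts = []
    for y, row in enumerate(mask):
        xs = [x for x, v in enumerate(row) if v]
        if xs:
            pts.append((y, max(xs) - cx))  # (depth, lateral extent)
    num = sum(y * r for y, r in pts)
    den = sum(y * y for y, r in pts)
    return math.degrees(math.atan(num / den))

# Synthetic cone whose extent grows one pixel per row (about 45 degrees).
w = 21
mask = [[1 if abs(x - 10) <= y else 0 for x in range(w)] for y in range(10)]
angle = cone_half_angle(mask)
```

Real spray images would first need the thresholding and noise filtering the article describes; the boundary fit here is deliberately simplistic.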
Research on structural performance evolution of clay based on distribution force model
2022
Purpose: The purpose of this paper is to explore the variation law between the clay microstructure and macroscopic external force using soil scanning electron microscope (SEM) images.
Design/methodology/approach: First, SEM images of clay were pre-processed in MATLAB, and quantitative statistical parameters such as directional probability entropy, fractal dimension and shape factor were extracted. Second, a distribution force model was proposed, treating the microscopic parameters of soil particles as independent of each other, with the distribution coefficients determined by the analytic hierarchy process (AHP). Then, fitted formulas for the quantitative statistical parameters were obtained from the distribution force model, taking the macroscopic distribution force as the independent variable and the microscopic soil-particle parameters as the dependent variables. Finally, the corresponding fitting formulas were verified.
Findings: The results showed that changes in external consolidation pressure strongly influence the directional probability entropy and fractal dimension, while the shape factor, which reflects the regularity of soil particle shape, is less sensitive to consolidation pressure. The fitting formulas are highly accurate, with R values mostly above 0.9. All of the data passed the test, which shows that the proposed distribution force model is sound.
Originality/value: The model connects the macroscopic stress of soil with the microstructural deformation of soil particles through mathematical formulas, providing a reference for engineering practice.
Journal Article
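One of the microstructure parameters this abstract extracts from SEM images is the fractal dimension. A standard way to estimate it from a binary image is box counting; the sketch below (a generic illustration in Python, not the paper's MATLAB code) fits log(box count) against log(1/box size):

```python
import math

def box_count_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary image by box counting.

    Counts occupied s-by-s boxes for each box size s, then fits
    log(count) vs. log(1/s) by least squares; the slope is the estimate.
    """
    n = len(mask)
    xs, ys = [], []
    for s in sizes:
        count = 0
        for by in range(0, n, s):
            for bx in range(0, n, s):
                if any(mask[y][x]
                       for y in range(by, min(by + s, n))
                       for x in range(bx, min(bx + s, n))):
                    count += 1
        xs.append(math.log(1.0 / s))
        ys.append(math.log(count))
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope

# Sanity check: a completely filled 16x16 square has dimension 2.
full = [[1] * 16 for _ in range(16)]
d = box_count_dimension(full)
```

On a segmented SEM particle image, the estimate falls between 1 and 2 and tracks how irregular the particle boundaries are.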
The Image Processing Toolbox at a Glance
by Marques, Oge
in: image manipulation techniques, MATLAB and IPT: opening and reading the contents of image files in popular formats with imread; image processing toolbox (IPT) functions: extending the basic capability of the MATLAB environment with specialized signal and image processing operations; MATLAB's IPT built-in function for displaying information about image files (without opening them and storing their contents in the workspace)
2011
Book Chapter
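The last index entry refers to reading image metadata without loading pixel data into the workspace (in MATLAB, imfinfo). As a rough analogue, and purely as an illustration rather than the chapter's own material, the same idea can be sketched in Python by parsing a PNG header directly:

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_info(data):
    """Read width/height/bit depth from PNG bytes without decoding pixels.

    The IHDR chunk immediately following the 8-byte signature holds the
    image metadata, so no pixel data ever needs to be decoded.
    """
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    # Chunk layout: 4-byte length, 4-byte type ("IHDR"), then the payload.
    length, ctype = struct.unpack(">I4s", data[8:16])
    if ctype != b"IHDR":
        raise ValueError("malformed PNG: IHDR not first")
    width, height, bit_depth = struct.unpack(">IIB", data[16:25])
    return {"width": width, "height": height, "bit_depth": bit_depth}

# A hand-built IHDR for a 640x480, 8-bit image (CRC omitted for brevity).
header = (PNG_SIGNATURE
          + struct.pack(">I4s", 13, b"IHDR")
          + struct.pack(">IIBBBBB", 640, 480, 8, 2, 0, 0, 0))
info = png_info(header)
```

In practice one would read just the first few dozen bytes of the file, which is exactly why such metadata queries are cheap.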
Digital Video Processing Techniques and Applications
by Marques, Oge
in: digital video processing techniques and applications: motion estimation, frames in a video sequence; object segmentation and tracking using MATLAB and the Image Processing Toolbox (IPT) in practical applications such as video surveillance systems; video enhancement and noise reduction: video sequences processed using filtering techniques to enhance image quality
2011
Book Chapter
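Motion estimation, the first topic indexed above, is classically done by block matching between consecutive frames. A minimal exhaustive-search sketch (in Python as an illustration, not the chapter's MATLAB code) using the sum of absolute differences (SAD):

```python
def best_match(ref, cur, bx, by, bsize, radius):
    """Exhaustive block-matching motion estimation via SAD.

    Searches a (2*radius+1)^2 window in `ref` for the block of `cur`
    anchored at (bx, by) and returns the (dx, dy) motion vector.
    """
    h, w = len(cur), len(cur[0])

    def sad(dx, dy):
        return sum(abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
                   for y in range(bsize) for x in range(bsize))

    best, best_cost = (0, 0), sad(0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if (0 <= by + dy and by + dy + bsize <= h
                    and 0 <= bx + dx and bx + dx + bsize <= w):
                cost = sad(dx, dy)
                if cost < best_cost:
                    best_cost, best = cost, (dx, dy)
    return best

# Reference frame has a bright 2x2 patch; the current frame shifts it.
ref = [[0] * 8 for _ in range(8)]
cur = [[0] * 8 for _ in range(8)]
for y in range(2):
    for x in range(2):
        ref[3 + y][2 + x] = 255
        cur[4 + y][4 + x] = 255
mv = best_match(ref, cur, 4, 4, 2, 3)  # vector pointing back to the patch in ref
```

Real codecs and surveillance pipelines refine this with hierarchical or diamond searches, but the SAD criterion is the same.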
Practical Image and Video Processing Using MATLAB
2011
UP-TO-DATE, TECHNICALLY ACCURATE COVERAGE OF ESSENTIAL TOPICS IN IMAGE AND VIDEO PROCESSING
This is the first book to combine image and video processing with a practical MATLAB®-oriented approach in order to demonstrate the most important image and video techniques and algorithms. Utilizing minimal math, the contents are presented in a clear, objective manner, emphasizing and encouraging experimentation.
The book has been organized into two parts. Part I: Image Processing begins with an overview of the field, then introduces the fundamental concepts, notation, and terminology associated with image representation and basic image processing operations. Next, it discusses MATLAB® and its Image Processing Toolbox with the start of a series of chapters with hands-on activities and step-by-step tutorials. These chapters cover image acquisition and digitization; arithmetic, logic, and geometric operations; point-based, histogram-based, and neighborhood-based image enhancement techniques; the Fourier Transform and relevant frequency-domain image filtering techniques; image restoration; mathematical morphology; edge detection techniques; image segmentation; image compression and coding; and feature extraction and representation.
Part II: Video Processing presents the main concepts and terminology associated with analog video signals and systems, as well as digital video formats and standards. It then describes the technically involved problem of standards conversion, discusses motion estimation and compensation techniques, shows how video sequences can be filtered, and concludes with an example of a solution to object detection and tracking in video sequences using MATLAB®.
Extra features of this book include:
* More than 30 MATLAB® tutorials, which consist of step-by-step guides to exploring image and video processing techniques using MATLAB®
* Chapters supported by figures, examples, illustrative problems, and exercises
* Useful websites and an extensive list of bibliographical references
This accessible text is ideal for upper-level undergraduate and graduate students in digital image and video processing courses, as well as for engineers, researchers, software developers, practitioners, and anyone who wishes to learn about these increasingly popular topics on their own.
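To give a flavor of the histogram-based enhancement techniques covered in Part I, here is a minimal histogram equalization sketch, written in Python as a stand-in illustration rather than one of the book's MATLAB tutorials:

```python
def equalize(pixels, levels=256):
    """Histogram equalization for a flat list of grayscale pixel values.

    Maps each intensity through the normalized cumulative histogram so
    that output intensities spread across the full [0, levels-1] range.
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1))
           if n > cdf_min else 0
           for c in cdf]
    return [lut[p] for p in pixels]

# A low-contrast ramp confined to [100, 107] spreads across [0, 255].
out = equalize([100, 101, 102, 103, 104, 105, 106, 107])
```

The equivalent one-liner in the book's environment would be MATLAB's histeq; the point here is only the cumulative-histogram mapping itself.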
Fundamentals of digital image processing
by Breckon, Toby; Solomon, Chris
in: Digital techniques; Image processing; Image processing -- Digital techniques
2010, 2011
This is an introductory-to-intermediate-level text on the science of image processing, which employs the MATLAB programming language to illustrate some of the elementary, key concepts in modern image processing and pattern recognition.
EntropyHub: An open-source toolkit for entropic time series analysis
2021
An increasing number of studies across many research fields, from biomedical engineering to finance, employ measures of entropy to quantify the regularity, variability or randomness of time series and image data. Entropy, as it relates to information theory and dynamical systems theory, can be estimated in many ways, with newly developed methods being continuously introduced in the scientific literature. Despite the growing interest in entropic time series and image analysis, there is a shortage of validated, open-source software tools that enable researchers to apply these methods. To date, packages for performing entropy analysis are often run through graphical user interfaces, lack the necessary supporting documentation, or do not include functions for more advanced entropy methods, such as cross-entropy, multiscale cross-entropy or bidimensional entropy. In light of this, this paper introduces EntropyHub, an open-source toolkit for performing entropic time series analysis in MATLAB, Python and Julia. EntropyHub (version 0.1) provides an extensive range of more than forty functions for estimating cross-, multiscale, multiscale cross-, and bidimensional entropy, each including a number of keyword arguments that allow the user to specify multiple parameters of the entropy calculation. Instructions for installation, descriptions of function syntax, and examples of use are fully detailed in the supporting documentation, available on the EntropyHub website, www.EntropyHub.xyz. Compatible with Windows, Mac and Linux operating systems, EntropyHub is hosted on GitHub as well as on the native package repositories for MATLAB, Python and Julia. The goal of EntropyHub is to integrate the many established entropy methods into one complete resource, providing tools that make advanced entropic time series analysis straightforward and reproducible.
Journal Article
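Sample entropy is the prototypical measure in this family. The sketch below is a minimal pure-Python reference implementation of the idea (not EntropyHub's code): count template matches of length m and m+1 under a Chebyshev tolerance r, excluding self-matches, and take the negative log of their ratio:

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """Sample entropy of a 1-D series (a minimal reference implementation).

    Counts template matches of length m and m+1 (Chebyshev distance <= r,
    self-matches excluded) and returns -log(A / B). Note r is used here
    as an absolute tolerance; it is commonly scaled by the series'
    standard deviation in practice.
    """
    def count_matches(length):
        templates = [series[i:i + length]
                     for i in range(len(series) - length + 1)]
        matches = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b)
                       for a, b in zip(templates[i], templates[j])) <= r:
                    matches += 1
        return matches

    b = count_matches(m)
    a = count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A strictly repeating signal is maximally regular: entropy near zero.
regular = [0, 1] * 20
samp_en = sample_entropy(regular, m=2, r=0.2)
```

A random series of the same length would score far higher, which is exactly the regularity/randomness contrast the abstract describes.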