Search Results

45 results for "Engineering Statistical methods Textbooks"
Probability, Random Variables, and Random Processes
Probability, Random Variables, and Random Processes is a comprehensive textbook on probability theory for engineers that provides a more rigorous mathematical framework than is usually encountered in undergraduate courses. It is intended for first-year graduate students who have some familiarity with probability and random variables, though not necessarily with random processes and systems that operate on random signals. It is also appropriate for advanced undergraduate students who have a strong mathematical background. The book has the following features:
* Several appendices include related material on integration, important inequalities and identities, frequency-domain transforms, and linear algebra, so that the book is relatively self-contained. One appendix contains an extensive summary of 33 random variables and their properties, such as moments, characteristic functions, and entropy.
* Unlike most books on probability, this one includes numerous figures to clarify and expand upon important points. Over 600 illustrations and MATLAB plots reinforce the material and illustrate the various characterizations and properties of random quantities.
* Sufficient statistics are covered in detail, as is their connection to parameter estimation techniques. These include classical and Bayesian estimation and several optimality criteria: mean-square error, mean-absolute error, maximum likelihood, method of moments, and least squares.
* The last four chapters introduce several topics usually studied in subsequent engineering courses: communication systems and information theory; optimal filtering (Wiener and Kalman); adaptive filtering (FIR and IIR); and antenna beamforming, channel equalization, and direction finding. This material is available electronically at the companion website.
Probability, Random Variables, and Random Processes is the only textbook on probability for engineers that includes relevant background material, provides extensive summaries of key results, and extends various statistical techniques to a range of applications in signal processing.
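To make two of those estimation criteria concrete, here is a minimal sketch (not drawn from the book) contrasting the maximum-likelihood and method-of-moments estimates of theta for a Uniform(0, theta) sample, a classic case where the two criteria disagree:

```python
# Minimal sketch: two estimation criteria applied to Uniform(0, theta).
# The true theta and sample size are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
theta = 3.0
x = rng.uniform(0.0, theta, size=50)

theta_mle = x.max()          # maximum likelihood: the sample maximum
theta_mom = 2.0 * x.mean()   # method of moments: E[X] = theta / 2

print(f"MLE: {theta_mle:.3f}, MoM: {theta_mom:.3f}, true: {theta}")
```

The MLE always undershoots theta slightly, while the method-of-moments estimate can land on either side; weighing trade-offs like this is exactly what the optimality criteria listed in the blurb formalize.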
A Study of the Response Surface Methodology Model with Regression Analysis in Three Fields of Engineering
Researchers conduct experiments to discover the factors influencing the experimental subjects, so experimental design is essential. Response surface methodology (RSM) is a special experimental design used to evaluate factors that significantly affect a process and to determine the optimal conditions for those factors. The relationship between response values and influencing factors is mainly established using regression analysis techniques; the resulting equations are then used to generate contour and response-surface plots that give researchers further insight. The impact of regression techniques on RSM model building has not been studied in detail. This study applies complete regression techniques to sixteen datasets from the literature on semiconductor manufacturing, steel materials, and nanomaterials. Backward elimination and t-tests were used to assess whether each variable significantly affected the response. The complete regression techniques used in this study include retaining only the significant influencing variables of the model, testing for normality and constant variance, applying predictive performance criteria, and examining influential data points. The results reveal several problems with model building in RSM studies from these three engineering fields: direct use of complete equations without statistical testing, deletion of variables with p-values above a preset value without further examination, non-normality and non-constant variance in the datasets left untested, and influential data points left unexamined. Researchers should strengthen their training in regression techniques to improve the RSM model-building process.
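To illustrate the model-building loop the authors audit, here is a minimal sketch, assuming a hypothetical two-factor dataset, of a full quadratic response-surface fit followed by p-value-based backward elimination using statsmodels:

```python
# Minimal sketch: quadratic response-surface fit with backward elimination.
# The two-factor dataset (x1, x2, y) is synthetic, made up for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 30)
x2 = rng.uniform(-1, 1, 30)
y = 5 + 2 * x1 - 3 * x2 + 1.5 * x1 * x2 + 4 * x1**2 + rng.normal(0, 0.5, 30)

# Full second-order (quadratic) RSM design matrix.
X = pd.DataFrame({
    "x1": x1, "x2": x2,
    "x1^2": x1**2, "x2^2": x2**2, "x1*x2": x1 * x2,
})
X = sm.add_constant(X)

# Backward elimination: refit, drop the least significant term,
# stop once every remaining term has p <= 0.05.
while True:
    fit = sm.OLS(y, X).fit()
    pvals = fit.pvalues.drop("const")
    if pvals.max() <= 0.05 or len(pvals) == 1:
        break
    X = X.drop(columns=pvals.idxmax())

print(fit.summary())
# The fuller protocol the paper advocates continues with residual
# diagnostics (normality and constant-variance checks on fit.resid)
# and a scan for influential points (fit.get_influence()).
```

The paper's point is that stopping after the elimination loop, without the residual and influence checks, is precisely the shortcut it criticizes in the literature.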
Enhancing SMBus Protocol Education for Embedded Systems Using Generative AI: A Conceptual Framework with DV-GPT
Teaching embedded systems, including communication protocols such as SMBus, commonly struggles to provide students with interactive, personalized, and practical learning experiences. To overcome these shortcomings, this report presents a new conceptual framework that exploits generative artificial intelligence (GenAI) via a customized DV-GPT. Coupled with pre-prompting techniques, DV-GPT offers timely, targeted support to students and engineers studying SMBus protocol design and verification. In contrast to traditional learning, this AI-based tool dynamically adjusts its feedback based on the user's activity, providing greater insight into challenging concepts such as timing synchronization, multi-master arbitration, and error handling. The framework also incorporates de facto industry-standard UVM practices, which helps narrow the gap between education and professional practice. We quantitatively compare DV-GPT with a baseline GPT-4 and show significant improvements in accuracy, specificity, and user satisfaction. The effectiveness and feasibility of the proposed GenAI-enhanced educational approach were empirically validated through structured student feedback, expert judgment, and statistical analysis. The contribution of this research is a scalable, flexible, interactive model for enhancing embedded systems education, which also illustrates how GenAI technologies can be applied within specialized educational environments.
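As a flavor of the protocol content such a tutor covers, here is a minimal sketch of an SMBus "Read Word" transaction using the smbus2 Python package; the bus number, device address, and register are placeholder assumptions, not anything from the paper:

```python
# Minimal sketch of an SMBus "Read Word" transaction via smbus2.
# Bus number, slave address, and register code are hypothetical.
from smbus2 import SMBus

DEVICE_ADDR = 0x48   # hypothetical 7-bit slave address
TEMP_REG = 0x00      # hypothetical command/register code

with SMBus(1) as bus:                                 # e.g. /dev/i2c-1
    raw = bus.read_word_data(DEVICE_ADDR, TEMP_REG)   # SMBus Read Word
    # SMBus transfers the low byte first; swap if the device is big-endian.
    value = ((raw & 0xFF) << 8) | (raw >> 8)
    print(f"register 0x{TEMP_REG:02X} -> 0x{value:04X}")
```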
Methodological Issues of the Fuzzy Set Theory (Generalizing Article)
The theory of fuzziness is an important area of modern theoretical and applied mathematics, and its methodology is a doctrine for organizing activity in the development and application of the theory's scientific results. We discuss some methodological issues of the theory of fuzziness, i.e., individual components of the methodology in this area. The theory of fuzziness is a science of pragmatic (fuzzy) numbers and sets. The ancient Greek philosopher Eubulides showed that the concepts of "heap" and "bald" cannot be described using natural numbers. E. Borel proposed defining a fuzzy set via a membership function, and a fundamentally important step was taken by L.A. Zadeh in 1965: he gave the basic definitions of the algebra of fuzzy sets and introduced the operations of intersection, product, union, sum, and negation of fuzzy sets. Above all, he demonstrated the possibility of expanding ("doubling") mathematics: by replacing the numbers and sets used in mathematics with their fuzzy counterparts, we obtain new mathematical formulations. In the statistics of nonnumerical data, methods for the statistical analysis of fuzzy sets have been developed; specific types of membership functions, interval and triangular fuzzy numbers, are often used. In a certain sense, the theory of fuzzy sets reduces to the theory of random sets. We think fuzzily, and that is the only reason we understand each other. The paradox of fuzzy theory is that the thesis "everything in the world is fuzzy" cannot be implemented consistently: for ordinary fuzzy sets, the argument and the values of the membership function are crisp, and if they are replaced by fuzzy analogs, their description will require crisp arguments and membership functions of its own, and so on ad infinitum. System fuzzy interval mathematics proceeds from the need to take into account the fuzziness of the initial data and of the premises of the mathematical model. One option for its practical implementation is automated system-cognitive analysis and the Eidos intelligent system.
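Zadeh's operations mentioned above are simple enough to state in a few lines. The following minimal sketch evaluates them on a small discrete universe with made-up membership values, and also exhibits the paradox noted in the abstract (the law of non-contradiction fails for fuzzy sets):

```python
# Minimal sketch of Zadeh's fuzzy-set algebra on a discrete universe.
# The membership values below are invented for illustration.
import numpy as np

mu_A = np.array([0.0, 0.3, 0.7, 1.0, 0.5])   # membership of A at each point
mu_B = np.array([0.2, 0.6, 0.4, 0.8, 1.0])   # membership of B

union        = np.maximum(mu_A, mu_B)        # Zadeh union:        max
intersection = np.minimum(mu_A, mu_B)        # Zadeh intersection: min
negation_A   = 1.0 - mu_A                    # Zadeh negation
alg_sum      = mu_A + mu_B - mu_A * mu_B     # algebraic sum
alg_product  = mu_A * mu_B                   # algebraic product

# Unlike crisp sets, min(mu_A, 1 - mu_A) is generally nonzero:
# the intersection of a fuzzy set with its own negation is not empty.
print(np.minimum(mu_A, negation_A))
```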
Big data-driven English teaching for social media: a neural network-based approach
Big data and artificial intelligence have greatly broadened the channels through which people acquire knowledge. The practical application of artificial intelligence technology in college English classrooms can not only help teachers prepare more scientific and advanced teaching plans but also effectively improve the quality of college English teaching. In this paper, we employ deep neural networks to design a big data-driven English teaching improvement scheme. The proposed method can help teachers improve their teaching strategies and help students learn more accurately and efficiently. First, we analyze the current deficiencies in English teaching and their causes. Second, we analyze the application of artificial intelligence to text corpus construction. Third, we use neural networks to design an intelligent face recognition system for teachers to improve attendance rates. Fourth, by analyzing students' writing, we design a neural network-based scoring method that can evaluate the content of students' writing. Finally, we experimentally evaluate and verify the effectiveness of the proposed method.
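As a loose illustration of the writing-scoring idea (and explicitly not the paper's model), a bag-of-words representation fed to a small neural network can already produce a crude essay score; the essays and teacher scores below are invented:

```python
# Minimal sketch: TF-IDF features fed to a small neural-network regressor.
# Essays and teacher-assigned scores are hypothetical toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

essays = [
    "The experiment was conducted carefully and the results were clear.",
    "i think it good because reason",
    "Our analysis suggests a strong correlation between the variables.",
    "bad write sentence no structure",
]
scores = [4.5, 2.0, 4.8, 1.5]

scorer = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
scorer.fit(essays, scores)
print(scorer.predict(["The results clearly support our hypothesis."]))
```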
Hydrological Methodology Evolution for Runoff Estimations at Ungauged Sites
This review recounts the fundamentals of historical runoff estimation methodologies developed for ungauged drainage basins, and presents new approaches as extensions of modern ones. Early methodologies for runoff calculation were suggested toward the end of the 18th century, when no historical records of rainfall or runoff were available. Some of these methodologies have not been cited in the scientific literature, but they were frequently used by engineers as practical hydrological approaches in different parts of the United States, especially in Kansas. Early hydrologists were aware of the shortcomings, but they were hampered by the shortage of reliable streamflow and rainfall data. These methods did not consider the recurrence intervals associated with designs, and their drawbacks originated more from a shortage of useful hydrologic data than from any lack of logical and rational interconnection between the causative and resultant factors. As measured data became available, early studies considered daily total rainfall amounts for design purposes; this continued until 1935, when reliable rainfall frequency maps were published for durations shorter than 24 h. Advances in frequency analysis in the 1940s provided regional flood frequency methods for ungauged sites, and the transition to modern frequency-based hydrologic methods began around 1950. The main purpose of this review is to present early and recent simple methodologies for rainfall and, especially, runoff estimation in ungauged drainage areas, laying out the logical and rational fundamentals of each approach so that the reader may become acquainted with the evolution of hydrological studies.
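For readers new to this lineage: the archetype of such early causative-factor formulas, though not singled out in the abstract above, is the rational method, which estimates a small basin's peak discharge as Q_p = C * i * A, where C is a dimensionless runoff coefficient, i is the design rainfall intensity, and A is the drainage area.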
Estimating the Standard Deviation in Quality-Control Applications
In estimating the standard deviation of a normally distributed random variable, a multiple of the sample range is often used instead of the sample standard deviation because of the range's computational simplicity. Although it is well known that the sample standard deviation is more efficient whenever the sample size exceeds 2, many statistical quality-control textbooks argue that the loss in efficiency from using the sample range to estimate the process standard deviation is very small at relatively small sample sizes. In this paper, we show that this loss in efficiency can be relatively large even for very small sample sizes, and we therefore strongly advise against range-based methods. We also find that some previously published tables of relative efficiencies were either mislabeled or inaccurate, and we make recommendations for the case in which a number of samples are taken over time.
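The efficiency comparison is easy to check by simulation. Here is a minimal sketch, with hypothetical settings (n = 5, sigma = 1), comparing the mean squared errors of the sample standard deviation and an unbiased range-based estimator:

```python
# Minimal sketch: Monte Carlo comparison of the two estimators discussed
# above, for samples of size n from a normal distribution.
import numpy as np

rng = np.random.default_rng(0)
n, sigma, reps = 5, 1.0, 200_000
x = rng.normal(0.0, sigma, size=(reps, n))

s = x.std(axis=1, ddof=1)            # sample standard deviation
r = x.max(axis=1) - x.min(axis=1)    # sample range

# Unbiased range-based estimator divides by d2 = E[R] / sigma; estimate
# d2 here by simulation rather than quoting a table value.
d2 = r.mean() / sigma
sigma_hat_range = r / d2

# Relative efficiency: ratio of mean squared errors about sigma.
mse_s = ((s - sigma) ** 2).mean()
mse_r = ((sigma_hat_range - sigma) ** 2).mean()
print(f"MSE(s) / MSE(range-based) = {mse_s / mse_r:.3f}")  # < 1: s wins
```

The printed ratio comes out below 1, i.e. the sample standard deviation has the smaller mean squared error; the paper's fuller analysis quantifies how this gap matters in quality-control practice.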
SNM Radiation Signature Classification Using Different Semi-Supervised Machine Learning Models
The timely detection of special nuclear material (SNM) transfers between nuclear facilities is an important monitoring objective in nuclear nonproliferation. Persistent monitoring, enabled by successful detection and characterization of radiological material movements, could greatly enhance the nuclear nonproliferation mission in a range of applications. Supervised machine learning can be used to signal detections when material is present, provided a model is trained on a sufficient volume of labeled measurements. However, the nuclear monitoring data needed to train robust machine learning models can be costly to label, since radiation spectra may require strict scrutiny for characterization. This work therefore investigates semi-supervised learning, which uses both labeled and unlabeled data. As a demonstration experiment, radiation measurements from sodium iodide (NaI) detectors are provided as sample data by the Multi-Informatics for Nuclear Operating Scenarios (MINOS) venture at Oak Ridge National Laboratory (ORNL). Anomalous measurements are identified using statistical hypothesis testing. After background estimation, an energy-dependent spectroscopic analysis characterizes each anomaly based on its radiation signatures. In the absence of ground-truth information, a labeling heuristic provides the data necessary for training and testing the machine learning models. Supervised logistic regression serves as a baseline against which three semi-supervised models are compared: co-training, label propagation, and a convolutional neural network (CNN). In each case, the semi-supervised models outperform logistic regression, suggesting that unlabeled data can add value during training and that semi-supervised methods hold promise for nonproliferation applications.
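For one of the three semi-supervised models (label propagation), scikit-learn makes the idea easy to demonstrate. This minimal sketch runs on synthetic stand-in features; the actual study used NaI spectra from MINOS, not this data:

```python
# Minimal sketch of semi-supervised label propagation on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import LabelPropagation

X, y = make_classification(n_samples=500, n_features=16, random_state=0)

# Hide 90% of the labels: -1 marks "unlabeled" for scikit-learn.
rng = np.random.default_rng(0)
y_semi = y.copy()
unlabeled = rng.random(len(y)) < 0.9
y_semi[unlabeled] = -1

model = LabelPropagation(kernel="knn", n_neighbors=7)
model.fit(X, y_semi)   # learns from labeled AND unlabeled points

# Evaluate on the points whose labels were hidden during training.
acc = (model.transduction_[unlabeled] == y[unlabeled]).mean()
print(f"accuracy on unlabeled points: {acc:.3f}")
```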