6,094 result(s) for "Test Number"
The genetics of late maturity alpha-amylase (LMA) in North American spring wheat (Triticum aestivum L.)
Genetic susceptibility to late maturity alpha-amylase (LMA) in wheat (Triticum aestivum L.) results in increased alpha-amylase activity in mature grain when cool conditions occur during late grain maturation. Farmers are forced to sell grain with elevated alpha-amylase at a discount because it carries an increased risk of poor end-product quality. Elevated alpha-amylase can result from either LMA or preharvest sprouting, the germination of grain on the mother plant when rain occurs before harvest. Whereas preharvest sprouting is a well-understood problem, little is known about the risk LMA poses to North American wheat crops. To examine this, LMA susceptibility was characterized in a panel of 251 North American hard spring wheat lines representing ten geographical areas. Substantial LMA susceptibility appears to exist in North American wheat, since only 27% of the lines showed reproducible LMA resistance following cold-induction experiments. A preliminary genome-wide association study detected six significant marker-trait associations. LMA in North American wheat may result from genetic mechanisms similar to those previously observed in Australian and International Maize and Wheat Improvement Center (CIMMYT) germplasm, since two of the detected QTLs, QLMA.wsu.7B and QLMA.wsu.6B, co-localized with previously reported loci. The Reduced height (Rht) loci also influenced LMA: elevated alpha-amylase levels were significantly associated with the presence of the wild-type, tall-height rht-B1a and rht-D1a alleles in both cold-treated and untreated samples.
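As an aside on the method this abstract mentions, a genome-wide association study tests each marker's genotype against the trait. A minimal sketch of a single marker-trait association via a permutation test follows; the 0/1/2 dosage coding, simulated effect size, and permutation count are illustrative assumptions, not details from the paper.

```python
import numpy as np

def marker_association(genotypes, phenotype, n_perm=2000, seed=0):
    """Permutation test for one marker-trait association.

    Score = |Pearson correlation| between genotype dosage (0/1/2)
    and the quantitative trait; the empirical p-value is the fraction
    of label permutations scoring at least as high as the observed data.
    """
    rng = np.random.default_rng(seed)
    obs = abs(np.corrcoef(genotypes, phenotype)[0, 1])
    perm = np.array([
        abs(np.corrcoef(genotypes, rng.permutation(phenotype))[0, 1])
        for _ in range(n_perm)
    ])
    return obs, (perm >= obs).mean()

# Simulated marker with a genuine additive effect on the trait
sim = np.random.default_rng(1)
g = sim.integers(0, 3, size=200)               # allele dosage per line
y = 0.5 * g + sim.normal(0.0, 1.0, size=200)   # trait = effect + noise
obs_corr, p_value = marker_association(g, y)
```

A real GWAS would additionally correct for population structure and for multiple testing across thousands of markers.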
A new steganographic algorithm based on coupled chaotic maps and a new chaotic S-box
The art of concealing information by embedding it in a seemingly “innocent” message is called steganography. An appropriate steganographic system is essential to guarantee the safety of the transferred file, and the size of the attached file is also of great importance. Ergodic dynamical systems with confusion guarantee an acceptable level of security for cryptographic systems. Here, we suggest a new steganography algorithm based on a measurable dynamical system and a new chaotic S-box. We use two different chaotic maps simultaneously to create the new S-box, aiming to provide adequate key space and high security for the encryption. In the encryption stage, the message is encrypted with the new S-box; performance analysis confirms the capability of both the S-box and the encryption. The goal of this encryption step is to increase security and complicate access to the secret message hidden at the steganographic stage. In the proposed algorithm, chaotic maps determine the pixel positions of the cover color image in which the secret information bits are hidden. Given the key's central role in the security of cryptographic systems, an entropy calculation is presented to determine the chaotic region of the proposed system. Low security against some existing tests and a small key space can cause steganography to fail; the proposed algorithm addresses both problems. The contributions of the suggested steganography algorithm are as follows: (1) it uses an ergodic coupled system that provides ample key space; (2) it adds an encryption step with new, well-performing S-boxes that provides high security; (3) the steganographic design performs better than previous works.
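To illustrate the general pattern the abstract describes (chaos-keyed selection of embedding positions), here is a minimal sketch that uses a single logistic map to derive a key-dependent pixel visiting order and hides message bits in least-significant bits. The logistic map, the key values, and plain LSB embedding are simplifying assumptions; the paper's actual scheme uses coupled chaotic maps plus a chaotic S-box encryption stage.

```python
import numpy as np

def logistic_orbit(x0, r, n, burn_in=100):
    """Iterate the logistic map x -> r*x*(1-x); discard a transient."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

def embed_bits(cover, bits, key=(0.3731, 3.99)):
    """Hide bits in the LSBs of pixels visited in a chaos-keyed order."""
    flat = cover.flatten().astype(np.uint8)
    # Ranking the orbit values yields a key-dependent permutation of pixel indices
    order = np.argsort(logistic_orbit(*key, flat.size))
    for bit, idx in zip(bits, order):
        flat[idx] = (flat[idx] & 0xFE) | bit
    return flat.reshape(cover.shape)

def extract_bits(stego, n_bits, key=(0.3731, 3.99)):
    """Recover the bits by revisiting pixels in the same keyed order."""
    flat = stego.flatten()
    order = np.argsort(logistic_orbit(*key, flat.size))
    return [int(flat[idx] & 1) for idx in order[:n_bits]]
```

Extraction with a wrong key reproduces a different visiting order and hence garbled bits, which is what gives the key its role here.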
The Relationship Between COVID-19 Cases and COVID-19 Testing: a Panel Data Analysis on OECD Countries
Testing, one of the methods to combat the COVID-19 outbreak, is highly recommended in all countries. Empirical studies on how testing relates to the control of new cases help highlight the importance of testing in efforts to combat the epidemic. Therefore, this study aims to investigate the relationship between COVID-19 testing and COVID-19 cases. We use panel autoregressive distributed lag analysis to test the effect of the number of COVID-19 tests on new COVID-19 cases. The data cover the period from March 19, 2020, to May 01, 2020, for 14 OECD countries and were obtained from the https://ourworldindata.org/coronavirus website. According to the results, increasing the number of COVID-19 tests helps to reduce new COVID-19 cases. On the other hand, an increase in the number of tests per thousand people will probably not contribute to reducing new cases, because countries do not test by random selection; even if they did, testing would not contribute to detecting and isolating new cases without first identifying risky groups.
Target output distribution and distribution of bias for statistical model validation given a limited number of test data
Simulation models must be validated against experimental data before they can be used with confidence to predict the outputs of engineered systems. Pointwise comparison between the output predicted by a simulation model and experimental data is not appropriate for model verification and validation (V&V), since real-world phenomena are not deterministic due to irreducible uncertainty. Thus, the output prediction of a simulation model needs to be represented by a probability density function (PDF), and statistical model validation methods are necessary to compare the model prediction with physical test data. Validating a simulation model entails acquiring extraordinarily detailed test data, which is expensive to generate, so practicing engineers can afford only a very limited number of test data. This paper proposes an effective method to validate a simulation model by using a target output distribution that closely approximates the true output distribution. Furthermore, the proposed target output distribution accounts for a biased simulation model with stochastic outputs, specifically a simulation output distribution, using limited numbers of input and output test data. Since limited test data may contain outliers or be sparse, a data quality checking process is proposed to determine whether a given output test dataset needs to be balanced. If necessary, stratified sampling using cluster analysis is employed to sample balanced test data. Next, Bayesian analysis is used to obtain many candidate target output distributions, from which the one at the posterior median is selected. Then, the distribution of bias can be identified using Monte Carlo convolution.
Three engineering examples are used to demonstrate that (1) the developed target output distribution closely approximates the true output distribution and is robust under different sets of test data; (2) the test dataset reallocated by the quality checking process and balanced sampling matches the true output distribution more closely; and (3) the distribution of bias is effective for understanding the model's accuracy and model confidence in a comparison study.
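The final step the abstract describes, identifying the distribution of bias once a target output distribution is known, can be sketched with Monte Carlo sampling. The Gaussian shapes and parameters below are hypothetical stand-ins, and the difference-of-samples step assumes the additive model output = model + bias with independent terms:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-ins: the target output distribution inferred from
# test data, and the stochastic simulation output distribution.
target_samples = rng.normal(loc=10.0, scale=1.5, size=100_000)
model_samples = rng.normal(loc=9.2, scale=1.0, size=100_000)

# Monte Carlo estimate of the bias distribution: under the additive
# model, differences of independent draws approximate the bias PDF.
bias_samples = target_samples - rng.permutation(model_samples)

# bias mean ~ 0.8; bias std ~ sqrt(1.5**2 + 1.0**2) ~ 1.80
```

The resulting sample cloud can then be summarized as an empirical PDF of the model's bias.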
A National VS30 Model for South Korea to Combine Nationwide Dense Borehole Measurements With Ambient Seismic Noise Analysis
The average shear-wave velocity within the top 30 m from the surface, VS30, represents site characteristics including soil classification and site amplification, which are essential information for building codes and seismic design. A novel method is introduced to determine a VS30 model based on a composite analysis of borehole standard penetration test numbers (SPT N) and horizontal-to-vertical (H/V) spectral ambient noise ratios, and a national VS30 model for South Korea is determined using the method. The shear-wave velocity structures beneath 20 nationwide broadband seismic stations are determined using the H/V analysis. The SPT N data are collected from 175,619 densely distributed boreholes nationwide. The shear-wave velocity models from SPT N values are calibrated against the local reference velocity models from the H/V analysis, and a representative relationship between SPT N values and shear-wave velocities is introduced. A national VS30 model for South Korea is then determined using the calibrated SPT N models at the nationwide boreholes. The VS30 model is verified by comparison with local field measurements, and the proposed model is consistent with the USGS model based on a surface slope analysis. The VS30 structure correlates strongly with geological and topographic features: VS30 values are low in coastal (low topographic) areas and high in mountain (high topographic) areas, and an apparent linear relationship is observed between VS30 and topography. The western and southeastern coastal regions may be vulnerable to strong seismic shaking.
Plain Language Summary: Seismic ground motions are an important factor controlling seismic damage. Seismic amplification is highly dependent on the shear-wave velocities at shallow depths (≤30 m), VS30. We introduce a novel method to determine a VS30 model based on standard penetration test numbers (SPT N) from 175,619 boreholes nationwide and horizontal-to-vertical spectral ambient noise ratios (H/V ratios) from 20 broadband seismic stations, and we determine a VS30 model for South Korea. The shear-wave velocity structures beneath the broadband seismic stations serve as local reference velocity models, against which the shear-wave velocity models from SPT N values are calibrated. We determine a representative relationship between SPT N values and shear-wave velocities and compute a high-resolution VS30 model using the calibrated SPT N models at the boreholes. The proposed model is verified by comparison with other results. The VS30 structure correlates strongly with geological and topographic characteristics: VS30 values are low in coastal areas and high in mountain areas. The western and southeastern coastal regions may be vulnerable to strong ground motions during earthquakes.
Key Points:
  • A novel method based on SPT N values and H/V ratios is proposed for determining a national VS30 model for South Korea
  • A high-resolution national VS30 model can be calculated using the calibrated SPT N data at densely distributed boreholes
  • The VS30 model presents high correlation with geological and topographic features
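For reference, VS30 itself is the time-averaged shear-wave velocity over the top 30 m: VS30 = 30 / Σ(h_i / Vs_i), summed over the layers down to 30 m depth. A small sketch follows; the example layer profile is hypothetical:

```python
def vs30(thicknesses_m, velocities_mps):
    """Time-averaged shear-wave velocity over the top 30 m:
    VS30 = 30 / sum(h_i / Vs_i), truncating the profile at 30 m depth."""
    remaining = 30.0
    travel_time = 0.0
    for h, v in zip(thicknesses_m, velocities_mps):
        use = min(h, remaining)          # only the part of the layer above 30 m
        travel_time += use / v           # vertical shear-wave travel time
        remaining -= use
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("velocity profile shallower than 30 m")
    return 30.0 / travel_time

# Example: 5 m of soft soil over a stiffer layer reaching past 30 m depth
print(vs30([5, 40], [180, 450]))  # 360.0 m/s
```

Because the layer travel times are summed before inverting, slow shallow layers dominate the result, which is why soft coastal sediments produce low VS30.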
Deep learning model fusion improves lung tumor segmentation accuracy across variable training-to-test dataset ratios
This study aimed to investigate the robustness of a deep learning (DL) fusion model for low training-to-test ratio (TTR) datasets in the segmentation of gross tumor volumes (GTVs) in three-dimensional planning computed tomography (CT) images for lung cancer stereotactic body radiotherapy (SBRT). A total of 192 patients with lung cancer (solid tumor, 118; part-solid tumor, 53; ground-glass opacity, 21) who underwent SBRT were included in this study. Regions of interest in the GTVs were cropped based on GTV centroids from planning CT images. Three DL models, 3D U-Net, V-Net, and dense V-Net, were trained to segment the GTV regions. Nine fusion models were constructed with logical AND, logical OR, and voting of the outputs of two or three of the DL models. TTR was defined as the ratio of the number of cases in a training dataset to that in a test dataset. The Dice similarity coefficients (DSCs) and Hausdorff distances (HDs) of the 12 models were assessed with TTRs of 1.00 (training data : validation data : test data = 40:20:40), 0.791 (35:20:45), 0.531 (31:10:59), 0.291 (20:10:70), and 0.116 (10:5:85). The voting fusion model achieved the highest DSCs among the 12 models, 0.829 to 0.798 across all TTRs, whereas the other models showed DSCs of 0.818 to 0.804 for a TTR of 1.00 and 0.788 to 0.742 for a TTR of 0.116; its HD of 5.40 ± 3.00 to 6.07 ± 3.26 mm was also better than that of any single DL model. The findings suggest that the proposed voting fusion model is a robust approach for low TTR datasets in segmenting GTVs in planning CT images for lung cancer SBRT.
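The voting fusion in this abstract combines the binary masks predicted by the three networks. A minimal majority-vote sketch with a Dice similarity check follows, using toy 2×3 masks rather than the paper's data:

```python
import numpy as np

def vote_fusion(masks):
    """Majority-vote fusion of binary segmentation masks:
    a voxel is foreground if more than half of the models agree."""
    stacked = np.stack(masks).astype(int)
    return (stacked.sum(axis=0) * 2 > len(masks)).astype(int)

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy predictions from three models over a 2x3 image
m1 = np.array([[1, 1, 0], [0, 1, 0]])
m2 = np.array([[1, 0, 0], [0, 1, 1]])
m3 = np.array([[1, 1, 0], [0, 0, 1]])
fused = vote_fusion([m1, m2, m3])
truth = np.array([[1, 1, 0], [0, 1, 1]])
```

Majority voting suppresses voxels that only one model hallucinates, which is one plausible reason such fusion stays robust as the training set shrinks.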
Assessment of microbial quality of household water output from desalination systems by the heterotrophic plate count method
Point-of-use household water desalination systems (HWDSs) are becoming popular in Iran because of deteriorating drinking water quality. This study aimed to determine the microbial quality of output water from HWDSs in Qom, Iran using the heterotrophic plate count (HPC) method. Samples of input and output water from 30 HWDSs were collected over a six-month period, and heterotrophic bacteria were enumerated using the pour plate technique. At the first sampling stage, the HPC level in 23% of samples exceeded the 500 CFU/ml threshold. On average, the HPC level of input samples was 0–10 CFU/ml for 50% of samples, 10–100 CFU/ml for 42% and 100–500 CFU/ml for 8%. For output samples, the HPC level was 0–10 CFU/ml for 25%, 10–100 CFU/ml for 43%, 100–500 CFU/ml for 24%, and above 500 CFU/ml for 8%. For total coliforms, the most probable number test was positive at the first and third stages of sampling (3% of input samples). Comparison of the averages with national standard values shows that in some cases the contamination of output water from HWDSs in the city of Qom exceeded the standard values.
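The HPC arithmetic behind thresholds like 500 CFU/ml is the standard plate-count scaling of colonies by dilution and plated volume. A one-line sketch follows; the example counts are hypothetical, not the study's data:

```python
def cfu_per_ml(colony_count, dilution_factor, volume_plated_ml):
    """Heterotrophic plate count: colonies scaled back up by the
    dilution factor and divided by the volume actually plated."""
    return colony_count * dilution_factor / volume_plated_ml

# 37 colonies on a pour plate of 1 ml from a 1:10 dilution
count = cfu_per_ml(37, 10, 1.0)   # 370.0 CFU/ml
exceeds_threshold = count > 500   # below the 500 CFU/ml threshold here
```

In practice, plates with roughly 25–300 colonies are counted so that the scaled estimate stays statistically reliable.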
Evaluation of the Pseudalert/Quanti-Tray MPN Test for the Rapid Enumeration of Pseudomonas aeruginosa in Swimming Pool and Spa Pool Waters
This study assessed the performance of a new most probable number test (Pseudalert/Quanti-Tray) for the enumeration of Pseudomonas aeruginosa from swimming pool and spa pool waters by comparing it to the international and national membrane filtration-based culture methods for P. aeruginosa, ISO 16266:2006 and the UK's The Microbiology of Drinking Water, Part 8 (MoDW Part 8), both of which use Pseudomonas CN agar. The comparison was based on the calculation of mean relative differences between the two methods, conducted according to ISO 17994:2014. Using both routine pool water samples (149 from 8 laboratories) and artificially contaminated samples (309 from 7 laboratories), paired counts from each sample and enumeration method were analysed. For routine samples there were insufficient data for a conclusive assessment, but the data indicate at least equivalent performance of Pseudalert/Quanti-Tray relative to the reference methods. For the artificially contaminated samples the assessment was likewise not statistically conclusive, but the data indicate potentially better performance of Pseudalert/Quanti-Tray. Combining the data from the routine and artificially contaminated samples resulted in an ISO 17994 outcome that the two methods are not statistically significantly different. Thus, the Pseudalert/Quanti-Tray method is an acceptable alternative to ISO 16266 and MoDW Part 8. It has the further advantages of not requiring confirmation testing and of providing confirmed counts within 24–28 h of incubation, compared to 40–48 h or longer for the ISO 16266 and MoDW Part 8 methods.
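For a single well size, the most probable number behind tray-based tests like Quanti-Tray follows from the Poisson maximum-likelihood estimate: MPN/ml = −ln(fraction of negative wells) / well volume. A sketch follows; the well count and volume below are rough assumptions, and real Quanti-Tray results are read from the manufacturer's MPN tables, which combine two well sizes:

```python
import math

def mpn_single_dilution(total_wells, positive_wells, well_volume_ml):
    """Maximum-likelihood MPN for one well size: with organisms Poisson
    distributed, the chance a well is negative is exp(-c * v), so
    c = -ln(negative fraction) / v organisms per ml."""
    if positive_wells >= total_wells:
        raise ValueError("all wells positive: MPN is off-scale")
    neg_fraction = (total_wells - positive_wells) / total_wells
    return -math.log(neg_fraction) / well_volume_ml

# 51 wells of about 1.86 ml each (assumed layout), 12 positive wells
print(round(mpn_single_dilution(51, 12, 1.86), 2))  # ~0.14 MPN/ml
```

Note the off-scale case: when every well is positive, the estimate diverges, which is why commercial trays report a "greater than" ceiling value.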
Validation of the Psychometric Hepatic Encephalopathy Score (PHES) for Identifying Patients with Minimal Hepatic Encephalopathy
Background: The psychometric hepatic encephalopathy score (PHES) is a battery of neuropsychological tests used in the diagnosis of minimal hepatic encephalopathy (MHE). Aim: The aim of this study was to construct and validate a dataset of normal values for the PHES. Methods: Volunteers and patients with cirrhosis, with and without low-grade overt hepatic encephalopathy (OHE), were enrolled. All subjects completed the PHES battery, and possible modifying factors were assessed. Formulas to predict expected scores in cirrhotics were constructed, and MHE was diagnosed whenever the summed deviation across the five tests was below −4 SD. Results: Among the 743 volunteers, age and years of education influenced the scores of all tests. Eighty-four patients with cirrhosis lacked evidence of OHE, whereas 20 had OHE: median PHES scores were −1 [0 to −3] and −9 [−6.5 to −11.8] (P < 0.001), respectively. Thirteen of the 84 patients (15%) with cirrhosis but without OHE had MHE. Patients with MHE were older and less educated than those without MHE (61 ± 8 vs. 52 ± 10 years old, P = 0.003; 7 ± 4 vs. 12 ± 5 years of education, P = 0.002), whereas liver function was not different (MELD, 8 ± 5 vs. 8 ± 5). A very strong correlation was observed between these norms and those derived from Spain (r = 0.964, P < 0.001). Conclusions: PHES performance was influenced mostly by age and education, and expected results in cirrhotics need to be adjusted for these factors. Our validation of Mexican norms for the PHES (PHES-Mex) establishes a practical method for assessing MHE and contributes to international efforts to standardize diagnostic protocols for MHE.