Catalogue Search | MBRL
Explore the vast range of titles available.

202 result(s) for "Interquartile range"
Influence of dietary factors on the clinical course of ulcerative colitis: a prospective cohort study
2004
Background and aims: The causes of relapses of ulcerative colitis (UC) are unknown. Dietary factors have been implicated in the pathogenesis of UC. The aim of this study was to determine which dietary factors are associated with an increased risk of relapse of UC. Methods: A prospective cohort study was performed with UC patients in remission, recruited from two district general hospitals, who were followed for one year to determine the effect of habitual diet on relapse. Relapse was defined using a validated disease activity index. Nutrient intake was assessed using a food frequency questionnaire and categorised into tertiles. Adjusted odds ratios for relapse were determined using multivariate logistic regression, controlling for non-dietary factors. Results: A total of 191 patients were recruited and 96% completed the study. Fifty two per cent of patients relapsed. Consumption of meat (odds ratio (OR) 3.2 (95% confidence interval (CI) 1.3–7.8)), particularly red and processed meat (OR 5.19 (95% CI 2.1–12.9)), protein (OR 3.00 (95% CI 1.25–7.19)), and alcohol (OR 2.71 (95% CI 1.1–6.67)) in the top tertile of intake increased the likelihood of relapse compared with the bottom tertile of intake. High sulphur (OR 2.76 (95% CI 1.19–6.4)) or sulphate (OR 2.6 (95% CI 1.08–6.3)) intakes were also associated with relapse and may offer an explanation for the observed increased likelihood of relapse. Conclusions: Potentially modifiable dietary factors, such as a high meat or alcoholic beverage intake, have been identified that are associated with an increased likelihood of relapse for UC patients. Further studies are needed to determine if it is the sulphur compounds within these foods that mediate the likelihood of relapse and if reducing their intake would reduce relapse frequency.
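The study categorises nutrient intake into tertiles and compares relapse odds between the top and bottom tertile. A minimal sketch of both steps on hypothetical counts (the published ORs were adjusted via multivariate logistic regression, which this unadjusted 2x2 calculation does not reproduce):

```python
import math

def tertile(values):
    """Assign each value to tertile 0 (bottom), 1 (middle) or 2 (top)."""
    ranked = sorted(values)
    cut1 = ranked[len(ranked) // 3]
    cut2 = ranked[2 * len(ranked) // 3]
    return [0 if v < cut1 else 1 if v < cut2 else 2 for v in values]

def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Unadjusted odds ratio for a 2x2 table, with a 95% Wald confidence interval."""
    or_ = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
    se = math.sqrt(1 / exposed_cases + 1 / exposed_controls
                   + 1 / unexposed_cases + 1 / unexposed_controls)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)
```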
Journal Article
Pulse oximetry for monitoring infants in the delivery room: a review
2007
In early studies, investigators placed the sensor over the right Achilles tendon,20,25–27 the forefoot19 or midfoot.22,28 Later studies found that measurements were obtained fastest from the right hand,15 probably owing to better perfusion, higher blood pressure and oxygenation in preductal vessels.14,29 Preductal readings were significantly higher than postductal readings soon after birth (p<0.05).5,15,22 By 17 min after birth, there was no longer a significant difference between preductal and postductal measurements.5,15,22

HOW QUICKLY CAN AN SPO2 READING BE OBTAINED? Some studies report the range of SpO2 at 1, 5 or 10 min (tables 1 and 2); others report the time taken to reach a predetermined SpO2 (table 3). [...] it may not be appropriate to identify specific SpO2 levels at certain times after birth, which can be used as a trigger to alter an infant’s treatment.

[Tables 1–3 (observational studies measuring SpO2 in the first minutes of life in the delivery room, including studies by Harris et al, Toth et al, Rabi et al, Kamlin et al, Gonzales and Salirrosas, Gungor et al, Kopotic and Lindner, Vento et al, Rao and Ramji, and Saugstad et al; columns: gestation, oximeter model, sensor location, n, SpO2 % at 1, 5 and 10 min, and comments) are garbled in extraction and not reproduced here. Abbreviations: CPAP, continuous positive airway pressure; C/S, caesarean section; NA, not available; SpO2, saturation by pulse oximetry.]

DOES THE TYPE OF OXIMETER ALTER THE SPO2 RESULTS?
Journal Article
Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range
by
Wan, Xiang
,
Wang, Wenqian
,
Liu, Jiming
in
Algorithms
,
Biomedical Research - statistics & numerical data
,
Computer Simulation
2014
Background
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of trials, however, report their results as the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials.
Methods
In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.’s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials.
Results
We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications.
Conclusions
In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as comprehensive guidance for performing meta-analysis in different situations.
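The scenarios above correspond to closed-form estimators given in the paper. A sketch of two of them, with the constants quoted from the published formulas (verify against the paper's summary spreadsheet before relying on them):

```python
from statistics import NormalDist

def wan_sd_from_range(a, b, n):
    """Wan et al. (2014) estimator of the sample SD from the minimum a,
    maximum b and sample size n (their scenario with min/median/max)."""
    xi = 2 * NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
    return (b - a) / xi

def wan_sd_from_iqr(q1, q3, n):
    """Wan et al. (2014) estimator of the sample SD from the first and
    third quartiles and sample size n (quartile-only scenario)."""
    eta = 2 * NormalDist().inv_cdf((0.75 * n - 0.125) / (n + 0.25))
    return (q3 - q1) / eta

def mean_from_quartiles(q1, m, q3):
    """Simple estimator of the sample mean from the three quartiles."""
    return (q1 + m + q3) / 3
```

For standard normal data the quartiles are roughly ±0.6745, so the IQR-based estimate recovers an SD near 1, matching the paper's "nearly unbiased for normal data" claim.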
Journal Article
Prenatal predictors of mortality in very preterm infants cared for in the Australian and New Zealand Neonatal Network
by
Hutchinson, J
,
Donoghue, D
,
Henderson-Smart, D
in
ANZNN
,
ANZNN, Australian and New Zealand Neonatal Network
,
Australasia
2007
Aim: To identify antenatal and perinatal risk factors for in-hospital mortality of babies born within the Australian and New Zealand Neonatal Network (ANZNN). Methods: Data were collected prospectively as part of the ongoing audit of high-risk infants (birth weight <1500 g or gestation <32 weeks) admitted to all level III neonatal units in Australia and New Zealand. Antenatal and intrapartum factors to 1 min of age were examined in 11 498 infants with gestational age >24 weeks. Risk and protective factors for mortality were derived from logistic regression models fitted to 1998–9 data and validated on 2000–1 data. Results: For the whole cohort of infants born between 1998 and 2001, prematurity was the dominant risk factor, infants born at 25 weeks having 32 times greater odds of death than infants born at 31 weeks. Low birth weight for gestational age also had a dose–response effect: the more growth restricted the infant the greater the risk of mortality; infants below the 3rd centile had eight times greater odds of death than those between the 25th and 75th centiles. Male sex was also a significant risk factor (odds ratio (OR) 1.55, 95% confidence interval (CI) 1.31 to 1.82). Maternal hypertension in pregnancy was protective (OR 0.46, 95% CI 0.36 to 0.50). The predictive model for mortality had an area under the receiver operating characteristic curve of 0.82. Conclusions: Risk of mortality can be predicted with good accuracy with factors up to the 1 min Apgar score. By using gestation rather than birth weight as the main indicator of maturity, these data confirm that weight for gestational age is an independent risk factor for mortality.
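The reported discrimination (area under the receiver operating characteristic curve of 0.82) can be computed directly as the Mann-Whitney probability that a randomly chosen death receives a higher predicted risk than a randomly chosen survivor. A generic sketch, not the ANZNN analysis code:

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs in which the positive
    case scores higher, counting ties as half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```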
Journal Article
Cancer mortality and competing causes of death in older adults with cancer: A prospective, multicentre cohort study (ELCAPA‐19)
by
Brain, Etienne
,
Paillaud, Elena
,
Broussier, Amaury
in
[PHYS.PHYS.PHYS-DATA-AN] Physics [physics]/Physics [physics]/Data Analysis, Statistics and Probability [physics.data-an]
,
[SDV.CAN] Life Sciences [q-bio]/Cancer
,
[SDV.MHEP.GEG] Life Sciences [q-bio]/Human health and pathology/Geriatry and gerontology
2023
Journal Article
Accuracy and precision of test weighing to assess milk intake in newborn infants
2006
Background: Test weighing is commonly used to estimate milk intake in newborn infants. Objective: To assess the accuracy and precision of test weighing in clinical practice. Methods: Infants fed by bottle, cup, or nasogastric tube were weighed before and immediately after feeding by a blinded investigator. Actual milk intake was determined by reading the millilitre scale of the milk container before and after feeding. The accuracy and precision of test weighing was assessed by examining the frequency distribution of the difference between weight change and actual milk intake. Results: Ninety four infants completed the study. The mean difference between weight change and actual milk intake was 1.3 ml, indicating good accuracy. The precision of test weighing, however, was poor: 95% of differences between weight change and actual milk intake ranged from −12.4 to 15 ml. The maximum difference was 30 ml. Imprecision was not influenced by the presence of monitor or oxygen saturation wires, intravenous lines, or vomiting of the infant. Conclusions: Test weighing is an imprecise method for assessing milk intake in young infants. This is probably because infant weighing scales are not sensitive enough to pick up small changes in an infant’s weight after feeding. Because of its unreliability, test weighing should not be used in clinical practice.
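The accuracy and precision assessment described here follows the Bland-Altman limits-of-agreement idea: accuracy is the mean difference between weight change and actual intake (the bias), and the 95% range of differences is roughly the bias plus or minus 1.96 standard deviations. A minimal sketch on hypothetical paired measurements:

```python
from statistics import mean, stdev

def limits_of_agreement(measured, reference):
    """Bland-Altman style summary: bias (mean difference) and the
    95% limits of agreement, bias +/- 1.96 SD of the differences."""
    diffs = [m - r for m, r in zip(measured, reference)]
    bias, sd = mean(diffs), stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```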
Journal Article
Automatic Seizure Detection Based on Morphological Features Using One-Dimensional Local Binary Pattern on Long-Term EEG
2018
Epileptic neurological disorder of the brain is widely diagnosed using the electroencephalography (EEG) technique. EEG signals are nonstationary in nature and show abnormal neural activity during the ictal period. Seizures can be identified by analyzing and obtaining features of the EEG signal that can detect these abnormal activities. The present work proposes a novel morphological feature extraction technique based on the local binary pattern (LBP) operator. LBP provides a unique decimal value to a sample point by weighing the binary outcomes after thresholding the neighboring samples with the present sample point. These LBP values assist in capturing the rising and falling edges of the EEG signal, thus providing a morphologically featured discriminating pattern for epilepsy detection. In the present work, the variability in the LBP values is measured by calculating the sum of absolute differences of consecutive LBP values. The interquartile range is calculated over the preprocessed EEG signal to provide a dispersion measure of the signal. For classification purposes, a K-nearest neighbor classifier is used, and the performance is evaluated on 896.9 hours of data from the CHB-MIT continuous EEG database. A mean accuracy of 99.7% and a mean specificity of 99.8% are obtained, with an average false detection rate of 0.47/h and a sensitivity of 99.2% for 136 seizures.
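The 1D-LBP operator described above thresholds a sample's neighbours against the sample itself and reads the resulting bits as a decimal code. A sketch under assumed conventions (the paper's exact neighbourhood size and bit ordering may differ):

```python
def lbp_1d(signal, radius=3):
    """One-dimensional local binary pattern: for each interior sample,
    threshold its 2*radius neighbours against the sample and pack the
    binary outcomes into a decimal code."""
    codes = []
    for i in range(radius, len(signal) - radius):
        neighbours = signal[i - radius:i] + signal[i + 1:i + radius + 1]
        code = 0
        for bit, x in enumerate(neighbours):
            if x >= signal[i]:
                code |= 1 << bit
        codes.append(code)
    return codes

def lbp_variability(codes):
    """Sum of absolute differences of consecutive LBP codes, the
    variability measure described in the abstract."""
    return sum(abs(a - b) for a, b in zip(codes, codes[1:]))
```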
Journal Article
Baseline metabolomic profiles predict cardiovascular events in patients at risk for coronary artery disease
by
Shah, Svati H.
,
Kraus, William E.
,
Haynes, Carol
in
Adult
,
Age Distribution
,
Analysis of Variance
2012
Cardiovascular risk models remain incomplete. Small-molecule metabolites may reflect underlying disease and, as such, serve as novel biomarkers of cardiovascular risk.
We studied 2,023 consecutive patients undergoing cardiac catheterization. Mass spectrometry profiling of 69 metabolites and lipid assessments were performed in fasting plasma. Principal component analysis reduced metabolites to a smaller number of uncorrelated factors. Independent relationships between factors and time-to-clinical events were assessed using Cox modeling. Clinical and metabolomic models were compared using log-likelihood and reclassification analyses.
At median follow-up of 3.1 years, there were 232 deaths and 294 death/myocardial infarction (MI) events. Five of 13 metabolite factors were independently associated with mortality: factor 1 (medium-chain acylcarnitines: hazard ratio [HR] 1.12 [95% CI, 1.04-1.21], P = .005), factor 2 (short-chain dicarboxylacylcarnitines: HR 1.17 [1.05-1.31], P = .005), factor 3 (long-chain dicarboxylacylcarnitines: HR 1.14 [1.05-1.25], P = .002); factor 6 (branched-chain amino acids: HR 0.86 [0.75-0.99], P = .03), and factor 12 (fatty acids: HR 1.19 [1.06-1.35], P = .004). Three factors independently predicted death/MI: factor 2 (HR 1.11 [1.01-1.23], P = .04), factor 3 (HR 1.13 [1.04-1.22], P = .005), and factor 12 (HR 1.18 [1.05-1.32], P = .004). For mortality, 27% of intermediate-risk patients were correctly reclassified (net reclassification improvement 8.8%, integrated discrimination index 0.017); for death/MI model, 11% were correctly reclassified (net reclassification improvement 3.9%, integrated discrimination index 0.012).
Metabolic profiles predict cardiovascular events independently of standard predictors.
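The dimension-reduction step above (69 metabolites collapsed to a few uncorrelated factors) is standard principal component analysis; factor scores then enter the Cox models. A generic numpy sketch of the reduction only, not the study's pipeline:

```python
import numpy as np

def pca_factors(X, k):
    """Reduce an (n_samples, n_metabolites) matrix to k uncorrelated
    factor scores via SVD of the mean-centred data."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T             # projections onto top-k components
    explained = (s ** 2) / (s ** 2).sum()  # variance share per component
    return scores, explained[:k]
```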
Journal Article
Random Oversampling-Based Diabetes Classification via Machine Learning Algorithms
by
Eunice, R. Jennifer
,
Kanaga, E. Grace Mary
,
Andrew, J.
in
Artificial Intelligence
,
Boruta technique
,
Computational Intelligence
2024
Diabetes mellitus is considered one of the main causes of death worldwide. If diabetes is not diagnosed and treated early, it can cause several other health problems, such as kidney disease, nerve disease, vision problems, and brain issues. Early detection of diabetes reduces healthcare costs and minimizes the chance of serious complications. In this work, we propose an e-diagnostic model for diabetes classification via a machine learning algorithm that can be executed on the Internet of Medical Things (IoMT). The study uses and analyses two benchmarking datasets, the PIMA Indian Diabetes Dataset (PIDD) and the Behavioral Risk Factor Surveillance System (BRFSS) diabetes dataset, to classify diabetes. The proposed model consists of the random oversampling method to balance the classes, interquartile range-based outlier detection to eliminate outlier data, and the Boruta algorithm for selecting the optimal features from the datasets. The proposed approach considers ML algorithms such as random forest, gradient boosting models, light gradient boosting classifiers, and decision trees, as they are widely used classification algorithms for diabetes prediction. We evaluated all four ML algorithms via performance indicators such as accuracy, F1 score, recall, precision, and AUC-ROC. Comparative analysis of this model suggests that the random forest algorithm outperforms all the remaining classifiers, with the greatest accuracy of 92% on the BRFSS diabetes dataset and 94% accuracy on the PIDD dataset, 3% greater than the accuracy reported in existing research. This research is helpful for assisting diabetologists in developing accurate treatment regimens for patients who are diabetic.
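The IQR-based outlier elimination used in this model is commonly implemented as the Tukey fence rule; whether the paper uses the conventional k = 1.5 multiplier is an assumption:

```python
def iqr_outlier_mask(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey fences)."""
    ranked = sorted(values)
    n = len(ranked)

    def quantile(q):
        # Linear interpolation between the closest ranks.
        pos = q * (n - 1)
        lo_i, frac = int(pos), pos - int(pos)
        hi_i = min(lo_i + 1, n - 1)
        return ranked[lo_i] * (1 - frac) + ranked[hi_i] * frac

    q1, q3 = quantile(0.25), quantile(0.75)
    lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
    return [not (lo <= v <= hi) for v in values]
```

Rows whose mask entry is True would be dropped before oversampling and feature selection.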
Journal Article
Custom Outlier Detection for Electrical Energy Consumption Data Applied in Case of Demand Response in Block of Buildings
by
Ceclan, Andrei
,
Jurj, Dacian I.
,
Bârgăuan, Bogdan
in
Automation
,
baseline electricity consumption
,
data cleaning
2021
The aim of this paper is to provide an extended analysis of outlier detection, using probabilistic and AI techniques, applied in a demo pilot demand response in blocks of buildings project, based on real experiments and energy data collection with detected anomalies. A numerical algorithm was created to differentiate between natural energy peaks and outliers, so that data cleaning could be applied first. A calculation of the impact on the energy baseline for the demand response computation was then implemented, with improved precision relative to other referenced methods and to the original data processing. For the demo pilot project implemented in the Technical University of Cluj-Napoca block of buildings, without data cleaning of the energy baseline it was in some cases impossible to compute the established key performance indicators (peak power reduction, energy savings, cost savings, CO2 emissions reduction), or the resulting values were far higher (>50%) and not realistic. Therefore, in real-world business models, it is crucial to use outlier removal. In recent years, both companies and academic communities have pooled their efforts to generate input consisting of new abstractions, interfaces, approaches for scalability, and crowdsourcing techniques. Quantitative and qualitative methods have been created with the aim of error reduction and have been covered in multiple surveys and overviews of outlier detection.
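One common way to separate natural (sustained) energy peaks from isolated outliers, as the paper's algorithm aims to do, is a rolling-median filter with a MAD threshold. This is an illustrative sketch, not the authors' algorithm:

```python
from statistics import median

def spike_outliers(series, window=5, threshold=3.0):
    """Flag isolated spikes: points deviating from the rolling median of
    their window by more than threshold times the window's median absolute
    deviation (MAD). Sustained demand peaks shift the rolling median with
    them and so are not flagged."""
    half = window // 2
    flags = []
    for i in range(len(series)):
        win = series[max(0, i - half):i + half + 1]
        med = median(win)
        mad = median(abs(x - med) for x in win) or 1e-9  # avoid zero MAD
        flags.append(abs(series[i] - med) > threshold * mad)
    return flags
```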
Journal Article