300 result(s) for "Karimi, Hassan"
An Overview on SARS-CoV-2 (COVID-19) and Other Human Coronaviruses and Their Detection Capability via Amplification Assay, Chemical Sensing, Biosensing, Immunosensing, and Clinical Assays
Highlights
  • Various amplification assays and sensing techniques can be applied for the detection of SARS-CoV-2.
  • The outputs of biosensors should be presented quantitatively to obtain more accurate and more accessible results.
  • Developing smaller platforms is one approach toward phone-based applications, as well as utilizing LFA, biosensor, and nanobiosensor detection techniques.

A novel coronavirus of zoonotic origin (SARS-CoV-2) has recently been recognized in patients with acute respiratory disease. The COVID-19 causative agent is structurally and genetically similar to SARS and bat SARS-like coronaviruses. The drastic increase in the number of coronavirus cases and available genome sequences has given us an unprecedented opportunity to perform bioinformatics and genomics analyses on this class of viruses. Clinical tests such as PCR and ELISA for rapid detection of this virus are urgently needed for early identification of infected patients. However, these techniques are expensive and not readily available for point-of-care (POC) applications. Currently, the lack of a rapid, accessible, and reliable POC detection method is allowing COVID-19 to progress into a dire global problem. To address the shortcomings of clinical testing, we provide a brief introduction to the general features of coronaviruses and describe various amplification assays, sensing, biosensing, immunosensing, and aptasensing approaches developed for the determination of various groups of coronaviruses, which can serve as templates for the detection of SARS-CoV-2. All sensing and biosensing techniques developed for the determination of various classes of coronaviruses are useful for recognizing the newly emerged coronavirus, SARS-CoV-2. The introduction of these sensing and biosensing methods also sheds light on how to design a proper screening system to detect the virus at an early stage of infection and thereby curb the speed and extent of its spread.
Among molecular approaches such as PCR for the recognition of viral diseases, LAMP-based methods and LFAs are of particular importance for their numerous benefits, and they can help in designing a universal platform for the detection of future emerging pathogenic viruses.
Evaluation of Antioxidants Using Electrochemical Sensors: A Bibliometric Analysis
The imbalance of oxidative and antioxidant systems in a biological system can lead to oxidative stress, which is closely related to the pathogenesis of many diseases. Substances with antioxidant capacity can effectively resist the harmful effects of oxidative stress. Measuring the antioxidant capacity of antioxidants has essential application value in medicine and food science. Techniques such as DPPH radical scavenging assays have been developed to measure antioxidant capacity. However, these traditional analytical techniques are time-consuming and require bulky instruments. Evaluating the antioxidant capacity of antioxidants based on their electrochemical oxidation and reduction behavior is a more convenient approach. This review summarizes the evaluation of antioxidants using electrochemical sensors through a bibliometric analysis. The development of the topic is described, and the research priorities at different stages are discussed. The topic was first investigated in 1999, became popular after 2010, and has remained popular ever since; a total of 758 papers were published during this period. In the early stages, electrochemical techniques were used only as quantitative tools alongside other analytical techniques. Subsequently, cyclic voltammetry was used to directly study the electrochemical behavior of different antioxidants and evaluate antioxidant capacity. With methodological innovations and assistance from materials science, advanced electrochemical sensors have been fabricated to serve this purpose. In this review, we also cluster the keywords to analyze different research directions under the topic. Through co-citation analysis, important papers and their influence on the topic were identified. In addition, the authors' country distribution and category distribution are interpreted in detail. Finally, we propose perspectives for the future development of this topic.
Impact of spatial distribution information of rainfall in runoff simulation using deep learning method
Rainfall-runoff modeling is of great importance for flood forecasting and water management. Hydrological modeling is the traditional and commonly used approach for rainfall-runoff modeling. In recent years, with the development of artificial intelligence technology, deep learning models, such as the long short-term memory (LSTM) model, are increasingly applied to rainfall-runoff modeling. However, current works do not consider the effect of rainfall spatial distribution information on the results. Focusing on 10 catchments from the Catchment Attributes and Meteorology for Large-Sample Studies (CAMELS) dataset, this study compared the performance of LSTMs with different look-back windows (7, 15, 30, 180, 365 d) for next-day (1 d) discharge and for multi-day (7, 15 d) simulations. Second, the differences between LSTMs trained independently as individual models in each catchment and LSTMs trained as regional models were compared across the 10 catchments. All models were driven by catchment mean rainfall data and by spatially distributed rainfall data, respectively. The results demonstrate that regardless of whether LSTMs are trained independently in each catchment or trained as regional models, rainfall data with spatial information improve the performance of LSTMs compared to models driven by mean rainfall data. The LSTM as a regional model did not obtain better results than the LSTM as an individual model in our study. However, we found that using spatially distributed rainfall data can reduce the difference between the LSTM as a regional model and the LSTM as an individual model. In summary, (a) adding information about the spatial distribution of rainfall is another way to improve the performance of LSTMs where long-term rainfall records are absent, and (b) understanding and utilizing spatial distribution information can help improve the performance of deep learning models in runoff simulations.
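A minimal sketch of how the look-back windows compared in this study can be built from a daily rainfall-discharge series; the function name, window lengths, and toy data are illustrative, not the paper's code:

```python
def make_windows(rainfall, discharge, look_back):
    """Pair each target day's discharge with the preceding
    `look_back` days of rainfall (e.g. 7, 15, 30, 180, 365 d)."""
    samples = []
    for t in range(look_back, len(discharge)):
        window = rainfall[t - look_back:t]  # model input sequence
        target = discharge[t]               # next-day discharge
        samples.append((window, target))
    return samples

# Toy daily series: 10 days of rainfall (mm) and discharge (m^3/s).
rain = [0.0, 2.1, 0.5, 0.0, 4.2, 1.1, 0.0, 0.0, 3.3, 0.7]
flow = [1.0, 1.2, 1.5, 1.3, 1.1, 2.0, 1.8, 1.4, 1.2, 1.9]

pairs = make_windows(rain, flow, look_back=7)
print(len(pairs))  # 3 usable samples from 10 days with a 7 d window
```

With spatially distributed rainfall, each window element would be a vector of per-grid-cell values rather than a single catchment mean, which is the change the study credits with the performance gain.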
A Global Path Planner for Safe Navigation of Autonomous Vehicles in Uncertain Environments
Autonomous vehicles (AVs) are considered an emerging technology revolution. Planning paths that are safe to drive on contributes greatly to expediting AV adoption. However, the main barrier to this adoption is navigation under sensor uncertainty, with the understanding that there is no perfect sensing solution for all driving environments. In this paper, we propose a global safe path planner that analyzes sensor uncertainty and determines optimal paths. The path planner has two components: sensor analytics and path finder. The sensor analytics component combines the uncertainties of all sensors to evaluate the positioning and navigation performance of an AV at given locations and times. The path finder component then utilizes the acquired sensor performance and creates a weight based on safety for each road segment. The operation and quality of the proposed path finder are demonstrated through simulations. The simulation results reveal that the proposed safe path planner generates paths that significantly improve the navigation safety in complex dynamic environments when compared to the paths generated by conventional approaches.
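The path finder described above assigns each road segment a safety-based weight and searches for the best route. A minimal sketch of that idea using Dijkstra's algorithm over hypothetical risk weights (the graph, weights, and function name are illustrative assumptions, not the paper's implementation):

```python
import heapq

def safest_path(graph, start, goal):
    """Dijkstra over a road graph whose edge weights encode
    sensor-uncertainty-derived risk (lower = safer), a stand-in
    for a safety-weighted path finder."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, risk in graph.get(node, []):
            nd = d + risk
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    # Reconstruct the route from goal back to start.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]

# Road segments with hypothetical per-segment risk weights.
roads = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("C", 1.0), ("D", 6.0)],
    "C": [("D", 1.0)],
}
route, risk = safest_path(roads, "A", "D")
print(route, risk)  # ['A', 'B', 'C', 'D'] 3.0
```

In the paper's framing, the sensor analytics component would produce these weights from combined sensor uncertainty at a given location and time; the search itself is standard.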
Exploring Topological Information Beyond Persistent Homology to Detect Geospatial Objects
Accurate detection of geospatial objects, particularly landslides, is a critical challenge in geospatial data analysis due to the complex nature of the data and the significant consequences of these events. This paper introduces an innovative topological knowledge-based (Topological KB) method that leverages the integration of topological, geometrical, and contextual information to enhance the precision of landslide detection. Topology, a fundamental branch of mathematics, explores the properties of space that are preserved under continuous transformations and focuses on the qualitative aspects of space, studying features such as connectivity and the existence of loops/holes. We employed persistent homology (PH) to derive candidate polygons and applied three distinct strategies for landslide detection: without any filters, with geometrical and contextual filters, and with a combination of topological, geometrical, and contextual filters. Our method was rigorously tested across five different study areas. The experimental results revealed that geometrical and contextual filters significantly improved detection accuracy, with the highest F1 scores achieved when employing these filters on candidate polygons derived from PH. Contrary to our initial hypothesis, the addition of topological information to the detection process did not yield a notable increase in accuracy, suggesting that the initial topological features extracted through PH suffice for accurate landslide characterization. This study advances the field of geospatial object detection by demonstrating the effectiveness of combining geometrical and contextual information and provides a robust framework for accurately mapping landslide susceptibility.
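The best-performing strategy above applies geometrical and contextual filters to PH-derived candidate polygons. A toy sketch of that filtering step; the field names (`area`, `landcover`) and thresholds are hypothetical placeholders for whatever attributes the candidates carry:

```python
def filter_candidates(polygons, min_area, max_area, allowed_landcover):
    """Keep only candidate polygons passing a geometrical filter
    (plausible area range) and a contextual filter (land cover
    where landslides can occur)."""
    kept = []
    for p in polygons:
        if not (min_area <= p["area"] <= max_area):
            continue  # geometrical filter: implausible size
        if p["landcover"] not in allowed_landcover:
            continue  # contextual filter: wrong setting
        kept.append(p)
    return kept

# Hypothetical PH-derived candidates with attribute values.
candidates = [
    {"id": 1, "area": 500.0, "landcover": "bare_soil"},
    {"id": 2, "area": 12.0,  "landcover": "bare_soil"},  # too small
    {"id": 3, "area": 800.0, "landcover": "water"},      # wrong context
]
kept = filter_candidates(candidates, 100.0, 10000.0, {"bare_soil", "grass"})
print([p["id"] for p in kept])  # [1]
```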
An Artificial Neural Network for Movement Pattern Analysis to Estimate Blood Alcohol Content Level
Impairments in gait occur after alcohol consumption and, if detected in real time, could guide the delivery of "just-in-time" injury prevention interventions. We aimed to identify the salient features of gait that could be used to estimate blood alcohol content (BAC) level in a typical drinking environment. We recruited 10 young adults with a history of heavy drinking to test our research app. During four consecutive Fridays and Saturdays, every hour from 8 p.m. to 12 a.m., they were prompted to use the app to report alcohol consumption and complete a 5-step straight-line walking task, during which 3-axis acceleration and angular velocity data were sampled at a frequency of 100 Hz. BAC for each subject was calculated. From the sensor signals, 24 features were calculated using a sliding window technique, including energy, mean, and standard deviation. Using an artificial neural network (ANN), we performed regression analysis to model the association between gait features and BAC. Seventy percent of the data were used as a training dataset, and the results were tested and validated on the remaining samples. We evaluated different training algorithms for the neural network, and the results showed that a Bayesian regularization neural network (BRNN) was the most efficient and accurate. Our analyses support the use of the tandem gait task paired with our approach to reliably estimate BAC based on gait features. Results from this work could be useful in designing effective prevention interventions to reduce risky behaviors during periods of alcohol consumption.
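A minimal sketch of the sliding-window feature extraction named above (energy, mean, standard deviation) on a single sensor axis; window width, step, and the toy trace are illustrative, and the study's full pipeline computes 24 such features over 3-axis data:

```python
import math

def window_features(signal, width, step):
    """Compute (energy, mean, std) over a sliding window of
    accelerometer samples."""
    feats = []
    for start in range(0, len(signal) - width + 1, step):
        w = signal[start:start + width]
        mean = sum(w) / width
        var = sum((x - mean) ** 2 for x in w) / width
        energy = sum(x * x for x in w)
        feats.append((energy, mean, math.sqrt(var)))
    return feats

# Toy 1-axis acceleration trace (the study sampled 3 axes at 100 Hz).
accel = [0.1, 0.3, -0.2, 0.4, 0.0, -0.1, 0.2, 0.3]
rows = window_features(accel, width=4, step=2)
print(len(rows))  # 3 overlapping windows from 8 samples
```

Feature rows like these would then be the inputs to the regression ANN, with calculated BAC as the target.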
A Method for Extracting Some Key Terrain Features from Shaded Relief of Digital Terrain Models
Detection of terrain features (ridges, spurs, cliffs, and peaks) is a basic research topic in digital elevation model (DEM) analysis and is essential for learning about the factors that influence terrain surfaces, such as geologic structures and geomorphologic processes. Detection of terrain features based on general geomorphometry is challenging and carries a high degree of uncertainty, mostly due to the variety of factors controlling surface evolution in different regions. Currently, there are different computational techniques for obtaining detailed information about terrain features using DEM analysis. One of the most common is numerically identifying or classifying terrain elements, where regional topologies of the land surface are constructed using DEMs or by combining DEM derivatives. The main drawbacks of these techniques are that they cannot differentiate between ridges, spurs, and cliffs, or that they produce a high rate of false positives when detecting spur lines. In this paper, we propose a new method for automatically detecting terrain features such as ridges, spurs, cliffs, and peaks using shaded relief, by controlling the altitude and azimuth of illumination sources on both smooth and rough surfaces. In our proposed method, we apply edge detection filters, oriented by azimuth angle, to shaded relief to identify specific terrain features. Results show that the proposed method performs similarly to, or in some cases (when detecting spurs) better than, current terrain feature detection methods such as geomorphon, curvature, and probabilistic methods.
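A toy sketch of applying a directional edge-detection kernel to a shaded-relief grid, the core operation described above; the Sobel-x kernel and the tiny grid are illustrative assumptions, not the paper's specific filters:

```python
def convolve3x3(grid, kernel):
    """Apply a 3x3 kernel to a grid of shaded-relief brightness
    values (interior cells only)."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(1, rows - 1):
        row = []
        for c in range(1, cols - 1):
            acc = 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    acc += kernel[dr + 1][dc + 1] * grid[r + dr][c + dc]
            row.append(acc)
        out.append(row)
    return out

# A Sobel-x kernel responds to vertical brightness edges: a linear
# feature lit from the side shows up as a bright-to-dark transition.
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
relief = [
    [10, 10, 50, 50],
    [10, 10, 50, 50],
    [10, 10, 50, 50],
    [10, 10, 50, 50],
]
edges = convolve3x3(relief, sobel_x)
print(edges)  # strong, uniform response along the brightness step
```

In the paper's method, the orientation of the kernel would be chosen from the azimuth of the illumination source, so that the filter responds to the terrain features that azimuth highlights.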
Advancing Algorithmic Adaptability in Hyperspectral Anomaly Detection with Stacking-Based Ensemble Learning
Anomaly detection in hyperspectral imaging is crucial for remote sensing, driving the development of numerous algorithms. However, systematic studies reveal a dichotomy: algorithms generally excel either at detecting anomalies in specific datasets or at generalizing across heterogeneous datasets (i.e., they lack adaptability). A key source of this dichotomy may be the singular, similar biases frequently employed by existing algorithms. Current research lacks experimentation into how integrating insights from diverse biases might counteract the problems of singularly biased approaches. Addressing this gap, we propose stacking-based ensemble learning for hyperspectral anomaly detection (SELHAD). SELHAD integrates hyperspectral anomaly detection algorithms with diverse biases (e.g., Gaussian, density, partition) into a single ensemble learning model and learns the factor to which each bias should contribute so that anomaly detection performance is optimized. Additionally, it introduces bootstrapping strategies into hyperspectral anomaly detection algorithms to further increase robustness. We focused on five representative algorithms embodying common biases in hyperspectral anomaly detection and demonstrated how they give rise to the dichotomy highlighted above. Subsequently, we demonstrated how SELHAD learns the interplay between these biases, enabling their collaborative utilization. In doing so, SELHAD transcends the limitations inherent in individual biases, thereby alleviating the dichotomy and advancing toward more adaptable solutions.
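The combining step of a stacking ensemble of this kind can be sketched as a weighted sum of per-pixel scores from base detectors with different biases; here the weights are hand-picked for illustration, whereas SELHAD learns them from data:

```python
def stacked_score(pixel_scores, weights):
    """Combine per-pixel anomaly scores from several base
    detectors using one weight per detector (a linear meta-model,
    the simplest form of a stacking combiner)."""
    return [
        sum(w * s for w, s in zip(weights, scores))
        for scores in zip(*pixel_scores)
    ]

# Scores from three hypothetical detectors with different biases
# (Gaussian-, density-, and partition-based) on four pixels.
gaussian  = [0.1, 0.9, 0.2, 0.8]
density   = [0.2, 0.7, 0.1, 0.9]
partition = [0.0, 0.8, 0.3, 0.7]
combined = stacked_score([gaussian, density, partition],
                         weights=[0.5, 0.3, 0.2])
print(combined)
```

A pixel that several differently biased detectors agree on ends up with a high combined score, which is the intuition behind mixing biases rather than relying on a single one.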
Enhancing Hyperspectral Anomaly Detection Algorithm Comparisons: Leveraging Dataset and Algorithm Characteristics
Validating the contributions of new algorithms is a critical step in hyperspectral anomaly detection (HAD) research. Typically, validation involves comparing the performance of a proposed algorithm against other algorithms using a series of benchmark datasets. Despite the longstanding use of this comparison process, little attention has been paid to the characteristics of datasets and algorithms that ensure each algorithm has an equal opportunity of performing well. Characteristics of datasets and algorithms that inadvertently favor one algorithm can skew results, leading to misleading conclusions. To address this issue, this study introduces a feature-centric framework designed to assist in ensuring an unbiased comparison of HAD algorithms. The framework identifies significant correlations between datasets and algorithms by extracting distribution-related features from the datasets and statistically testing them against the algorithmic outcomes. The identified trends are then compared across datasets to ensure that all relevant trends are equally represented, thereby ensuring diversity and validating that no singular algorithm is afforded an inherent advantage. The framework was tested on five algorithms across 14 datasets. The results indicate that multiple measures of variance within the datasets are key drivers of diversity, and these measures accurately predicted algorithmic outcomes for 12 of the 14 datasets. This suggests that the identified trends effectively explain the algorithmic outcomes and highlights the importance of incorporating datasets with a diverse range of variances in comparisons of HAD algorithms.
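The statistical test at the heart of the framework above relates a dataset-level feature to per-dataset algorithmic outcomes. A minimal sketch using Pearson correlation; the feature (band variance) and the outcome values are hypothetical:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between a dataset feature (e.g. a
    variance measure) and an algorithm's score on each dataset."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-dataset variance vs. a detector's AUC.
variance = [0.5, 1.2, 2.0, 3.1, 4.4]
auc      = [0.62, 0.70, 0.74, 0.81, 0.88]
r = pearson(variance, auc)
print(round(r, 3))
```

A strong correlation like this one would flag variance as a dataset characteristic that predicts the algorithm's outcome, so a fair benchmark should include datasets spanning a range of variances.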
Reconstruction of Continuous High-Resolution Sea Surface Temperature Data Using Time-Aware Implicit Neural Representation
Accurate climate data at fine spatial resolution are essential for scientific research and for the development and planning of crucial social systems, such as energy and agriculture. Among such data, sea surface temperature plays a critical role, as the associated El Niño–Southern Oscillation (ENSO) is considered a significant signal of the global interannual climate system. In this paper, we propose an implicit neural representation-based interpolation method with temporal information (T_INRI) to reconstruct climate data at high spatial resolution, with sea surface temperature as the research object. Traditional deep learning models for generating high-resolution climate data are only applicable to fixed resolution-enhancement scales. In contrast, the proposed T_INRI method is not limited to the enhancement scale provided during training, and our results indicate that it can enhance low-resolution input at arbitrary scales. Additionally, we discuss the impact of temporal information on the generation of high-resolution climate data, specifically the influence of the month from which the low-resolution sea surface temperature data are obtained. Our experimental results indicate that T_INRI outperforms traditional interpolation methods at different enhancement scales, and the temporal information improves T_INRI performance across different calendar months. We also examined the potential capability of T_INRI to recover missing grid values. These results demonstrate that the proposed T_INRI is a promising method for generating high-resolution climate data and has significant implications for climate research and related applications.
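The arbitrary-scale property comes from representing the field as a continuous function of coordinates, which can be queried on an output grid of any size. A toy sketch of that querying step; the `toy_sst` lambda is a placeholder smooth field standing in for a trained network like T_INRI:

```python
def sample_grid(field, height, width, month):
    """Query a coordinate-based (implicit) representation on an
    arbitrary output grid: `field` maps normalized (lat, lon) plus
    a month index to a value, so the output resolution is chosen
    freely at inference time."""
    return [
        [field(r / (height - 1), c / (width - 1), month)
         for c in range(width)]
        for r in range(height)
    ]

# Placeholder smooth field instead of a trained network.
toy_sst = lambda lat, lon, month: 20.0 + 5.0 * lat - 2.0 * lon + 0.1 * month

low  = sample_grid(toy_sst, 4, 4, month=1)    # coarse query
high = sample_grid(toy_sst, 32, 32, month=1)  # 8x finer, same model
print(len(low), len(high))  # 4 32
```

Because the same function answers both queries, no retraining is needed to change the enhancement scale, and the same mechanism lets missing grid cells be filled by querying their coordinates.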