861 results for "Ahmad, Kamran"
A Hybrid CNN-LSTM Model for Improving Accuracy of Movie Reviews Sentiment Analysis
Social media has become a tremendous source for acquiring users' opinions. With the advancement of technology and the sophistication of the internet, huge amounts of data are generated from sources such as social blogs and websites, which have become real-time channels for gathering product reviews. The sheer number of blogs in the cloud produces a large volume of information in forms such as attitudes, opinions, and reviews, so there is a pressing need for methods that extract meaningful information from big data, classify it into categories, and predict end users' behaviors and sentiments. Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) models have been applied to many Natural Language Processing (NLP) tasks with remarkable results: a CNN efficiently extracts higher-level features using convolutional and max-pooling layers, while an LSTM captures long-term dependencies between word sequences. In this study, we propose a hybrid of an LSTM and a deep CNN, named the Hybrid CNN-LSTM Model, for sentiment analysis. First, we use the Word2Vec approach to train initial word embeddings; Word2Vec translates text strings into vectors of numeric values, computes distances between words, and groups similar words by meaning. After word embedding, the proposed model combines the set of features extracted by the convolution and global max-pooling layers with the long-term dependencies captured by the LSTM. The model also uses dropout, normalization, and rectified linear units to improve accuracy. Our results show that the proposed Hybrid CNN-LSTM Model outperforms traditional deep learning and machine learning techniques in terms of precision, recall, F-measure, and accuracy.
Our approach achieved competitive results against state-of-the-art techniques on the IMDB and Amazon movie review datasets.
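The convolution-plus-global-max-pooling feature extraction the abstract describes can be sketched in plain NumPy. This is an illustrative sketch only, not the authors' implementation; the filter shapes and the placement of the ReLU are assumptions:

```python
import numpy as np

def conv1d_feature_maps(embeddings, filters):
    """1D convolution over word embeddings with ReLU activation.

    embeddings: (seq_len, emb_dim) matrix of word vectors
    filters:    (n_filters, width, emb_dim) convolution kernels
    returns:    (seq_len - width + 1, n_filters) feature maps
    """
    seq_len, _ = embeddings.shape
    n_filters, width, _ = filters.shape
    out = np.zeros((seq_len - width + 1, n_filters))
    for t in range(seq_len - width + 1):
        window = embeddings[t:t + width]  # (width, emb_dim)
        # dot each filter with the window, then apply ReLU
        out[t] = np.maximum(0.0, np.tensordot(filters, window,
                                              axes=([1, 2], [0, 1])))
    return out

def global_max_pool(feature_maps):
    # keep the strongest response of each filter across all positions
    return feature_maps.max(axis=0)
```

In the hybrid architecture, a pooled vector like this would then be combined with the LSTM's representation of long-term word dependencies before the final classification layer.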
A Hybrid Speller Design Using Eye Tracking and SSVEP Brain–Computer Interface
Steady-state visual evoked potentials (SSVEPs) have been used extensively to develop brain–computer interfaces (BCIs) owing to their robustness, large number of commands, high classification accuracies, and high information transfer rates (ITRs). However, presenting many simultaneously flickering stimuli often causes considerable user discomfort, tiredness, annoyance, and fatigue. Here we propose a hybrid speller that combines electroencephalography (EEG) with video-based eye tracking to increase user comfort when large numbers of stimuli flicker simultaneously. A canonical correlation analysis (CCA)-based framework identified the target frequency from only 1 s of flickering signal. Our proposed BCI speller uses only six frequencies to classify forty-eight targets, greatly increasing the ITR, whereas basic SSVEP BCI spellers require as many frequencies as targets. Using this speller, we obtained an average classification accuracy of 90.35 ± 3.597% with an average ITR of 184.06 ± 12.761 bits per minute in a cued-spelling task and an ITR of 190.73 ± 17.849 bits per minute in a free-spelling task. Our proposed speller is therefore superior to other spellers in terms of the number of targets classified, classification accuracy, and ITR, while producing less fatigue, annoyance, tiredness, and discomfort. Together, the proposed hybrid eye-tracking and SSVEP BCI-based system can ultimately enable a truly high-speed communication channel.
Hybrid EEG—Eye Tracker: Automatic Identification and Removal of Eye Movement and Blink Artifacts from Electroencephalographic Signal
Contamination of electroencephalogram (EEG) recordings by eye movement and blink artifacts makes the analysis of EEG data more difficult and can lead to misleading findings. Efficient removal of these artifacts is an essential step toward improving the classification accuracy of brain-computer interfaces (BCIs). In this paper, we propose an automatic framework based on independent component analysis (ICA) and system identification that identifies and removes ocular artifacts from EEG data using a hybrid EEG and eye-tracker system. The performance of the proposed algorithm is illustrated on experimental and standard EEG datasets. The algorithm not only removes ocular artifacts from artifactual zones but also preserves neuronal-activity-related EEG signals in non-artifactual zones. Comparison with two state-of-the-art techniques, ADJUST-based ICA and REGICA, reveals significantly improved performance in removing eye movement and blink artifacts from EEG data. Additionally, the results demonstrate that the proposed algorithm achieves lower relative error and higher mutual information between the corrected EEG and artifact-free EEG data.
AgriTrust—A Trust Management Approach for Smart Agriculture in Cloud-based Internet of Agriculture Things
The Internet of Things (IoT) provides a diverse platform for automation, and smart agriculture is one of the most promising concepts in the field of the Internet of Agriculture Things (IoAT). Because substantial processing power is required for computation and prediction, cloud-based smart agriculture has been proposed for autonomic systems; such digital innovation also helps improve quality of life amid expanding urbanization. Integrating the cloud into smart agriculture raises security and privacy challenges, most significantly identifying malicious and compromised nodes and securing the transmission of information between sensors, the cloud, and the base station (BS). Identifying malicious and compromised nodes among the soil sensors communicating with the BS is a notable challenge in BS-to-cloud communication, and trust management has been proposed as a lightweight approach to identifying such nodes. In this article, we propose a novel trust management mechanism that identifies malicious and compromised nodes using trust parameters. The mechanism is an event-driven process that computes trust over pre-defined time intervals and combines each new value with the previous trust degree to form an absolute trust degree. The system also maintains trust degrees for the BS and for cloud service providers using distinct approaches. We performed extensive simulations to evaluate the proposed mechanism against several potential attacks. This research also supports friendlier environments and more efficient agricultural production as people migrate to cities.
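The trust-degree update the abstract outlines, blending a new observation with the previous trust degree, might look like the following sketch. The parameter names and the weighted-average rule are assumptions for illustration, not the paper's exact formulation:

```python
def direct_trust(params):
    """Aggregate per-interval trust parameters into a direct trust value.

    params: dict of trust parameters, each already scaled to [0, 1]
    (e.g. packet-delivery ratio, response consistency; the names are
    illustrative, not taken from the paper).
    """
    return sum(params.values()) / len(params)

def absolute_trust(previous, current, history_weight=0.4):
    """Blend the current observation with the previous trust degree,
    as in the abstract's 'absolute trust degree' built from history."""
    return history_weight * previous + (1 - history_weight) * current
```

A node whose absolute trust degree falls below a chosen threshold would then be flagged as potentially malicious or compromised.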
Fabrication of glipizide loaded polymeric microparticles; in-vitro and in-vivo evaluation
Controlled-release microparticles offer a promising avenue for enhancing patient compliance and minimizing dosage frequency. In this study, we aimed to design controlled-release microparticles of glipizide utilizing Eudragit S100 and Methocel K 100 M polymers as controlling agents. The microparticles were fabricated through a simple solvent-evaporation method, employing various drug-to-polymer ratios to formulate different controlled-release batches labeled F1 to F5. Evaluation of the microparticles encompassed flow properties, particle size, morphology, percentage yield, entrapment efficiency, percent drug loading, and dissolution behavior. Various kinetic models were employed to elucidate the drug release mechanism, and difference and similarity factors were used to compare the dissolution profiles of the tested formulations with a reference formulation. The compressibility index (8-10%) and angle of repose (25-29°) indicated favorable flow properties of the prepared microparticles. The particle size of the microparticles ranged from 95.3 to 126 μm. Encouragingly, the microparticles exhibited high percent yield (66-77%), entrapment efficiency (80-96%), and percent drug loading (46-54%). All formulated batches demonstrated controlled drug release extending up to 12 hours, with glipizide release following an anomalous non-Fickian diffusion pattern. However, the dissolution profiles of the reference formulation and the various polymeric microparticles did not fall within the acceptable limits of the difference and similarity factors. In-vivo studies revealed sustained hypoglycemic effects over a 12-hour period, indicating the efficacy of the controlled-release microparticles.
Overall, our findings suggest the successful utilization of polymeric materials in designing controlled-release microparticles, thereby reducing dosage frequency and potentially improving patient compliance.
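The difference factor (f1) and similarity factor (f2) used to compare dissolution profiles are standard quantities; a sketch of their usual definitions follows (conventionally, f1 ≤ 15 and f2 ≥ 50 indicate similar profiles):

```python
import numpy as np

def difference_factor(ref, test):
    """f1: percent difference between two dissolution profiles
    measured at the same time points (lower = more similar)."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    return 100.0 * np.abs(ref - test).sum() / ref.sum()

def similarity_factor(ref, test):
    """f2: logarithmic transform of the mean squared difference
    between profiles; 100 means identical, >= 50 means similar."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    msd = np.mean((ref - test) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))
```

For example, two profiles offset by a constant 5% dissolved at every time point still give f1 well under 15 and f2 above 50, so they would be judged similar, whereas the formulations in this study fell outside those limits relative to the reference.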
Systematic review of economic evaluations of exercise and physiotherapy for patients treated for breast cancer
Purpose: Treatments for breast cancer can lead to chronic musculoskeletal problems. This study aimed to systematically review the evidence surrounding the cost-effectiveness of exercise and physiotherapy interventions aimed at reducing the risk of physical symptoms and functional limitations due to breast cancer treatment.
Methods: A systematic review of the cost-effectiveness of exercise and physiotherapy interventions during and following treatment for breast cancer was undertaken according to PRISMA guidelines. Literature searches were carried out in Ovid MEDLINE, Ovid Embase, Web of Science, EconLit, CINAHL, PsycINFO, Scopus and the Cochrane Library. Cost-effectiveness evidence was summarised in a descriptive manner and studies were assessed using quality appraisal tools. The review protocol was registered on PROSPERO.
Results: A total of 7783 articles were identified and seven were included in the final review. Five studies undertook trial-based economic evaluations, whereas two conducted economic evaluations based on decision models. One study was a cost-effectiveness analysis (CEA), three undertook stand-alone cost-utility analyses (CUA), and three were combined CEAs and CUAs. Three studies reported favourable cost-effectiveness results for different exercise or physiotherapy interventions. In contrast, four studies found that exercise and physiotherapy interventions were not cost-effective on the basis of quality-adjusted life year outcomes.
Conclusions: The evidence surrounding the cost-effectiveness of exercise and physiotherapy interventions for the treatment of breast cancer remains sparse, with contrasting conclusions. Future research should particularly aim to broaden the evidence base by disentangling the contributing effects of the frequency, intensity, time and type of exercise and physiotherapy interventions on cost-effectiveness outcomes.
Lag synchronization of coupled time-delayed FitzHugh–Nagumo neural networks via feedback control
Synchronization plays a significant role in information transfer and decision-making by neurons and brain neural networks. The development of control strategies for synchronizing a network of chaotic neurons with time delays, different direction-dependent coupling (unidirectional and bidirectional), and noise, particularly under external disturbances, is an essential and very challenging task. Researchers have extensively studied the synchronization mechanism of two coupled time-delayed neurons with bidirectional coupling and without incorporating the effect of noise, but not for time-delayed neural networks. To overcome these limitations, this study investigates the synchronization problem in a network of coupled FitzHugh–Nagumo (FHN) neurons by incorporating time delays, different direction-dependent coupling (unidirectional and bidirectional), noise, and ionic and external disturbances in the mathematical models. More specifically, this study investigates the synchronization of time-delayed unidirectional and bidirectional ring-structured FHN neuronal systems with and without external noise. Different gap junctions and delay parameters are used to incorporate time-delay dynamics in both neuronal networks. We also investigate the influence of the time delays between connected neurons on synchronization conditions. Further, to ensure the synchronization of the time-delayed FHN neuronal networks, different adaptive control laws are proposed for both unidirectional and bidirectional neuronal networks. In addition, necessary and sufficient conditions to achieve synchronization are provided by employing the Lyapunov stability theory. The results of numerical simulations conducted for different-sized multiple networks of time-delayed FHN neurons verify the effectiveness of the proposed adaptive control schemes.
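A single integration step for a unidirectional ring of FitzHugh–Nagumo neurons with gap-junction coupling can be sketched as below. The parameter values and the explicit Euler discretization are illustrative assumptions; the paper's adaptive control laws and noise terms are not reproduced here:

```python
import numpy as np

def fhn_ring_step(v, w, dt, I_ext, k, a=0.7, b=0.8, eps=0.08):
    """One Euler step for n FHN neurons in a unidirectional ring.

    v, w:  membrane potential and recovery variable, shape (n,)
    I_ext: external stimulation current
    k:     gap-junction (coupling) strength; each neuron is driven
           by the voltage of its predecessor in the ring
    """
    coupling = k * (np.roll(v, 1) - v)            # unidirectional ring
    dv = v - v ** 3 / 3.0 - w + I_ext + coupling  # fast voltage dynamics
    dw = eps * (v + a - b * w)                    # slow recovery dynamics
    return v + dt * dv, w + dt * dw
```

Iterating this map from mismatched initial conditions, and measuring how the voltage differences between neighbors evolve for different coupling strengths and delays, is the kind of numerical experiment the simulations in the paper perform on a larger scale.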
Effect of EOG Signal Filtering on the Removal of Ocular Artifacts and EEG-Based Brain-Computer Interface: A Comprehensive Study
Contamination of EEG by ocular artifacts reduces the classification accuracy of a brain-computer interface (BCI) and hampers the diagnosis of brain diseases in clinical research. For BCI and clinical applications, it is therefore very important to remove or reduce these artifacts before EEG signal analysis. Although EOG-based methods are simple and fast, their performance is highly affected by the bidirectional contamination process. Some studies have suggested that low-pass filtering EOG signals before using them in an artifact removal algorithm solves this problem, but there is still no evidence on the optimal low-pass frequency limits for EOG signals. In this study, we investigated the optimal EOG filtering limits using state-of-the-art artifact removal techniques on fifteen artificially contaminated EEG and EOG datasets. In this comprehensive analysis, unfiltered EOG signals and twelve different low-pass filterings were used with five algorithms: simple regression, least mean squares, recursive least squares, REGICA, and AIR. Statistical testing of time- and frequency-domain metrics suggested that a low-pass frequency between 6 and 8 Hz is the most suitable filtering frequency for EOG signals, both to minimize the effect of bidirectional contamination and to achieve good results from artifact removal algorithms. Furthermore, we used the BCI Competition IV datasets to show the efficacy of the proposed framework on real EEG signals. The motor-imagery-based BCI achieved statistically significantly higher classification accuracies when artifacts were removed using 7 Hz low-pass filtering compared with all other EOG filterings. These results also validate our hypothesis that low-pass filtering should be applied to EOG signals before they are used in the artifact removal process.
Moreover, the comparison results indicated that the hybrid algorithms outperformed the single algorithms on both simulated and experimental EEG datasets.
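The recommended pipeline, low-pass filtering the EOG reference (around 6-8 Hz) before regressing it out of the EEG, can be sketched with SciPy. The single-channel least-squares regression here stands in for the family of algorithms compared in the paper, so treat it as a sketch of the idea rather than any one method:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(x, cutoff_hz, fs, order=4):
    """Zero-phase Butterworth low-pass filter."""
    b, a = butter(order, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, x)

def regress_out_eog(eeg, eog, fs, cutoff_hz=7.0):
    """Remove ocular activity from one EEG channel by regression.

    The EOG reference is low-pass filtered first (the ~7 Hz choice the
    study found optimal), which strips the neural high-frequency content
    that leaks into EOG electrodes (the 'bidirectional contamination').
    """
    ref = lowpass(eog, cutoff_hz, fs)
    beta = np.dot(ref, eeg) / np.dot(ref, ref)  # least-squares gain
    return eeg - beta * ref
```

Filtering the reference first matters: without it, the regression would also subtract the genuine EEG activity that contaminates the EOG channel, distorting the corrected signal.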
A Blockchain-Assisted Trusted Clustering Mechanism for IoT-Enabled Smart Transportation System
The Vehicular Ad-hoc Network (VANET) is a modern transportation concept formulated by extending Mobile Ad-hoc Networks (MANETs). VANETs present diverse opportunities to modernize transportation and enhance safety, security, and privacy. Direct communication suffers from several limitations, most importantly a high overhead ratio; the most prominent proposed solution is to divide the nodes into clusters. In this paper, we propose a clustering mechanism that provides security and maintains quality after cluster formation based on pre-defined Quality-of-Service (QoS) parameters. To address potential attacks in the VANET environment, the proposed mechanism uses a blockchain to protect the computation of the trust parameters. The trust degree of a vehicle is evaluated by the base station, secured with the blockchain approach, and transmitted to roadside units (RSUs) for further use. The system's performance is evaluated and compared with existing approaches, and the results show a significant improvement in security and clustering quality.
Alleviating salinity stress in canola (Brassica napus L.) through exogenous application of salicylic acid
Canola, a vital oilseed crop, is grown globally for food and biodiesel. With the enormous demand for growing various crops, the use of agriculturally marginal lands, including brackish-saline transitional lands, is emerging as an attractive alternative. Salinity is a major abiotic stress that limits the growth and productivity of most crops and contributes to food insecurity. Salicylic acid (SA), a small-molecule phenolic compound, is an essential plant defense phytohormone that promotes immunity against pathogens, and several recent studies have reported that SA improves plant resilience to high salinity. A pot experiment was therefore carried out to ameliorate the negative effects of sodium chloride (NaCl) on canola plants through foliar application of SA. Two canola varieties, Faisal (V1) and Super (V2), were assessed for growth performance under 0 mM NaCl (control) and 200 mM NaCl. Three levels of SA (0, 10, and 20 mM) were applied by foliar spray in a completely randomized design (CRD) with three replicates. Salt stress reduced shoot and root fresh weights by up to 50.3% and 47%, respectively, and foliar chlorophyll a and b contents decreased by up to 61-65%. SA treatment diminished the negative effects of salinity and enhanced shoot fresh weight (49.5%), root dry weight (70%), chlorophyll a (36%), and chlorophyll b (67%). Plants treated with SA showed increased levels of both enzymatic antioxidants, i.e., superoxide dismutase (27%), peroxidase (16%), and catalase (34%), and non-enzymatic antioxidants, i.e., total soluble protein (20%), total soluble sugar (17%), total phenolics (22%), flavonoids (19%), anthocyanin (23%), and endogenous ascorbic acid (23%). Application of SA also increased the levels of the osmolytes glycine betaine (31%) and total free proline (24%).
Salinity increased the concentration of Na⁺ ions and concomitantly decreased K⁺ and Ca²⁺ absorption in canola plants. Overall, the foliar treatments of SA were quite effective in reducing the negative effects of salinity. Comparing the two canola varieties, V2 (Super) grew better than V1 (Faisal), and 20 mM foliar application of SA proved effective in ameliorating the negative effects of high salinity in canola plants.