44 results for "Garg, Hitendra"
Best Fit DNA-Based Cryptographic Keys: The Genetic Algorithm Approach
DNA (deoxyribonucleic acid) cryptography has revolutionized information security by combining rigorous biological and mathematical concepts to encode information as a DNA sequence. Such schemes depend crucially on the corresponding DNA-based cryptographic keys. However, owing to redundancy or observable patterns, some keys are weak and prone to intrusion. This paper proposes a genetic algorithm-inspired method to strengthen weak keys obtained from random DNA-based key generators instead of discarding them outright. The fitness functions and genetic operators are chosen and modified to suit the fundamentals of DNA cryptography, in contrast to the fitness functions used in traditional cryptographic schemes. The crossover and mutation rates decrease with each new population as more keys pass the fitness tests and no longer need strengthening. Moreover, as the size of the initial key population grows, the key space becomes highly exhaustive and less prone to brute-force attacks. The paper demonstrates that, of an initial 25 × 25 population of DNA keys, 14 are weak, and it gives complete results and calculations showing how each weak key can be strengthened by generating 4 new populations. Analysis of the proposed scheme for different initial populations shows that at most 8 new populations must be generated to strengthen all 500 weak keys of a 500 × 500 initial population.
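As a rough illustration of the strengthening idea, the sketch below evolves weak DNA keys with a genetic algorithm whose crossover and mutation rates decay each generation. The fitness function (base entropy minus a repeat-run penalty), the threshold, and the rates are placeholder assumptions, not the paper's actual formulation.

```python
import math
import random

BASES = "ACGT"

def fitness(key: str) -> float:
    # Illustrative fitness only: reward base diversity (Shannon entropy)
    # and penalize long runs of a repeated base.
    probs = [key.count(b) / len(key) for b in BASES if b in key]
    entropy = -sum(p * math.log2(p) for p in probs)
    longest = run = 1
    for a, b in zip(key, key[1:]):
        run = run + 1 if a == b else 1
        longest = max(longest, run)
    return entropy - 0.2 * longest

def crossover(p1: str, p2: str) -> str:
    cut = random.randrange(1, len(p1))  # single-point crossover
    return p1[:cut] + p2[cut:]

def mutate(key: str, rate: float) -> str:
    return "".join(random.choice(BASES) if random.random() < rate else c
                   for c in key)

def strengthen(keys, max_gens=8, threshold=1.2):
    # Weak keys are evolved rather than discarded; crossover and mutation
    # rates decay each generation as more keys pass the fitness test.
    cx, mut = 0.9, 0.15
    pop = list(keys)
    for _ in range(max_gens):
        weak = [k for k in pop if fitness(k) < threshold]
        if not weak:
            break
        strong = [k for k in pop if fitness(k) >= threshold]
        children = []
        for k in weak:
            mate = random.choice(strong or pop)
            child = crossover(k, mate) if random.random() < cx else k
            children.append(mutate(child, mut))
        pop = strong + children
        cx, mut = cx * 0.8, mut * 0.8
    return pop

random.seed(0)
initial = ["".join(random.choice(BASES) for _ in range(25)) for _ in range(25)]
print(sum(fitness(k) < 1.2 for k in initial), "weak keys before")
print(sum(fitness(k) < 1.2 for k in strengthen(initial)), "weak keys after")
```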
An enhanced deep image model for glaucoma diagnosis using feature-based detection in retinal fundus
This paper proposes a deep image analysis-based model for glaucoma diagnosis that uses several features to detect the formation of glaucoma in the retinal fundus. These features are combined with commonly extracted parameters such as the inferior, superior, nasal, and temporal region areas and the cup-to-disc ratio, which together constitute the deep image analysis. The proposed model is used to investigate various aspects of glaucoma prediction in retinal fundus images, helping the ophthalmologist make better decisions for the human eye. The model combines four machine learning algorithms and achieves a classification accuracy of 98.60%, whereas existing models such as the support vector machine (SVM), K-nearest neighbors (KNN), and Naïve Bayes individually achieve accuracies of 97.61%, 90.47%, and 95.23%, respectively. These results demonstrate that the proposed model offers an effective methodology for the early diagnosis of glaucoma in the retinal fundus.
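A minimal sketch of this kind of classifier combination, using scikit-learn's soft-voting ensemble over stand-in fundus features; the feature values, labels, and the choice of the four constituent classifiers are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in feature vectors: [inferior, superior, nasal, temporal areas, CDR].
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 4] > 0).astype(int)  # synthetic labels: high cup-to-disc ~ glaucoma

# Soft-voting ensemble of four base classifiers.
ensemble = VotingClassifier([
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("nb", GaussianNB()),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
], voting="soft")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ensemble.fit(X_tr, y_tr)
print(f"held-out accuracy: {ensemble.score(X_te, y_te):.3f}")
```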
Deep learning system applicability for rapid glaucoma prediction from fundus images across various data sets
Glaucoma damages the optic nerve, which transmits visual information to the brain, and results in irreversible vision loss. This chronic condition is the second leading cause of permanent blindness worldwide and degrades quality of life if not treated at an early stage. Traditional ways of diagnosing glaucoma, however, rely on heavy equipment and highly trained personnel, making it impractical to screen large populations; this results in high costs and long wait times. As a result, new methods for diagnosing glaucoma that do not exacerbate these problems need to be investigated. Previously, detecting glaucoma with artificial intelligence required manual feature extraction, which is not only time-consuming and tedious but also subject to intra-observer variability. Deep learning (DL) techniques can now extract features automatically, which was not possible with traditional methods. In view of the associated problems of limited labeled data, the difficulty and cost of building glaucoma fundus photographic datasets, and special hardware requirements, this study assessed the performance of DL models trained to detect glaucoma from fundus photographs. The objective is to present a versatile DL model that delivers promising performance across multiple datasets, matching real-life scenarios rather than performing well on only a specific dataset, while also addressing these coupled problems. Diverse deep learning techniques are investigated in this empirical study to categorise the fundus images into two classes: normal and glaucomatous. Fine-tuning with transfer learning is also performed on all these models. Three publicly available benchmark datasets (ACRIMA, ORIGA, and HRF) were used for training and validation. The models were tested not only on DRISHTI-GS (a public dataset) and a private dataset but also on twelve combinations of these five datasets. Extensive experiments demonstrate the effectiveness of the proposed approach; on the basis of area under the curve (AUC) values and computed accuracies, the Inception-ResNet-v2 and Xception models outperform the other competitive models. The findings show the potential of this technology for the early identification of glaucoma. This automated diagnosis system has great potential to reduce the effort and precious time of ophthalmologists.
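Below is a minimal transfer-learning sketch in Keras for the normal-vs-glaucomatous task, using the Xception backbone the study found to perform well; the input size, head, and the commented-out dataset loaders (acrima_train, drishti_ds, etc.) are assumptions for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers

# ImageNet-pretrained Xception backbone, classifier head removed.
base = tf.keras.applications.Xception(include_top=False, weights="imagenet",
                                      input_shape=(299, 299, 3), pooling="avg")
base.trainable = False  # transfer learning: freeze the backbone first

inputs = layers.Input(shape=(299, 299, 3))
x = tf.keras.applications.xception.preprocess_input(inputs)
x = base(x, training=False)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # normal vs. glaucomatous
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc"), "accuracy"])

# Fine-tune, then probe cross-dataset generalization (hypothetical loaders):
# model.fit(acrima_train, validation_data=origa_val, epochs=10)
# for name, ds in {"DRISHTI-GS": drishti_ds, "private": private_ds}.items():
#     print(name, model.evaluate(ds, verbose=0))
```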
A review on speech separation in cocktail party environment: challenges and approaches
The cocktail party problem, i.e., tracking and identifying a specific speaker's speech while numerous speakers communicate concurrently, is one of the crucial problems still to be addressed in automatic speech recognition (ASR) and speaker recognition. In this study, we thoroughly explore traditional methods for speech separation in a cocktail party environment, analyzing traditional single-channel methods, such as source-driven methods like Computational Auditory Scene Analysis (CASA), data-driven methods like non-negative matrix factorization (NMF), and model-driven methods; customary multi-channel methods such as beamforming and multi-channel blind source separation; and newly developed deep learning approaches such as meta-learning-based methods and self-supervised learning. The paper further highlights numerous datasets and evaluation metrics in the speech-processing domain and compares traditional methods with deep learning-based methods for speech separation. This study provides a basic understanding and comprehensive knowledge of state-of-the-art research in speech separation and serves as a brief overview for new researchers.
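To make one of the surveyed single-channel methods concrete, here is a toy supervised-NMF separation sketch: per-speaker spectral bases W1 and W2 are assumed to have been learned beforehand from isolated recordings, activations for the mixture are inferred by multiplicative updates with the bases held fixed, and a soft mask is built from the per-speaker reconstructions. All matrices here are random placeholders.

```python
import numpy as np

def nmf_activations(V, W, n_iter=200, eps=1e-9):
    # Infer activations H >= 0 with W fixed, via multiplicative updates
    # minimizing ||V - W @ H||_F^2 (Lee-Seung style).
    H = np.abs(np.random.default_rng(0).normal(size=(W.shape[1], V.shape[1])))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
    return H

rng = np.random.default_rng(1)
n_freq, n_frames, k = 64, 100, 8
W1 = np.abs(rng.normal(size=(n_freq, k)))  # speaker-1 bases (assumed learned)
W2 = np.abs(rng.normal(size=(n_freq, k)))  # speaker-2 bases (assumed learned)
V = np.abs(rng.normal(size=(n_freq, n_frames)))  # mixture magnitude spectrogram

W = np.hstack([W1, W2])
H = nmf_activations(V, W)
V1_hat = W1 @ H[:k]   # speaker-1 component of the mixture
V2_hat = W2 @ H[k:]   # speaker-2 component
mask1 = V1_hat / (V1_hat + V2_hat + 1e-9)  # soft (Wiener-like) mask
print(mask1.shape)
```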
Hatred and trolling detection transliteration framework using hierarchical LSTM in code-mixed social media text
This paper describes the use of a self-learning hierarchical LSTM (HLSTM) technique for classifying hatred and trolling content in code-mixed social media data. Hierarchical LSTM-based learning is a novel architecture inspired by neural learning models. The proposed HLSTM model is trained to identify hatred and trolling words in social media content and is equipped with a self-learning and prediction mechanism for annotating hatred words in the transliteration domain. The Hindi–English data are labeled as Hindi, English, or hatred for classification. Word-embedding and character-embedding features are used for word representation in the sentence to detect hatred words. The HLSTM-based method recognizes the context of a hatred word by mining the user's intention in using that word in the sentence. Extensive experiments suggest that the HLSTM-based classification model achieves an accuracy of 97.49%, outperforming standard baselines such as BLSTM, CRF, LR, SVM, Random Forest, and Decision Tree models, especially when hatred and trolling words are present in the social media data.
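A minimal Keras sketch of a hierarchical LSTM of the kind described: a character-level LSTM produces one vector per word, which is concatenated with a word embedding and fed to an upper bidirectional LSTM. The sequence lengths, vocabulary sizes, and three-way label set are assumptions, not the paper's settings.

```python
import tensorflow as tf
from tensorflow.keras import layers

MAX_WORDS, MAX_CHARS = 30, 12       # words per post, chars per word (assumed)
CHAR_VOCAB, WORD_VOCAB = 70, 20000  # vocabulary sizes (assumed)
NUM_CLASSES = 3                     # e.g., Hindi / English / hatred labels

# Lower level: a character LSTM produces one vector per word.
char_in = layers.Input(shape=(MAX_WORDS, MAX_CHARS), dtype="int32")
c = layers.Embedding(CHAR_VOCAB, 32)(char_in)   # (words, chars, 32)
c = layers.TimeDistributed(layers.LSTM(32))(c)  # (words, 32)

# Word embeddings for the same token sequence.
word_in = layers.Input(shape=(MAX_WORDS,), dtype="int32")
w = layers.Embedding(WORD_VOCAB, 100)(word_in)  # (words, 100)

# Upper level: a bidirectional LSTM over [word ; char] representations.
x = layers.Concatenate()([w, c])
x = layers.Bidirectional(layers.LSTM(64))(x)
out = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model([word_in, char_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```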
Privacy Protection of Digital Images Using Watermarking and QR Code-based Visual Cryptography
The increase in information sharing through digital images poses threats to privacy and personal identity. Digital images can be stolen in transit and altered easily, so protecting them from attackers is very important. Encryption, steganography, watermarking, and visual cryptography techniques for protecting digital images have been proposed from time to time. The present paper focuses on enhancing the privacy protection of digital images using watermarking together with a QR code-based, expansion-free, and meaningful visual cryptography (VC) approach that generates visually appealing QR codes for transmitting meaningful shares. The original secret image is processed with a watermark image (a copyright logo, signature, and so on), and the watermarked image is then halftoned to avoid pixel expansion. The halftoned image is partitioned into two shares using VC, and these shares are embedded in a QR code to make them meaningful. Lossless compression is performed on the QR code-based shares; the compression saves space and transmission time. The proposed approach preserves the beauty of visual cryptography, i.e., computation-free decryption, and keeps the recovered image the same size as the original secret image. The experimental results confirm the effectiveness of the proposed approach.
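The overlay-decryption property at the heart of the scheme can be sketched with a probabilistic, expansion-free (2, 2) visual cryptography construction (in the spirit of Ito et al.); the paper's watermarking, halftoning, QR embedding, and compression steps are omitted here.

```python
import numpy as np

def make_shares(secret: np.ndarray, seed=0):
    # Probabilistic, size-invariant (2, 2) visual cryptography:
    # for each pixel draw a random bit r; share 1 gets r, share 2 gets
    # r for white pixels and (1 - r) for black pixels (1 = black).
    rng = np.random.default_rng(seed)
    r = rng.integers(0, 2, size=secret.shape)
    return r, np.where(secret == 1, 1 - r, r)

secret = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 1]])
s1, s2 = make_shares(secret)
stacked = s1 | s2  # computation-free decryption: physically overlay the shares
assert np.all(stacked[secret == 1] == 1)  # black pixels always recovered
print(stacked)     # white pixels come out black only ~half the time (contrast)
```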
Acute-on-chronic liver failure: consensus recommendations of the Asian Pacific Association for the Study of the Liver (APASL) 2014
The first consensus report of the working party of the Asian Pacific Association for the Study of the Liver (APASL), set up in 2004 on acute-on-chronic liver failure (ACLF), was published in 2009. Owing to rapid advancements in knowledge and available information, a consortium of members from countries across the Asia Pacific, the "APASL ACLF Research Consortium (AARC)," was formed in 2012. A large cohort of retrospective and prospective data on ACLF patients was collated and followed up in this database, and the current ACLF definition was reassessed on the basis of the new AARC database. These initiatives concluded with a two-day meeting in February 2014 in New Delhi and led to the development of the final AARC consensus. Only those statements that were based on evidence and were unanimously recommended were accepted. These statements were circulated again to all the experts and subsequently presented at the annual conference of the APASL in Brisbane on March 14, 2014. The suggestions from the delegates were analyzed by the expert panel, modifications to the consensus were made, and the final consensus and guidelines document was prepared. After detailed deliberation and data analysis, the originally proposed definition was found to withstand the test of time and to identify a homogeneous group of patients presenting with liver failure. Based on the AARC data, liver failure grading and its impact on the "Golden Therapeutic Window," extra-hepatic organ failure, and the development of sepsis were analyzed. New management options, including algorithms for the management of coagulation disorders, renal replacement therapy, sepsis, variceal bleed, and antivirals, and criteria for liver transplantation in ACLF patients, were proposed. The final consensus statements, along with the relevant background information, are presented here.
Image splicing forgery detection: A review
Image splicing forgery is a prevalent form of digital image manipulation in which portions of one or more images are combined to create a deceptive image that appears genuine. Detecting image splicing is crucial for verifying the authenticity of an image, and the field has grown significantly in recent years, with numerous detection approaches proposed in the literature. This paper presents a comprehensive survey and classification of existing image splicing forgery detection approaches, focusing on work published from 2014 to 2023. The study reviews 88 research papers on splicing in the context of image forgery detection and introduces a generalized structure outlining the typical stages of the detection process. The paper thoroughly reviews the literature, covering both hand-crafted and advanced detection approaches proposed by researchers, and identifies benchmark datasets along with their limitations. The objective is to provide a clear and comprehensive understanding of image splicing forgery detection for researchers and practitioners interested in this area. This survey is a valuable resource, offering insights into the field's current state and highlighting areas for future research and development.
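As a concrete example of the hand-crafted family of detectors such surveys cover, the sketch below computes crude block-wise noise-inconsistency features from a high-pass residual and feeds them to an SVM; the kernel, block size, and placeholder data are illustrative assumptions, not a method from the survey.

```python
import numpy as np
from sklearn.svm import SVC

def block_noise_features(img: np.ndarray, block=32):
    # Crude noise-inconsistency features: per-block variance of a
    # high-pass (Laplacian-like) residual. Spliced regions often carry
    # a different noise signature than the host image.
    k = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    pad = np.pad(img.astype(float), 1, mode="edge")
    res = sum(k[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3))
    feats = [res[r:r + block, c:c + block].var()
             for r in range(0, img.shape[0] - block + 1, block)
             for c in range(0, img.shape[1] - block + 1, block)]
    return np.array(feats)

# Train an SVM on features from authentic vs. spliced images.
rng = np.random.default_rng(0)
X = np.stack([block_noise_features(rng.integers(0, 256, (128, 128)))
              for _ in range(20)])
y = rng.integers(0, 2, 20)  # placeholder labels, for illustration only
SVC().fit(X, y)
```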
Spoofing detection system for e-health digital twin using EfficientNet Convolution Neural Network
A digital twin is the virtual mirror image of a living or non-living object. Digital twins and cyber-physical systems (CPS) usher in a new era for industries, especially the healthcare sector, by keeping track of individuals' health data to provide on-demand, fast, and efficient services to users. In the suggested system, various health parameters of patients are collected through health instruments and wearable devices that communicate data to a primary database, which is used for analysis, better diagnosis, and training of automated systems. The primary database resides in the physical object, while a parallel virtual object (digital twin) is maintained to analyze, summarize, and mine the data for diagnosis and for monitoring the patient in real time. The e-health cloud data need to be protected from unauthorized access by biometric authentication using the iris biometric trait. This paper proposes a two-phase EfficientNet convolutional neural network-based framework for identifying whether a user sample is real or spoofed. The proposed system is trained with an EfficientNet convolutional neural network on different datasets of spoofed and genuine iris biometric samples to discriminate original samples from spoofed ones.
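A minimal sketch of the two-phase EfficientNet setup described above: phase one trains a new binary head on a frozen ImageNet-pretrained backbone, and phase two fine-tunes the whole network at a low learning rate. The EfficientNet variant (B0), input size, hyperparameters, and the commented-out dataset loaders are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Phase 1: ImageNet-pretrained EfficientNetB0 backbone, frozen; train a
# new binary head (real vs. spoofed iris). Keras EfficientNets expect
# raw [0, 255] inputs and normalize internally.
base = tf.keras.applications.EfficientNetB0(include_top=False,
                                            weights="imagenet",
                                            input_shape=(224, 224, 3),
                                            pooling="avg")
base.trainable = False

inputs = layers.Input(shape=(224, 224, 3))
x = base(inputs, training=False)
x = layers.Dropout(0.2)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(iris_train_ds, validation_data=iris_val_ds, epochs=5)  # assumed

# Phase 2: unfreeze the backbone and fine-tune at a low learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
```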
Emperor penguin optimization algorithm- and bacterial foraging optimization algorithm-based novel feature selection approach for glaucoma classification from fundus images
Feature selection is an important component of the machine learning domain; it selects the ideal subset of characteristics for the target data by omitting irrelevant features. For n features there are 2^n possible feature subsets, making it challenging to select the optimal set from a dataset via conventional feature selection approaches. We opted to investigate glaucoma because the number of individuals with this disease is rising quickly around the world. The goal of this study is to use the feature set (features derived from fundus images of benchmark datasets) to classify images into two classes (infected and normal) while selecting the fewest features needed to achieve the best performance on various efficiency metrics. Accordingly, the paper implements and recommends metaheuristic feature selection techniques based on emperor penguin optimization and bacterial foraging optimization, and proposes their hybrid algorithm. A total of 36 features were extracted from the benchmark retinal fundus images. The proposed technique minimizes the number of selected features while improving classification accuracy. Six machine learning classifiers perform classification on the smaller feature subsets provided by these three optimization techniques, and eight statistically based performance metrics are calculated in addition to the execution time. The hybrid optimization technique combined with random forest achieves the highest accuracy, up to 0.95410. Because the proposed medical decision support system is effective and ensures trustworthy decision-making for glaucoma screening, it could be used by medical practitioners as a second-opinion tool, assisting overworked expert ophthalmologists and helping prevent individuals from losing their eyesight.
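A generic wrapper-style sketch of metaheuristic feature selection with a random-forest fitness, to show the shape of the search; a simple mutation-plus-elitism step stands in for the emperor penguin and bacterial foraging update rules, which are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for the 36 fundus-image features.
X, y = make_classification(n_samples=300, n_features=36, n_informative=8,
                           random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    # Wrapper fitness: cross-validated accuracy of a random forest on the
    # selected columns, minus a small penalty per selected feature.
    if not mask.any():
        return 0.0
    acc = cross_val_score(RandomForestClassifier(n_estimators=30,
                                                 random_state=0),
                          X[:, mask], y, cv=3).mean()
    return acc - 0.01 * mask.sum() / mask.size

pop = rng.random((10, 36)) < 0.5         # 10 binary feature masks
for _ in range(10):                      # search iterations
    scores = np.array([fitness(m) for m in pop])
    best = pop[scores.argmax()].copy()
    flips = rng.random(pop.shape) < 0.1  # explore by flipping a few bits
    pop = np.where(flips, ~pop, pop)
    pop[0] = best                        # elitism: keep the best mask
print("selected:", np.flatnonzero(best), f"fitness: {scores.max():.3f}")
```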