18 results for "Malik, Kaleem Razzaq"
Leveraging two-dimensional pre-trained vision transformers for three-dimensional model generation via masked autoencoders
Although the Transformer architecture has established itself as the standard for natural language processing tasks, it still has comparatively few applications in computer vision. In vision, attention is either used in conjunction with convolutional networks or used to replace individual convolutional components while preserving the overall network design. Differences between the two domains, such as the large variation in the scale of visual entities and the much finer granularity of pixels in images compared to words in text, make it difficult to transfer the Transformer from language to vision. Masked autoencoding is a promising self-supervised learning approach that has greatly advanced both computer vision and natural language processing. For robust 2D representations, pre-training on large image datasets has become standard practice; in contrast, the scarcity of 3D datasets and the high cost of processing 3D data significantly impede learning high-quality 3D features. We present a multi-scale MAE pre-training architecture that uses a pre-trained ViT and a 3D representation model derived from 2D images to enable self-supervised learning on 3D point clouds. The learned 2D knowledge guides a 3D masked autoencoder, which uses an encoder-decoder architecture to reconstruct the masked point tokens during self-supervised pre-training. To acquire the input point cloud's multi-view visual characteristics, we first use pre-trained 2D models. Next, we present a two-dimensional masking method that preserves the visibility of semantically significant point tokens. Extensive experiments demonstrate how effectively our method works with pre-trained models and how well it generalizes to a range of downstream tasks. In particular, our pre-trained model achieved 93.63% accuracy with a linear SVM on ScanObjectNN and 91.31% accuracy on ModelNet40. Our approach shows that a straightforward architecture based solely on standard transformers can outperform specialized transformer models trained with supervised learning.
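As a rough illustration of the masked-autoencoding idea this abstract describes, the sketch below masks a fraction of point-cloud tokens, encodes only the visible ones, and reconstructs the masked slots. The token dimension, mask ratio, and the tiny transformer encoder/decoder are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MaskedPointAutoencoder(nn.Module):
    """Toy masked autoencoder over point-cloud tokens (illustrative only)."""
    def __init__(self, dim=96, mask_ratio=0.6):
        super().__init__()
        self.mask_ratio = mask_ratio
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        dec_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=1)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, dim)                 # reconstruct token features

    def forward(self, tokens):                          # tokens: (B, N, dim)
        B, N, D = tokens.shape
        n_keep = int(N * (1 - self.mask_ratio))
        perm = torch.rand(B, N, device=tokens.device).argsort(dim=1)
        keep = perm[:, :n_keep].unsqueeze(-1).expand(-1, -1, D)
        visible = torch.gather(tokens, 1, keep)
        latent = self.encoder(visible)                  # encode visible tokens only
        # put encoded tokens back in place; masked slots get a learned mask token
        full = self.mask_token.repeat(B, N, 1).scatter(1, keep, latent)
        recon = self.head(self.decoder(full))
        return recon, perm[:, n_keep:]                  # reconstruction + masked indices

tokens = torch.randn(8, 64, 96)                         # stand-in point-patch embeddings
model = MaskedPointAutoencoder()
recon, masked = model(tokens)
masked = masked.unsqueeze(-1).expand(-1, -1, 96)
loss = nn.functional.mse_loss(torch.gather(recon, 1, masked),
                              torch.gather(tokens, 1, masked))
```

The reconstruction loss is computed only on the masked tokens, which is the usual choice for MAE-style pre-training.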
Enhancing intrusion detection: a hybrid machine and deep learning approach
The volume of data transferred across communication infrastructures has recently increased due to technological advancements in cloud computing, the Internet of Things (IoT), and vehicular networks. As communication technology develops, network systems transmit diverse and heterogeneous data in dispersed environments, and both network communications and daily interactions depend on network security systems to provide secure and reliable information. At the same time, attackers have increased their efforts to make networked systems vulnerable. An efficient intrusion detection system is essential because technological advancements bring new kinds of attacks and security limitations. This paper implements a hybrid model for Intrusion Detection (ID) that combines Machine Learning (ML) and Deep Learning (DL) techniques to address these limitations. The proposed model uses Extreme Gradient Boosting (XGBoost) and convolutional neural networks (CNN) for feature extraction, and combines each of these with long short-term memory networks (LSTM) for classification. Four benchmark datasets, CIC-IDS 2017, UNSW-NB15, NSL-KDD, and WSN-DS, were used to train the model for binary and multi-class classification. As feature dimensions grow, current intrusion detection systems struggle to identify new threats, reflected in low test accuracy scores. To narrow each dataset's feature space, XGBoost and CNN feature selection algorithms are applied in this work for each separate model. The experimental findings demonstrate a high detection rate and good accuracy with a relatively low False Acceptance Rate (FAR), confirming the usefulness of the proposed hybrid model.
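A minimal sketch of the kind of pipeline this abstract describes: XGBoost ranks features, the top-k are fed to a small Conv1D+LSTM classifier. The random data, the value of k, and the layer sizes are placeholders, not the paper's datasets or configuration.

```python
import numpy as np
import tensorflow as tf
from xgboost import XGBClassifier

# Placeholder data standing in for a flow-level IDS dataset with binary labels.
X = np.random.rand(1000, 40).astype("float32")
y = np.random.randint(0, 2, size=1000)

# Step 1: rank features with XGBoost and keep the top-k (k is an assumption).
k = 20
selector = XGBClassifier(n_estimators=100, max_depth=4)
selector.fit(X, y)
top_k = np.argsort(selector.feature_importances_)[::-1][:k]
X_sel = X[:, top_k].reshape(-1, k, 1)          # (samples, features, 1) for Conv1D

# Step 2: a small Conv1D + LSTM classifier over the selected features.
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 3, activation="relu", input_shape=(k, 1)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_sel, y, epochs=3, batch_size=64, validation_split=0.2, verbose=0)
```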
Data Compatibility to Enhance Sustainable Capabilities for Autonomous Analytics in IoT
Raw data collection relies on sensors embedded in devices or in the environment for real-time data extraction. Nowadays, the Internet of Things (IoT) supports autonomous data collection by reducing human involvement. Such data are hard to analyze, especially when working with sensors in a real-time environment. Applying data analytics in IoT with the help of the Resource Description Framework (RDF) can resolve many of these issues: reshaping the data into semantic annotations gives it a better chance of high-quality analytics. Introducing semantics broadly improves industrial data exchange through more efficient data capture, while the most popular storage medium for sensor data remains the Relational Database (RDB). This study provides a fully automated mechanism to transform loosely structured sensor data stored in an RDB into semantically annotated RDF stores. The study presents a methodology for improving compatibility by introducing bidirectional transformation between classical databases and RDF data stores, enhancing the sustainable capabilities of IoT systems for autonomous analytics. Two case studies, one on weather data and another on heart-rate measurements collected through IoT sensors, show the transformation process in operation.
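A minimal sketch of the RDB-to-RDF direction of such a transformation, using sqlite3 and rdflib. The table layout, namespace URI, and property names are invented for illustration and are not the mapping defined in the paper.

```python
import sqlite3
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Hypothetical relational store of sensor readings (schema invented for illustration).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE reading (id INTEGER, sensor TEXT, temperature REAL, taken_at TEXT)")
db.execute("INSERT INTO reading VALUES (1, 'weather-01', 23.4, '2024-05-01T10:00:00')")

EX = Namespace("http://example.org/iot#")
g = Graph()
g.bind("ex", EX)

# Map each relational row to an RDF resource with one triple per column.
for rid, sensor, temp, taken_at in db.execute("SELECT * FROM reading"):
    subject = URIRef(f"http://example.org/iot/reading/{rid}")
    g.add((subject, RDF.type, EX.Reading))
    g.add((subject, EX.sensor, Literal(sensor)))
    g.add((subject, EX.temperature, Literal(temp, datatype=XSD.double)))
    g.add((subject, EX.takenAt, Literal(taken_at, datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```

The reverse direction (RDF back to relational rows) would iterate over the graph's triples grouped by subject and write one row per resource.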
A hybrid steganography framework using DCT and GAN for secure data communication in the big data era
The growth of the internet and big data has increased the demand to store and distribute ever larger volumes of information, and in today's digital era, ensuring the security of data transmission is paramount. Advancements in digital technology have facilitated the proliferation of high-resolution graphics over the Internet, raising security concerns and enabling unauthorized access to sensitive data. Researchers have increasingly explored steganography as a reliable method for secure communication because it plays a crucial role in concealing and safeguarding sensitive information. This study introduces a comprehensive steganography framework that combines the discrete cosine transform (DCT) with a deep learning model, the generative adversarial network (GAN). By leveraging deep learning techniques in both the spatial and frequency domains, the proposed hybrid architecture offers a robust solution for applications requiring high levels of data integrity and security. While conventional steganography methods are typically classified into spatial-domain and transform-domain techniques, extensive analysis demonstrates that the hybrid approach surpasses the individual techniques in performance. The experimental results validate the effectiveness of the proposed approach, showing superior visual image quality with a mean square error (MSE) of 93.30%, peak signal-to-noise ratio (PSNR) of 58.27%, root mean squared error (RMSE) of 96.10%, and structural similarity index measure (SSIM) of 94.20% in comparison to existing leading methodologies. The proposed model achieved reconstruction accuracies of 96.2% using Xu Net and 95.7% with SR Net. By combining DCT with deep learning, the proposed approach overcomes the limitations of spatial-domain methods, offering a more flexible and effective steganography solution, and simulation results confirm that it outperforms state-of-the-art methods across key performance metrics, including MSE, PSNR, SSIM, and RMSE.
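To make the transform-domain half of such a hybrid concrete, the sketch below hides one bit per 8x8 block in the sign of a mid-frequency DCT coefficient. The coefficient position, embedding strength, and random cover are arbitrary choices for illustration; the GAN component of the framework is not shown.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bits(cover, bits, coeff=(4, 3), strength=12.0):
    """Hide one bit per 8x8 block by forcing the sign of a mid-frequency DCT coefficient."""
    stego = cover.astype(np.float64)
    h, w = stego.shape
    idx = 0
    for r in range(0, h - 7, 8):
        for c in range(0, w - 7, 8):
            if idx >= len(bits):
                return stego
            block = dctn(stego[r:r+8, c:c+8], norm="ortho")
            block[coeff] = strength if bits[idx] else -strength   # bit lives in the sign
            stego[r:r+8, c:c+8] = idctn(block, norm="ortho")
            idx += 1
    return stego

def extract_bits(stego, n_bits, coeff=(4, 3)):
    bits, idx = [], 0
    h, w = stego.shape
    for r in range(0, h - 7, 8):
        for c in range(0, w - 7, 8):
            if idx >= n_bits:
                return bits
            block = dctn(stego[r:r+8, c:c+8], norm="ortho")
            bits.append(1 if block[coeff] > 0 else 0)
            idx += 1
    return bits

cover = np.random.randint(0, 256, (64, 64))       # stand-in for a grayscale cover image
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_bits(cover, message)
assert extract_bits(stego, len(message)) == message
```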
Next-generation diabetes diagnosis and personalized diet-activity management: A hybrid ensemble paradigm
Diabetes, a chronic metabolic condition characterised by persistently high blood sugar levels, necessitates early detection to mitigate its risks. Inadequate dietary choices can contribute to various health complications, emphasising the importance of personalised nutrition interventions. However, real-time selection of diets tailored to individual nutritional needs is challenging because of the intricate nature of foods and the abundance of dietary sources. Because diabetes is a chronic condition, patients with this illness must choose a healthy diet, yet they frequently need to visit their doctor and rely on expensive medications to manage their condition, and regularly purchasing medication for chronic illnesses is especially difficult in underdeveloped nations. Motivated by this, we propose a hybrid model that, rather than relying solely on medication, can first predict diabetes and then recommend a diet and exercise regimen, reducing the need for doctor visits. This research proposes an optimized approach that harnesses machine learning classifiers, including Random Forest, Support Vector Machine, and XGBoost, to develop a robust framework for accurate diabetes prediction. The study addresses the difficulties of predicting diabetes precisely from limited labeled data and of handling outliers in diabetes datasets. Furthermore, a comprehensive food and exercise recommender system is introduced, offering individualized and health-conscious nutrition recommendations based on user preferences and medical information. Leveraging efficient learning and inference techniques, the study achieves an error rate of less than 30% on an extensive dataset comprising over 100 million user-rated foods. This research underscores the significance of integrating machine learning classifiers with personalized nutritional recommendations to enhance diabetes prediction and management. The proposed framework has substantial potential to facilitate early detection, provide tailored dietary guidance, and alleviate the economic burden associated with diabetes-related healthcare expenses.
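A minimal sketch of combining the three named classifiers for diabetes prediction via a soft-voting ensemble; the random data, hyperparameters, and the voting scheme are assumptions for illustration, and the recommender component is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Placeholder data standing in for a diabetes dataset (8 clinical features, binary outcome).
X = np.random.rand(768, 8)
y = np.random.randint(0, 2, size=768)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("xgb", XGBClassifier(n_estimators=200, max_depth=4)),
    ],
    voting="soft",          # average predicted probabilities across the three models
)
ensemble.fit(X_train, y_train)
print("held-out accuracy:", ensemble.score(X_test, y_test))
```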
Multiagent Semantical Annotation Enhancement Model for IoT-Based Energy-Aware Data
The Internet of Things (IoT) connects physical objects, devices, vehicles, buildings, and other items embedded with electronics, software, sensors, and network connectivity, which enables these objects to collect and exchange data. The focus of this research is improving the extraction of sensor-based data for energy awareness, then annotating it and converting it into a semantically enabled form so that results can be analyzed with improved tools and applications. However, as the amount of real-time data grows, it becomes difficult to retrieve results on demand. Integrating heterogeneous information sources into interlinked data is one of the most relevant challenges for today's knowledge-based systems. This paper provides a methodology for adapting heterogeneous sensor-based Web resources, where tools and applications such as weather detection for self-monitoring and self-diagnostics draw on distributed human expertise and knowledge. The proposed general model exploits the capabilities of Semantic Web technology and concentrates on the semantic adaptation of existing, widely used data-representation models into a semantically rich Resource Description Framework (RDF) based form. This work is valuable for organizing and querying sensing data in the Internet of Things.
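To illustrate why semantic annotation helps the analytics side, the sketch below annotates a few hypothetical energy readings as RDF and queries them declaratively with SPARQL via rdflib. The vocabulary, device names, and wattage threshold are invented, not the paper's model.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/energy#")
g = Graph()

# Annotate a few hypothetical appliance power readings as RDF observations.
for i, (device, watts) in enumerate([("fridge", 120.0), ("heater", 1500.0), ("router", 9.5)]):
    s = URIRef(f"http://example.org/energy/obs/{i}")
    g.add((s, RDF.type, EX.PowerObservation))
    g.add((s, EX.device, Literal(device)))
    g.add((s, EX.watts, Literal(watts, datatype=XSD.double)))

# Once the data is RDF, analytics can be expressed declaratively with SPARQL.
query = """
PREFIX ex: <http://example.org/energy#>
SELECT ?device ?watts WHERE {
    ?obs a ex:PowerObservation ;
         ex:device ?device ;
         ex:watts ?watts .
    FILTER (?watts > 100)
}
"""
for device, watts in g.query(query):
    print(device, watts)    # appliances drawing more than 100 W
```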
Secure 3D data hiding through cryptographic steganalysis resistance: reducing geometric inconsistency vulnerabilities
The advent of the metaverse has generated considerable interest in 3D models, although the security of data transfer continues to be a paramount issue. In today's interconnected digital environment, marked by pervasive internet access and extensive model sharing, protecting private and sensitive information within 3D models has become essential. Existing transmission mechanisms are vulnerable to various cyber risks when important 3D models are transferred over insecure networks. To address the challenges of securing sensitive information embedded in 3D models, this article introduces an effective system that combines cryptography with 3D steganography techniques. The study employed AES-128 in cipher block chaining (CBC) mode with an initialization vector (IV) to convert plaintext into ciphertext, and used SHA-256, a salt, and a 32-bit password to produce the encryption key, creating a fundamental layer of protection. The encrypted data were then embedded within a 3D facial model using its geometric characteristics: key regions were defined, significant vertices were identified, and the importance of each vertex was assessed based on geometric characteristics. Data were embedded in vertices adjacent to landmarks, which were rounded and augmented using an enlarged scale factor, resulting in a stego 3D model. The performance measurements show how well our method works, with a Peak Signal-to-Noise Ratio (PSNR) of 61.31 dB, a Mean Square Error (MSE) of 3.17, a correlation coefficient of 0.95, and a Region Hausdorff Distance (RHD) of 0.04. Our method attained Number of Pixel Change Rate (NPCR) and Unified Average Changing Intensity (UACI) values of 94.82 and 28.31, respectively, surpassing current methodologies. Our methodology addresses geometric inconsistency issues and adeptly conceals the model's deformed geometry. In the future, we will investigate blockchain technology alongside 3D model encryption to enhance the security, authenticity, and transparency of safeguarded 3D model data.
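A minimal sketch of the cryptographic layer described here (SHA-256 over password plus salt to derive an AES-128 key, CBC mode with a random IV), using the `cryptography` package. The PKCS7 padding and the example password are implementation choices assumed for illustration; the 3D embedding step is not shown.

```python
import hashlib
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def derive_key(password: str, salt: bytes) -> bytes:
    # SHA-256 over password + salt; AES-128 needs 16 bytes, so keep the first half.
    return hashlib.sha256(password.encode() + salt).digest()[:16]

def encrypt(plaintext: bytes, password: str) -> tuple[bytes, bytes, bytes]:
    salt, iv = os.urandom(16), os.urandom(16)
    key = derive_key(password, salt)
    padder = padding.PKCS7(128).padder()                 # pad to the AES block size
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return salt, iv, enc.update(padded) + enc.finalize()

def decrypt(ciphertext: bytes, password: str, salt: bytes, iv: bytes) -> bytes:
    key = derive_key(password, salt)
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = dec.update(ciphertext) + dec.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()

salt, iv, ct = encrypt(b"secret payload for the 3D model", "correct horse battery")
assert decrypt(ct, "correct horse battery", salt, iv) == b"secret payload for the 3D model"
```

In a production setting a dedicated key-derivation function such as PBKDF2 or scrypt would usually replace the single SHA-256 pass shown here.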
A new approach of anomaly detection in shopping center surveillance videos for theft prevention based on RLCNN model
The amount of video data produced daily by today's surveillance systems is enormous, making analysis difficult for computer vision specialists. Continuously searching these massive video streams for unexpected incidents is challenging because such incidents occur rarely and have little chance of being observed. In contrast, deep learning-based anomaly detection reduces the need for human labor and offers comparably trustworthy decision-making, thereby promoting public safety. In this article, we introduce a system for efficient anomaly detection that can operate in surveillance networks with a modest level of complexity. The proposed method starts by obtaining spatiotemporal features from a group of frames. Once the extracted deep features are passed to it, the multi-layer long short-term memory model can precisely identify ongoing unusual activity in complex video scenes of a busy shopping mall. We conducted in-depth tests on numerous benchmark anomaly detection datasets to confirm the proposed framework's effectiveness in challenging surveillance scenarios. Compared to state-of-the-art techniques, our datasets, UCF50, UCF101, UCFYouTube, and UCFCustomized, provided better training and increased accuracy. Our model was trained for more classes than usual, and when the proposed model, RLCNN, was tested on those classes, the results were encouraging. All of our datasets performed admirably; with the UCFCustomized and UCFYouTube datasets we achieved greater accuracy, 96% and 97% respectively, than with the other UCF datasets.
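A minimal sketch of the two-stage idea described above: a per-frame CNN extracts spatial features, and stacked LSTMs model the temporal pattern across a clip. The backbone, clip shape, class count, and random data are placeholders, not the RLCNN configuration from the paper.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN, H, W, C, N_CLASSES = 16, 64, 64, 3, 5    # placeholder clip shape and class count

# Per-frame CNN feature extractor (stand-in for a deeper backbone).
frame_cnn = tf.keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(H, W, C)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

# Apply the CNN to every frame, then model the temporal pattern with stacked LSTMs.
model = tf.keras.Sequential([
    layers.TimeDistributed(frame_cnn, input_shape=(SEQ_LEN, H, W, C)),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

clips = np.random.rand(8, SEQ_LEN, H, W, C).astype("float32")   # fake surveillance clips
labels = np.random.randint(0, N_CLASSES, size=8)
model.fit(clips, labels, epochs=1, verbose=0)
```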
Designing an Energy-Aware Mechanism for Lifetime Improvement of Wireless Sensor Networks: a Comprehensive Study
In this paper, we have presented a comprehensive study on designing an energy-aware architecture for Wireless Sensor Networks. A summary and modelling of the major techniques for designing the constituents of a clustered wireless sensor network architecture are given in detail. We have also analysed our proposed scheme, the Extended-Multilayer Cluster Designing Algorithm (E-MCDA), in a large network. Among its components, the algorithms for time-slot allocation, minimising the number of cluster-head (CH) competition candidates, and assigning cluster members to CHs play underpinning roles in achieving the target. These additions to MCDA minimise transmissions, suppress unneeded response transmissions, and produce clusters of nearly equal size and load. We have performed extensive simulations in NS2 and evaluated the performance of E-MCDA in terms of energy consumption, packet transmission, the number of clusters formed, the number of nodes per cluster, and un-clustered nodes. The results show that the proposed mechanism outperforms MCDA and EADUC with respect to the above parameters.
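As a toy illustration of energy-aware cluster-head selection in general (not the E-MCDA algorithm itself), the sketch below groups nodes by grid cell and elects the node with the highest residual energy in each cell as cluster head. The field size, cell size, node count, and energy values are all invented.

```python
import random
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    x: float
    y: float
    energy: float            # residual energy in joules (invented values)

FIELD, CELL = 100.0, 25.0    # 100 m x 100 m field split into 25 m grid cells

nodes = [Node(i, random.uniform(0, FIELD), random.uniform(0, FIELD), random.uniform(0.5, 2.0))
         for i in range(50)]

# Group nodes by grid cell, then pick the highest-energy node in each cell as cluster head.
clusters = defaultdict(list)
for n in nodes:
    cell = (int(n.x // CELL), int(n.y // CELL))
    clusters[cell].append(n)

for cell, members in sorted(clusters.items()):
    head = max(members, key=lambda n: n.energy)
    print(f"cell {cell}: CH = node {head.node_id} "
          f"({head.energy:.2f} J), {len(members)} member(s)")
```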
Face recognition with Bayesian convolutional networks for robust surveillance systems
Recognition of facial images is one of the most challenging research issues in surveillance systems due to problems including varying pose, expression, illumination, and resolution. The robustness of a recognition method strongly relies on the strength of the extracted features and the ability to deal with low-quality face images. The ability to learn robust features from raw face images makes deep convolutional neural networks (DCNNs) attractive for face recognition. DCNNs use softmax to quantify the model's confidence in a class for an input face image when making a prediction. However, softmax probabilities are not a true representation of model confidence and are often misleading in regions of the feature space that are not represented by the available training examples. The primary goal of this paper is to improve the efficacy of face recognition systems by dealing with false positives through model uncertainty. Results of experiments on open-source datasets show that accuracy improves by 3–4% with model uncertainty over DCNNs and conventional machine learning techniques.
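One common way to obtain model uncertainty from a convolutional network is Monte Carlo dropout; the sketch below shows that general technique, not necessarily this paper's Bayesian formulation. The tiny network, number of stochastic passes, and rejection threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallFaceCNN(nn.Module):
    """Tiny CNN with dropout so predictions can be sampled stochastically."""
    def __init__(self, n_identities=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.dropout = nn.Dropout(p=0.5)
        self.classifier = nn.Linear(32, n_identities)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(self.dropout(x))

@torch.no_grad()
def mc_dropout_predict(model, images, n_samples=20):
    model.train()                         # keep dropout active at inference time
    probs = torch.stack([F.softmax(model(images), dim=1) for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)    # predictive mean and per-class uncertainty

model = SmallFaceCNN()
faces = torch.randn(4, 3, 64, 64)         # stand-in for aligned face crops
mean_probs, uncertainty = mc_dropout_predict(model, faces)
confident = uncertainty.gather(1, mean_probs.argmax(1, keepdim=True)).squeeze(1) < 0.15
print(mean_probs.argmax(1), confident)    # predicted identity and whether to accept it
```

Predictions whose sampled probabilities disagree strongly can be rejected instead of reported, which is how uncertainty helps suppress false positives.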