Search Results

66 result(s) for "Kanwal, Nadia"
Seizure detection from EEG signals using Multivariate Empirical Mode Decomposition
We present a data-driven approach to classify ictal (epileptic seizure) and non-ictal EEG signals using the multivariate empirical mode decomposition (MEMD) algorithm. MEMD is a multivariate extension of empirical mode decomposition (EMD), an established method for the decomposition and time-frequency (T−F) analysis of non-stationary data sets. We select suitable feature sets based on the multiscale T−F representation of the EEG data via MEMD for classification purposes. The classification is achieved using artificial neural networks. The efficacy of the proposed method is verified on extensive publicly available EEG datasets.
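
A minimal sketch of the kind of pipeline described, assuming IMFs from an MEMD implementation are already available; the array shapes, synthetic data, feature choices and network size below are placeholder assumptions, not the paper's configuration:

```python
# Illustrative sketch: classify EEG segments from features of MEMD-style IMFs.
# Assumes the IMFs have already been computed by an MEMD implementation and are
# shaped (n_segments, n_imfs, n_channels, n_samples); the random data stands in
# for real EEG and the feature set and network size are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_segments, n_imfs, n_channels, n_samples = 200, 5, 4, 256
imfs = rng.standard_normal((n_segments, n_imfs, n_channels, n_samples))
labels = rng.integers(0, 2, size=n_segments)          # 1 = ictal, 0 = non-ictal

# Simple multiscale features: per-IMF, per-channel energy and variance.
energy = np.mean(imfs ** 2, axis=-1)                  # (segments, imfs, channels)
variance = np.var(imfs, axis=-1)
features = np.concatenate([energy, variance], axis=1).reshape(n_segments, -1)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```
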
Data Driven Approach for Eye Disease Classification with Machine Learning
Medical health systems have been concentrating on artificial intelligence techniques for speedy diagnosis. However, the recording of health data in a standard form still requires attention so that machine learning can be more accurate and reliable by considering multiple features. The aim of this study is to develop a general framework for recording diagnostic data in an international standard format to facilitate prediction of disease diagnosis based on symptoms using machine learning algorithms. Efforts were made to ensure error-free data entry by developing a user-friendly interface. Furthermore, multiple machine learning algorithms, including decision tree, random forest, naïve Bayes and neural network algorithms, were used to analyze patient data based on multiple features, including age, illness history and clinical observations. This data was formatted according to structured hierarchies designed by medical experts, whereas diagnosis was made as per the ICD-10 coding developed by the American Academy of Ophthalmology. Furthermore, the system is designed to evolve through self-learning by adding new classifications for both diagnosis and symptoms. The classification results from tree-based methods demonstrated that the proposed framework performs satisfactorily, given a sufficient amount of data. Owing to the structured data arrangement, the random forest and decision tree algorithms’ prediction rate is more than 90%, compared with more complex methods such as neural networks and the naïve Bayes algorithm.
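
A brief sketch comparing the four classifier families named above on a toy tabular dataset with scikit-learn; the synthetic data stands in for the structured patient records and is not the study's dataset:

```python
# Illustrative sketch: cross-validate the four classifier families from the
# abstract on a synthetic tabular dataset (a stand-in for the patient records).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=12, n_informative=8,
                           n_classes=3, random_state=0)
models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "naive Bayes": GaussianNB(),
    "neural network": MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000,
                                    random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```
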
VQProtect: Lightweight Visual Quality Protection for Error-Prone Selectively Encrypted Video Streaming
Mobile multimedia communication requires considerable resources such as bandwidth and efficiency to support Quality-of-Service (QoS) and user Quality-of-Experience (QoE). To increase the available bandwidth, 5G network designers have incorporated Cognitive Radio (CR), which can adjust communication parameters according to the needs of an application. Transmission errors occur in wireless networks and, without remedial action, result in degraded video quality. Secure transmission is also a challenge for such channels. Therefore, this paper’s innovative scheme “VQProtect” focuses on the visual quality protection of compressed videos by detecting and correcting channel errors while maintaining end-to-end video confidentiality so that the content remains unwatchable. For this purpose, a two-round secure process is applied to selected syntax elements of the compressed H.264/AVC bitstream. To uphold the visual quality of data affected by channel errors, a computationally efficient Forward Error Correction (FEC) method using Random Linear Block coding (with complexity O(k(n−1))) is implemented to correct the erroneous data bits, effectively eliminating the need for retransmission. Errors affecting an average of 7–10% of the video data bits were simulated with the Gilbert–Elliott model, and experimental results demonstrated that 90% of the resulting channel errors were recoverable by correctly inferring the values of erroneous bits. The proposed solution’s effectiveness over selectively encrypted and error-prone video has been validated through a range of Video Quality Assessment (VQA) metrics.
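
For illustration, a generic sketch of random linear block coding over GF(2): k data bits encoded with a random generator matrix can be recovered from any k received bits whose rows form an invertible submatrix. This is a simplified erasure-style demonstration of the coding idea, not the VQProtect bitstream-level scheme:

```python
# Illustrative sketch of random linear block coding over GF(2).
import numpy as np

rng = np.random.default_rng(1)

def gf2_solve(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination (A square)."""
    A, b = A.copy(), b.copy()
    n = len(b)
    for col in range(n):
        nz = np.nonzero(A[col:, col])[0]
        if nz.size == 0:
            raise ValueError("chosen rows are not invertible over GF(2)")
        piv = col + nz[0]
        A[[col, piv]], b[[col, piv]] = A[[piv, col]], b[[piv, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]
                b[r] ^= b[col]
    return b

k, n = 8, 12
G = rng.integers(0, 2, size=(n, k), dtype=np.uint8)   # random generator matrix
data = rng.integers(0, 2, size=k, dtype=np.uint8)     # k data bits
coded = (G @ data) % 2                                 # n coded bits (mod-2 encoding)

while True:                                            # keep any k surviving coded bits
    keep = rng.choice(n, size=k, replace=False)
    try:
        recovered = gf2_solve(G[keep], coded[keep])
        break
    except ValueError:
        continue                                       # rows not full rank, pick again
assert np.array_equal(recovered, data)
print("recovered data bits:", recovered)
```
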
Privacy Preserved Video Summarization of Road Traffic Events for IoT Smart Cities
The purpose of smart surveillance systems for automatic detection of road traffic accidents is to respond quickly and so minimize human and financial losses in smart cities. However, along with the self-evident benefits of surveillance applications, privacy protection remains crucial under any circumstances. Hence, to ensure the privacy of sensitive data, the European General Data Protection Regulation (EU-GDPR) has come into force. EU-GDPR suggests data minimisation and data protection by design for data collection and storage. Therefore, for a privacy-aware surveillance system, this paper targets two areas of concern: (1) detection of road traffic events (accidents), and (2) privacy-preserved video summarization of the detected events in the surveillance videos. The focus of this research is to categorise the traffic events for summarization of the video content; therefore, a state-of-the-art object detection algorithm, i.e., You Only Look Once (YOLOv5), has been employed. YOLOv5 is trained using a customised synthetic dataset of 600 annotated accident and non-accident video frames. Privacy preservation is achieved in two steps. First, a synthetic dataset is used for training and validation, while testing is performed on real-time data with accuracies ranging from 55% to 85%. Second, the real-time summarized videos (with duration reduced to 42.97% of the original on average) are extracted and stored in an encrypted format to prevent untrusted access to sensitive event-based data. Fernet, a symmetric encryption algorithm, is applied to the summarized videos along with the Diffie–Hellman (DH) key exchange algorithm and the SHA-256 hash algorithm. The encryption key is deleted immediately after the encryption process, and the decryption key is generated on the system of authorised stakeholders, which protects the key from man-in-the-middle (MITM) attacks.
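
A minimal sketch of the storage step described above, encrypting a summarized clip with Fernet from the cryptography package and discarding the local key; the file names are placeholders, and the Diffie–Hellman exchange and SHA-256 hashing used in the paper are omitted:

```python
# Illustrative sketch: encrypt a summarized event clip with Fernet, then
# discard the local key so the stored clip cannot be read without it.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                  # symmetric key for this clip
cipher = Fernet(key)

with open("summary_clip.mp4", "rb") as f:    # placeholder path to a summarized clip
    token = cipher.encrypt(f.read())

with open("summary_clip.enc", "wb") as f:    # encrypted form kept in storage
    f.write(token)

# Hand `key` to the authorised stakeholder (e.g. over a DH-established channel),
# then delete the local copy immediately after encryption.
del key, cipher
```
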
Real-Time, Content-Based Communication Load Reduction in the Internet of Multimedia Things
There is an increasing number of devices available for the Internet of Multimedia Things (IoMT). The demands these ever-more complex devices make are also increasing in terms of energy efficiency, reliability, quality-of-service guarantees, higher data transfer rates, and general security. The IoMT itself faces challenges in processing and storing massive amounts of data, transmitting it over low bandwidths, working within constrained resources and keeping power consumption in check. This paper’s research focuses on an efficient video compression technique to reduce the communication load potentially generated by diverse camera sensors, and also to improve bit-rates, while ensuring accuracy of representation and completeness of video data. The proposed method applies a video content-based solution which, depending on the motion present between consecutive frames, decides whether to send only motion information or no frame information at all. The method is efficient in terms of limiting the data transmitted, potentially conserving device energy, and reducing latencies by means of negotiable processing overheads. Data are also encrypted in the interests of confidentiality. Video quality measurements, along with a number of Quality-of-Service measurements, demonstrated the value of the load reduction, as is also apparent from a comparison with other related methods.
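
A rough sketch of the content-based send/skip decision described above, using a simple frame-difference score; the thresholds, source path and decision labels are illustrative assumptions rather than the paper's actual codec:

```python
# Illustrative sketch: decide per frame whether to send nothing, only motion
# information, or the full frame, based on mean absolute inter-frame change.
import cv2
import numpy as np

SKIP_THRESH, FULL_THRESH = 2.0, 12.0            # illustrative tuning values

cap = cv2.VideoCapture("camera_feed.mp4")       # placeholder video source
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    motion = float(diff.mean())                  # mean absolute inter-frame change
    if motion < SKIP_THRESH:
        decision = "skip"                        # send no frame information at all
    elif motion < FULL_THRESH:
        decision = "motion-only"                 # send only motion information (e.g. diff)
    else:
        decision = "full-frame"                  # send the complete frame
    print(f"motion={motion:5.2f} -> {decision}")
    prev_gray = gray

cap.release()
```
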
FAB: Fast Angular Binary Descriptor for Matching Corner Points in Video Imagery
Image matching is a fundamental step in several computer vision applications where the requirement is fast, accurate, and robust matching of images in the presence of different transformations. Detection and, more importantly, description of low-level image features such as edges, corners, or blobs has proved to be an appropriate choice for this purpose. Modern descriptors use binary values to store the neighbourhood information of feature points for matching, because binary descriptors are fast to compute and match. This paper proposes the Fast Angular Binary (FAB) descriptor, which describes the neighbourhood of a corner point using a binary vector. It differs from conventional descriptors by selecting only the useful neighbourhood of a corner point instead of the whole circular area of a specific radius. The descriptor uses the angle of corner points to reduce the search space and increase the probability of finding an accurate match using the binary descriptor. Experiments show that the FAB descriptor’s matching performance is good, while its calculation and matching time is significantly less than that of BRIEF, the best-known binary descriptor, and AMIE, a descriptor that uses the entropy and average intensities of the informative part of a corner point for description.
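
To illustrate the operation that such binary descriptors are built to make cheap, here is a small sketch of Hamming-distance matching between two sets of binary vectors; the random 256-bit descriptors are placeholders, since FAB itself is not part of standard libraries:

```python
# Illustrative sketch: nearest-neighbour matching of binary descriptors by
# Hamming distance (XOR, then count differing bits).
import numpy as np

rng = np.random.default_rng(0)
desc_a = rng.integers(0, 2, size=(100, 256), dtype=np.uint8)   # descriptors, image A
desc_b = rng.integers(0, 2, size=(120, 256), dtype=np.uint8)   # descriptors, image B

dists = (desc_a[:, None, :] ^ desc_b[None, :, :]).sum(axis=-1) # all pairwise distances
matches = dists.argmin(axis=1)                                 # nearest neighbour in B
print("corner 0 in A best matches corner", matches[0], "in B,",
      "Hamming distance", dists[0, matches[0]])
```
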
Matching corners using the informative arc
Corners are important features in images because they typically delimit the boundaries of regions or objects. For real-time applications, it is essential that corners are detected and matched reliably and rapidly. This study presents two related descriptors which are compatible with standard corner detectors and able to be computed and matched at video rate: one encodes the entire region within a corner, whereas the other describes only the region within an object. The advantage of encoding only the region within an object is demonstrated. The noise stability of the descriptors is assessed and compared with that of the popular binary robust independent elementary feature (BRIEF) descriptor, and the matching performances of the descriptors are compared on video sequences from hand-held cameras and the PETS2012 database. A statistical analysis shows that performance is indistinguishable from BRIEF.
Augmented reality applications for cultural heritage using Kinect
This paper explores the use of data from the Kinect sensor for performing augmented reality, with emphasis on cultural heritage applications. It is shown that the combination of depth and image correspondences from the Kinect can yield a reliable estimate of the location and pose of the camera, though noise from the depth sensor introduces an unpleasant jittering of the rendered view. Kalman filtering of the camera position was found to yield a much more stable view. Results show that the system is accurate enough for in situ augmented reality applications. Skeleton tracking using Kinect data allows the appearance of participants to be augmented, and together these facilitate the development of cultural heritage applications.
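
A minimal sketch of the kind of smoothing mentioned above: a constant-velocity Kalman filter applied to noisy 3-D camera positions. The noise covariances and synthetic measurements are placeholder values, not the paper's calibration:

```python
# Illustrative sketch: Kalman filtering of a jittery camera-position estimate
# with a constant-velocity model (state = [x y z vx vy vz]).
import numpy as np

dt = 1 / 30.0                                    # frame period
F = np.eye(6)                                    # state transition
F[:3, 3:] = dt * np.eye(3)
H = np.hstack([np.eye(3), np.zeros((3, 3))])     # only position is measured
Q = 1e-4 * np.eye(6)                             # process noise (assumed)
R = 1e-2 * np.eye(3)                             # measurement noise (assumed)

x = np.zeros(6)
P = np.eye(6)

rng = np.random.default_rng(0)
true_pos = np.array([0.0, 0.0, 2.0])
for _ in range(100):
    z = true_pos + 0.1 * rng.standard_normal(3)  # jittery position measurement
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P

print("smoothed position:", x[:3])
```
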
Sensor fusion of camera, GPS and IMU using fuzzy adaptive multiple motion models
A tracking system to be used for augmented reality applications has two main requirements: accuracy and frame rate. The first requirement relates to the performance of the pose estimation algorithm and how accurately the tracking system can find the position and orientation of the user in the environment. Accuracy limitations of current low-cost tracking devices cause static errors during this motion estimation process. The second requirement relates to dynamic errors (the end-to-end system delay, which arises from the delay in estimating the motion of the user and displaying images based on this estimate). This paper investigates combining the vision-based estimates with measurements from other sensors, GPS and IMU, in order to improve the tracking accuracy in outdoor environments. The idea of Fuzzy Adaptive Multiple Models was investigated using a novel fuzzy rule-based approach to decide on the model that results in improved accuracy and faster convergence for the fusion filter. Results show that the developed tracking system is more accurate than a conventional GPS–IMU fusion approach owing to the additional estimates from a camera and the fuzzy motion models. The paper also presents an application in a cultural heritage context running at modest frame rates due to the design of the fusion algorithm.
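
A very rough sketch of the multiple-motion-model idea: two simple predictors blended with a fuzzy weight derived from the recent measured speed. The membership function, data and models are placeholder assumptions and far simpler than the paper's fuzzy rule base and fusion filter:

```python
# Illustrative sketch: blend a stationary model and a constant-velocity model
# with a fuzzy "moving" weight computed from recent speed.
import numpy as np

def fuzzy_weight(speed, low=0.05, high=0.5):
    """Membership of 'moving' in [0, 1]; the 'stationary' weight is its complement."""
    return float(np.clip((speed - low) / (high - low), 0.0, 1.0))

rng = np.random.default_rng(0)
positions = np.cumsum(0.1 * rng.standard_normal((50, 3)), axis=0)   # fused pose stream

prev, curr = positions[-2], positions[-1]
speed = float(np.linalg.norm(curr - prev))

pred_stationary = curr                       # model 1: user assumed not moving
pred_velocity = curr + (curr - prev)         # model 2: constant velocity
w = fuzzy_weight(speed)
prediction = (1 - w) * pred_stationary + w * pred_velocity
print(f"speed={speed:.3f}  moving-weight={w:.2f}  predicted next pose={prediction}")
```
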
Evaluation Method, Dataset Size or Dataset Content: How to Evaluate Algorithms for Image Matching?
Most vision papers have to include some evaluation work in order to demonstrate that the algorithm proposed is an improvement on existing ones. Generally, these evaluation results are presented in tabular or graphical forms. Neither of these is ideal because there is no indication as to whether any performance differences are statistically significant. Moreover, the size and nature of the dataset used for evaluation will obviously have a bearing on the results, yet neither of these factors is usually discussed. This paper evaluates the effectiveness of commonly used performance characterization metrics for image feature detection and description for matching problems and explores the use of statistical tests such as McNemar’s test and ANOVA as better alternatives.
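
As a quick illustration of the kind of test advocated here, a McNemar's test comparing two matchers evaluated on the same image pairs, using statsmodels; the 2×2 counts are made-up numbers:

```python
# Illustrative sketch: McNemar's test on paired correct/incorrect outcomes.
# Rows: algorithm A correct / incorrect; columns: algorithm B correct / incorrect.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

table = np.array([[520, 35],    # both correct | only A correct
                  [18,  27]])   # only B correct | both wrong
result = mcnemar(table, exact=True)   # exact binomial test on the discordant pairs
print(f"statistic={result.statistic}, p-value={result.pvalue:.4f}")
```
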