Catalogue Search | MBRL
160 result(s) for "data integrity enhancement"
Spatial Resolution and Data Integrity Enhancement of Microwave Radiometer Measurements Using Total Variation Deconvolution and Bilateral Fusion Technique
2022
Passive multi-frequency microwave sensors are indispensable instruments for worldwide environmental monitoring. However, they often suffer from poor spatial resolution, and their data in the land–sea transition zone are severely contaminated. Conventional analytical deconvolution methods enhance the spatial resolution at the expense of noise amplification and Gibbs fluctuations in the land–sea transition zone. To enhance the spatial resolution while simultaneously improving the integrity of the microwave radiometer data, a method based on Total Variation deconvolution, Bilateral Filter, and data fusion (TVBF+) is proposed. Our method substantially improves data integrity and achieves resolution enhancement comparable to existing methods. Experiments performed on both simulated and actual Microwave Radiation Imager (MWRI) data demonstrate the method's robustness and effectiveness.
Journal Article
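The bilateral filtering step that the TVBF+ abstract credits with preserving land–sea edges can be illustrated on a 1-D signal. The sketch below is a generic bilateral filter in numpy, not the paper's TVBF+ pipeline; the function name and parameter values (`radius`, `sigma_s`, `sigma_r`) are illustrative assumptions:

```python
import numpy as np

def bilateral_filter_1d(signal, radius=3, sigma_s=2.0, sigma_r=0.2):
    """Edge-preserving smoothing: each sample is a weighted average of its
    neighbours, with weights that fall off with both spatial distance and
    intensity difference, so sharp transitions are left largely intact."""
    n = len(signal)
    out = np.empty(n)
    offsets = np.arange(-radius, radius + 1)
    spatial_w = np.exp(-(offsets ** 2) / (2 * sigma_s ** 2))
    for i in range(n):
        idx = np.clip(i + offsets, 0, n - 1)          # replicate-pad at borders
        range_w = np.exp(-((signal[idx] - signal[i]) ** 2) / (2 * sigma_r ** 2))
        w = spatial_w * range_w
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

# A noisy step edge, loosely standing in for a land–sea transition.
rng = np.random.default_rng(0)
step = np.concatenate([np.zeros(50), np.ones(50)])
noisy = step + rng.normal(0, 0.05, 100)
smoothed = bilateral_filter_1d(noisy)
```

Because the range weight collapses across the step, noise on the flat regions is averaged away while the edge itself is not blurred, which is the property the fusion step exploits.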
The Threat of Algocracy: Reality, Resistance and Accommodation
2016
One of the most noticeable trends in recent years has been the increasing reliance of public decision-making processes (bureaucratic, legislative and legal) on algorithms, i.e. computer-programmed step-by-step instructions for taking a given set of inputs and producing an output. The question raised by this article is whether the rise of such algorithmic governance creates problems for the moral or political legitimacy of our public decision-making processes. Ignoring common concerns with data protection and privacy, it is argued that algorithmic governance does pose a significant threat to the legitimacy of such processes. Modelling my argument on Estlund’s threat of epistocracy, I call this the ‘threat of algocracy’. The article clarifies the nature of this threat and addresses two possible solutions (named, respectively, ‘resistance’ and ‘accommodation’). It is argued that neither solution is likely to be successful, at least not without risking many other things we value about social decision-making. The result is a somewhat pessimistic conclusion in which we confront the possibility that we are creating decision-making processes that constrain and limit opportunities for human participation.
Journal Article
A DWT-SVD based robust digital watermarking for medical image security
by Kafi, Redouane; Euschi, Salah; Zermi, Narima
in Access control; Color image watermarking; Confidentiality
2021
• The proposed approach is a hybrid scheme based on a discrete wavelet transform and singular value decomposition.
• The proposed approach being blind, the watermarked image alone is sufficient to identify the patient.
• The integrity of the extracted information can be verified by comparing it against the hash embedded in the image.
• Our approach allows insertion of a large amount of data while preserving reasonable imperceptibility.
• In terms of robustness, the two variants are resistant to filtering and compression attacks while maintaining a good NCC.
In this work, we propose a blind watermarking approach for medical image protection. In this approach, the watermark is constituted of the Electronic Patient Record and the image acquisition data. To enhance security and guarantee data integrity, the hash of the Electronic Patient Record is added to the watermark. The integration process is based on a DWT-SVD combination: a DWT is applied to the retinal image, then an SVD is applied to the LL sub-band. The watermark is then integrated into the least significant bits of the S component, obtained by combining the parity of the successive coefficients. Experimental results for imperceptibility and robustness show that the proposed scheme maintains a high-quality watermarked image and remains highly robust against several conventional attacks.
Journal Article
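The DWT-then-SVD embedding pipeline described above can be sketched with a one-level Haar transform and a parity quantisation of the LL sub-band's singular values. This is a minimal illustration of the general DWT-SVD idea only: the parity-quantisation rule, the `alpha` step, and all function names are assumptions, not the authors' exact scheme:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform; returns the LL sub-band plus the
    detail sub-bands (HL, LH, HH) needed to invert."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hl = (a[:, 0::2] - a[:, 1::2]) / 2.0
    lh = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (hl, lh, hh)

def haar_idwt2(ll, details):
    """Exact inverse of haar_dwt2."""
    hl, lh, hh = details
    a = np.empty((ll.shape[0], ll.shape[1] * 2)); d = np.empty_like(a)
    a[:, 0::2] = ll + hl; a[:, 1::2] = ll - hl
    d[:, 0::2] = lh + hh; d[:, 1::2] = lh - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = a + d; img[1::2, :] = a - d
    return img

def embed(img, bits, alpha=2.0):
    """Hide bits in the parity of quantised singular values of LL."""
    ll, details = haar_dwt2(img.astype(float))
    u, s, vt = np.linalg.svd(ll, full_matrices=False)
    for i, b in enumerate(bits):
        q = int(np.round(s[i] / alpha))
        if q % 2 != b:
            q += 1                         # flip parity
        s[i] = q * alpha
    return haar_idwt2(u @ np.diag(s) @ vt, details)

def extract(img, n_bits, alpha=2.0):
    """Blind extraction: only the watermarked image is needed."""
    ll, _ = haar_dwt2(img.astype(float))
    s = np.linalg.svd(ll, compute_uv=False)
    return [int(np.round(s[i] / alpha)) % 2 for i in range(n_bits)]

# Demo on a synthetic 8x8 "image" whose LL band has well-separated
# singular values, so the parity quantisation round-trips cleanly.
rng = np.random.default_rng(1)
q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
p, _ = np.linalg.qr(rng.normal(size=(4, 4)))
ll0 = q @ np.diag([100.0, 60.0, 30.0, 10.0]) @ p.T
zeros = np.zeros_like(ll0)
img = haar_idwt2(ll0, (zeros, zeros, zeros))
bits = [1, 0, 1]
marked = embed(img, bits)
recovered = extract(marked, 3)
```

Extraction never consults the original image, which is the "blind" property the abstract highlights.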
Enhanced lung image segmentation using deep learning
by Gite, Shilpa; Mishra, Abhinav; Kotecha, Ketan
in Accuracy; Algorithms; Artificial Intelligence
2023
With advances in technology, assistive medical systems are emerging rapidly and helping healthcare professionals. The proactive diagnosis of diseases with artificial intelligence (AI) and its aligned technologies has been an exciting research area in the last decade. Doctors usually detect tuberculosis (TB) by checking the lungs' X-rays, and classification using deep learning algorithms can achieve accuracy comparable to a doctor's in detecting TB. It is found that the probability of detecting TB increases if classification algorithms are applied to segmented lungs instead of the whole X-ray. The paper's novelty lies in a detailed analysis and discussion of U-Net++ results and the implementation of U-Net++ for lung segmentation from X-rays. A thorough comparison of U-Net++ with three other benchmark segmentation architectures, and of segmentation's role in diagnosing TB and other pulmonary diseases, is also made in this paper. To the best of our knowledge, no prior research has implemented U-Net++ for lung segmentation. Most previous papers did not use segmentation before classification at all, which causes data leakage; the few that did used only U-Net, which U-Net++ can readily replace, since the accuracy and mean_iou of U-Net++ exceed those of U-Net, as discussed in the results, thereby minimizing data leakage. The authors achieved more than 98% lung segmentation accuracy and a mean_iou of 0.95 using U-Net++, and the efficacy of the comparative analysis is validated.
Journal Article
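The two figures quoted above (segmentation accuracy and mean_iou) are standard metrics computed directly from a predicted mask and a ground-truth mask. A minimal numpy sketch, using toy 4×4 masks for illustration:

```python
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return float(np.mean(pred == target))

def mean_iou(pred, target, num_classes=2):
    """Mean intersection-over-union across classes: the mean_iou metric
    quoted in the abstract (0.95 for U-Net++)."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        if union > 0:                      # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 lung mask: 1 = lung pixel, 0 = background.
target = np.array([[0,0,1,1],[0,0,1,1],[0,1,1,1],[0,0,1,0]])
pred   = np.array([[0,0,1,1],[0,0,1,1],[0,1,1,1],[0,0,1,1]])
```

With a single mispredicted pixel, accuracy is 15/16 while mean_iou is lower, which is why the paper reports both.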
Enhancing security in instant messaging systems with a hybrid SM2, SM3, and SM4 encryption framework
by Juanatas, Roben A.; Lu, He-Jun; Abisado, Mideth B.
in Algorithms; Communication; Computer and Information Sciences
2025
With the rapid integration of instant messaging systems (IMS) into critical domains such as finance, public services, and enterprise operations, ensuring the confidentiality, integrity, and availability of communication data has become a pressing concern. Existing IMS security solutions commonly employ traditional public-key cryptography, centralized authentication servers, or single-layer encryption, each of which is susceptible to single-point failures and provides only limited resistance against sophisticated attacks. This study addresses the research gap regarding the complementary advantages of SM2, SM3, and SM4 algorithms, as well as hybrid collaborative security schemes in IMS security. This paper presents a hybrid encryption security framework that combines the SM2, SM3, and SM4 algorithms to address emerging threats in IMS. The proposed framework adopts a decentralized architecture with certificateless authentication and performs all encryption and decryption operations on the client side, eliminating reliance on centralized servers and mitigating single-point failure risks. It further enforces an encrypt-before-store policy to enhance data security at the storage layer. The framework integrates SM2 for key exchange and authentication, SM4 for message encryption, and SM3 for integrity verification, forming a multi-layer defense mechanism capable of countering Man-in-the-Middle (MITM) attacks, credential theft, database intrusions, and other vulnerabilities. Experimental evaluations demonstrate the system’s strong security performance and communication efficiency: SM2 achieves up to 642 times faster key generation and 2.2 times faster decryption compared to RSA-3072; SM3 improves hashing performance by up to 11.5% over SHA-256; and SM4 delivers up to 22% higher encryption efficiency than AES-256 for small data blocks. These results verify the proposed framework’s practicality and performance advantages in lightweight, real-time IMS applications.
Journal Article
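The framework's layering (symmetric encryption for confidentiality plus a hash for integrity, applied on the client before storage) can be sketched with Python-stdlib stand-ins. To be clear about assumptions: SM3 and SM4 are not in the standard library, so SHA-256/HMAC and a toy HMAC-derived keystream stand in for them here, and the SM2 key exchange is replaced by an assumed pre-shared key; this illustrates the encrypt-before-store flow only, not the Chinese national algorithms themselves:

```python
import hashlib
import hmac
import secrets

def keystream_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    """Toy CTR-style stream cipher standing in for SM4: XOR the message
    with an HMAC-derived keystream. Illustrative only, not SM4."""
    out = bytearray()
    for block in range(0, len(plaintext), 32):
        counter = block.to_bytes(8, "big")
        ks = hmac.new(key, nonce + counter, hashlib.sha256).digest()
        out.extend(b ^ k for b, k in zip(plaintext[block:block + 32], ks))
    return bytes(out)

def protect_message(key: bytes, plaintext: bytes):
    """Encrypt-before-store with an integrity tag, mirroring the
    SM4-for-confidentiality + SM3-for-integrity layering (HMAC-SHA-256
    stands in for the SM3 step)."""
    nonce = secrets.token_bytes(16)
    ciphertext = keystream_encrypt(key, nonce, plaintext)
    tag = hmac.new(key, nonce + ciphertext, hashlib.sha256).digest()
    return nonce, ciphertext, tag

def open_message(key: bytes, nonce: bytes, ciphertext: bytes, tag: bytes) -> bytes:
    """Verify integrity first, then decrypt (XOR is its own inverse)."""
    expected = hmac.new(key, nonce + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return keystream_encrypt(key, nonce, ciphertext)

key = secrets.token_bytes(32)            # stands in for an SM2-negotiated key
nonce, ct, tag = protect_message(key, b"hello IMS")
```

Verifying the tag before decryption is what lets the client detect database tampering, the storage-layer threat the framework targets.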
Mathematical analysis of histogram equalization techniques for medical image enhancement: a tutorial from the perspective of data loss
by Patel, Rachit; Roy, Santanu; Bhalla, Kanika
in Brain cancer; Colorectal cancer; Computer Communication Networks
2024
This tutorial demonstrates a novel mathematical analysis of histogram equalization techniques and their application in medical image enhancement. In this paper, conventional Global Histogram Equalization (GHE), Contrast Limited Adaptive Histogram Equalization (CLAHE), Histogram Specification (HS), and Brightness Preserving Dynamic Histogram Equalization (BPDHE) are re-investigated through a novel mathematical analysis. All these HE methods are widely employed by researchers in the image processing and medical image diagnosis domains; however, it has been observed that they share the significant limitation of data loss. In this paper, a mathematical proof is given that any histogram equalization method inevitably incurs data loss, because every HE method is non-linear. All these histogram equalization methods are implemented on two different datasets: a brain tumor MRI image dataset and a colorectal cancer H&E-stained histopathology image dataset. The Pearson Correlation Coefficient (PCC) and Structural Similarity Index Measure (SSIM) are both found in the range 0.6–0.95 across all HE methods. Moreover, these results are compared with the Reinhard method, which is a linear contrast enhancement method. The experimental results suggest that the Reinhard method outperforms all HE methods for medical image enhancement. Furthermore, a popular CNN model, VGG-16, is implemented on the MRI dataset in order to demonstrate a direct correlation between data loss and reduced accuracy.
Journal Article
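Global Histogram Equalization and the PCC figure used in the tutorial are both short computations. A minimal numpy sketch of GHE via the normalised cumulative histogram, plus PCC; the low-contrast test image is synthetic:

```python
import numpy as np

def global_he(img):
    """Global Histogram Equalization on an 8-bit image: map each grey level
    through the normalised cumulative histogram. The lookup table is a
    non-linear, many-to-one mapping, which is the root of the data-loss
    argument the tutorial makes."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalise to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[img]

def pcc(a, b):
    """Pearson Correlation Coefficient between two images."""
    return float(np.corrcoef(a.ravel().astype(float),
                             b.ravel().astype(float))[0, 1])

# Synthetic low-contrast image: grey levels squeezed into [100, 155].
rng = np.random.default_rng(0)
img = rng.integers(100, 156, size=(32, 32), dtype=np.uint8)
eq = global_he(img)
```

Since the LUT is a function, the equalized image can never contain more distinct grey levels than the input, while the stretched range shows the contrast gain: the data-loss trade-off in miniature.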
A Novel Blockchain-Based Deepfake Detection Method Using Federated and Deep Learning Models
by Navimipour, Nima Jafari; Dag, Hasan; Talebi, Samira
in Artificial Intelligence; Artificial neural networks; Automation
2024
In recent years, the proliferation of deep learning (DL) techniques has given rise to a significant challenge in the form of deepfake videos, posing a grave threat to the authenticity of media content. With the rapid advancement of DL technology, the creation of convincingly realistic deepfake videos has become increasingly prevalent, raising serious concerns about the potential misuse of such content. Deepfakes have the potential to undermine trust in visual media, with implications for fields as diverse as journalism, entertainment, and security. This study presents an innovative solution by harnessing blockchain-based federated learning (FL) to address this issue, focusing on preserving data source anonymity. The approach combines the strengths of SegCaps and convolutional neural network (CNN) methods for improved image feature extraction, followed by capsule network (CN) training to enhance generalization. A novel data normalization technique is introduced to tackle data heterogeneity stemming from diverse global data sources. Moreover, transfer learning (TL) and preprocessing methods are deployed to elevate DL performance. These efforts culminate in collaborative global model training facilitated by blockchain and FL while maintaining the utmost confidentiality of data sources. The effectiveness of our methodology is rigorously tested and validated through extensive experiments. These experiments reveal a substantial improvement in accuracy, with an impressive average increase of 6.6% compared to six benchmark models. Furthermore, our approach demonstrates a 5.1% enhancement in the area under the curve (AUC) metric, underscoring its ability to outperform existing detection methods. These results substantiate the effectiveness of our proposed solution in countering the proliferation of deepfake content. In conclusion, our innovative approach represents a promising avenue for advancing deepfake detection.
By leveraging existing data resources and the power of FL and blockchain technology, we address a critical need for media authenticity and security. As the threat of deepfake videos continues to grow, our comprehensive solution provides an effective means to protect the integrity and trustworthiness of visual media, with far-reaching implications for both industry and society. This work stands as a significant step toward countering the deepfake menace and preserving the authenticity of visual content in a rapidly evolving digital landscape.
Journal Article
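The collaborative global-model training described above follows the federated-averaging pattern: clients train locally and only model weights, never raw data, are aggregated. A minimal numpy sketch of that aggregation step plus a simple per-client z-score normalisation; both are generic illustrations, not the paper's novel normalisation technique:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: the global model is the data-size-weighted
    mean of the client models, so raw data never leaves the clients."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

def normalize(x):
    """Per-client z-score normalisation, a simple stand-in for handling
    heterogeneous data sources before local training."""
    return (x - x.mean()) / (x.std() + 1e-8)

# Three clients with locally trained weight vectors and dataset sizes.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 200, 100]
global_w = federated_average(clients, sizes)
```

Weighting by dataset size keeps large data sources from being drowned out by small ones, while the blockchain layer in the paper records which model updates were aggregated.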
Data recovery algorithm based on generative adversarial networks in crowd sensing Internet of Things
by Shi, Yushi; Zhang, Xiaoqi; Hu, Qiaohong
in Algorithms; Artificial neural networks; Computer Science
2023
The Internet of Things has developed quickly to share data from billions of physical devices. Completeness of data is especially important in the crowd sensing Internet of Things, and recovering lost data is a fundamental operation for making use of it. Existing data recovery algorithms depend heavily on an accurate distribution of the environmental data and perform poorly when reconstructing lost data. This paper introduces a data recovery algorithm based on generative adversarial networks, with a convolutional neural network as its basic model. We add a restore network that reloads the non-lost data after recovery. The algorithm mainly includes two parts: (1) a training process, in which all the collected sensory data are used to train the proposed generative adversarial network model, and (2) a data recovery process, in which the lost data are recovered using the trained generator. We use a random-loss dataset and a periodic-loss dataset to validate the data recovery performance. Both cases verify that the recovery algorithm based on generative adversarial networks outperforms the comparison experiments under three metrics: mean square error, mean absolute error, and R-square. The results show that our proposed algorithm can obtain reliable data and thus improve the performance of data recovery.
Journal Article
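The three evaluation metrics named in the abstract (mean square error, mean absolute error, and R-square) have standard definitions that can be computed directly; the toy arrays below are illustrative:

```python
import numpy as np

def mse(y, yhat):
    """Mean square error: average of squared recovery errors."""
    return float(np.mean((y - yhat) ** 2))

def mae(y, yhat):
    """Mean absolute error: average of absolute recovery errors."""
    return float(np.mean(np.abs(y - yhat)))

def r_square(y, yhat):
    """R-square: 1 minus the ratio of residual to total sum of squares;
    closer to 1 means the recovered data explain the true data better."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1 - ss_res / ss_tot)

# True sensor readings vs. values reconstructed for the lost samples.
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_rec = np.array([1.1, 1.9, 3.2, 3.8])
```

Lower MSE/MAE and higher R-square all indicate better recovery, which is how the paper's comparison experiments are scored.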
Data security and consumer trust in FinTech innovation in Germany
2018
Purpose: The purpose of this study is to empirically analyse the key factors that influence the adoption of financial technology innovation in Germany. The advancement of mobile devices and their usage have increased the uptake of financial technology (FinTech) innovation. Financial sectors and startups see FinTech as a gateway to increased business opportunities, but mobile applications and other technology platforms must be launched to explore such opportunities. Mobile application security threats have increased tremendously and have become a challenge for both users and FinTech innovators. In this paper, the authors empirically inspect the components that influence the expectations of both users and organizations to adopt FinTech, such as customer trust, data security, value added, user interface design and FinTech promotion. The empirical results confirm that data security, customer trust and the user design interface affect the adoption of FinTech. Existing studies have used the Technology Acceptance Model (TAM) to address this issue. The outcomes of this study can be used to improve the performance of FinTech strategies and enable banks to achieve economies of scale for global intensity.
Design/methodology/approach: In this paper, the authors empirically consider factors that influence the expectations of both users and organizations in adopting FinTech, such as customer trust, data security, value added, the user design interface and FinTech promotion. The results confirm that customer trust, data security and the user design interface affect the adoption of FinTech. This research proposes a model called "Intention to adopt FinTech in Germany," constructs of which were developed based on the TAM and five additional components, as identified.
The outcomes of this study can be used to improve the performance of FinTech strategies and enable banks to achieve economies of scale for global intensity.
Findings: The authors demonstrate that the number of mobile users in Germany is rapidly increasing, yet the adoption of FinTech is extremely sluggish. It is intriguing that 99 per cent of respondents had mobile devices, but only 10 per cent recognized FinTech. Further, it is discouraging that only 10 of the 209 respondents had ever used FinTech services, representing under 5 per cent of the surveyed respondents. It is obvious that the FinTech incubators and banks offering FinTech services need to persuade their customers of the usefulness and value-added advantages of FinTech. This study was carried out to determine the key factors that influence and provoke FinTech adoption.
Research limitations/implications: There are a few limitations to this study. First, it focuses on FinTech implementation in Germany and not the whole of Europe. In addition, demographic and regional factors could be incorporated to inspect their particular impact on the intention to use FinTech services, particularly among younger users with a high interest in technology. Without these constraints, the authors could have gathered additional data for a more robust result and obtained new knowledge to further refine policies to enhance the FinTech adoption process. Future researchers can extend this topic by altering determinants in the unified theory of acceptance and use of technology model. Additionally, because the cluster sampling technique was used, the reported outcomes cannot be fully generalized to the German population. To accomplish complete generalization, a simple random sampling strategy for the whole population is essential.
The authors could also alleviate some limitations by examining, via case studies, how online vendors are performing with regard to FinTech to satisfy the needs of customers.
Practical implications: This study was conducted in Germany and might have produced different results if held in other countries, as technology acceptance differs across environments. For instance, the authors suspect that the results would be somewhat different were the research to be conducted in the United Kingdom, where take-up of FinTech appears to be far greater than in Germany. Therefore, the results are only generalizable to Germany and not other geographical areas. Furthermore, respondents may have been influenced by past experiences of FinTech usage, which might have led them to neglect to answer some questions. In addition, this study did not consider the influence of moderating variables such as age, education and FinTech services experience. The authors also omitted social impact and control factors, as their corresponding items undermined the instrument's reliability. Accordingly, the authors could not quantify the effect of social impact and control factors on FinTech use.
Social implications: The outcomes of this study can be used to improve the performance of FinTech strategies and enable banks to accomplish economies of scale for global intensity. The authors hope that this paper will serve to encourage FinTech innovators in their approach to FinTech and enable FinTech researchers to build on past work with greater certainty, resulting in rigorous hypothesis development in the future.
Originality/value: A considerable amount of revenue has been invested in the information technology (IT) infrastructure of banks to enhance their performance, but investment in IT remains a substantial risk regarding the return on investment (Carlson, 2015).
Most banks and financial organizations around the globe are facing extreme pressure from their customers and competitors to enhance IT.
Journal Article
Securing online integrity: a hybrid approach to deepfake detection and removal using Explainable AI and Adversarial Robustness Training
by Paulchamy, B.; Maheshwari, R. Uma
in Accuracy; Adversarial attacks; Adversarial Robustness Training (ART)
2024
As deepfake technology becomes increasingly sophisticated, the proliferation of manipulated images presents a significant threat to online integrity, requiring advanced detection and mitigation strategies. Addressing this critical challenge, our study introduces a pioneering approach that integrates Explainable AI (XAI) with Adversarial Robustness Training (ART) to enhance the detection and removal of deepfake content. The proposed methodology, termed XAI-ART, begins with the creation of a diverse dataset that includes both authentic and manipulated images, followed by comprehensive preprocessing and augmentation. We then employ Adversarial Robustness Training to fortify the deep learning model against adversarial manipulations. By incorporating Explainable AI techniques, our approach not only improves detection accuracy but also provides transparency in model decision-making, offering clear insights into how deepfake content is identified. Our experimental results underscore the effectiveness of XAI-ART, with the model achieving an impressive accuracy of 97.5% in distinguishing between genuine and manipulated images. The recall rate of 96.8% indicates that our model effectively captures the majority of deepfake instances, while the F1-Score of 97.5% demonstrates a well-balanced performance in precision and recall. Importantly, the model maintains high robustness against adversarial attacks, with a minimal accuracy reduction to 96.7% under perturbations.
Journal Article
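The accuracy, recall, and F1-Score figures quoted above come from the standard binary confusion-matrix definitions; a small self-contained sketch with made-up labels (1 = deepfake):

```python
def detection_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from binary labels: the figures
    a deepfake detector like XAI-ART is scored on."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Toy evaluation: 8 images, first four are genuine deepfakes.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
acc, prec, rec, f1 = detection_metrics(y_true, y_pred)
```

Recall measures how many deepfakes are caught, while F1 balances that against false alarms; reporting both, as the abstract does, guards against a detector that simply flags everything.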