Search Results

624 result(s) for "Spatial regularization"
Deep Convolutional Neural Networks with Spatial Regularization, Volume and Star-Shape Priors for Image Segmentation
Deep Convolutional Neural Networks (DCNNs) are effective at extracting features from natural images. However, the classification functions in existing CNN architectures are simple and lack the ability to handle the important spatial information that many well-known traditional variational image segmentation models exploit. Priors such as spatial regularization, volume priors, and shape priors cannot be handled by existing DCNNs. We propose a novel Soft Threshold Dynamics (STD) framework which can integrate many spatial priors of the classic variational models into DCNNs for image segmentation. The novelty of our method is to interpret the softmax activation function as a dual variable in a variational problem, so that many spatial priors can be imposed in the dual space. From this viewpoint, we can build an STD-based framework which enables the outputs of DCNNs to satisfy spatial priors such as spatial regularization, volume preservation, and the star-shape prior. The proposed method is a general mathematical framework and can be applied to any image segmentation DCNN with a softmax classification layer. To show the efficiency of our method, we applied it to the popular DeepLabV3+ image segmentation network, and the experimental results show that our method works efficiently on data-driven image segmentation DCNNs.
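As a loose illustration of the idea (not the authors' exact formulation), the softmax output can be alternated with a local averaging of the class probabilities, so that neighboring pixels are pulled toward agreement. The sketch below is a minimal NumPy version in which the step size `eps`, the iteration count, and the 4-neighborhood averaging are all illustrative assumptions:

```python
import numpy as np

def softmax(z, axis=0):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def std_step(logits, eps=0.1, iters=5):
    """Toy sketch of coupling softmax with a spatial prior: alternate the
    softmax (dual) update with a 4-neighborhood averaging of the class
    probability maps, which acts as a spatial regularizer."""
    u = softmax(logits, axis=0)  # (C, H, W) class probabilities
    for _ in range(iters):
        # average each class map with its 4-neighborhood (spatial prior)
        pad = np.pad(u, ((0, 0), (1, 1), (1, 1)), mode='edge')
        smooth = (pad[:, :-2, 1:-1] + pad[:, 2:, 1:-1]
                  + pad[:, 1:-1, :-2] + pad[:, 1:-1, 2:]) / 4.0
        u = softmax(logits + eps * np.log(smooth + 1e-12), axis=0)
    return u
```

Because each update re-applies the softmax, the output remains a valid per-pixel probability distribution over classes.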
Deciphering spatial domains from spatially resolved transcriptomics through spatially regularized deep graph networks
Background: Recent advancements in spatially resolved transcriptomics (SRT) have opened up unprecedented opportunities to explore gene expression patterns within spatial contexts. Deciphering spatial domains is a critical task in spatial transcriptomic data analysis, aiding in the elucidation of tissue structural heterogeneity and biological functions. However, existing spatial domain detection methods ignore the consistency of expression patterns and spatial arrangements between spots, as well as the severe gene dropout phenomenon present in SRT data, resulting in suboptimal performance in identifying tissue spatial heterogeneity. Results: In this paper, we introduce a novel framework, spatially regularized deep graph networks (SR-DGN), which integrates gene expression profiles with spatial information to learn spatially consistent and informative spot representations. Specifically, SR-DGN employs graph attention networks (GAT) to adaptively aggregate gene expression information from neighboring spots, considering local expression patterns between spots. In addition, the spatial regularization constraint ensures the consistency of neighborhood relationships between physical and embedded spaces in an end-to-end manner. SR-DGN also employs cross-entropy (CE) loss to model gene expression states, effectively mitigating the impact of noisy gene dropouts. Conclusions: Experimental results demonstrate that SR-DGN outperforms state-of-the-art methods in spatial domain identification across SRT data from different sequencing platforms. Moreover, SR-DGN is capable of recovering known microanatomical structures, yielding clearer low-dimensional visualizations and more accurate spatial trajectory inferences.
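The spatial regularization constraint described above (keeping neighborhood relations consistent between physical and embedded space) can be sketched, in simplified form, as a penalty on embedding distances between spatially adjacent spots. The function below is a hypothetical stand-in, not SR-DGN's actual end-to-end constraint:

```python
import numpy as np

def spatial_consistency_loss(Z, edges):
    """Sum over neighbor pairs (i, j) of ||z_i - z_j||^2, where Z holds
    one embedding vector per spot and `edges` lists spatially adjacent
    spot pairs. A term of this kind keeps the neighborhood structure of
    the physical space in the learned embedding."""
    i, j = edges[:, 0], edges[:, 1]
    d = Z[i] - Z[j]
    return float((d * d).sum())
```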
A New Regularization for Deep Learning-Based Segmentation of Images with Fine Structures and Low Contrast
Deep learning methods have achieved outstanding results in many image processing and computer vision tasks, such as image segmentation. However, they usually do not consider spatial dependencies among pixels/voxels in the image. To obtain better results, some methods have been proposed to incorporate classic spatial regularization, such as total variation, into deep learning models. However, for some challenging images, especially those with fine structures and low contrast, classical regularizations are not suitable. We derive a new regularization that improves the connectivity of segmentation results and is applicable to deep learning. Our experimental results show that, for both deep learning methods and unsupervised methods, the proposed method improves performance by increasing connectivity and handling low contrast, thereby enhancing segmentation results.
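For readers unfamiliar with the classic spatial regularization mentioned above, the anisotropic total-variation penalty on a soft segmentation map can be computed in a few lines; adding a term like this to a training loss discourages isolated misclassified pixels. A minimal NumPy sketch (this is the classical TV term, not the paper's new regularization, which targets the fine structures TV handles poorly):

```python
import numpy as np

def tv_penalty(prob):
    """Anisotropic total-variation penalty on a soft segmentation map
    `prob` of shape (H, W): the sum of absolute differences between
    vertically and horizontally adjacent pixels."""
    dh = np.abs(np.diff(prob, axis=0)).sum()  # vertical neighbors
    dw = np.abs(np.diff(prob, axis=1)).sum()  # horizontal neighbors
    return dh + dw
```

A spatially constant map incurs zero penalty, while a single outlier pixel is penalized on all four of its neighbor edges.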
Using 3D spatial correlations to improve the noise robustness of multi-component analysis of 3D multi-echo quantitative T2 relaxometry data
We present a computationally feasible and iterative multi-voxel spatially regularized algorithm for myelin water fraction (MWF) reconstruction. This method utilizes 3D spatial correlations present in anatomical/pathological tissues and the underlying B1+-inhomogeneity (flip angle inhomogeneity) to enhance the noise robustness of the reconstruction while intrinsically accounting for stimulated echo contributions using T2-distribution data alone. Simulated data and in vivo data acquired using 3D non-selective multi-echo spin echo (3DNS-MESE) were used to compare the reconstruction quality of the proposed approach against those of the popular algorithm (the method by Prasloski et al.) and our previously proposed 2D multi-slice spatial regularization approach. We also investigated whether the inter-sequence correlations and agreements improved as a result of the proposed approach. MWF quantifications from two sequences, 3DNS-MESE vs. 3DNS-gradient and spin echo (3DNS-GRASE), were compared for both reconstruction approaches to assess correlations and agreements between inter-sequence MWF-value pairs. MWF values from whole-brain data of six volunteers and two multiple sclerosis patients are also reported. In comparison with competing approaches such as Prasloski's method or our previously proposed 2D multi-slice spatial regularization method, the proposed method showed better agreement with simulated truths in regression analyses and Bland-Altman analyses. For 3DNS-MESE data, MWF maps reconstructed using the proposed algorithm provided better depictions of white matter structures in subcortical areas adjoining gray matter, agreeing more closely with corresponding contrasts on T2-weighted images than MWF maps reconstructed with the method by Prasloski et al. We also achieved a higher level of correlation and agreement between inter-sequence (3DNS-MESE vs. 3DNS-GRASE) MWF-value pairs.
The proposed algorithm provides more noise-robust fits to T2-decay data and improves MWF quantification in white matter structures, especially in the subcortical white matter and major white-matter tract regions.
  • An accurate determination of the T2 distribution in the short-T2 pool regime is not feasible, given that the shortest achievable echo time is ∼7–10 ms.
  • The voxel-wise estimation of the myelin water fraction depends on three scalar values rather than entire T2 distributions (refer to appendix F).
  • The proposed method utilizes 3D spatial correlations present in in-vivo tissues and the underlying effective B1+-inhomogeneity map, iteratively refining both maps to enhance the noise robustness of the reconstruction.
  • Using a measured B1+-map to correct for stimulated echo contributions would be suboptimal for reasons mentioned in the discussion section.
A Multiscale Hierarchical Model for Sparse Hyperspectral Unmixing
Due to the complex background and low spatial resolution of hyperspectral sensors, the observed ground reflectance is often mixed at the pixel level. Hyperspectral unmixing (HU) is a hot issue in the remote sensing area because it can decompose the observed mixed pixel reflectance. Traditional sparse hyperspectral unmixing often leads to an ill-posed inverse problem, which can be circumvented by spatial regularization approaches. However, their adoption has come at the expense of a massive increase in computational cost. In this paper, a novel multiscale hierarchical model for sparse hyperspectral unmixing is proposed. The paper decomposes HU into problems in two domains: one in an approximation-scale representation based on resampling the method's domain, and the other in the original domain. The use of multiscale spatial resampling methods for HU leads to an effective strategy for dealing with spectral variability and computational cost. Furthermore, the hierarchical strategy, with an abundance sparsity representation in each layer, aims to obtain the global optimal solution. Both simulations and real hyperspectral data experiments show that the proposed method outperforms previous methods in endmember extraction and abundance fraction estimation, and promotes piecewise homogeneity in the estimated abundance without compromising sharp discontinuities among neighboring pixels. Additionally, compared with total variation regularization, the proposed method effectively reduces computational time.
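The per-pixel core that sparse and spatially regularized unmixing methods build on is a non-negative least-squares fit of abundances to an endmember matrix. Below is a minimal projected-gradient sketch; the endmember matrix `E`, the step size, and the iteration count are illustrative assumptions, not the paper's multiscale hierarchical algorithm:

```python
import numpy as np

def unmix(E, y, iters=500, lr=None):
    """Estimate non-negative abundances `a` for one pixel spectrum `y`
    given an endmember matrix E (bands x endmembers) by projected
    gradient descent on ||E a - y||^2 with a >= 0."""
    if lr is None:
        lr = 1.0 / np.linalg.norm(E.T @ E, 2)  # step from Lipschitz bound
    a = np.full(E.shape[1], 1.0 / E.shape[1])  # uniform initial abundances
    for _ in range(iters):
        grad = E.T @ (E @ a - y)
        a = np.maximum(0.0, a - lr * grad)     # project onto a >= 0
    return a
```

Spatially regularized variants add a coupling term between the abundance vectors of neighboring pixels, which is exactly where the computational cost the paper targets comes from.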
Spatially informed voxelwise modeling for naturalistic fMRI experiments
Voxelwise modeling (VM) is a powerful framework to predict single-voxel responses evoked by a rich set of stimulus features present in complex natural stimuli. However, because VM disregards correlations across neighboring voxels, its sensitivity in detecting functional selectivity can be diminished in the presence of high levels of measurement noise. Here, we introduce spatially informed voxelwise modeling (SPIN-VM) to take advantage of response correlations in spatial neighborhoods of voxels. To optimally utilize shared information, SPIN-VM performs regularization across spatial neighborhoods in addition to model features, while still generating single-voxel response predictions. We demonstrated the performance of SPIN-VM on a rich dataset from a natural vision experiment. Compared to VM, SPIN-VM yields higher prediction accuracies and better captures locally congruent information representations across cortex. These results suggest that SPIN-VM offers improved performance in predicting single-voxel responses and recovering coherent information representations.
  • A novel spatially informed voxelwise modeling (SPIN-VM) technique is proposed.
  • Correlations across neighboring voxels are leveraged during estimation of functional selectivity.
  • Compared to VM, SPIN-VM offers improved accuracy in predicting single-voxel BOLD responses.
  • SPIN-VM is more sensitive in revealing coherent information representations across cortex.
  • SPIN-VM is a powerful method for modeling fMRI data from naturalistic experiments.
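Regularizing across spatial neighborhoods in addition to model features can be illustrated with a small closed-form sketch: a per-voxel ridge regression augmented with a graph-Laplacian penalty that pulls the weight vectors of neighboring voxels together. This is a simplified stand-in under assumed penalties, not the authors' SPIN-VM implementation:

```python
import numpy as np

def spin_ridge(X, Y, L, lam=1.0, gam=1.0):
    """Jointly fit one linear model per voxel, minimizing
    ||Y - X W||^2 + lam ||W||^2 + gam tr(W L W^T), where
    X: (n_samples, n_features) stimulus features,
    Y: (n_samples, n_voxels) voxel responses,
    L: (n_voxels, n_voxels) graph Laplacian of the voxel neighborhood.
    The Laplacian term couples the weight columns of neighboring voxels."""
    n, f = X.shape
    v = Y.shape[1]
    # Normal equations for vec(W) (columns stacked, Fortran order)
    A = (np.kron(np.eye(v), X.T @ X + lam * np.eye(f))
         + gam * np.kron(L, np.eye(f)))
    b = (X.T @ Y).reshape(f * v, order='F')
    w = np.linalg.solve(A, b)
    return w.reshape(f, v, order='F')  # one weight column per voxel
```

With `gam=0` (or an empty neighborhood graph) this reduces exactly to independent per-voxel ridge regression, which is the plain-VM baseline.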
An improved kernel correlation filter for complex scenes target tracking
Achieving efficient and accurate tracking of targets in complex scenes has long been a challenge for video tracking technology. Although current target trackers achieve quality results in terms of accuracy, robustness, and speed, complex scenes containing target-like interference still cause a series of issues, such as template drift. We propose an improved kernel correlation filter algorithm in response to this problem. We introduce a regularization matrix and fuse HOG and CN features to train an improved kernel correlation filter. Furthermore, an independent scale filter is employed to regulate the scale adaptively. We also introduce a re-detection module to prevent the kernel correlation filter algorithm from relying solely on the maximum response value. A considerable number of experiments have been conducted on the aforementioned improvements. The algorithm's average tracking accuracy reaches 85.8% on the OTB2015 dataset, and its running speed reaches 198 FPS. On the VOT2016 dataset, the algorithm's EAO, accuracy, and robustness reach 0.303, 0.553, and 0.932, respectively. Experiments demonstrate that our algorithm has satisfactory accuracy and robustness and meets real-time requirements.
Adaptive Spatial-Temporal Regularization for Correlation Filters Based Visual Object Tracking
Recently, Discriminative Correlation Filters (DCF) have shown excellent performance in visual object tracking. The correlation for computing the response map can be conducted efficiently in the Fourier domain via the Discrete Fourier Transform (DFT) of the inputs, where the DFT of an image has symmetry in the Fourier domain. To enhance the robustness and discriminative ability of the filters, many efforts have been devoted to optimizing the learning process. Regularization methods used in existing DCF trackers, such as spatial regularization or temporal regularization, aim to enhance the capacity of the filters. However, most existing methods still fail to deal with severe appearance variations, in particular large scale and aspect ratio changes. In this paper, we propose a novel framework that employs adaptive spatial regularization and temporal regularization to learn reliable filters in both the spatial and temporal domains for tracking. To alleviate the influence of the background and distractors on non-rigid target objects, two sub-models are combined, and multiple features are utilized to learn robust correlation filters. In addition, most DCF trackers that apply a 1-dimensional scale-space search method suffer from appearance changes such as non-rigid deformation. We propose a 2-dimensional scale-space search method to find appropriate scales to adapt to large scale and aspect ratio changes. We perform comprehensive experiments on four benchmarks: OTB-100, VOT-2016, VOT-2018, and LaSOT. The experimental results illustrate the effectiveness of our tracker, which achieves competitive tracking performance. On OTB-100, our tracker achieved a gain of 0.8% in success compared to the best existing DCF trackers. On VOT2018, our tracker outperformed the top DCF trackers with a gain of 1.1% in Expected Average Overlap (EAO). On LaSOT, we obtained a gain of 5.2% in success compared to the best DCF trackers.
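The DCF building block these trackers extend is a ridge regression solved elementwise in the Fourier domain; spatial and temporal regularization schemes then replace the constant penalty with spatially varying weights and add a temporal consistency term. A minimal single-channel sketch (the constant `lam` and single feature channel are simplifying assumptions):

```python
import numpy as np

def train_cf(x, y, lam=1e-2):
    """Closed-form filter training in the Fourier domain:
    minimize ||h * x - y||^2 + lam ||h||^2 over filters h, which gives
    H = conj(X) Y / (|X|^2 + lam), elementwise over frequencies."""
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return np.conj(X) * Y / (np.conj(X) * X + lam)

def respond(H, z):
    """Correlation response of filter H to a new image patch z."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(z)))
```

Training on a patch with a desired response peaked at the target location lets `respond` localize the target in subsequent frames by its response maximum.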
Robust UAV Target Tracking Algorithm Based on Saliency Detection
Due to their high efficiency and real-time performance, discriminative correlation filter (DCF) trackers have been widely applied in unmanned aerial vehicle (UAV) tracking. However, the robustness of existing trackers is still poor when facing complex scenes, such as background clutter, occlusion, camera motion, and scale variations. In response to this problem, this paper proposes a robust UAV target tracking algorithm based on saliency detection (SDBCF). Using saliency detection methods, the DCF tracker is optimized in three aspects to enhance its robustness in complex scenes: feature fusion, filter model construction, and scale estimation. Firstly, this article analyzes features in both the spatial and temporal dimensions, evaluates the representational and discriminative abilities of different features, and achieves adaptive feature fusion. Secondly, this paper constructs a dynamic spatial regularization term using a mask that fits the target, and integrates it with a second-order differential regularization term into the DCF framework to construct a novel filter model, which is solved using the ADMM method. Next, this article uses saliency detection to supervise the aspect ratio of the target, and trains a scale filter in the continuous domain to improve the tracker's adaptability to scale variations. Finally, comparative experiments were conducted against various DCF trackers on three UAV datasets: UAV123, UAV20L, and DTB70. The DP and AUC scores of SDBCF on the three datasets were (71.5%, 58.9%), (63.0%, 57.8%), and (72.1%, 48.4%), respectively. The experimental results indicate that SDBCF achieves superior performance.
Polarimetric Contextual Classification of PolSAR Images Using Sparse Representation and Superpixels
In recent years, sparse representation-based techniques have shown great potential for pattern recognition problems. In this paper, the problem of polarimetric synthetic aperture radar (PolSAR) image classification is investigated using sparse representation-based classifiers (SRCs). We propose to take advantage of both polarimetric information and contextual information by combining sparsity-based classification methods with the concept of superpixels. Based on polarimetric feature vectors constructed by stacking a variety of polarimetric signatures and a superpixel map, two strategies are considered to perform polarimetric-contextual classification of PolSAR images. The first strategy starts by classifying the PolSAR image with a pixel-wise SRC. Then, spatial regularization is imposed on the pixel-wise classification map by using majority voting within superpixels. In the second strategy, the PolSAR image is classified by taking superpixels as processing elements. The joint sparse representation-based classifier (JSRC) is employed to combine the polarimetric information contained in feature vectors and the contextual information provided by superpixels. Experimental results on real PolSAR datasets demonstrate the feasibility of the proposed approaches. The results show that classification performance is improved by using contextual information. A comparison with several other approaches also verifies the effectiveness of the proposed approach.
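The first strategy's spatial regularization step (majority voting within superpixels) is straightforward to sketch: every pixel takes the most frequent class label within its superpixel. A minimal NumPy version:

```python
import numpy as np

def majority_vote(labels, superpixels):
    """Spatially regularize a pixel-wise classification map: assign each
    pixel the most frequent class label within its superpixel.
    `labels` and `superpixels` are integer arrays of the same shape."""
    out = np.empty_like(labels)
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        vals, counts = np.unique(labels[mask], return_counts=True)
        out[mask] = vals[np.argmax(counts)]  # majority label in superpixel
    return out
```

This removes isolated misclassified pixels at the cost of assuming each superpixel is class-homogeneous, which is exactly the contextual assumption the paper exploits.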