86 results for "De Vleeschouwer, Christophe"
On the Importance of Diversity When Training Deep Learning Segmentation Models with Error-Prone Pseudo-Labels
The key to training deep learning (DL) segmentation models lies in the collection of annotated data. The annotation process is, however, generally expensive in human resources. Our paper leverages deep or traditional machine learning methods trained on a small set of manually labeled data to automatically generate pseudo-labels on large datasets, which are then used to train so-called data-reinforced deep learning models. The relevance of the approach is demonstrated in two application scenarios that differ both in task and in pseudo-label generation procedure, enlarging the scope of the outcomes of our study. Our experiments reveal that (i) data reinforcement helps, even with error-prone pseudo-labels, (ii) convolutional neural networks have the capability to regularize their training with respect to labeling errors, and (iii) there is an advantage to increasing diversity when generating the pseudo-labels, either by enriching the manual annotation through accurate annotation of singular samples, or by considering soft pseudo-labels per sample when prior information is available about their certainty.
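To make point (iii) concrete, here is a minimal sketch of training against soft pseudo-labels, where per-pixel target distributions encode labeling certainty; the shapes, the softening rule, and the loss form are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def soft_cross_entropy(logits, soft_labels, eps=1e-12):
    """logits: (H, W, C) raw network outputs; soft_labels: (H, W, C)
    per-pixel target distributions whose spread encodes label certainty."""
    z = logits - logits.max(axis=-1, keepdims=True)   # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    # Cross-entropy between the soft target and the prediction, pixel-averaged.
    return -(soft_labels * np.log(probs + eps)).sum(axis=-1).mean()

rng = np.random.default_rng(0)
logits = rng.standard_normal((4, 4, 3))
hard = np.eye(3)[rng.integers(0, 3, size=(4, 4))]  # one-hot pseudo-labels
soft = 0.7 * hard + 0.3 / 3                        # softened where labels are uncertain
print(soft_cross_entropy(logits, soft))
```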
Poly-CAM: high resolution class activation map for convolutional neural networks
The demand for explainable AI continues to rise alongside advances in deep learning technology. Existing saliency methods often struggle to accurately pinpoint the image features justifying a convolutional neural network's prediction: CAM-based approaches produce low-resolution maps, perturbation-based techniques yield smooth but imprecise visualizations, and gradient-based approaches produce numerous isolated peaky spots. In response, our work merges information from earlier and later layers within the network to create high-resolution class activation maps that remain competitive with previous art on insertion-deletion faithfulness metrics while significantly surpassing it in the precision with which class-specific features are localized.
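As a rough illustration of the layer-fusion idea, the sketch below upsamples a coarse, deep-layer class activation map and modulates it with higher-resolution activations from an earlier layer; this is a simplification under assumed shapes, not the exact Poly-CAM recurrence.

```python
import numpy as np

def upsample2x(m):
    """Nearest-neighbour 2x upsampling via a Kronecker product."""
    return np.kron(m, np.ones((2, 2)))

def refine_cam(coarse_cam, finer_activations):
    """coarse_cam: (h, w) class activation map from a deep layer.
    finer_activations: (2h, 2w) channel-pooled activations of an earlier layer."""
    up = upsample2x(coarse_cam)
    fused = up * finer_activations           # modulate by fine spatial evidence
    lo, hi = fused.min(), fused.max()
    return (fused - lo) / (hi - lo + 1e-12)  # renormalize to [0, 1]

cam = np.random.rand(7, 7)         # low-resolution CAM from a deep layer
act = np.random.rand(14, 14)       # higher-resolution earlier-layer activations
print(refine_cam(cam, act).shape)  # (14, 14); apply progressively across layers
```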
Cross-Domain Data Augmentation for Deep-Learning-Based Male Pelvic Organ Segmentation in Cone Beam CT
For prostate cancer patients, large organ deformations occurring between radiotherapy treatment sessions create uncertainty about the doses delivered to the tumor and surrounding healthy organs. Segmenting those regions on cone beam CT (CBCT) scans acquired on treatment day would reduce such uncertainties. In this work, a 3D U-net deep-learning architecture was trained to segment bladder, rectum, and prostate on CBCT scans. Due to the scarcity of contoured CBCT scans, the training set was augmented with CT scans already contoured in the current clinical workflow. Our network was then tested on 63 CBCT scans. The Dice similarity coefficient (DSC) increased significantly with the number of CBCT and CT scans in the training set, reaching 0.874 ± 0.096, 0.814 ± 0.055, and 0.758 ± 0.101 for bladder, rectum, and prostate, respectively. This was about 10% better than conventional approaches based on deformable image registration between planning CT and treatment CBCT scans, except for prostate. Interestingly, adding 74 CT scans to the CBCT training set allowed maintaining high DSCs, while halving the number of CBCT scans. Hence, our work showed that although CBCT scans included artifacts, cross-domain augmentation of the training set was effective and could rely on large datasets available for planning CT scans.
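The Dice similarity coefficient reported above is the standard overlap measure DSC = 2|A ∩ B| / (|A| + |B|); a small worked example on binary masks:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(pred, gt))  # 2*2 / (3 + 3) ≈ 0.667
```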
Involvement of human ribosomal proteins in nucleolar structure and p53-dependent nucleolar stress
The nucleolus is a potent disease biomarker and a target in cancer therapy. Ribosome biogenesis is initiated in the nucleolus, where most ribosomal (r-) proteins assemble onto precursor rRNAs. Here we systematically investigate how depletion of each of the 80 human r-proteins affects nucleolar structure, pre-rRNA processing, mature rRNA accumulation, and p53 steady-state level. We developed an image-processing programme for qualitative and quantitative discrimination of normal from altered nucleolar morphology. Remarkably, we find that uL5 (formerly RPL11) and uL18 (RPL5) are the strongest contributors to nucleolar integrity. Together with the 5S rRNA, they form the late-assembling central protuberance on mature 60S subunits and act as an Hdm2 trap and p53 stabilizer. Other major contributors to p53 homeostasis are also strictly late-assembling large-subunit r-proteins essential to nucleolar structure. The identification of the r-proteins that specifically contribute to maintaining nucleolar structure and p53 steady-state level provides insights into fundamental aspects of cell and cancer biology. The nucleolus is a specialized functional domain of the nucleus where ribosome biogenesis is initiated; it is also implicated in p53-dependent anti-tumor surveillance. Here the authors use a quantitative imaging approach to detail the role of each ribosomal protein in the structural integrity of the nucleolus and p53 homeostasis.
Efficient and robust bitstream processing in binarised neural networks
In the neural network context, covering a variety of applications, binarised networks, which represent both weights and activations as single-bit binary values, provide computationally attractive solutions. A lightweight binarised neural network system can be constructed using only logic gates and counters together with a two-valued activation function unit. However, because binarised neural networks represent the weights and the neuron outputs with only one bit, they are sensitive to bit-flipping errors. To cope with this error sensitivity, binarised weights and neurons are manipulated as bitstreams using stochastic computing. Stochastic computing is shown to provide robustness to bit errors on data while being built on a hardware structure whose implementation is simplified by a novel subtraction-free implementation of the neuron activation.
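A minimal sketch of the bitstream idea, assuming bipolar stochastic coding: values in [-1, 1] become Bernoulli bitstreams, multiplication reduces to XNOR, and isolated bit flips only nudge the decoded value, which is the robustness the paper exploits. The stream length and coding scheme are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096  # stream length trades accuracy for latency

def encode(x):
    """Bipolar coding: P(bit = 1) = (x + 1) / 2 for x in [-1, 1]."""
    return rng.random(N) < (x + 1) / 2

def decode(bits):
    return 2 * bits.mean() - 1

w, a = encode(0.5), encode(-0.25)
prod = ~(w ^ a)                 # XNOR implements bipolar multiplication
print(decode(prod))             # ≈ 0.5 * -0.25 = -0.125

flipped = prod.copy()
flipped[:40] ^= True            # 40 bit flips out of 4096
print(decode(flipped))          # still ≈ -0.125: graceful degradation
```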
Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks
Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANETs. The proposed packet-based scheme has low complexity and is compliant with JPWL, Part 11 of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.
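As a hedged sketch of what FEC rate allocation involves, the code below greedily assigns redundancy packets of an MDS erasure code across quality layers to maximize expected utility under a rate budget; the loss model, candidate codes, and utility values are illustrative assumptions, not the JPWL-compliant scheme derived in the paper.

```python
from math import comb

def p_decodable(n, k, p_loss):
    """Probability that at most n - k of n packets are lost (MDS erasure code)."""
    return sum(comb(n, i) * p_loss**i * (1 - p_loss)**(n - i)
               for i in range(n - k + 1))

def allocate(layers, codes, budget, p_loss):
    """layers: list of (utility, k source packets); codes: allowed n per k.
    Greedily spend redundancy where it buys the most expected utility."""
    choice = {i: k for i, (_, k) in enumerate(layers)}  # start without parity
    spent = sum(choice.values())
    while spent < budget:
        best, gain = None, 0.0
        for i, (u, k) in enumerate(layers):
            n = choice[i]
            if n + 1 in codes.get(k, ()):
                g = u * (p_decodable(n + 1, k, p_loss) - p_decodable(n, k, p_loss))
                if g > gain:
                    best, gain = i, g
        if best is None:
            break
        choice[best] += 1
        spent += 1
    return choice

layers = [(10.0, 8), (4.0, 8)]    # (utility, source packets) per quality layer
codes = {8: set(range(8, 17))}    # allowed codeword lengths for k = 8
print(allocate(layers, codes, budget=22, p_loss=0.1))
```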
Consensus-based trajectory estimation for ball detection in calibrated camera systems
This paper considers the detection of the ball in team sport scenes observed with still or motion-compensated calibrated cameras. Foreground masks provide primary cues to identify circular moving objects in the scene, but are shown to be too noisy to achieve reliable detection of weakly contrasted balls, especially when a single viewpoint is available, as often desired to reduce deployment cost. In those cases, trajectory analysis has been shown to provide valuable complementary information to differentiate true and false positives among the candidates detected by the foreground mask(s). In this paper, we focus on the detection of ball trajectory segments, exclusively from visual cues, without considering semantic reasoning about team play to connect those segments into long trajectories. We revisit several recent works, mainly those we presented in Amit Kumar et al. (ICDSC) [1] and Parisot and De Vleeschouwer (ICME) [2], and introduce a publicly available dataset to compare them. We conclude that randomized consensus-based methods are competitive with the alternative deterministic graph-based solutions, while offering the additional advantage of extending naturally to the cost-effective single-view scenario. As an original contribution, we also introduce a procedure to efficiently clean up the foreground mask in correlation-based methods like [1], and a nonlinear rank-order filter to merge the foreground cues from multiple viewpoints. We also derive recommendations regarding camera positioning and the buffering needs of a real-time acquisition system.
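To illustrate the consensus principle, here is a minimal RANSAC-style sketch that fits a ballistic (quadratic-in-time) model to candidate detections and keeps the hypothesis with the largest consensus; the thresholds, the single image coordinate, and the synthetic data are assumptions, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_quadratic(t, y):
    return np.polyfit(t, y, 2)   # ballistic model: y(t) = a t^2 + b t + c

def ransac_trajectory(cands, n_iter=200, tol=0.05, min_inliers=6):
    """cands: (N, 2) array of (t, y) ball candidates, true hits plus clutter."""
    t, y = cands[:, 0], cands[:, 1]
    best = None, 0
    for _ in range(n_iter):
        idx = rng.choice(len(t), size=3, replace=False)  # minimal sample
        coef = fit_quadratic(t[idx], y[idx])
        inliers = np.abs(np.polyval(coef, t) - y) < tol
        if inliers.sum() > best[1] and inliers.sum() >= min_inliers:
            best = fit_quadratic(t[inliers], y[inliers]), int(inliers.sum())
    return best

# Synthetic candidates: a true parabola plus false positives from noisy masks.
t_true = np.linspace(0, 1, 15)
true = np.c_[t_true, -4.9 * t_true**2 + 5 * t_true + 1.5 + rng.normal(0, 0.01, 15)]
clutter = np.c_[rng.random(10), rng.uniform(0, 3, 10)]
coef, n_in = ransac_trajectory(np.vstack([true, clutter]))
print(coef, n_in)  # recovered ballistic coefficients and consensus size
```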
Use of the iNo score to discriminate normal from altered nucleolar morphology, with applications in basic cell biology and potential in human disease diagnostics
Ribosome biogenesis is initiated in the nucleolus, a cell condensate essential to gene expression, whose morphology informs cancer pathologists on the health status of a cell. Here, we describe a protocol for assessing, both qualitatively and quantitatively, the involvement of trans-acting factors in the nucleolar structure. The protocol involves use of siRNAs to deplete cells of factors of interest, fluorescence imaging of nucleoli in an automated high-throughput platform, and use of dedicated software to determine an index of nucleolar disruption, the iNo score. This scoring system is unique in that it integrates the five most discriminant shape and textural features of the nucleolus into a parametric equation. Determining the iNo score enables both qualitative and quantitative factor classification with prediction of function (functional clustering), which to our knowledge is not achieved by competing approaches, and stratification of factor effects (severity of defects) on nucleolar structure. The iNo score has the potential to be useful in basic cell biology (nucleolar structure–function relationships, mitosis, and senescence), developmental and/or organismal biology (aging), and clinical practice (cancer, viral infection, and reproduction). The entire protocol can be completed within 1 week.
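As a hedged sketch of the scoring idea only: five discriminant features are standardized against a control distribution and fused by a parametric equation. The feature names and weights below are placeholders, not the published iNo coefficients.

```python
import numpy as np

FEATURES = ["area", "roundness", "texture_contrast", "solidity", "perimeter_ratio"]
WEIGHTS = np.array([0.3, 0.25, 0.2, 0.15, 0.1])  # hypothetical weights

def ino_like_score(sample, control_mean, control_std):
    """sample: five raw feature values for one nucleolus; control cells set
    the scale, so the score measures deviation from normal morphology."""
    z = (np.asarray(sample) - control_mean) / control_std  # standardize
    return float(np.abs(z) @ WEIGHTS)                      # weighted deviation

ctrl_mu = np.array([50.0, 0.9, 1.2, 0.95, 1.1])
ctrl_sd = np.array([8.0, 0.05, 0.3, 0.03, 0.15])
print(ino_like_score([72.0, 0.7, 2.4, 0.85, 1.6], ctrl_mu, ctrl_sd))
```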
Specifying the graphic characteristics of words that influence children’s handwriting
Research about the development of the graphomotor side of writing is very scarce. The goal of this study was to gain a better understanding of what constitutes the graphic complexity of written material by determining the impact of graphic characteristics on handwriting production. To this end, the pen stroke of cursive handwriting was precisely described through an algorithm detecting seven graphic characteristics: the number of angles, turn-backs, curves in X and Y, pen-ups, and modified links. Twenty typically developing children in grade 2 completed a single-word dictation task, composed of 48 items, on a digital writing tablet. All 48 words were regular words, highly frequent for second graders, and varied in terms of their graphic structure. Their handwriting production for each word was assessed in terms of both legibility and speed. A general linear mixed model was run to determine the impact of each graphic characteristic on handwriting performance. In agreement with our hypothesis, the results showed that words have different levels of graphic complexity. The following characteristics, in order of importance, significantly and negatively influenced handwriting legibility: modified links, angles, curves, pen-ups, and length. Regarding speed, angles were the only characteristic that made children slow down while handwriting. These findings are novel in the field of research on writing: unlike the usual approaches, this study investigated graphic complexity at the word level. It offers for the first time a universal classification of the graphic characteristics of words and enables the quantification of the graphic complexity of words.
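For readers unfamiliar with it, the analysis shape resembles the mixed-model fit sketched below: legibility regressed on graphic characteristics with a random intercept per child. Variable names and the synthetic data are assumptions; the paper's exact model specification may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_children, n_words = 20, 48
df = pd.DataFrame({
    "child": np.repeat(np.arange(n_children), n_words),
    "angles": rng.integers(0, 8, n_children * n_words),
    "pen_ups": rng.integers(0, 4, n_children * n_words),
})
# Legibility decreases with graphic complexity, plus per-child variation.
child_fx = rng.normal(0, 0.5, n_children)[df["child"]]
df["legibility"] = (5 - 0.2 * df["angles"] - 0.3 * df["pen_ups"]
                    + child_fx + rng.normal(0, 0.3, len(df)))

# Linear mixed model: fixed effects for characteristics, random intercept per child.
model = smf.mixedlm("legibility ~ angles + pen_ups", df, groups=df["child"])
print(model.fit().summary())
```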
Parity Bit Replenishment for JPEG 2000-Based Video Streaming
This paper envisions coding with side information to design a highly scalable video codec. To achieve fine-grained scalability in terms of resolution, quality, and spatial access as well as temporal access to individual frames, the JPEG 2000 coding algorithm is considered as the reference algorithm to encode INTRA information, and coding with side information is envisioned to refresh the blocks that change between two consecutive images of a video sequence. One advantage of coding with side information compared to conventional closed-loop hybrid video coding schemes lies in the fact that parity bits are designed to correct stochastic errors rather than to encode deterministic prediction errors. This enables the codec to tolerate some desynchronization between the encoder and the decoder, which is particularly helpful for adapting pre-encoded content on the fly to fluctuating network resources and/or user preferences in terms of regions of interest. Regarding the coding scheme itself, to preserve both quality scalability and compliance with the JPEG 2000 wavelet representation, particular attention has been devoted to the definition of a practical coding framework able to exploit not only the temporal but also the spatial correlation among wavelet subband coefficients, while computing the parity bits on subsets of wavelet bit-planes. Simulations show that, compared to pure INTRA-based conditional replenishment solutions, adding the parity-bit option decreases the transmission cost in terms of bandwidth while preserving access flexibility.
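As a hedged sketch of the conditional-replenishment decision described above: per block, compare the current frame to the previous one and either keep the block, send parity bits to correct small stochastic mismatches, or refresh it INTRA. The thresholds and the mean-difference criterion are placeholders; the paper's parity mechanism actually operates on subsets of wavelet bit-planes.

```python
import numpy as np

def replenishment_plan(prev, curr, block=8, t_skip=2.0, t_parity=12.0):
    """Classify each block of the current frame by how much it changed."""
    h, w = curr.shape
    plan = {}
    for by in range(0, h, block):
        for bx in range(0, w, block):
            d = np.abs(curr[by:by+block, bx:bx+block].astype(float)
                       - prev[by:by+block, bx:bx+block]).mean()
            if d < t_skip:
                plan[(by, bx)] = "skip"    # decoder keeps the previous block
            elif d < t_parity:
                plan[(by, bx)] = "parity"  # side-information correction suffices
            else:
                plan[(by, bx)] = "intra"   # full JPEG 2000 refresh
    return plan

rng = np.random.default_rng(3)
prev = rng.integers(0, 200, (16, 16))
curr = prev.copy()
curr[8:, 8:] += 40                         # one block changes strongly
print(replenishment_plan(prev, curr))
```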