Catalogue Search | MBRL
75 result(s) for "adaptive sample selection"
Scattering-Point-Guided Oriented RepPoints for Ship Detection
by Huang, Lijia; Zhao, Weishan; Yan, Chaobao
in adaptive sample selection, Adaptive sampling, Algorithms
2024
Ship detection finds extensive applications in fisheries management, maritime rescue, and surveillance. However, detecting nearshore targets in SAR images is challenging due to land scattering interference and non-axisymmetric ship shapes. Existing SAR ship detection models struggle to adapt to oriented ship detection in complex nearshore environments. To address this, we propose an oriented-reppoints target detection scheme guided by scattering points in SAR images. Our method deeply integrates SAR image target scattering characteristics and designs an adaptive sample selection scheme guided by target scattering points. This incorporates scattering position features into the sample quality measurement scheme, providing the network with a higher-quality set of proposed reppoints. We also introduce a novel supervised guidance paradigm that uses target scattering points to guide the initialization of reppoints, mitigating the influence of land scattering interference on the initial reppoints quality. This achieves adaptive feature learning, enhancing the quality of the initial reppoints set and the performance of object detection. Our method has been extensively tested on the SSDD and HRSID datasets, where we achieved mAP scores of 89.8% and 80.8%, respectively. These scores represent significant improvements over the baseline methods, demonstrating the effectiveness and robustness of our approach. Additionally, our method exhibits strong anti-interference capabilities in nearshore detection and has achieved state-of-the-art performance.
Journal Article
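The adaptive sample selection described in the abstract above can be illustrated with a small sketch: candidate samples are scored not only by the detector's own confidence but also by how many extracted scattering points they cover, and the top-scoring candidates are kept as positives. This is a minimal, hypothetical illustration with axis-aligned boxes and made-up names (`scattering_guided_quality`, `w_cls`, `w_scatter`); the paper itself works with oriented reppoints and a more elaborate quality measure.

```python
import numpy as np

def scattering_guided_quality(candidate_scores, candidate_boxes, scatter_points,
                              w_cls=0.5, w_scatter=0.5):
    """Score candidates by mixing the detector's confidence with how well each
    candidate box covers strong scattering points (illustrative simplification).

    candidate_scores : (N,) classification/localisation quality per candidate
    candidate_boxes  : (N, 4) axis-aligned candidate boxes (x1, y1, x2, y2)
    scatter_points   : (M, 2) coordinates of extracted scattering centres
    """
    quality = np.zeros(len(candidate_boxes))
    for i, (x1, y1, x2, y2) in enumerate(candidate_boxes):
        inside = ((scatter_points[:, 0] >= x1) & (scatter_points[:, 0] <= x2) &
                  (scatter_points[:, 1] >= y1) & (scatter_points[:, 1] <= y2))
        coverage = inside.mean() if len(scatter_points) else 0.0
        quality[i] = w_cls * candidate_scores[i] + w_scatter * coverage
    return quality

def select_positive_samples(quality, k=9):
    """Adaptive sample selection step: keep the top-k candidates by quality."""
    return np.argsort(quality)[::-1][:k]
```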
Salient object segmentation based on depth-aware image layering
2019
This paper proposes an efficient salient object segmentation method via depth-aware image layering. First, based on the multiscale region segmentation results of an input color image, depth consistency integration is used to generate a pre-segmentation of the image. Then, under the guidance of the depth histogram division, the pre-segmented regions are divided into several layers to separate salient object regions from background regions. Finally, an adaptive sample update and selection method based on the layered image regions selects appropriate training samples for salient object segmentation. The depth information of the image is fully utilized at each step of the framework. Experimental results on two public datasets demonstrate that the proposed method achieves better performance than state-of-the-art depth-aware salient object segmentation methods.
Journal Article
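As a rough illustration of the histogram-guided layering step described above, the sketch below splits pre-segmented regions into depth layers at the valleys of a depth histogram. The function name `layer_regions_by_depth` and the bin count are assumptions; the paper's actual histogram division and the subsequent sample update and selection are more involved.

```python
import numpy as np

def layer_regions_by_depth(region_mean_depths, n_bins=32):
    """Assign each pre-segmented region to a depth layer by cutting the
    depth histogram at its valleys (a simplification of histogram division)."""
    hist, edges = np.histogram(region_mean_depths, bins=n_bins)
    # valleys: interior bins whose count is lower than both neighbours
    valleys = [i for i in range(1, n_bins - 1)
               if hist[i] < hist[i - 1] and hist[i] < hist[i + 1]]
    cut_points = [edges[i + 1] for i in valleys]
    # layer index of each region, determined by the cut points it falls between
    return np.digitize(region_mean_depths, cut_points)
```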
No regret sample selection with noisy labels
by Suehiro, Daiki; Mitsuo, Nariaki; Uchida, Seiichi
in Accuracy, Adaptive sampling, Artificial Intelligence
2024
Deep neural networks (DNNs) suffer from noisy-labeled data because of the risk of overfitting. To avoid this risk, we propose a novel DNN training method with sample selection based on adaptive k-set selection, which at each epoch selects k (< n, where n is the number of training samples) samples with a small noise risk from the whole set of n noisy training samples. Its strong advantage is that the performance of the selection is guaranteed theoretically: roughly speaking, the regret of the proposed method, defined as the difference between the actual selection and the best selection, is bounded even though the best selection is unknown until the end of all epochs. Experimental results on multiple noisy-labeled datasets demonstrate that our sample selection strategy works effectively in DNN training; in fact, the proposed method achieved the best or second-best performance among state-of-the-art methods while requiring a significantly lower computational cost.
Journal Article
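The selection step described in the abstract can be approximated by a simple per-epoch "small-loss" rule: evaluate every training sample and keep only the k with the lowest loss for that epoch's update. The PyTorch sketch below, with hypothetical helper names, shows only this simplified rule; the paper's adaptive k-set selection additionally carries the no-regret guarantee, which this sketch does not reproduce.

```python
import torch
import torch.nn.functional as F

def select_low_risk_samples(model, inputs, labels, k):
    """Return indices of the k samples with the smallest per-sample loss,
    used here as a simple proxy for low noise risk."""
    model.eval()
    with torch.no_grad():
        logits = model(inputs)
        losses = F.cross_entropy(logits, labels, reduction="none")
    return torch.topk(-losses, k).indices  # indices of the smallest losses

def train_epoch(model, optimizer, inputs, labels, k):
    """One epoch: pick k low-risk samples, then update the model on them only."""
    idx = select_low_risk_samples(model, inputs, labels, k)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(inputs[idx]), labels[idx])
    loss.backward()
    optimizer.step()
    return loss.item()
```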
VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS
2010
We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is "small" relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method.
Journal Article
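The two-step procedure in the abstract (a group Lasso fit for an initial estimate, then an adaptively weighted group Lasso) can be sketched with a basic proximal-gradient solver. The code below is a minimal illustration assuming the design matrix already holds the B-spline expansion of each covariate, one column group per additive component; `lam1`, `lam2` and the iteration count are illustrative choices, not the paper's tuning procedure.

```python
import numpy as np

def block_soft_threshold(v, t):
    """Group-wise soft-thresholding operator (prox of the group-lasso penalty)."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= t else (1 - t / norm) * v

def group_lasso(X, y, groups, lam, weights=None, n_iter=500):
    """Proximal-gradient solver for the (weighted) group lasso.
    groups: list of column-index arrays, one per additive component."""
    n, p = X.shape
    if weights is None:
        weights = np.ones(len(groups))
    beta = np.zeros(p)
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)  # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        z = beta - step * grad
        for g, idx in enumerate(groups):
            beta[idx] = block_soft_threshold(z[idx], step * lam * weights[g])
    return beta

def adaptive_group_lasso(X, y, groups, lam1=0.1, lam2=0.05, eps=1e-8):
    """Two-step sketch: an initial group-lasso fit defines data-driven group
    weights, and a weighted group lasso is then refit with those weights."""
    beta0 = group_lasso(X, y, groups, lam1)
    w = np.array([1.0 / (np.linalg.norm(beta0[idx]) + eps) for idx in groups])
    return group_lasso(X, y, groups, lam2, weights=w)
```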
On the Adaptive Elastic-Net with a Diverging Number of Parameters
2009
We consider the problem of model selection and estimation in situations where the number of parameters diverges with the sample size. When the dimension is high, an ideal method should have the oracle property [J. Amer. Statist. Assoc. 96 (2001) 1348-1360; Ann. Statist. 32 (2004) 928-961], which ensures optimal large-sample performance. Furthermore, high dimensionality often induces the collinearity problem, which should be properly handled by the ideal method. Many existing variable selection methods fail to achieve both goals simultaneously. In this paper, we propose the adaptive elastic-net that combines the strengths of the quadratic regularization and the adaptively weighted lasso shrinkage. Under weak regularity conditions, we establish the oracle property of the adaptive elastic-net. We show by simulations that the adaptive elastic-net deals with the collinearity problem better than the other oracle-like methods, thus enjoying much improved finite sample performance.
Journal Article
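A hedged sketch of the two-step idea behind this estimator: fit an ordinary elastic-net to obtain initial coefficients, form adaptive L1 weights from them, and refit with those weights, handling the ridge part through the standard data-augmentation trick. The scikit-learn parameterization and the tuning-parameter mapping below are only illustrative, and the sketch omits the paper's finite-sample scaling correction.

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

def adaptive_elastic_net(X, y, lam1=0.1, lam2=1.0, gamma=1.0, eps=1e-6):
    """Illustrative two-step adaptive elastic-net style fit.

    Step 1: an ordinary elastic-net fit gives initial coefficients.
    Step 2: adaptive L1 weights w_j = (|beta_j| + eps)^(-gamma) are folded into
    a weighted lasso, with the ridge penalty carried by augmenting the design
    with sqrt(lam2) * I rows; the alpha/l1_ratio mapping is only approximate.
    """
    n, p = X.shape
    beta_init = ElasticNet(alpha=lam1 + lam2, l1_ratio=lam1 / (lam1 + lam2),
                           fit_intercept=False).fit(X, y).coef_
    w = (np.abs(beta_init) + eps) ** (-gamma)

    # augment the data so the squared-error term absorbs the ridge penalty
    X_aug = np.vstack([X, np.sqrt(lam2) * np.eye(p)])
    y_aug = np.concatenate([y, np.zeros(p)])

    # weighted lasso via column rescaling: beta_j = b_j / w_j
    X_scaled = X_aug / w
    b = Lasso(alpha=lam1, fit_intercept=False).fit(X_scaled, y_aug).coef_
    return b / w
```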
Multi-arm multi-stage (MAMS) randomised selection designs: impact of treatment selection rules on the operating characteristics
by Pinkney, Thomas; Choodari-Oskooei, Babak; Handley, Kelly
in Adaptive Clinical Trials as Topic, Adaptive trial designs, Analysis
2024
Background
Multi-arm multi-stage (MAMS) randomised trial designs have been proposed to evaluate multiple research questions in the confirmatory setting. In designs with several interventions, such as the 8-arm 3-stage ROSSINI-2 trial for preventing surgical wound infection, there are likely to be strict limits on the number of individuals that can be recruited or the funds available to support the protocol. These limitations may mean that not all research treatments can continue to accrue the required sample size for the definitive analysis of the primary outcome measure at the final stage. In these cases, an additional treatment selection rule can be applied at the early stages of the trial to restrict the maximum number of research arms that can progress to the subsequent stage(s).
This article provides guidelines on how to implement treatment selection within the MAMS framework. It explores the impact of treatment selection rules, interim lack-of-benefit stopping boundaries and the timing of treatment selection on the operating characteristics of the MAMS selection design.
Methods
We outline the steps to design a MAMS selection trial. Extensive simulation studies are used to explore the maximum/expected sample sizes, familywise type I error rate (FWER), and overall power of the design under both binding and non-binding interim stopping boundaries for lack-of-benefit.
Results
Pre-specification of a treatment selection rule reduces the maximum sample size by approximately 25% in our simulations. The familywise type I error rate of a MAMS selection design is smaller than that of a standard MAMS design with similar specifications but no additional treatment selection rule. In designs with strict selection rules (for example, when only one research arm is selected from 7 arms), the final-stage significance levels can be relaxed for the primary analyses to ensure that the overall type I error for the trial is not underspent. When conducting treatment selection from several treatment arms, it is important to select a large enough subset of research arms (that is, more than one research arm) at early stages to maintain the overall power at the pre-specified level.
Conclusions
Multi-arm multi-stage selection designs gain efficiency over the standard MAMS design by reducing the overall sample size. Diligent pre-specification of the treatment selection rule, the final-stage significance level and the interim stopping boundaries for lack-of-benefit is key to controlling the operating characteristics of a MAMS selection design. We provide guidance on these design features to ensure control of the operating characteristics.
Journal Article
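The operating-characteristic simulations described above can be mocked up in a few lines: simulate a two-stage design, apply a selection rule that keeps only the best-performing arm(s) at the interim analysis, and count how often any surviving arm is declared significant (FWER under the global null, power under a chosen effect). The sketch below ignores lack-of-benefit boundaries and binding/non-binding distinctions, assumes unit-variance normal outcomes, and all names and defaults are illustrative assumptions, not the trial's actual specification.

```python
import numpy as np
from scipy.stats import norm

def simulate_mams_selection(n_arms=7, n_select=1, n_stage=100, effect=None,
                            alpha_final=0.025, n_sims=10_000, rng=None):
    """Monte Carlo sketch of a two-stage MAMS design with treatment selection.
    Returns the probability that at least one selected arm is significant:
    FWER when effect is all zeros, power when a positive effect is supplied."""
    rng = rng if rng is not None else np.random.default_rng(0)
    effect = np.zeros(n_arms) if effect is None else np.asarray(effect)
    crit = norm.ppf(1 - alpha_final)
    any_rejection = 0
    for _ in range(n_sims):
        # stage-1 sample means for control and each research arm
        ctrl1 = rng.normal(0, 1, n_stage).mean()
        arms1 = np.array([rng.normal(effect[a], 1, n_stage).mean()
                          for a in range(n_arms)])
        z1 = (arms1 - ctrl1) / np.sqrt(2 / n_stage)
        selected = np.argsort(z1)[::-1][:n_select]      # treatment selection rule
        # stage-2 data for the selected arms only, pooled with stage 1
        ctrl = (ctrl1 + rng.normal(0, 1, n_stage).mean()) / 2
        rejected = False
        for a in selected:
            arm = (arms1[a] + rng.normal(effect[a], 1, n_stage).mean()) / 2
            z = (arm - ctrl) / np.sqrt(2 / (2 * n_stage))
            rejected = rejected or (z > crit)
        any_rejection += rejected
    return any_rejection / n_sims
```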
VARIABLE SELECTION IN LINEAR MIXED EFFECTS MODELS
2012
This paper is concerned with the selection and estimation of fixed and random effects in linear mixed effects models. We propose a class of nonconcave penalized profile likelihood methods for selecting and estimating important fixed effects. To overcome the difficulty of the unknown covariance matrix of random effects, we propose to use a proxy matrix in the penalized profile likelihood. We establish conditions on the choice of the proxy matrix and show that the proposed procedure enjoys model selection consistency where the number of fixed effects is allowed to grow exponentially with the sample size. We further propose a group variable selection strategy to simultaneously select and estimate important random effects, where the unknown covariance matrix of random effects is replaced with a proxy matrix. We prove that, with the proxy matrix appropriately chosen, the proposed procedure can identify all true random effects with asymptotic probability one, where the dimension of the random effects vector is allowed to increase exponentially with the sample size. Monte Carlo simulation studies are conducted to examine the finite-sample performance of the proposed procedures. We further illustrate the proposed procedures via a real data example.
Journal Article
A Population Genomic Assessment of Three Decades of Evolution in a Natural Drosophila Population
2022
Population genetics seeks to illuminate the forces shaping genetic variation, often based on a single snapshot of genomic variation. However, utilizing multiple sampling times to study changes in allele frequencies can help clarify the relative roles of neutral and non-neutral forces on short time scales. This study compares whole-genome sequence variation of recently collected natural population samples of Drosophila melanogaster against a collection made approximately 35 years prior from the same locality—encompassing roughly 500 generations of evolution. The allele frequency changes between these time points would suggest a relatively small local effective population size on the order of 10,000, significantly smaller than the global effective population size of the species. Some loci display stronger allele frequency changes than would be expected anywhere in the genome under neutrality—most notably the tandem paralogs Cyp6a17 and Cyp6a23, which are impacted by structural variation associated with resistance to pyrethroid insecticides. We find a genome-wide excess of outliers for high genetic differentiation between old and new samples, but a larger number of adaptation targets may have affected SNP-level differentiation versus window differentiation. We also find evidence for strengthening latitudinal allele frequency clines: northern-associated alleles have increased in frequency by an average of nearly 2.5% at SNPs previously identified as clinal outliers, but no such pattern is observed at random SNPs. This project underscores the scientific potential of using multiple sampling time points to investigate how evolution operates in natural populations, by quantifying how genetic variation has changed over ecologically relevant timescales.
Journal Article
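For readers wondering how allele-frequency change over roughly 500 generations translates into an effective population size estimate, a classic temporal estimator in the spirit of the Nei-Tajima/Waples method can be sketched as below. It is not necessarily the estimator used in this study; the function name and the handling of uninformative loci are assumptions for illustration.

```python
import numpy as np

def temporal_ne(p_old, p_new, s_old, s_new, generations):
    """Temporal estimate of effective population size from two allele-frequency
    samples taken `generations` apart.

    p_old, p_new : arrays of per-locus allele frequencies at the two time points
    s_old, s_new : number of sampled chromosomes at each time point
    """
    mid = (p_old + p_new) / 2 - p_old * p_new
    valid = mid > 0                       # drop loci fixed in both samples
    # Nei-Tajima standardized variance of allele-frequency change, per locus
    fc = (p_old[valid] - p_new[valid]) ** 2 / mid[valid]
    fc_mean = fc.mean()
    # Waples-style correction for sampling noise at both time points
    return generations / (2 * (fc_mean - 1 / (2 * s_old) - 1 / (2 * s_new)))
```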
Psychometrics Behind Computerized Adaptive Testing
2015
The paper provides a survey of 18 years' progress that my colleagues, students (both former and current) and I have made in a prominent research area in Psychometrics: Computerized Adaptive Testing (CAT). We start with a historical review of the establishment of a large-sample foundation for CAT. It is worth noting that the asymptotic results were derived within the framework of martingale theory, a deeply theoretical branch of probability theory that may seem unrelated to educational and psychological testing. In addition, we address a number of issues that emerged from large-scale implementation and show how theoretical work can help solve these problems. Finally, we propose that CAT technology can be very useful for supporting individualized instruction on a mass scale. We show that even paper-and-pencil tests can be made adaptive to support classroom teaching.
Journal Article
Sure independence screening for ultrahigh dimensional feature space
2008
Variable selection plays an important role in high dimensional statistical modelling which nowadays appears in many areas and is key to various scientific discoveries. For problems of large scale or dimensionality p, accuracy of estimation and computational cost are two top concerns. Recently, Candes and Tao have proposed the Dantzig selector using L₁-regularization and showed that it achieves the ideal risk up to a logarithmic factor log(p). Their innovative procedure and remarkable result are challenged when the dimensionality is ultrahigh as the factor log(p) can be large and their uniform uncertainty principle can fail. Motivated by these concerns, we introduce the concept of sure screening and propose a sure screening method that is based on correlation learning, called sure independence screening, to reduce dimensionality from high to a moderate scale that is below the sample size. In a fairly general asymptotic framework, correlation learning is shown to have the sure screening property for even exponentially growing dimensionality. As a methodological extension, iterative sure independence screening is also proposed to enhance its finite sample performance. With dimension reduced accurately from high to below sample size, variable selection can be improved on both speed and accuracy, and can then be accomplished by a well-developed method such as smoothly clipped absolute deviation, the Dantzig selector, lasso or adaptive lasso. The connections between these penalized least squares methods are also elucidated.
Journal Article
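The screening step itself is simple enough to sketch: rank predictors by absolute marginal correlation with the response and keep a number of them below the sample size before handing off to a penalized method such as SCAD, the Dantzig selector, the lasso or the adaptive lasso. The function below is an illustrative numpy version; the default cut-off d = n/log(n) follows the choice discussed in this literature, and the function name and interface are assumptions.

```python
import numpy as np

def sure_independence_screening(X, y, d=None):
    """Keep the d predictors with the largest absolute marginal correlation
    with the response (d chosen below the sample size)."""
    n, p = X.shape
    if d is None:
        d = int(n / np.log(n))            # common illustrative choice of d
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    yc = (y - y.mean()) / y.std()
    omega = np.abs(Xc.T @ yc) / n         # componentwise marginal correlations
    return np.argsort(omega)[::-1][:d]    # indices of the retained predictors
```

A penalized regression can then be fit on the retained columns only, which is the dimension-reduction handoff the abstract describes.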