Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
50 result(s) for "Kim Jungtaek"
Bayesian optimization with approximate set kernels
2021
We propose a practical Bayesian optimization method over sets, to minimize a black-box function that takes a set as a single input. Because set inputs are permutation-invariant, traditional Gaussian process-based Bayesian optimization strategies, which assume vector inputs, can fall short. To address this, we develop a Bayesian optimization method with a set kernel that is used to build surrogate functions. This kernel accumulates similarity over set elements to enforce permutation-invariance, but at a greater computational cost. To reduce this burden, we propose two key components: (i) a more efficient approximate set kernel, which is still positive-definite and is an unbiased estimator of the true set kernel with variance upper-bounded in terms of the number of subsamples, and (ii) a constrained acquisition function optimization over sets, which exploits the symmetry of the feasible region that defines a set input. Finally, we present several numerical experiments demonstrating that our method outperforms other methods.
Journal Article
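The abstract above describes a permutation-invariant set kernel and a subsampled approximation of it. Below is a minimal sketch of that idea, assuming an RBF base kernel and uniform subsampling without replacement; the hyperparameters and subsample size are illustrative, not the paper's.

```python
import numpy as np

def rbf(x, y, lengthscale=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * lengthscale ** 2))

def set_kernel(X, Y, lengthscale=1.0):
    # permutation-invariant: average the base kernel over all pairs of elements
    return float(np.mean([rbf(x, y, lengthscale) for x in X for y in Y]))

def approx_set_kernel(X, Y, num_subsamples, rng, lengthscale=1.0):
    # Monte Carlo estimate from a random subset of elements of each set;
    # uniform subsampling keeps the estimate unbiased for the full set kernel
    Xs = X[rng.choice(len(X), size=min(num_subsamples, len(X)), replace=False)]
    Ys = Y[rng.choice(len(Y), size=min(num_subsamples, len(Y)), replace=False)]
    return set_kernel(Xs, Ys, lengthscale)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))  # a set of 50 three-dimensional elements
Y = rng.normal(size=(60, 3))
print(set_kernel(X, Y), approx_set_kernel(X, Y, num_subsamples=10, rng=rng))
```

Subsampling reduces the quadratic pairwise cost per kernel evaluation at the price of extra estimator variance, which is the trade-off the abstract refers to.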
Interfacial Solar Evaporator - Physical Principles and Fabrication Methods
by Cho, Seong Ho; Kim, Jungtaek; Choi, Hanseul
in Atoms & subatomic particles; Carbon; Clean energy
2021
Production of fresh water from a renewable energy source is one of the most important global challenges for mankind due to ever-accelerating climate change. Solar thermal evaporation shows promise for overcoming the water scarcity problem by utilizing solar energy, the most abundant and clean energy source. To enhance the performance of solar evaporators, interfacial solar evaporators have been introduced, which harness solar energy at the water surface. To enable energy conversion and water evaporation at the interfaces of a solar evaporator, multi-scale heat and water transport have been investigated. Furthermore, various light-absorbing materials and system configurations have been studied to achieve the theoretical maximum performance. The fundamental physics of the interfacial solar evaporator, including thermal and water transport, and a broad range of interfacial solar evaporator devices, in terms of fabrication techniques and structures, are reviewed.
Journal Article
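As a rough numerical companion to the abstract above: the commonly quoted upper bound on evaporation performance follows from dividing the incident solar flux by the latent heat of vaporization. This back-of-the-envelope sketch uses standard textbook constants (one-sun irradiance of 1000 W/m², latent heat of 2.26 MJ/kg), not figures from this review.

```python
# Ideal evaporation rate under one-sun illumination, assuming all absorbed
# solar power goes into the latent heat of vaporization of water.
solar_flux = 1000.0       # W/m^2, one-sun irradiance
latent_heat = 2.26e6      # J/kg, latent heat of vaporization of water
rate_kg_per_m2_s = solar_flux / latent_heat
print(f"{rate_kg_per_m2_s * 3600:.2f} kg m^-2 h^-1")  # about 1.6 kg per m^2 per hour
```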
Long-Term Field Observation of the Power Generation and System Temperature of a Roof-Integrated Photovoltaic System in South Korea
by Kim, Jungtaek; Jang, Seokjin; Azhar, Muhammad Hanif Ainun
in Aging; Air pollution; Cadmium telluride
2023
A miniature house roof-integrated photovoltaic (PV) system in South Korea was monitored for 2.5 years. System performance was evaluated through power generation, solar irradiance, and system temperature. Comparing each month's power generation with its solar irradiance revealed a close correlation over the entire observation period. The internal module temperature was almost always higher than the roof rear and module rear temperatures, by 1–2 and 1–5 °C, respectively, while the temperature behind the PV modules was the lowest of the three, showing that installing PV modules as a roofing system does not affect the temperature of the roofing system. The system temperatures affected the power conversion efficiency; a maximum of 11.42% was achieved when the system temperatures were lowest, and a minimum of 5.24% when they were highest. Hence, half of the anticipated generated power was lost due to the temperature fluctuation. Overall, installing PV modules as an entire roofing system is feasible with this configuration owing to the minimal effect on roof temperature. However, PV system temperature control is essential for maintaining the power generation performance of the PV modules.
Journal Article
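For readers wondering how an efficiency figure such as the 11.42% quoted above is typically derived from monitored data, a minimal sketch follows. It assumes measured DC power, plane-of-array irradiance, and module area; the numbers are purely illustrative, not the paper's measurements.

```python
def pv_efficiency(power_w, irradiance_w_per_m2, module_area_m2):
    """Efficiency = electrical output / incident solar power on the module area."""
    return power_w / (irradiance_w_per_m2 * module_area_m2)

# illustrative values only: 570 W output, 1000 W/m^2 irradiance, 5 m^2 of modules
print(pv_efficiency(power_w=570.0, irradiance_w_per_m2=1000.0, module_area_m2=5.0))  # 0.114
```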
Beyond Regrets: Geometric Metrics for Bayesian Optimization
2024
Bayesian optimization is a principled optimization strategy for a black-box objective function. It has shown its effectiveness in a wide variety of real-world applications such as scientific discovery and experimental design. In general, the performance of Bayesian optimization is reported through regret-based metrics such as instantaneous, simple, and cumulative regrets. These metrics rely only on function evaluations, so they do not consider geometric relationships between query points and global solutions, or among the query points themselves. Notably, they cannot discriminate whether multiple global solutions are successfully found. Moreover, they do not evaluate Bayesian optimization's ability to exploit and explore a given search space. To tackle these issues, we propose four new geometric metrics: precision, recall, average degree, and average distance. These metrics allow us to compare Bayesian optimization algorithms by considering the geometry of both query points and global optima, or of the query points alone. However, they come with an extra parameter, which needs to be carefully determined. We therefore devise parameter-free forms of the respective metrics by integrating out the additional parameter. Finally, we validate that our proposed metrics provide a more nuanced interpretation of Bayesian optimization, on top of assessment via the conventional metrics.
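One plausible reading of the distance-based precision and recall mentioned in the abstract is sketched below; the threshold delta and these exact definitions are illustrative interpretations, not necessarily the paper's formulations.

```python
import numpy as np

def precision_recall(queries, optima, delta):
    # pairwise Euclidean distances between query points and known global optima
    d = np.linalg.norm(queries[:, None, :] - optima[None, :, :], axis=-1)
    precision = np.mean(d.min(axis=1) <= delta)  # queries that landed near some optimum
    recall = np.mean(d.min(axis=0) <= delta)     # optima that were found by some query
    return precision, recall

queries = np.array([[0.1, 0.0], [0.9, 1.0], [0.5, 0.5]])
optima = np.array([[0.0, 0.0], [1.0, 1.0]])
print(precision_recall(queries, optima, delta=0.2))  # approximately (0.67, 1.0)
```

Unlike regret, such metrics distinguish whether both optima were located, which matches the motivation in the abstract.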
Density Ratio Estimation-based Bayesian Optimization with Semi-Supervised Learning
2024
Bayesian optimization has attracted considerable attention from diverse research areas in science and engineering, since it is capable of efficiently finding a global optimum of an expensive-to-evaluate black-box function. In general, a probabilistic regression model is widely used as a surrogate function to model an explicit distribution over function evaluations given an input to estimate and a training dataset. Beyond probabilistic regression-based methods, density ratio estimation-based Bayesian optimization has been suggested, in order to estimate a density ratio between the groups of points relatively close to and relatively far from a global optimum. Developing this line of research further, subsequent work employs supervised classifiers to estimate a class probability for the two groups instead of a density ratio. However, the supervised classifiers used in this strategy are prone to being overconfident about what is already known of global solution candidates. Supposing that we have access to unlabeled points, e.g., predefined fixed-size pools, we propose density ratio estimation-based Bayesian optimization with semi-supervised learning to address this challenge. Finally, we show the empirical results of our methods and several baseline methods in two distinct scenarios, with unlabeled point sampling and with a fixed-size pool, and analyze the validity of our proposed methods in diverse experiments.
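As background to the abstract, here is a minimal sketch of the classifier-based, density-ratio-style acquisition this line of work builds on: label the best gamma-fraction of evaluated points as positives, fit a classifier, and rank candidates by the predicted class probability. The semi-supervised extension with unlabeled pools that the paper proposes is not shown, and the classifier choice and gamma are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classifier_acquisition(X_obs, y_obs, X_cand, gamma=0.25):
    threshold = np.quantile(y_obs, gamma)          # minimization: small values are "good"
    labels = (y_obs <= threshold).astype(int)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_obs, labels)
    return clf.predict_proba(X_cand)[:, 1]         # class probability as a density-ratio proxy

rng = np.random.default_rng(0)
X_obs = rng.uniform(-2, 2, size=(30, 2))
y_obs = np.sum(X_obs ** 2, axis=1)                 # toy black-box objective
X_cand = rng.uniform(-2, 2, size=(200, 2))
best = X_cand[np.argmax(classifier_acquisition(X_obs, y_obs, X_cand))]
print(best)
```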
Exploiting Preferences in Loss Functions for Sequential Recommendation via Weak Transitivity
2024
The choice of optimization objective is pivotal in the design of a recommender system, as it shapes how a user's intent is modeled from previous interactions. Existing approaches mainly adhere to three categories of loss functions: pairwise, pointwise, and setwise. Despite their effectiveness, a critical and common drawback of such objectives is that they view the next observed item as the unique positive while treating all remaining items as equally negative. Such a binary label assignment is generally limited to ensuring a higher recommendation score for the positive item, neglecting potential structures induced by varying preferences among the other unobserved items. To alleviate this issue, we propose a novel method that extends the original objectives to explicitly leverage different levels of preference as relative orders between their scores. Finally, we demonstrate the superior performance of our method compared to the baseline objectives.
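Below is a minimal sketch contrasting a standard pairwise (BPR-style) loss with a variant that additionally orders two unobserved items by an assumed preference level, in the spirit of the abstract; the exact construction of preference levels and the weight alpha are illustrative, not the paper's.

```python
import numpy as np

def log_sigmoid(x):
    return -np.logaddexp(0.0, -x)

def pairwise_loss(score_hi, score_lo):
    # standard pairwise objective: the higher-preference item should outscore the lower one
    return -np.mean(log_sigmoid(score_hi - score_lo))

def preference_aware_loss(score_pos, score_mid, score_low, alpha=0.5):
    # positive item > more-preferred unobserved item > less-preferred unobserved item
    return pairwise_loss(score_pos, score_mid) + alpha * pairwise_loss(score_mid, score_low)

rng = np.random.default_rng(0)
scores = rng.normal(size=(8, 3))  # toy scores: positive, more-preferred, less-preferred
print(preference_aware_loss(scores[:, 0], scores[:, 1], scores[:, 2]))
```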
Noise-Adaptive Confidence Sets for Linear Bandits and Application to Bayesian Optimization
2024
Adapting to an a priori unknown noise level is a very important but challenging problem in sequential decision-making, as efficient exploration typically requires knowledge of the noise level, which is often only loosely specified. We report significant progress in addressing this issue for linear bandits in two respects. First, we propose a novel confidence set that is 'semi-adaptive' to the unknown sub-Gaussian parameter \(\sigma_*^2\), in the sense that the (normalized) confidence width scales with \(\sqrt{d\sigma_*^2 + \sigma_0^2}\), where \(d\) is the dimension and \(\sigma_0^2\) is the specified (known) sub-Gaussian parameter that can be much larger than \(\sigma_*^2\). This is a significant improvement over the \(\sqrt{d\sigma_0^2}\) of the standard confidence set of Abbasi-Yadkori et al. (2011), especially when \(d\) is large or \(\sigma_*^2 = 0\). We show that this leads to an improved regret bound in linear bandits. Second, for bounded rewards, we propose a novel variance-adaptive confidence set with much improved numerical performance over prior art. We then apply this confidence set to develop, as we claim, the first practical variance-adaptive linear bandit algorithm via an optimistic approach, which is enabled by our novel regret analysis technique. Both of our confidence sets rely critically on a 'regret equality' from online learning. Our empirical evaluation in diverse Bayesian optimization tasks shows that our proposed algorithms perform better than or comparably to existing methods.
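A small numerical illustration of the two confidence-width scalings quoted in the abstract, with constants and logarithmic factors omitted; the particular values of d, sigma_star, and sigma_0 below are arbitrary.

```python
import numpy as np

d, sigma_star, sigma_0 = 100, 0.1, 1.0            # true noise much smaller than the specified bound
standard_width = np.sqrt(d * sigma_0 ** 2)        # ~10.0, standard confidence set scaling
semi_adaptive_width = np.sqrt(d * sigma_star ** 2 + sigma_0 ** 2)  # ~1.41, semi-adaptive scaling
print(standard_width, semi_adaptive_width)
```

When the true sub-Gaussian parameter is small, the semi-adaptive width is dominated by the single \(\sigma_0^2\) term rather than \(d\sigma_0^2\), which is the improvement the abstract highlights.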
On Uncertainty Estimation by Tree-based Surrogate Models in Sequential Model-based Optimization
2022
Sequential model-based optimization selects candidate points one at a time by constructing a surrogate model from the history of evaluations, in order to solve a black-box optimization problem. Gaussian process (GP) regression is a popular choice of surrogate model because of its ability to calculate prediction uncertainty analytically. On the other hand, an ensemble of randomized trees is another option, with practical merits over GPs due to its scalability and ease of handling mixed continuous/discrete variables. In this paper we revisit various ensembles of randomized trees to investigate their behavior from the perspective of prediction uncertainty estimation. We then propose a new way of constructing an ensemble of randomized trees, referred to as the BwO forest, where bagging with oversampling is employed to construct the bootstrapped samples used to build randomized trees with random splitting. Experimental results demonstrate the validity and strong performance of the BwO forest compared with existing tree-based models in various circumstances.
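Here is a minimal sketch of using an ensemble of randomized trees as a surrogate, with predictive uncertainty taken as the spread of predictions across trees; the oversampled bootstrap below is only a rough nod to the "bagging with oversampling" idea and should not be read as the paper's BwO forest.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_tree_ensemble(X, y, num_trees=50, oversample_ratio=2.0, seed=0):
    rng = np.random.default_rng(seed)
    trees = []
    n = len(X)
    for _ in range(num_trees):
        idx = rng.integers(0, n, size=int(oversample_ratio * n))  # bootstrap sample larger than the data
        trees.append(DecisionTreeRegressor(splitter="random").fit(X[idx], y[idx]))
    return trees

def predict_with_uncertainty(trees, X):
    preds = np.stack([t.predict(X) for t in trees])
    return preds.mean(axis=0), preds.std(axis=0)   # mean prediction and spread across trees

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
mean, std = predict_with_uncertainty(fit_tree_ensemble(X, y), np.array([[0.0], [2.5]]))
print(mean, std)
```

The between-tree spread is an empirical stand-in for the analytic predictive variance a GP would provide.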
Generalized Neural Sorting Networks with Error-Free Differentiable Swap Functions
2024
Sorting is a fundamental operation in computer systems and has long been a significant research topic. Beyond the problem formulation of traditional sorting algorithms, we consider sorting problems for more abstract yet expressive inputs, e.g., multi-digit images and image fragments, through a neural sorting network. To learn a mapping from a high-dimensional input to an ordinal variable, the differentiability of the sorting network needs to be guaranteed. In this paper we define the softening error incurred by a differentiable swap function, and develop an error-free swap function that satisfies a non-decreasing condition while remaining differentiable. Furthermore, a permutation-equivariant Transformer network with multi-head attention is adopted to capture dependencies between the given inputs and to leverage its model capacity with self-attention. Experiments on diverse sorting benchmarks show that our methods perform better than or comparably to baseline methods.
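For context on what a differentiable swap function looks like, a minimal sigmoid-based soft swap is sketched below; this softened comparator exhibits exactly the kind of softening error the abstract refers to, and the paper's error-free construction differs.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_swap(a, b, steepness=4.0):
    """Differentiable approximation of (min(a, b), max(a, b))."""
    s = sigmoid(steepness * (b - a))   # close to 1 when already ordered, close to 0 otherwise
    soft_min = s * a + (1.0 - s) * b
    soft_max = s * b + (1.0 - s) * a
    return soft_min, soft_max

print(soft_swap(2.0, -1.0))  # close to (-1.0, 2.0); the softening error grows as the inputs get closer
```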
Practical Bayesian Optimization with Threshold-Guided Marginal Likelihood Maximization
2020
We propose a practical Bayesian optimization method using Gaussian process regression, in which the marginal likelihood is maximized with the number of model-selection steps guided by a pre-defined threshold. Since Bayesian optimization spends a large portion of its execution time finding the optimal free parameters of the Gaussian process regression, our simple and straightforward method mitigates this cost and speeds up the overall Bayesian optimization procedure. Finally, the experimental results show that our method is effective in reducing execution time in most cases, with little loss of optimization quality.
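A rough sketch of the general idea of bounding the model-selection effort with a pre-defined threshold, here implemented by capping the optimizer's iteration count for a toy GP marginal likelihood; the kernel, the thresholding rule, and the use of scipy's maxiter option are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_marginal_likelihood(log_params, X, y):
    # RBF kernel with a noise term; parameters are optimized in log space for positivity
    lengthscale, noise = np.exp(log_params)
    sqdist = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-0.5 * sqdist / lengthscale ** 2) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L))) + 0.5 * len(X) * np.log(2 * np.pi)

def fit_gp_hyperparameters(X, y, threshold=10):
    # the threshold bounds how many optimization steps are spent on model selection
    result = minimize(neg_log_marginal_likelihood, x0=np.zeros(2), args=(X, y),
                      method="L-BFGS-B", options={"maxiter": threshold})
    return np.exp(result.x)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=20)
print(fit_gp_hyperparameters(X, y))  # (lengthscale, noise) after at most `threshold` steps
```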