A Hierarchical Signal-to-Policy Learning Framework for Risk-Aware Portfolio Optimization
by Chang, Kuo-Chu; Yu, Jiayang
in Asset allocation / Conditional Value-at-Risk (CVaR) / Decision making / Deep learning / explainable artificial intelligence (XAI) / hierarchical reinforcement learning / Neural networks / Optimization / Portfolio management / portfolio optimization / SHAP values / XGBoost
2026
Journal Article, 2026
Overview
This study proposes a hierarchical signal-to-policy learning framework for risk-aware portfolio optimization that integrates model-based return forecasting, explainable machine learning, and deep reinforcement learning (DRL) within a unified architecture. In the first stage, next-period returns are estimated with gradient-boosted tree models, and SHAP-based feature attributions are extracted to provide transparent, factor-level explanations of the predictive signals. In the second stage, a Proximal Policy Optimization (PPO) agent incorporates both the forecasts and the explanatory signals into its state representation and learns dynamic allocation policies under a mean–CVaR reward function that explicitly penalizes tail risk while controlling trading frictions. By separating signal extraction from policy learning, the proposed architecture allows economically interpretable predictive signals to be incorporated into the policy's state representation while preserving the flexibility and adaptability of reinforcement learning. Empirical evaluations on U.S. sector ETFs and Dow Jones Industrial Average constituents show that the hierarchical framework delivers higher and more stable out-of-sample risk-adjusted returns than a single-layer DRL agent trained solely on technical indicators, a mean–CVaR-optimized portfolio using the same parameters as the proposed hierarchical model, and standard equal-weight and index-based benchmarks. These results demonstrate that integrating explainable predictive signals with risk-sensitive reinforcement learning improves the robustness and stability of data-driven portfolio strategies.
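To make the mean–CVaR reward concrete, the sketch below shows one common way such a reward can be computed: mean portfolio return, minus a penalty on the Conditional Value-at-Risk of portfolio losses, minus a turnover cost for trading frictions. This is a minimal illustration in NumPy; the function names (`cvar`, `mean_cvar_reward`) and the parameters `alpha`, `lam`, and `cost` are assumptions for this example, not the paper's exact specification.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional Value-at-Risk: mean loss in the tail at or beyond
    the alpha-quantile (VaR) of the loss distribution."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

def mean_cvar_reward(returns, weights, prev_weights,
                     alpha=0.95, lam=1.0, cost=0.001):
    """Illustrative mean-CVaR reward with a turnover penalty.

    returns: (T, N) array of per-period asset returns
    weights / prev_weights: (N,) current and previous portfolio weights
    lam: tail-risk aversion; cost: proportional trading friction
    """
    port = returns @ weights            # per-period portfolio returns
    turnover = np.abs(weights - prev_weights).sum()
    # Reward = mean return - tail-risk penalty - trading-friction penalty
    return port.mean() - lam * cvar(-port, alpha) - cost * turnover
```

A risk-neutral agent would maximize `port.mean()` alone; the `lam * cvar(-port, alpha)` term is what makes the policy explicitly penalize tail losses, as the abstract describes.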